ruby-mongo-1.9.2/LICENSE

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

Copyright (C) 2008-2013 10gen, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

ruby-mongo-1.9.2/README.md

# Build Status

[travis-img]: https://travis-ci.org/mongodb/mongo-ruby-driver.png?branch=1.x-stable
[travis-url]: http://travis-ci.org/mongodb/mongo-ruby-driver
[jenkins-img]: https://jenkins.10gen.com/job/mongo-ruby-driver-1.x-stable/badge/icon
[jenkins-url]: https://jenkins.10gen.com/job/mongo-ruby-driver-1.x-stable/
[api-url]: http://api.mongodb.org/ruby/current

- TravisCI [![Travis Status][travis-img]][travis-url]
- Jenkins [![Jenkins Status][jenkins-img]][jenkins-url]

# Documentation

API documentation is available online at [http://api.mongodb.org/ruby](http://api.mongodb.org/ruby) for all releases of the MongoDB Ruby driver. Please reference the exact version of the documentation that matches the release of the Ruby driver you are using. Note that the [Ruby Language Center for MongoDB](http://www.mongodb.org/display/DOCS/Ruby+Language+Center) has a link to the API documentation for the current release.

If you have the source, you can generate the matching documentation by typing:

```sh
$ rake docs
```

Once generated, the API documentation can be found in the docs/ folder.

# Introduction

This is the 10gen-supported Ruby driver for [MongoDB](http://www.mongodb.org). For the API reference, please see the [API][api-url].

The [wiki](https://github.com/mongodb/mongo-ruby-driver/wiki) has other articles of interest, including:
1. [A tutorial](https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial).
2. [Replica Sets in Ruby](https://github.com/mongodb/mongo-ruby-driver/wiki/Replica-Sets).
3. [Write Concern in Ruby](https://github.com/mongodb/mongo-ruby-driver/wiki/Write-Concern).
4. [Tailable Cursors in Ruby](https://github.com/mongodb/mongo-ruby-driver/wiki/Tailable-Cursors).
5. [Read Preference in Ruby](https://github.com/mongodb/mongo-ruby-driver/wiki/Read-Preference).
6. [GridFS in Ruby](https://github.com/mongodb/mongo-ruby-driver/wiki/GridFS).
7. [Frequently Asked Questions](https://github.com/mongodb/mongo-ruby-driver/wiki/FAQ).
8. [History](https://github.com/mongodb/mongo-ruby-driver/wiki/History).
9. [Release plan](https://github.com/mongodb/mongo-ruby-driver/wiki/Releases).
10. [Credits](https://github.com/mongodb/mongo-ruby-driver/wiki/Credits).

Here's a quick code sample. Again, see the [MongoDB Ruby Tutorial](https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial) for much more:

```ruby
require 'rubygems'
require 'mongo'

include Mongo

@client = MongoClient.new('localhost', 27017)
@db     = @client['sample-db']
@coll   = @db['test']

@coll.remove

3.times do |i|
  @coll.insert({'a' => i + 1})
end

puts "There are #{@coll.count} records. Here they are:"
@coll.find.each { |doc| puts doc.inspect }
```

# Installation

### Ruby Versions

The driver works and is consistently tested on Ruby 1.8.7 and 1.9.3 as well as JRuby 1.6.x and 1.7.x. Note that if you're on 1.8.7, be sure that you're using a patchlevel >= 249; there are some IO bugs in earlier versions.

### Gems

```sh
$ gem update --system
$ gem install mongo
```

For a significant performance boost, you'll want to install the C extension:

```sh
$ gem install bson_ext
```

Note that bson_ext isn't used with JRuby. Instead, native Java extensions bundled with the bson gem are used.
If you ever need to modify these extensions, you can recompile with the following rake task:

```sh
$ rake compile:jbson
```

### From the GitHub source

The source code is available at http://github.com/mongodb/mongo-ruby-driver. You can either clone the git repository or download a tarball or zip file. Once you have the source, you can use it from wherever you downloaded it, or you can install it as a gem from the source by typing:

```sh
$ rake install
```

# Examples

For extensive examples, see the [MongoDB Ruby Tutorial](https://github.com/mongodb/mongo-ruby-driver/wiki/Tutorial).

# GridFS

The Ruby driver includes two abstractions for storing large files: Grid and GridFileSystem. The Grid class is a Ruby implementation of MongoDB's GridFS file storage specification. GridFileSystem is essentially the same, but provides a more filesystem-like API and assumes that filenames are unique. An instance of either class represents an individual file store. See the API reference for details.

Examples:

```ruby
# Write a file on disk to the Grid
file = File.open('image.jpg')
grid = Mongo::Grid.new(db)
id   = grid.put(file)

# Retrieve the file
file = grid.get(id)
file.read

# Get all the file's metadata
file.filename
file.content_type
file.metadata
```

# Notes

## Thread Safety

The driver is thread-safe.

## Connection Pooling

The driver implements connection pooling. By default, only one socket connection will be opened to MongoDB. However, if you're running a multi-threaded application, you can specify a maximum pool size and a maximum timeout for waiting for old connections to be released to the pool.

To set up a pooled connection to a single MongoDB instance:

```ruby
@client = MongoClient.new("localhost", 27017, :pool_size => 5, :pool_timeout => 5)
```

Though the pooling architecture will undoubtedly evolve, it currently owes much credit to the connection pooling implementations in ActiveRecord and PyMongo.
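To make the `:pool_size` and `:pool_timeout` semantics above concrete, here is a minimal, generic checkout/checkin sketch in plain Ruby. This is illustrative only, not the driver's internal implementation; `TinyPool` and `FakeConn` are made-up names. A fixed set of connections lives in a queue, and a checkout that cannot obtain one within the timeout raises:

```ruby
require 'timeout'

# Illustrative stand-in for a socket connection -- NOT driver code.
FakeConn = Struct.new(:id)

class TinyPool
  def initialize(size, wait)
    @wait  = wait                  # seconds to wait for a free connection
    @queue = SizedQueue.new(size)  # at most `size` connections ever exist
    size.times { |i| @queue << FakeConn.new(i) }
  end

  # Block until a connection is free; raise if none frees up in time.
  def checkout
    Timeout.timeout(@wait) { @queue.pop }
  end

  def checkin(conn)
    @queue << conn
  end
end

pool = TinyPool.new(2, 0.1)        # think :pool_size => 2, :pool_timeout => 0.1
a = pool.checkout
b = pool.checkout
begin
  pool.checkout                    # both connections are checked out...
rescue Timeout::Error
  puts 'pool exhausted'            # ...so this attempt times out
end
pool.checkin(a)                    # returning one lets checkout succeed again
```

In the same spirit, a thread in the driver that waits longer than `:pool_timeout` for a socket gets an error rather than blocking forever, which is why the option matters for multi-threaded applications.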
## Forking

Certain Ruby application servers work by forking, and it has long been necessary to re-establish the child process's connection to the database after fork. But with the release of v1.3.0, the Ruby driver detects forking and reconnects automatically.

## Environment variable `MONGODB_URI`

`Mongo::MongoClient.from_uri`, `Mongo::MongoClient.new` and `Mongo::MongoReplicaSetClient.new` will use ENV["MONGODB_URI"] if no other args are provided. The URI must fit this specification:

    mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]

If the type of connection (direct or replica set) should be determined entirely from ENV["MONGODB_URI"], you may want to use `Mongo::MongoClient.from_uri` because it will return either a `Mongo::MongoClient` or a `Mongo::MongoReplicaSetClient` depending on how many hosts are specified. Trying to use `Mongo::MongoClient.new` with multiple hosts in ENV["MONGODB_URI"] will raise an exception.

## String Encoding

The BSON ("Binary JSON") format used to communicate with Mongo requires that strings be UTF-8 (http://en.wikipedia.org/wiki/UTF-8).

Ruby 1.9 has built-in character encoding support. All strings sent to Mongo and received from Mongo are converted to UTF-8 when necessary, and strings read from Mongo will have their character encodings set to UTF-8. When used with Ruby 1.8, the bytes in each string are written to and read from Mongo as is. If the string is ASCII, all is well, because ASCII is a subset of UTF-8. If the string is not ASCII, it may not be a well-formed UTF-8 string.

## Primary Keys

The `_id` field is a primary key. It is treated specially by the database, and its use makes many operations more efficient. The value of an _id may be of any type. The database itself inserts an _id value if none is specified when a record is inserted.

### Primary Key Factories

A primary key factory is a class you supply to a DB object that knows how to generate _id values.
If you want to control _id values or even their types, using a PK factory lets you do so.

You can tell the Ruby Mongo driver how to create primary keys by passing in the :pk option to the MongoClient#db method.

```ruby
include Mongo
db = MongoClient.new('localhost', 27017).db('dbname', :pk => MyPKFactory.new)
```

A primary key factory object must respond to :create_pk, which should take a hash and return a hash which merges the original hash with any primary key fields the factory wishes to inject.

NOTE: if the object already has a primary key, the factory should not inject a new key; this means that the object may already exist in the database. The idea here is that whenever a record is inserted, the :pk object's +create_pk+ method will be called and the new hash returned will be inserted.

Here is a sample primary key factory, taken from the tests:

```ruby
class TestPKFactory
  def create_pk(doc)
    doc['_id'] ||= BSON::ObjectId.new
    doc
  end
end
```

Here's a slightly more sophisticated one that handles both symbol and string keys. This is the PKFactory that comes with the MongoRecord code (an ActiveRecord-like framework for non-Rails apps) and the AR Mongo adapter code (for Rails):

```ruby
class PKFactory
  def create_pk(doc)
    return doc if doc[:_id]
    doc.delete(:_id)  # in case it exists but the value is nil
    doc['_id'] ||= BSON::ObjectId.new
    doc
  end
end
```

A database's PK factory object may be set either when a DB object is created or immediately after you obtain it, but only once. The only reason it is changeable at all is so that libraries such as MongoRecord that use this driver can set the PK factory after obtaining the database but before using it for the first time.

## The DB Class

### Strict mode

_**NOTE:** Support for strict mode has been deprecated and will be removed in version 2.0 of the driver._

Each database has an optional strict mode.
If strict mode is on, then asking for a collection that does not exist will raise an error, as will asking to create a collection that already exists. Note that both these operations are completely harmless; strict mode is a programmer convenience only.

To turn on strict mode, either pass in :strict => true when obtaining a DB object or call the `:strict=` method:

```ruby
db = MongoClient.new('localhost', 27017).db('dbname', :strict => true)

# I'm feeling lax
db.strict = false

# No, I'm not!
db.strict = true
```

The method DB#strict? returns the current value of that flag.

## Cursors

Notes:

* Cursors are enumerable (and have a #to_a method).
* The query doesn't get run until you actually attempt to retrieve data from a cursor.
* Cursors will timeout on the server after 10 minutes. If you need to keep a cursor open for more than 10 minutes, specify `:timeout => false` when you create the cursor.

## Socket timeouts

The Ruby driver supports timeouts on socket read operations. To enable them, set the `:op_timeout` option when you create a `Mongo::MongoClient` object.

If implementing higher-level timeouts, using tools such as `Rack::Timeout`, it's very important to call `Mongo::MongoClient#close` to prevent the subsequent operation from receiving the previous request.

# Testing

Before running the tests, make sure you install all test dependencies by running:

```sh
$ gem install bundler; bundle install
```

To run all default test suites (without the BSON extensions), just type:

```sh
$ rake test
```

If you want to run the default test suite using the BSON extensions:

```sh
$ rake test:ext
```

These will run both unit and functional tests.
To run these tests alone:

```sh
$ rake test:unit
$ rake test:functional
```

To run any individual rake task with the BSON extension disabled, just pass BSON_EXT_DISABLED=true to the task:

```sh
$ rake test:unit BSON_EXT_DISABLED=true
```

If you want to test replica sets, you can run the following task:

```sh
$ rake test:replica_set
```

To run a single test at the top level, add -Itest since we no longer modify LOAD_PATH:

```sh
$ ruby -Itest -Ilib test/bson/bson_test.rb
```

To run a single test from the test directory, add -I. since we no longer modify LOAD_PATH:

```sh
$ ruby -I. -I../lib bson/bson_test.rb
```

To run a single test from its subdirectory, add -I.. since we no longer modify LOAD_PATH:

```sh
$ ruby -I.. -I../../lib bson_test.rb
```

To fix the following error on Mac OS X - "/.../lib/bson_ext/cbson.bundle: [BUG] Segmentation fault":

```sh
$ rake compile
```

# Release Notes

See [history](https://github.com/mongodb/mongo-ruby-driver/wiki/History).

# Credits

See [credits](https://github.com/mongodb/mongo-ruby-driver/wiki/Credits).

# License

Copyright (C) 2008-2013 10gen Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

ruby-mongo-1.9.2/Rakefile

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'rubygems'

begin
  require 'bundler'
rescue LoadError
  raise '[FAIL] Bundler not found! Install it with `gem install bundler; bundle install`.'
end

if ENV.has_key?('TEST') || ENV.has_key?('TRAVIS_TEST')
  Bundler.require(:default, :testing)
else
  Bundler.require(:default, :testing, :deploy, :development)
end

Dir.glob(File.join('tasks', '**', '*.rake')).sort.each { |rake| load File.expand_path(rake) }

ruby-mongo-1.9.2/VERSION

1.9.2

ruby-mongo-1.9.2/bin/mongo_console

#!/usr/bin/env ruby

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
org_argv = ARGV.dup
ARGV.clear

require 'irb'

$LOAD_PATH[0,0] = File.join(File.dirname(__FILE__), '..', 'lib')
require 'mongo'

include Mongo

host = org_argv[0] || ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost'
port = org_argv[1] || ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT
dbnm = org_argv[2] || ENV['MONGO_RUBY_DRIVER_DB']   || 'ruby-mongo-console'

puts "Connecting to #{host}:#{port} (CLIENT) with database #{dbnm} (DB)"
CLIENT = MongoClient.new(host, port)
DB = CLIENT.db(dbnm)

puts "Starting IRB session..."
IRB.start(__FILE__)

ruby-mongo-1.9.2/lib/mongo.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
  ASCENDING   =  1
  DESCENDING  = -1
  GEO2D       = '2d'
  GEO2DSPHERE = '2dsphere'
  GEOHAYSTACK = 'geoHaystack'
  TEXT        = 'text'
  HASHED      = 'hashed'

  INDEX_TYPES = {
    'ASCENDING'   => ASCENDING,
    'DESCENDING'  => DESCENDING,
    'GEO2D'       => GEO2D,
    'GEO2DSPHERE' => GEO2DSPHERE,
    'GEOHAYSTACK' => GEOHAYSTACK,
    'TEXT'        => TEXT,
    'HASHED'      => HASHED
  }

  DEFAULT_MAX_BSON_SIZE = 4 * 1024 * 1024
  MESSAGE_SIZE_FACTOR   = 2

  module Constants
    OP_REPLY        = 1
    OP_MSG          = 1000
    OP_UPDATE       = 2001
    OP_INSERT       = 2002
    OP_QUERY        = 2004
    OP_GET_MORE     = 2005
    OP_DELETE       = 2006
    OP_KILL_CURSORS = 2007

    OP_QUERY_TAILABLE          = 2 ** 1
    OP_QUERY_SLAVE_OK          = 2 ** 2
    OP_QUERY_OPLOG_REPLAY      = 2 ** 3
    OP_QUERY_NO_CURSOR_TIMEOUT = 2 ** 4
    OP_QUERY_AWAIT_DATA        = 2 ** 5
    OP_QUERY_EXHAUST           = 2 ** 6

    REPLY_CURSOR_NOT_FOUND   = 2 ** 0
    REPLY_QUERY_FAILURE      = 2 ** 1
    REPLY_SHARD_CONFIG_STALE = 2 ** 2
    REPLY_AWAIT_CAPABLE      = 2 ** 3
  end
end

require 'bson'

require 'mongo/util/thread_local_variable_manager'
require 'mongo/util/conversions'
require 'mongo/util/support'
require 'mongo/util/read_preference'
require 'mongo/util/write_concern'
require 'mongo/util/core_ext'
require 'mongo/util/logging'
require 'mongo/util/node'
require 'mongo/util/pool'
require 'mongo/util/pool_manager'
require 'mongo/util/sharding_pool_manager'
require 'mongo/util/server_version'
require 'mongo/util/socket_util'
require 'mongo/util/ssl_socket'
require 'mongo/util/tcp_socket'
require 'mongo/util/unix_socket'
require 'mongo/util/uri_parser'

require 'mongo/networking'
require 'mongo/mongo_client'
require 'mongo/mongo_replica_set_client'
require 'mongo/mongo_sharded_client'
require 'mongo/legacy'
require 'mongo/collection'
require 'mongo/cursor'
require 'mongo/db'
require 'mongo/exceptions'
require 'mongo/gridfs/grid_ext'
require 'mongo/gridfs/grid'
require 'mongo/gridfs/grid_io'
require 'mongo/gridfs/grid_file_system'
ruby-mongo-1.9.2/lib/mongo/collection.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo

  # A named collection of documents in a database.
  class Collection
    include Mongo::Logging
    include Mongo::WriteConcern

    attr_reader :db, :name, :pk_factory, :hint, :write_concern, :capped

    # Read Preference
    attr_accessor :read, :tag_sets, :acceptable_latency

    # Initialize a collection object.
    #
    # @param [String, Symbol] name the name of the collection.
    # @param [DB] db a MongoDB database instance.
    #
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged
    # @option opts [Boolean] :j (false) Set journal acknowledgement
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout
    # @option opts [Boolean] :fsync (false) Set fsync acknowledgement.
    #
    # Notes about write concern:
    #   These write concern options will be used for insert, update, and remove methods called on this
    #   Collection instance. If no value is provided, the default values set on this instance's DB will be used.
    #   These option values can be overridden for any invocation of insert, update, or remove.
    #
    # @option opts [:create_pk] :pk (BSON::ObjectId) A primary key factory to use
    #   other than the default BSON::ObjectId.
    # @option opts [:primary, :secondary] :read The default read preference for queries
    #   initiated from this connection object. If +:secondary+ is chosen, reads will be sent
    #   to one of the closest available secondary nodes. If a secondary node cannot be located, the
    #   read will be sent to the primary. If this option is left unspecified, the value of the read
    #   preference for this collection's associated Mongo::DB object will be used.
    #
    # @raise [InvalidNSName]
    #   if collection name is empty, contains '$', or starts or ends with '.'
    #
    # @raise [TypeError]
    #   if collection name is not a string or symbol
    #
    # @return [Collection]
    #
    # @core collections constructor_details
    def initialize(name, db, opts={})
      if db.is_a?(String) && name.is_a?(Mongo::DB)
        warn "Warning: the order of parameters to initialize a collection have changed. " +
             "Please specify the collection name first, followed by the db. This will be made permanent " +
             "in v2.0."
        db, name = name, db
      end

      raise TypeError, "Collection name must be a String or Symbol." unless [String, Symbol].include?(name.class)
      name = name.to_s

      raise Mongo::InvalidNSName, "Collection names cannot be empty." if name.empty? || name.include?("..")
      if name.include?("$")
        raise Mongo::InvalidNSName, "Collection names must not contain '$'" unless name =~ /((^\$cmd)|(oplog\.\$main))/
      end
      raise Mongo::InvalidNSName, "Collection names must not start or end with '.'" if name.match(/^\./) || name.match(/\.$/)

      pk_factory = nil
      if opts.respond_to?(:create_pk) || !opts.is_a?(Hash)
        warn "The method for specifying a primary key factory on a Collection has changed.\n" +
             "Please specify it as an option (e.g., :pk => PkFactory)."
        pk_factory = opts
      end

      @db, @name  = db, name
      @connection = @db.connection
      @logger     = @connection.logger
      @cache_time = @db.cache_time
      @cache      = Hash.new(0)

      unless pk_factory
        @write_concern = get_write_concern(opts, db)
        @read = opts[:read] || @db.read
        Mongo::ReadPreference::validate(@read)
        @capped = opts[:capped]
        @tag_sets = opts.fetch(:tag_sets, @db.tag_sets)
        @acceptable_latency = opts.fetch(:acceptable_latency, @db.acceptable_latency)
      end
      @pk_factory = pk_factory || opts[:pk] || BSON::ObjectId
      @hint = nil
    end

    # Indicate whether this is a capped collection.
    #
    # @raise [Mongo::OperationFailure]
    #   if the collection doesn't exist.
    #
    # @return [Boolean]
    def capped?
      @capped ||= [1, true].include?(@db.command({:collstats => @name})['capped'])
    end

    # Return a sub-collection of this collection by name. If 'users' is a collection, then
    # 'users.comments' is a sub-collection of users.
    #
    # @param [String, Symbol] name
    #   the collection to return
    #
    # @raise [Mongo::InvalidNSName]
    #   if passed an invalid collection name
    #
    # @return [Collection]
    #   the specified sub-collection
    def [](name)
      name = "#{self.name}.#{name}"
      return Collection.new(name, db) if !db.strict? || db.collection_names.include?(name.to_s)
      raise "Collection #{name} doesn't exist. Currently in strict mode."
    end

    # Set a hint field for query optimizer. Hint may be a single field
    # name, array of field names, or a hash (preferably an [OrderedHash]).
    # If using MongoDB > 1.1, you probably don't ever need to set a hint.
    #
    # @param [String, Array, OrderedHash] hint a single field, an array of
    #   fields, or a hash specifying fields
    def hint=(hint=nil)
      @hint = normalize_hint_fields(hint)
      self
    end

    # Set a hint field using a named index.
    # @param [String] hinted index name
    def named_hint=(hint=nil)
      @hint = hint
      self
    end

    # Query the database.
    #
    # The +selector+ argument is a prototype document that all results must
    # match.
  # For example:
  #
  #   collection.find({"hello" => "world"})
  #
  # only matches documents that have a key "hello" with value "world".
  # Matches can have other keys *in addition* to "hello".
  #
  # If given an optional block +find+ will yield a Cursor to that block,
  # close the cursor, and then return nil. This guarantees that partially
  # evaluated cursors will be closed. If given no block +find+ returns a
  # cursor.
  #
  # @param [Hash] selector
  #   a document specifying elements which must be present for a
  #   document to be included in the result set. Note that in rare cases
  #   (e.g., with $near queries), the order of keys will matter. To preserve
  #   key order on a selector, use an instance of BSON::OrderedHash (only applies
  #   to Ruby 1.8).
  #
  # @option opts [Array, Hash] :fields field names that should be returned in the result
  #   set ("_id" will be included unless explicitly excluded). By limiting results to a certain subset of fields,
  #   you can cut down on network traffic and decoding time. If using a Hash, keys should be field
  #   names and values should be either 1 or 0, depending on whether you want to include or exclude
  #   the given field.
  # @option opts [:primary, :secondary] :read The default read preference for queries
  #   initiated from this connection object. If +:secondary+ is chosen, reads will be sent
  #   to one of the closest available secondary nodes. If a secondary node cannot be located, the
  #   read will be sent to the primary. If this option is left unspecified, the value of the read
  #   preference for this Collection object will be used.
  # @option opts [Integer] :skip number of documents to skip from the beginning of the result set
  # @option opts [Integer] :limit maximum number of documents to return
  # @option opts [Array] :sort an array of [key, direction] pairs to sort by. Direction should
  #   be specified as Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING (or :descending / :desc)
  # @option opts [String, Array, OrderedHash] :hint hint for query optimizer, usually not necessary if
  #   using MongoDB > 1.1
  # @option opts [String] :named_hint for specifying a named index as a hint, will be overridden by :hint
  #   if :hint is also provided.
  # @option opts [Boolean] :snapshot (false) if true, snapshot mode will be used for this query.
  #   Snapshot mode assures no duplicates are returned, and no objects missed, that were present at both the start and
  #   end of the query's execution.
  #   For details see http://www.mongodb.org/display/DOCS/How+to+do+Snapshotting+in+the+Mongo+Database
  # @option opts [Integer] :batch_size (100) the number of documents to be returned by the database per
  #   GETMORE operation. A value of 0 will let the database server decide how many results to return.
  #   This option can be ignored for most use cases.
  # @option opts [Boolean] :timeout (true) when +true+, the returned cursor will be subject to
  #   the normal cursor timeout behavior of the mongod process. When +false+, the returned cursor will
  #   never time out. Note that disabling timeout will only work when #find is invoked with a block.
  #   This is to prevent any inadvertent failure to close the cursor, as the cursor is explicitly
  #   closed when block code finishes.
  # @option opts [Integer] :max_scan (nil) Limit the number of items to scan on both collection scans and indexed queries.
  # @option opts [Boolean] :show_disk_loc (false) Return the disk location of each query result (for debugging).
  # @option opts [Boolean] :return_key (false) Return the index key used to obtain the result (for debugging).
  # @option opts [Block] :transformer (nil) a block for transforming returned documents.
  #   This is normally used by object mappers to convert each returned document to an instance of a class.
# @option opts [String] :comment (nil) a comment to include in profiling logs # # @raise [ArgumentError] # if timeout is set to false and find is not invoked in a block # # @raise [RuntimeError] # if given unknown options # # @core find find-instance_method def find(selector={}, opts={}) opts = opts.dup fields = opts.delete(:fields) fields = ["_id"] if fields && fields.empty? skip = opts.delete(:skip) || skip || 0 limit = opts.delete(:limit) || 0 sort = opts.delete(:sort) hint = opts.delete(:hint) named_hint = opts.delete(:named_hint) snapshot = opts.delete(:snapshot) batch_size = opts.delete(:batch_size) timeout = (opts.delete(:timeout) == false) ? false : true max_scan = opts.delete(:max_scan) return_key = opts.delete(:return_key) transformer = opts.delete(:transformer) show_disk_loc = opts.delete(:show_disk_loc) comment = opts.delete(:comment) read = opts.delete(:read) || @read tag_sets = opts.delete(:tag_sets) || @tag_sets acceptable_latency = opts.delete(:acceptable_latency) || @acceptable_latency if timeout == false && !block_given? raise ArgumentError, "Collection#find must be invoked with a block when timeout is disabled." end if hint hint = normalize_hint_fields(hint) else hint = @hint # assumed to be normalized already end raise RuntimeError, "Unknown options [#{opts.inspect}]" unless opts.empty? cursor = Cursor.new(self, { :selector => selector, :fields => fields, :skip => skip, :limit => limit, :order => sort, :hint => hint || named_hint, :snapshot => snapshot, :timeout => timeout, :batch_size => batch_size, :transformer => transformer, :max_scan => max_scan, :show_disk_loc => show_disk_loc, :return_key => return_key, :read => read, :tag_sets => tag_sets, :comment => comment, :acceptable_latency => acceptable_latency }) if block_given? begin yield cursor ensure cursor.close end nil else cursor end end # Return a single object from the database. # # @return [OrderedHash, Nil] # a single document or nil if no result is found. 
  #
  # @param [Hash, ObjectId, Nil] spec_or_object_id a hash specifying elements
  #   which must be present for a document to be included in the result set or an
  #   instance of ObjectId to be used as the value for an _id query.
  #   If nil, an empty selector, {}, will be used.
  #
  # @option opts [Hash]
  #   any valid options that can be sent to Collection#find
  #
  # @raise [TypeError]
  #   if the argument is of an improper type.
  def find_one(spec_or_object_id=nil, opts={})
    spec = case spec_or_object_id
           when nil
             {}
           when BSON::ObjectId
             {:_id => spec_or_object_id}
           when Hash
             spec_or_object_id
           else
             raise TypeError, "spec_or_object_id must be an instance of ObjectId or Hash, or nil"
           end
    find(spec, opts.merge(:limit => -1)).next_document
  end

  # Save a document to this collection.
  #
  # @param [Hash] doc
  #   the document to be saved. If the document already has an '_id' key,
  #   then an update (upsert) operation will be performed, and any existing
  #   document with that _id is overwritten. Otherwise an insert operation is performed.
  #
  # @return [ObjectId] the _id of the saved document.
  #
  # @option opts [Hash] :w, :j, :wtimeout, :fsync Set the write concern for this operation.
  #   :w > 0 will run a +getlasterror+ command on the database to report any assertion.
  #   :j will confirm a write has been committed to the journal,
  #   :wtimeout specifies how long to wait for write confirmation,
  #   :fsync will confirm that a write has been fsynced.
  #   Options provided here will override any write concern options set on this collection,
  #   its database object, or the current connection. See the options
  #   for DB#get_last_error.
  #
  # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
  def save(doc, opts={})
    if doc.has_key?(:_id) || doc.has_key?('_id')
      id = doc[:_id] || doc['_id']
      update({:_id => id}, doc, opts.merge!({:upsert => true}))
      id
    else
      insert(doc, opts)
    end
  end

  # Insert one or more documents into the collection.
# # @param [Hash, Array] doc_or_docs # a document (as a hash) or array of documents to be inserted. # # @return [ObjectId, Array] # The _id of the inserted document or a list of _ids of all inserted documents. # @return [[ObjectId, Array], [Hash, Array]] # 1st, the _id of the inserted document or a list of _ids of all inserted documents. # 2nd, a list of invalid documents. # Return this result format only when :collect_on_error is true. # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged # @option opts [Boolean] :j (false) Set journal acknowledgement # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout # @option opts [Boolean] :fsync (false) Set fsync acknowledgement. # # Notes on write concern: # Options provided here will override any write concern options set on this collection, # its database object, or the current connection. See the options for +DB#get_last_error+. # # @option opts [Boolean] :continue_on_error (+false+) If true, then # continue a bulk insert even if one of the documents inserted # triggers a database assertion (as in a duplicate insert, for instance). # If not acknowledging writes, the list of ids returned will # include the object ids of all documents attempted on insert, even # if some are rejected on error. When acknowledging writes, any error will raise an # OperationFailure exception. # MongoDB v2.0+. # @option opts [Boolean] :collect_on_error (+false+) if true, then # collects invalid documents as an array. Note that this option changes the result format. # # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails. # # @core insert insert-instance_method def insert(doc_or_docs, opts={}) doc_or_docs = [doc_or_docs] unless doc_or_docs.is_a?(Array) doc_or_docs.collect! 
{ |doc| @pk_factory.create_pk(doc) } write_concern = get_write_concern(opts, self) result = insert_documents(doc_or_docs, @name, true, write_concern, opts) result.size > 1 ? result : result.first end alias_method :<<, :insert # Remove all documents from this collection. # # @param [Hash] selector # If specified, only matching documents will be removed. # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged # @option opts [Boolean] :j (false) Set journal acknowledgement # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout # @option opts [Boolean] :fsync (false) Set fsync acknowledgement. # # Notes on write concern: # Options provided here will override any write concern options set on this collection, # its database object, or the current connection. See the options for +DB#get_last_error+. # # @example remove all documents from the 'users' collection: # users.remove # users.remove({}) # # @example remove only documents that have expired: # users.remove({:expire => {"$lte" => Time.now}}) # # @return [Hash, true] Returns a Hash containing the last error object if acknowledging writes # Otherwise, returns true. # # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails. 
# # @core remove remove-instance_method def remove(selector={}, opts={}) write_concern = get_write_concern(opts, self) message = BSON::ByteBuffer.new("\0\0\0\0", @connection.max_message_size) BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@name}") message.put_int(0) message.put_binary(BSON::BSON_CODER.serialize(selector, false, true, @connection.max_bson_size).to_s) instrument(:remove, :database => @db.name, :collection => @name, :selector => selector) do if Mongo::WriteConcern.gle?(write_concern) @connection.send_message_with_gle(Mongo::Constants::OP_DELETE, message, @db.name, nil, write_concern) else @connection.send_message(Mongo::Constants::OP_DELETE, message) true end end end # Update one or more documents in this collection. # # @param [Hash] selector # a hash specifying elements which must be present for a document to be updated. Note: # the update command currently updates only the first document matching the # given selector. If you want all matching documents to be updated, be sure # to specify :multi => true. # @param [Hash] document # a hash specifying the fields to be changed in the selected document, # or (in the case of an upsert) the document to be inserted # # @option opts [Boolean] :upsert (+false+) if true, performs an upsert (update or insert) # @option opts [Boolean] :multi (+false+) update all documents matching the selector, as opposed to # just the first matching document. Note: only works in MongoDB 1.1.3 or later. # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged # @option opts [Boolean] :j (false) Set journal acknowledgement # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout # @option opts [Boolean] :fsync (false) Set fsync acknowledgement. # # Notes on write concern: # Options provided here will override any write concern options set on this collection, # its database object, or the current connection. 
See the options for DB#get_last_error. # # @return [Hash, true] Returns a Hash containing the last error object if acknowledging writes. # Otherwise, returns true. # # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails. # # @core update update-instance_method def update(selector, document, opts={}) # Initial byte is 0. write_concern = get_write_concern(opts, self) message = BSON::ByteBuffer.new("\0\0\0\0", @connection.max_message_size) BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@name}") update_options = 0 update_options += 1 if opts[:upsert] update_options += 2 if opts[:multi] # Determine if update document has modifiers and check keys if so check_keys = document.keys.first.to_s.start_with?("$") ? false : true message.put_int(update_options) message.put_binary(BSON::BSON_CODER.serialize(selector, false, true, @connection.max_bson_size).to_s) message.put_binary(BSON::BSON_CODER.serialize(document, check_keys, true, @connection.max_bson_size).to_s) instrument(:update, :database => @db.name, :collection => @name, :selector => selector, :document => document) do if Mongo::WriteConcern.gle?(write_concern) @connection.send_message_with_gle(Mongo::Constants::OP_UPDATE, message, @db.name, nil, write_concern) else @connection.send_message(Mongo::Constants::OP_UPDATE, message) end end end # Create a new index. # # @param [String, Array] spec # should be either a single field name or an array of # [field name, type] pairs. Index types should be specified # as Mongo::ASCENDING, Mongo::DESCENDING, Mongo::GEO2D, Mongo::GEO2DSPHERE, Mongo::GEOHAYSTACK, # Mongo::TEXT or Mongo::HASHED. # # Note that geospatial indexing only works with versions of MongoDB >= 1.3.3+. Keep in mind, too, # that in order to geo-index a given field, that field must reference either an array or a sub-object # where the first two values represent x- and y-coordinates. Examples can be seen below. 
# # Also note that it is permissible to create compound indexes that include a geospatial index as # long as the geospatial index comes first. # # If your code calls create_index frequently, you can use Collection#ensure_index to cache these calls # and thereby prevent excessive round trips to the database. # # @option opts [Boolean] :unique (false) if true, this index will enforce a uniqueness constraint. # @option opts [Boolean] :background (false) indicate that the index should be built in the background. This # feature is only available in MongoDB >= 1.3.2. # @option opts [Boolean] :drop_dups (nil) If creating a unique index on a collection with pre-existing records, # this option will keep the first document the database indexes and drop all subsequent with duplicate values. # @option opts [Integer] :bucket_size (nil) For use with geoHaystack indexes. Number of documents to group # together within a certain proximity to a given longitude and latitude. # @option opts [Integer] :min (nil) specify the minimum longitude and latitude for a geo index. # @option opts [Integer] :max (nil) specify the maximum longitude and latitude for a geo index. 
# # @example Creating a compound index using a hash: (Ruby 1.9+ Syntax) # @posts.create_index({'subject' => Mongo::ASCENDING, 'created_at' => Mongo::DESCENDING}) # # @example Creating a compound index: # @posts.create_index([['subject', Mongo::ASCENDING], ['created_at', Mongo::DESCENDING]]) # # @example Creating a geospatial index using a hash: (Ruby 1.9+ Syntax) # @restaurants.create_index(:location => Mongo::GEO2D) # # @example Creating a geospatial index: # @restaurants.create_index([['location', Mongo::GEO2D]]) # # # Note that this will work only if 'location' represents x,y coordinates: # {'location': [0, 50]} # {'location': {'x' => 0, 'y' => 50}} # {'location': {'latitude' => 0, 'longitude' => 50}} # # @example A geospatial index with alternate longitude and latitude: # @restaurants.create_index([['location', Mongo::GEO2D]], :min => 500, :max => 500) # # @return [String] the name of the index created. # # @core indexes create_index-instance_method def create_index(spec, opts={}) opts[:dropDups] = opts[:drop_dups] if opts[:drop_dups] opts[:bucketSize] = opts[:bucket_size] if opts[:bucket_size] field_spec = parse_index_spec(spec) opts = opts.dup name = opts.delete(:name) || generate_index_name(field_spec) name = name.to_s if name generate_indexes(field_spec, name, opts) name end # Calls create_index and sets a flag to not do so again for another X minutes. # this time can be specified as an option when initializing a Mongo::DB object as options[:cache_time] # Any changes to an index will be propagated through regardless of cache time (e.g., a change of index direction) # # The parameters and options for this methods are the same as those for Collection#create_index. 
# # @example Call sequence (Ruby 1.9+ Syntax): # Time t: @posts.ensure_index(:subject => Mongo::ASCENDING) -- calls create_index and # sets the 5 minute cache # Time t+2min : @posts.ensure_index(:subject => Mongo::ASCENDING) -- doesn't do anything # Time t+3min : @posts.ensure_index(:something_else => Mongo::ASCENDING) -- calls create_index # and sets 5 minute cache # Time t+10min : @posts.ensure_index(:subject => Mongo::ASCENDING) -- calls create_index and # resets the 5 minute counter # # @return [String] the name of the index. def ensure_index(spec, opts={}) now = Time.now.utc.to_i opts[:dropDups] = opts[:drop_dups] if opts[:drop_dups] opts[:bucketSize] = opts[:bucket_size] if opts[:bucket_size] field_spec = parse_index_spec(spec) name = opts[:name] || generate_index_name(field_spec) name = name.to_s if name if !@cache[name] || @cache[name] <= now generate_indexes(field_spec, name, opts) end # Reset the cache here in case there are any errors inserting. Best to be safe. @cache[name] = now + @cache_time name end # Drop a specified index. # # @param [String] name # # @core indexes def drop_index(name) if name.is_a?(Array) return drop_index(index_name(name)) end @cache[name.to_s] = nil @db.drop_index(@name, name) end # Drop all indexes. # # @core indexes def drop_indexes @cache = {} # Note: calling drop_indexes with no args will drop them all. @db.drop_index(@name, '*') end # Drop the entire collection. USE WITH CAUTION. def drop @db.drop_collection(@name) end # Atomically update and return a document using MongoDB's findAndModify command. (MongoDB > 1.3.0) # # @option opts [Hash] :query ({}) a query selector document for matching # the desired document. # @option opts [Hash] :update (nil) the update operation to perform on the # matched document. # @option opts [Array, String, OrderedHash] :sort ({}) specify a sort # option for the query using any # of the sort options available for Cursor#sort. 
  #   Sort order is important
  #   if the query will be matching multiple documents since only the first
  #   matching document will be updated and returned.
  # @option opts [Boolean] :remove (false) If true, removes the returned
  #   document from the collection.
  # @option opts [Boolean] :new (false) If true, returns the updated
  #   document; otherwise, returns the document prior to update.
  # @option opts [Boolean] :full_response (false) If true, returns the entire
  #   response object from the server including 'ok' and 'lastErrorObject'.
  #
  # @return [Hash] the matched document.
  #
  # @core findandmodify find_and_modify-instance_method
  def find_and_modify(opts={})
    full_response = opts.delete(:full_response)

    cmd = BSON::OrderedHash.new
    cmd[:findandmodify] = @name
    cmd.merge!(opts)

    cmd[:sort] = Mongo::Support.format_order_clause(opts[:sort]) if opts[:sort]

    full_response ? @db.command(cmd) : @db.command(cmd)['value']
  end

  # Perform an aggregation using the aggregation framework on the current collection.
  # @note Aggregate requires server version >= 2.1.1
  # @note Field References: Within an expression, field names must be quoted and prefixed by a dollar sign ($).
  #
  # @example Define the pipeline as an array of operator hashes:
  #   coll.aggregate([ {"$project" => {"last_name" => 1, "first_name" => 1 }}, {"$match" => {"last_name" => "Jones"}} ])
  #
  # @param [Array] pipeline Should be a single array of pipeline operator hashes.
  #
  #   '$project' Reshapes a document stream by including fields, excluding fields, inserting computed fields,
  #   renaming fields, or creating/populating fields that hold sub-documents.
  #
  #   '$match' Query-like interface for filtering documents out of the aggregation pipeline.
  #
  #   '$limit' Restricts the number of documents that pass through the pipeline.
  #
  #   '$skip' Skips over the specified number of documents and passes the rest along the pipeline.
  #
  #   '$unwind' Peels off elements of an array individually, returning one document for each member.
# # '$group' Groups documents for calculating aggregate values. # # '$sort' Sorts all input documents and returns them to the pipeline in sorted order. # # @option opts [:primary, :secondary] :read Read preference indicating which server to perform this query # on. See Collection#find for more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @return [Array] An Array with the aggregate command's results. # # @raise MongoArgumentError if operators either aren't supplied or aren't in the correct format. # @raise MongoOperationFailure if the aggregate command fails. # def aggregate(pipeline=nil, opts={}) raise MongoArgumentError, "pipeline must be an array of operators" unless pipeline.class == Array raise MongoArgumentError, "pipeline operators must be hashes" unless pipeline.all? { |op| op.class == Hash } hash = BSON::OrderedHash.new hash['aggregate'] = self.name hash['pipeline'] = pipeline result = @db.command(hash, command_options(opts)) unless Mongo::Support.ok?(result) raise Mongo::OperationFailure, "aggregate failed: #{result['errmsg']}" end return result["result"] end # Perform a map-reduce operation on the current collection. # # @param [String, BSON::Code] map a map function, written in JavaScript. # @param [String, BSON::Code] reduce a reduce function, written in JavaScript. # # @option opts [Hash] :query ({}) a query selector document, like what's passed to #find, to limit # the operation to a subset of the collection. # @option opts [Array] :sort ([]) an array of [key, direction] pairs to sort by. Direction should # be specified as Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING (or :descending / :desc) # @option opts [Integer] :limit (nil) if passing a query, number of objects to return from the collection. # @option opts [String, BSON::Code] :finalize (nil) a javascript function to apply to the result set after the # map/reduce operation has finished. 
  # @option opts [String] :out (nil) a valid output type. In versions of MongoDB prior to v1.7.6,
  #   this option takes the name of a collection for the output results. In versions 1.7.6 and later,
  #   this option specifies the output type. See the core docs for available output types.
  # @option opts [Boolean] :keeptemp (false) if true, the generated collection will be persisted. The default
  #   is false. Note that this option has no effect in versions of MongoDB > v1.7.6.
  # @option opts [Boolean] :verbose (false) if true, provides statistics on job execution time.
  # @option opts [Boolean] :raw (false) if true, return the raw result object from the map_reduce command, and not
  #   the instantiated collection that's returned by default. Note if a collection name isn't returned in the
  #   map-reduce output (as, for example, when using :out => { :inline => 1 }), then you must specify this option
  #   or an ArgumentError will be raised.
  # @option opts [:primary, :secondary] :read Read preference indicating which server to run this map-reduce
  #   on. See Collection#find for more details.
  # @option opts [String] :comment (nil) a comment to include in profiling logs
  #
  # @return [Collection, Hash] a Mongo::Collection object or a Hash with the map-reduce command's results.
  #
  # @raise ArgumentError if you specify { :out => { :inline => true }} but don't specify :raw => true.
  #
  # @see http://www.mongodb.org/display/DOCS/MapReduce Official MongoDB map/reduce documentation.
  #
  # @core mapreduce map_reduce-instance_method
  def map_reduce(map, reduce, opts={})
    map    = BSON::Code.new(map) unless map.is_a?(BSON::Code)
    reduce = BSON::Code.new(reduce) unless reduce.is_a?(BSON::Code)
    raw    = opts.delete(:raw)

    hash = BSON::OrderedHash.new
    hash['mapreduce'] = self.name
    hash['map'] = map
    hash['reduce'] = reduce
    hash.merge! opts
    if hash[:sort]
      hash[:sort] = Mongo::Support.format_order_clause(hash[:sort])
    end

    result = @db.command(hash, command_options(opts))
    unless Mongo::Support.ok?(result)
      raise Mongo::OperationFailure, "map-reduce failed: #{result['errmsg']}"
    end

    if raw
      result
    elsif result["result"]
      if result['result'].is_a? BSON::OrderedHash and
          result['result'].has_key? 'db' and
          result['result'].has_key? 'collection'
        otherdb = @db.connection[result['result']['db']]
        otherdb[result['result']['collection']]
      else
        @db[result["result"]]
      end
    else
      raise ArgumentError, "Could not instantiate collection from result. If you specified " +
        "{:out => {:inline => true}}, then you must also specify :raw => true to get the results."
    end
  end
  alias :mapreduce :map_reduce

  # Perform a group aggregation.
  #
  # @param [Hash] opts the options for this group operation. The minimum required are :initial
  #   and :reduce.
  #
  # @option opts [Array, String, Symbol] :key (nil) Either the name of a field or a list of fields to group by (optional).
  # @option opts [String, BSON::Code] :keyf (nil) A JavaScript function to be used to generate the grouping keys (optional).
  # @option opts [String, BSON::Code] :cond ({}) A document specifying a query for filtering the documents over
  #   which the aggregation is run (optional).
  # @option opts [Hash] :initial the initial value of the aggregation counter object (required).
  # @option opts [String, BSON::Code] :reduce (nil) a JavaScript aggregation function (required).
  # @option opts [String, BSON::Code] :finalize (nil) a JavaScript function that receives and modifies
  #   each of the resultant grouped objects. Available only when group is run with command
  #   set to true.
  # @option opts [:primary, :secondary] :read Read preference indicating which server to perform this group
  #   on. See Collection#find for more details.
  # @option opts [String] :comment (nil) a comment to include in profiling logs
  #
  # @return [Array] the command response consisting of grouped items.
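The :key handling inside group can be sketched in plain Ruby: an array of field names becomes a document mapping each field to 1, while a String or Symbol behaves like a one-element list (a JavaScript key function takes the separate "$keyf" path instead). The helper name below is hypothetical and no driver is required:

```ruby
# Hypothetical helper sketching how Collection#group turns its :key option
# into the "key" document sent with the group command (plain Ruby, no driver).
def group_key_document(key)
  fields = key.is_a?(Array) ? key : [key] # a single field acts like a one-element list
  fields.each_with_object({}) { |field, doc| doc[field.to_s] = 1 }
end

group_key_document([:zip, :age]) # => {"zip" => 1, "age" => 1}
```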
  def group(opts, condition={}, initial={}, reduce=nil, finalize=nil)
    if opts.is_a?(Hash)
      return new_group(opts)
    else
      warn "Collection#group no longer takes a list of parameters. This usage is deprecated and will be removed in v2.0. " +
           "Check out the new API at http://api.mongodb.org/ruby/current/Mongo/Collection.html#group-instance_method"
    end

    reduce = BSON::Code.new(reduce) unless reduce.is_a?(BSON::Code)

    group_command = {
      "group" => {
        "ns"      => @name,
        "$reduce" => reduce,
        "cond"    => condition,
        "initial" => initial
      }
    }

    if opts.is_a?(Symbol)
      raise MongoArgumentError, "Group takes either an array of fields to group by or a JavaScript function " +
            "in the form of a String or BSON::Code."
    end

    unless opts.nil?
      if opts.is_a? Array
        key_type  = "key"
        key_value = {}
        opts.each { |k| key_value[k] = 1 }
      else
        key_type  = "$keyf"
        key_value = opts.is_a?(BSON::Code) ? opts : BSON::Code.new(opts)
      end

      group_command["group"][key_type] = key_value
    end

    finalize = BSON::Code.new(finalize) if finalize.is_a?(String)
    if finalize.is_a?(BSON::Code)
      group_command['group']['finalize'] = finalize
    end

    result = @db.command(group_command)

    if Mongo::Support.ok?(result)
      result["retval"]
    else
      raise OperationFailure, "group command failed: #{result['errmsg']}"
    end
  end

  private

  def new_group(opts={})
    reduce   = opts[:reduce]
    finalize = opts[:finalize]
    cond     = opts.fetch(:cond, {})
    initial  = opts[:initial]

    if !(reduce && initial)
      raise MongoArgumentError, "Group requires at minimum values for initial and reduce."
end cmd = { "group" => { "ns" => @name, "$reduce" => reduce.to_bson_code, "cond" => cond, "initial" => initial } } if finalize cmd['group']['finalize'] = finalize.to_bson_code end if key = opts[:key] if key.is_a?(String) || key.is_a?(Symbol) key = [key] end key_value = {} key.each { |k| key_value[k] = 1 } cmd["group"]["key"] = key_value elsif keyf = opts[:keyf] cmd["group"]["$keyf"] = keyf.to_bson_code end result = @db.command(cmd, command_options(opts)) result["retval"] end public # Return a list of distinct values for +key+ across all # documents in the collection. The key may use dot notation # to reach into an embedded object. # # @param [String, Symbol, OrderedHash] key or hash to group by. # @param [Hash] query a selector for limiting the result set over which to group. # @param [Hash] opts the options for this distinct operation. # # @option opts [:primary, :secondary] :read Read preference indicating which server to perform this query # on. See Collection#find for more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @example Saving zip codes and ages and returning distinct results. # @collection.save({:zip => 10010, :name => {:age => 27}}) # @collection.save({:zip => 94108, :name => {:age => 24}}) # @collection.save({:zip => 10010, :name => {:age => 27}}) # @collection.save({:zip => 99701, :name => {:age => 24}}) # @collection.save({:zip => 94108, :name => {:age => 27}}) # # @collection.distinct(:zip) # [10010, 94108, 99701] # @collection.distinct("name.age") # [27, 24] # # # You may also pass a document selector as the second parameter # # to limit the documents over which distinct is run: # @collection.distinct("name.age", {"name.age" => {"$gt" => 24}}) # [27] # # @return [Array] an array of distinct values. 
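The shape of the distinct command built below can be sketched with a plain Hash standing in for BSON::OrderedHash (Ruby 1.9+ hashes preserve insertion order, so the command name still comes first). The helper name is hypothetical:

```ruby
# Hypothetical sketch of the command document Collection#distinct sends.
# A plain Hash stands in for BSON::OrderedHash here.
def distinct_command(collection_name, key, query = nil)
  {
    :distinct => collection_name, # command name must be the first key
    :key      => key.to_s,        # dot notation reaches into embedded documents
    :query    => query            # optional selector limiting the result set
  }
end

distinct_command("users", "name.age", "name.age" => {"$gt" => 24})
```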
def distinct(key, query=nil, opts={}) raise MongoArgumentError unless [String, Symbol].include?(key.class) command = BSON::OrderedHash.new command[:distinct] = @name command[:key] = key.to_s command[:query] = query @db.command(command, command_options(opts))["values"] end # Rename this collection. # # Note: If operating in auth mode, the client must be authorized as an admin to # perform this operation. # # @param [String] new_name the new name for this collection # # @return [String] the name of the new collection. # # @raise [Mongo::InvalidNSName] if +new_name+ is an invalid collection name. def rename(new_name) case new_name when Symbol, String else raise TypeError, "new_name must be a string or symbol" end new_name = new_name.to_s if new_name.empty? or new_name.include? ".." raise Mongo::InvalidNSName, "collection names cannot be empty" end if new_name.include? "$" raise Mongo::InvalidNSName, "collection names must not contain '$'" end if new_name.match(/^\./) or new_name.match(/\.$/) raise Mongo::InvalidNSName, "collection names must not start or end with '.'" end @db.rename_collection(@name, new_name) @name = new_name end # Get information on the indexes for this collection. # # @return [Hash] a hash where the keys are index names. # # @core indexes def index_information @db.index_information(@name) end # Return a hash containing options that apply to this collection. # For all possible keys and values, see DB#create_collection. # # @return [Hash] options that apply to this collection. def options @db.collections_info(@name).next_document['options'] end # Return stats on the collection. Uses MongoDB's collstats command. # # @return [Hash] def stats @db.command({:collstats => @name}) end # Get the number of documents in this collection. # # @option opts [Hash] :query ({}) A query selector for filtering the documents counted. # @option opts [Integer] :skip (nil) The number of documents to skip. 
# @option opts [Integer] :limit (nil) The number of documents to limit. # @option opts [:primary, :secondary] :read Read preference for this command. See Collection#find for # more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @return [Integer] def count(opts={}) find(opts[:query], :skip => opts[:skip], :limit => opts[:limit], :read => opts[:read], :comment => opts[:comment]).count(true) end alias :size :count protected # Parse common options for read-only commands from an input @opts # hash and return a hash suitable for passing to DB#command. def command_options(opts) out = {} if read = opts[:read] Mongo::ReadPreference::validate(read) else read = @read end out[:read] = read out[:comment] = opts[:comment] if opts[:comment] out end def normalize_hint_fields(hint) case hint when String {hint => 1} when Hash hint when nil nil else h = BSON::OrderedHash.new hint.to_a.each { |k| h[k] = 1 } h end end private def index_name(spec) field_spec = parse_index_spec(spec) index_information.each do |index| return index[0] if index[1]['key'] == field_spec end nil end def parse_index_spec(spec) field_spec = BSON::OrderedHash.new if spec.is_a?(String) || spec.is_a?(Symbol) field_spec[spec.to_s] = 1 elsif spec.is_a?(Hash) if RUBY_VERSION < '1.9' && !spec.is_a?(BSON::OrderedHash) raise MongoArgumentError, "Must use OrderedHash in Ruby < 1.9.0" end validate_index_types(spec.values) field_spec = spec.is_a?(BSON::OrderedHash) ? spec : BSON::OrderedHash.try_convert(spec) elsif spec.is_a?(Array) && spec.all? {|field| field.is_a?(Array) } spec.each do |f| validate_index_types(f[1]) field_spec[f[0].to_s] = f[1] end else raise MongoArgumentError, "Invalid index specification #{spec.inspect}; " + "should be either a hash (OrderedHash), string, symbol, or an array of arrays." end field_spec end def validate_index_types(*types) types.flatten! 
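The hint-normalization helper above can be exercised on its own. This standalone sketch mirrors `normalize_hint_fields`, with a plain Hash standing in for BSON::OrderedHash: a String becomes `{field => 1}`, a Hash passes through, nil stays nil, and anything else enumerable (such as an array of field names) is expanded pairwise.

```ruby
# Standalone sketch of Collection#normalize_hint_fields (plain Hash in
# place of BSON::OrderedHash).
def normalize_hint_fields(hint)
  case hint
  when String then { hint => 1 }   # single field name
  when Hash   then hint            # already a hint document
  when nil    then nil
  else
    h = {}
    hint.to_a.each { |k| h[k] = 1 } # e.g. an array of field names
    h
  end
end
```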
types.each do |t| unless Mongo::INDEX_TYPES.values.include?(t) raise MongoArgumentError, "Invalid index field #{t.inspect}; " + "should be one of " + Mongo::INDEX_TYPES.map {|k,v| "Mongo::#{k} (#{v})"}.join(', ') end end end def generate_indexes(field_spec, name, opts) selector = { :name => name, :ns => "#{@db.name}.#{@name}", :key => field_spec } selector.merge!(opts) begin insert_documents([selector], Mongo::DB::SYSTEM_INDEX_COLLECTION, false, {:w => 1}) rescue Mongo::OperationFailure => e if selector[:dropDups] && e.message =~ /^11000/ # NOP. If the user is intentionally dropping dups, we can ignore duplicate key errors. else raise Mongo::OperationFailure, "Failed to create index #{selector.inspect} with the following error: " + "#{e.message}" end end nil end def generate_index_name(spec) indexes = [] spec.each_pair do |field, type| indexes.push("#{field}_#{type}") end indexes.join("_") end def insert_buffer(collection_name, continue_on_error) message = BSON::ByteBuffer.new("", @connection.max_message_size) message.put_int(continue_on_error ? 1 : 0) BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{collection_name}") message end def insert_batch(message, documents, write_concern, continue_on_error, errors, collection_name=@name) begin send_insert_message(message, documents, collection_name, write_concern) rescue OperationFailure => ex raise ex unless continue_on_error errors << ex end end def send_insert_message(message, documents, collection_name, write_concern) instrument(:insert, :database => @db.name, :collection => collection_name, :documents => documents) do if Mongo::WriteConcern.gle?(write_concern) @connection.send_message_with_gle(Mongo::Constants::OP_INSERT, message, @db.name, nil, write_concern) else @connection.send_message(Mongo::Constants::OP_INSERT, message) end end end # Sends a Mongo::Constants::OP_INSERT message to the database. # Takes an array of +documents+, an optional +collection_name+, and a # +check_keys+ setting. 
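The default index naming shown in `generate_index_name` above is simple enough to sketch standalone: each field/direction pair becomes "field_direction", and the pairs are joined with underscores.

```ruby
# Standalone sketch of Collection#generate_index_name. Works for both a
# hash spec and an array-of-pairs spec, since both enumerate as pairs.
def generate_index_name(spec)
  spec.map { |field, type| "#{field}_#{type}" }.join("_")
end
```

So a compound spec like `[["a", 1], ["b", -1]]` yields the familiar server-side index name "a_1_b_-1".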
def insert_documents(documents, collection_name=@name, check_keys=true, write_concern={}, flags={}) continue_on_error = !!flags[:continue_on_error] collect_on_error = !!flags[:collect_on_error] error_docs = [] # docs with errors on serialization errors = [] # for all errors on insertion batch_start = 0 message = insert_buffer(collection_name, continue_on_error) documents.each_with_index do |doc, index| begin serialized_doc = BSON::BSON_CODER.serialize(doc, check_keys, true, @connection.max_bson_size) rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex raise ex unless collect_on_error error_docs << doc next end # Check if the current msg has room for this doc. If not, send current msg and create a new one. # GLE is a sep msg with its own header so shouldn't be included in padding with header size. total_message_size = Networking::STANDARD_HEADER_SIZE + message.size + serialized_doc.size if total_message_size > @connection.max_message_size docs_to_insert = documents[batch_start..index] - error_docs insert_batch(message, docs_to_insert, write_concern, continue_on_error, errors, collection_name) batch_start = index message = insert_buffer(collection_name, continue_on_error) redo else message.put_binary(serialized_doc.to_s) end end docs_to_insert = documents[batch_start..-1] - error_docs inserted_docs = documents - error_docs inserted_ids = inserted_docs.collect {|o| o[:_id] || o['_id']} # Avoid insertion if all docs failed serialization and collect_on_error if error_docs.empty? || !docs_to_insert.empty? insert_batch(message, docs_to_insert, write_concern, continue_on_error, errors, collection_name) # insert_batch collects errors if w > 0 and continue_on_error is true, # so raise the error here, as this is the last or only msg sent raise errors.last unless errors.empty? end collect_on_error ? 
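The size-based batching in `insert_documents` above can be sketched with byte counts standing in for serialized BSON documents. This is an illustrative variant, not driver code: where the original uses `redo` to resend the current document after flushing, this version flushes before appending; the 16-byte header matches the standard MongoDB wire-protocol header (`Networking::STANDARD_HEADER_SIZE` in the driver).

```ruby
# Illustrative sketch of the message-size batching loop in insert_documents.
HEADER_SIZE = 16 # bytes of wire-protocol header

def batch_by_size(doc_sizes, max_message_size)
  batches = []
  current, current_size = [], 0
  doc_sizes.each do |size|
    # If this doc would push the message past the cap, flush the batch first.
    if HEADER_SIZE + current_size + size > max_message_size && !current.empty?
      batches << current
      current, current_size = [], 0
    end
    current << size
    current_size += size
  end
  batches << current unless current.empty?
  batches
end
```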
[inserted_ids, error_docs] : inserted_ids end end end
# ruby-mongo-1.9.2/lib/mongo/cursor.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # A cursor over query results. Returned objects are hashes. class Cursor include Enumerable include Mongo::Constants include Mongo::Conversions include Mongo::Logging include Mongo::ReadPreference attr_reader :collection, :selector, :fields, :order, :hint, :snapshot, :timeout, :full_collection_name, :transformer, :options, :cursor_id, :show_disk_loc, :comment, :read, :tag_sets, :acceptable_latency # Create a new cursor. # # Note: cursors are created when executing queries using [Collection#find] and other # similar methods. Application developers shouldn't have to create cursors manually.
# # @return [Cursor] # # @core cursors constructor_details def initialize(collection, opts={}) @cursor_id = nil @db = collection.db @collection = collection @connection = @db.connection @logger = @connection.logger # Query selector @selector = opts[:selector] || {} # Special operators that form part of $query @order = opts[:order] @explain = opts[:explain] @hint = opts[:hint] @snapshot = opts[:snapshot] @max_scan = opts.fetch(:max_scan, nil) @return_key = opts.fetch(:return_key, nil) @show_disk_loc = opts.fetch(:show_disk_loc, nil) @comment = opts[:comment] # Wire-protocol settings @fields = convert_fields_for_query(opts[:fields]) @skip = opts[:skip] || 0 @limit = opts[:limit] || 0 @tailable = opts[:tailable] || false @timeout = opts.fetch(:timeout, true) @options = 0 # Use this socket for the query @socket = opts[:socket] @pool = nil @closed = false @query_run = false @transformer = opts[:transformer] @read = opts[:read] || @collection.read Mongo::ReadPreference::validate(@read) @tag_sets = opts[:tag_sets] || @collection.tag_sets @acceptable_latency = opts[:acceptable_latency] || @collection.acceptable_latency batch_size(opts[:batch_size] || 0) @full_collection_name = "#{@collection.db.name}.#{@collection.name}" @cache = [] @returned = 0 if(!@timeout) add_option(OP_QUERY_NO_CURSOR_TIMEOUT) end if(@read != :primary) add_option(OP_QUERY_SLAVE_OK) end if(@tailable) add_option(OP_QUERY_TAILABLE) end if @collection.name =~ /^\$cmd/ || @collection.name =~ /^system/ @command = true else @command = false end end # Guess whether the cursor is alive on the server. # # Note that this method only checks whether we have # a cursor id. The cursor may still have timed out # on the server. This will be indicated in the next # call to Cursor#next. # # @return [Boolean] def alive? @cursor_id && @cursor_id != 0 end # Get the next document, as specified by the cursor options. # # @return [Hash, Nil] the next document or Nil if no documents remain.
def next if @cache.length == 0 if @query_run && exhaust? close return nil else refresh end end doc = @cache.shift if doc && doc['$err'] err = doc['$err'] # If the server has stopped being the master (e.g., it's one of a # pair but it has died or something like that) then we close that # connection. The next request will re-open on master server. if err.include?("not master") @connection.close raise ConnectionFailure.new(err, doc['code'], doc) end raise OperationFailure.new(err, doc['code'], doc) end if @transformer.nil? doc else @transformer.call(doc) if doc end end alias :next_document :next # Reset this cursor on the server. Cursor options, such as the # query string and the values for skip and limit, are preserved. def rewind! close @cache.clear @cursor_id = nil @closed = false @query_run = false @n_received = nil true end # Determine whether this cursor has any remaining results. # # @return [Boolean] def has_next? num_remaining > 0 end # Get the size of the result set for this query. # # @param [Boolean] skip_and_limit whether or not to take notice of skip and limit # # @return [Integer] the number of objects in the result set for this query. # # @raise [OperationFailure] on a database error. def count(skip_and_limit = false) command = BSON::OrderedHash["count", @collection.name, "query", @selector] if skip_and_limit command.merge!(BSON::OrderedHash["limit", @limit]) if @limit != 0 command.merge!(BSON::OrderedHash["skip", @skip]) if @skip != 0 end command.merge!(BSON::OrderedHash["fields", @fields]) response = @db.command(command, :read => @read, :comment => @comment) return response['n'].to_i if Mongo::Support.ok?(response) return 0 if response['errmsg'] == "ns missing" raise OperationFailure.new("Count failed: #{response['errmsg']}", response['code'], response) end # Sort this cursor's results. # # This method overrides any sort order specified in the Collection#find # method, and only the last sort applied has an effect.
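The error routing in Cursor#next above makes one distinction worth isolating: a `$err` message containing "not master" is treated as a connection-level failure (the connection is closed and re-opened on the next request), while anything else is an operation failure. A standalone sketch, with symbols standing in for the driver's exception classes:

```ruby
# Illustrative sketch of the $err classification in Cursor#next.
def classify_server_error(err_message)
  err_message.include?("not master") ? :connection_failure : :operation_failure
end
```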
# # @param [Symbol, Array, Hash, OrderedHash] order either 1) a key to sort by 2) # an array of [key, direction] pairs to sort by or 3) a hash of # field => direction pairs to sort by. Direction should be specified as # Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING # (or :descending / :desc) # # @raise [InvalidOperation] if this cursor has already been used. # # @raise [InvalidSortValueError] if the specified order is invalid. def sort(order, direction=nil) check_modifiable order = [[order, direction]] unless direction.nil? @order = order self end # Limit the number of results to be returned by this cursor. # # This method overrides any limit specified in the Collection#find method, # and only the last limit applied has an effect. # # @return [Integer] the current number_to_return if no parameter is given. # # @raise [InvalidOperation] if this cursor has already been used. # # @core limit limit-instance_method def limit(number_to_return=nil) return @limit unless number_to_return check_modifiable if (number_to_return != 0) && exhaust? raise MongoArgumentError, "Limit is incompatible with exhaust option." end @limit = number_to_return self end # Skips the first +number_to_skip+ results of this cursor. # Returns the current number_to_skip if no parameter is given. # # This method overrides any skip specified in the Collection#find method, # and only the last skip applied has an effect. # # @return [Integer] # # @raise [InvalidOperation] if this cursor has already been used. def skip(number_to_skip=nil) return @skip unless number_to_skip check_modifiable @skip = number_to_skip self end # Set the batch size for server responses. # # Note that the batch size will take effect only on queries # where the number to be returned is greater than 100. # # This can not override MongoDB's limit on the amount of data it will # return to the client. Depending on server version this can be 4-16mb. # # @param [Integer] size either 0 or some integer greater than 1. 
If 0, # the server will determine the batch size. # # @return [Cursor] def batch_size(size=nil) return @batch_size unless size check_modifiable if size < 0 || size == 1 raise ArgumentError, "Invalid value for batch_size #{size}; must be 0 or > 1." else @batch_size = @limit != 0 && size > @limit ? @limit : size end self end # Iterate over each document in this cursor, yielding it to the given # block, if provided. An Enumerator is returned if no block is given. # # Iterating over an entire cursor will close it. # # @yield passes each document to a block for processing. # # @example if 'comments' represents a collection of comments: # comments.find.each do |doc| # puts doc['user'] # end def each if block_given? || !defined?(Enumerator) while doc = self.next yield doc end else Enumerator.new do |yielder| while doc = self.next yielder.yield doc end end end end # Receive all the documents from this cursor as an array of hashes. # # Notes: # # If you've already started iterating over the cursor, the array returned # by this method contains only the remaining documents. See Cursor#rewind! if you # need to reset the cursor. # # Use of this method is discouraged - in most cases, it's much more # efficient to retrieve documents as you need them by iterating over the cursor. # # @return [Array] an array of documents. def to_a super end # Get the explain plan for this cursor. # # @return [Hash] a document containing the explain plan for this cursor. # # @core explain explain-instance_method def explain c = Cursor.new(@collection, query_options_hash.merge(:limit => -@limit.abs, :explain => true)) explanation = c.next_document c.close explanation end # Close the cursor. # # Note: if a cursor is read until exhausted (read until Mongo::Constants::OP_QUERY or # Mongo::Constants::OP_GETMORE returns zero for the cursor id), there is no need to # close it manually. 
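The argument handling in Cursor#batch_size above can be sketched as a pure function: 0 means "let the server decide", 1 and negative values are rejected, and a batch size larger than a nonzero limit is clamped down to that limit. The function name here is ours, for illustration.

```ruby
# Standalone sketch of the validation and clamping in Cursor#batch_size.
def effective_batch_size(size, limit)
  if size < 0 || size == 1
    raise ArgumentError, "Invalid value for batch_size #{size}; must be 0 or > 1."
  end
  # A batch never needs to exceed the remaining limit.
  (limit != 0 && size > limit) ? limit : size
end
```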
# # Note also: Collection#find takes an optional block argument which can be used to # ensure that your cursors get closed. # # @return [True] def close if @cursor_id && @cursor_id != 0 message = BSON::ByteBuffer.new([0, 0, 0, 0]) message.put_int(1) message.put_long(@cursor_id) log(:debug, "Cursor#close #{@cursor_id}") @connection.send_message( Mongo::Constants::OP_KILL_CURSORS, message, :pool => @pool ) end @cursor_id = 0 @closed = true end # Is this cursor closed? # # @return [Boolean] def closed? @closed end # Returns an integer indicating which query options have been selected. # # @return [Integer] # # @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY # The MongoDB wire protocol. def query_opts warn "The method Cursor#query_opts has been deprecated " + "and will be removed in v2.0. Use Cursor#options instead." @options end # Add an option to the query options bitfield. # # @param opt a valid query option # # @raise InvalidOperation if this method is run after the cursor has been # iterated for the first time. # # @return [Integer] the current value of the options bitfield for this cursor. # # @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY def add_option(opt) check_modifiable if exhaust?(opt) if @limit != 0 raise MongoArgumentError, "Exhaust is incompatible with limit." elsif @connection.mongos? raise MongoArgumentError, "Exhaust is incompatible with mongos." end end @options |= opt @options end # Remove an option from the query options bitfield. # # @param opt a valid query option # # @raise InvalidOperation if this method is run after the cursor has been # iterated for the first time. # # @return [Integer] the current value of the options bitfield for this cursor.
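The bitfield that add_option and remove_option maintain is ordinary bit arithmetic: flags are OR-ed in, masked out with the complement, and `exhaust?` tests one bit. The flag values below follow the OP_QUERY bit positions of the MongoDB wire protocol (tailable = bit 1, slaveOk = bit 2, noCursorTimeout = bit 4, exhaust = bit 6), which is what the driver's constants encode.

```ruby
# Standalone sketch of the query-options bitfield behind add_option /
# remove_option / exhaust?.
OP_QUERY_TAILABLE          = 2 ** 1
OP_QUERY_SLAVE_OK          = 2 ** 2
OP_QUERY_NO_CURSOR_TIMEOUT = 2 ** 4
OP_QUERY_EXHAUST           = 2 ** 6

options = 0
options |= OP_QUERY_SLAVE_OK           # add_option
options |= OP_QUERY_EXHAUST            # add_option
options &= ~OP_QUERY_SLAVE_OK          # remove_option
exhaust = !(options & OP_QUERY_EXHAUST).zero?  # exhaust?
```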
# # @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY def remove_option(opt) check_modifiable @options &= ~opt @options end # Get the query options for this Cursor. # # @return [Hash] def query_options_hash BSON::OrderedHash[ :selector => @selector, :fields => @fields, :skip => @skip, :limit => @limit, :order => @order, :hint => @hint, :snapshot => @snapshot, :timeout => @timeout, :max_scan => @max_scan, :return_key => @return_key, :show_disk_loc => @show_disk_loc, :comment => @comment ] end # Clean output for inspect. def inspect "" end private # Convert the +:fields+ parameter from a single field name or an array # of fields names to a hash, with the field names for keys and '1' for each # value. def convert_fields_for_query(fields) case fields when String, Symbol {fields => 1} when Array return nil if fields.length.zero? fields.inject({}) { |hash, field| hash[field] = 1; hash } when Hash return fields end end # Return the number of documents remaining for this cursor. def num_remaining if @cache.length == 0 if @query_run && exhaust? close return 0 else refresh end end @cache.length end # Refresh the documents in @cache. This means either # sending the initial query or sending a GET_MORE operation. def refresh if !@query_run send_initial_query elsif !@cursor_id.zero? send_get_more end end # Sends initial query -- which is always a read unless it is a command # # Upon ConnectionFailure, tries query 3 times if socket was not provided # and the query is either not a command or is a secondary_ok command. # # Pins pools upon successful read and unpins pool upon ConnectionFailure # def send_initial_query tries = 0 instrument(:find, instrument_payload) do begin message = construct_query_message socket = @socket || checkout_socket_from_connection results, @n_received, @cursor_id = @connection.receive_message( Mongo::Constants::OP_QUERY, message, nil, socket, @command, nil, exhaust?) 
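The `convert_fields_for_query` helper above is pure and easy to exercise standalone: a single field name becomes a `{field => 1}` projection, an array of names becomes a projection hash (or nil when empty, i.e. "no projection"), and a hash (which may contain exclusions like `{"_id" => 0}`) passes through untouched.

```ruby
# Standalone sketch of Cursor#convert_fields_for_query.
def convert_fields_for_query(fields)
  case fields
  when String, Symbol
    { fields => 1 }
  when Array
    return nil if fields.empty?
    fields.inject({}) { |hash, field| hash[field] = 1; hash }
  when Hash
    fields
  end
end
```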
rescue ConnectionFailure => ex socket.close if socket @pool = nil @connection.unpin_pool @connection.refresh if tries < 3 && !@socket && (!@command || Mongo::Support::secondary_ok?(@selector)) tries += 1 retry else raise ex end rescue OperationFailure, OperationTimeout => ex raise ex ensure socket.checkin unless @socket || socket.nil? end if !@socket && !@command @connection.pin_pool(socket.pool, read_preference) end @returned += @n_received @cache += results @query_run = true close_cursor_if_query_complete end end def send_get_more message = BSON::ByteBuffer.new([0, 0, 0, 0]) # DB name. BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@collection.name}") # Number of results to return. if @limit > 0 limit = @limit - @returned if @batch_size > 0 limit = limit < @batch_size ? limit : @batch_size end message.put_int(limit) else message.put_int(@batch_size) end # Cursor id. message.put_long(@cursor_id) log(:debug, "cursor.refresh() for cursor #{@cursor_id}") if @logger socket = @pool.checkout begin results, @n_received, @cursor_id = @connection.receive_message( Mongo::Constants::OP_GET_MORE, message, nil, socket, @command, nil) ensure socket.checkin end @returned += @n_received @cache += results close_cursor_if_query_complete end def checkout_socket_from_connection begin if @pool socket = @pool.checkout elsif @command && !Mongo::Support::secondary_ok?(@selector) socket = @connection.checkout_reader({:mode => :primary}) else socket = @connection.checkout_reader(read_preference) end rescue SystemStackError, NoMemoryError, SystemCallError => ex @connection.close raise ex end @pool = socket.pool socket end def checkin_socket(sock) @connection.checkin(sock) end def construct_query_message message = BSON::ByteBuffer.new("", @connection.max_message_size) message.put_int(@options) BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@collection.name}") message.put_int(@skip) @batch_size > 1 ? 
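The numberToReturn computation in send_get_more above reduces to: with no user limit, request a full batch; with a limit, request only the outstanding remainder, further capped by the batch size. As a standalone sketch (the function name is ours):

```ruby
# Standalone sketch of the numberToReturn logic in Cursor#send_get_more.
def get_more_number_to_return(limit, returned, batch_size)
  return batch_size if limit.zero?        # no limit: batch size rules
  remaining = limit - returned            # don't over-fetch past the limit
  batch_size > 0 && remaining > batch_size ? batch_size : remaining
end
```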
message.put_int(@batch_size) : message.put_int(@limit) spec = query_contains_special_fields? ? construct_query_spec : @selector message.put_binary(BSON::BSON_CODER.serialize(spec, false, false, @connection.max_bson_size).to_s) message.put_binary(BSON::BSON_CODER.serialize(@fields, false, false, @connection.max_bson_size).to_s) if @fields message end def instrument_payload log = { :database => @db.name, :collection => @collection.name, :selector => selector } log[:fields] = @fields if @fields log[:skip] = @skip if @skip && (@skip != 0) log[:limit] = @limit if @limit && (@limit != 0) log[:order] = @order if @order log end def construct_query_spec return @selector if @selector.has_key?('$query') spec = BSON::OrderedHash.new spec['$query'] = @selector spec['$orderby'] = Mongo::Support.format_order_clause(@order) if @order spec['$hint'] = @hint if @hint && @hint.length > 0 spec['$explain'] = true if @explain spec['$snapshot'] = true if @snapshot spec['$maxScan'] = @max_scan if @max_scan spec['$returnKey'] = true if @return_key spec['$showDiskLoc'] = true if @show_disk_loc spec['$comment'] = @comment if @comment if needs_read_pref? read_pref = Mongo::ReadPreference::mongos(@read, @tag_sets) spec['$readPreference'] = read_pref if read_pref end spec end def needs_read_pref? @connection.mongos? && @read != :primary end def query_contains_special_fields? @order || @explain || @hint || @snapshot || @show_disk_loc || @max_scan || @return_key || @comment || needs_read_pref? end def close_cursor_if_query_complete if @limit > 0 && @returned >= @limit close end end # Check whether the exhaust option is set # # @return [true, false] The state of the exhaust flag. def exhaust?(opts = options) !(opts & OP_QUERY_EXHAUST).zero? end def check_modifiable if @query_run || @closed raise InvalidOperation, "Cannot modify the query once it has been run or closed." 
end end end end ruby-mongo-1.9.2/lib/mongo/db.rb000066400000000000000000000576231221200727400164320ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'socket' require 'thread' module Mongo # A MongoDB database. class DB include Mongo::WriteConcern SYSTEM_NAMESPACE_COLLECTION = 'system.namespaces' SYSTEM_INDEX_COLLECTION = 'system.indexes' SYSTEM_PROFILE_COLLECTION = 'system.profile' SYSTEM_USER_COLLECTION = 'system.users' SYSTEM_JS_COLLECTION = 'system.js' SYSTEM_COMMAND_COLLECTION = '$cmd' PROFILE_LEVEL = { :off => 0, :slow_only => 1, :all => 2 } # Counter for generating unique request ids. @@current_request_id = 0 # Strict mode enforces collection existence checks. When +true+, # asking for a collection that does not exist, or trying to create a # collection that already exists, raises an error. # # Strict mode is disabled by default, but enabled (+true+) at any time. # # @deprecated Support for strict will be removed in version 2.0 of the driver. def strict=(value) unless ENV['TEST_MODE'] warn "Support for strict mode has been deprecated and will be " + "removed in version 2.0 of the driver." end @strict = value end # Returns the value of the +strict+ flag. # # @deprecated Support for strict will be removed in version 2.0 of the driver. def strict? @strict end # The name of the database and the local write concern options. 
attr_reader :name, :write_concern # The Mongo::MongoClient instance connecting to the MongoDB server. attr_reader :connection # The length of time that Collection.ensure_index should cache index calls attr_accessor :cache_time # Read Preference attr_accessor :read, :tag_sets, :acceptable_latency # Instances of DB are normally obtained by calling Mongo#db. # # @param [String] name the database name. # @param [Mongo::MongoClient] client a connection object pointing to MongoDB. Note # that databases are usually instantiated via the MongoClient class. See the examples below. # # @option opts [Boolean] :strict (False) [DEPRECATED] If true, collections existence checks are # performed during a number of relevant operations. See DB#collection, DB#create_collection and # DB#drop_collection. # # @option opts [Object, #create_pk(doc)] :pk (BSON::ObjectId) A primary key factory object, # which should take a hash and return a hash which merges the original hash with any primary key # fields the factory wishes to inject. (NOTE: if the object already has a primary key, # the factory should not inject a new key). # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged # @option opts [Boolean] :j (false) Set journal acknowledgement # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout # @option opts [Boolean] :fsync (false) Set fsync acknowledgement. # # Notes on write concern: # These write concern options are propagated to Collection objects instantiated off of this DB. If no # options are provided, the default write concern set on this instance's MongoClient object will be used. This # default can be overridden upon instantiation of any collection by explicitly setting write concern options # on initialization or at the time of an operation. # # @option opts [Integer] :cache_time (300) Set the time that all ensure_index calls should cache the command. 
# # @core databases constructor_details def initialize(name, client, opts={}) @name = Mongo::Support.validate_db_name(name) @connection = client @strict = opts[:strict] @pk_factory = opts[:pk] @write_concern = get_write_concern(opts, client) @read = opts[:read] || @connection.read Mongo::ReadPreference::validate(@read) @tag_sets = opts.fetch(:tag_sets, @connection.tag_sets) @acceptable_latency = opts.fetch(:acceptable_latency, @connection.acceptable_latency) @cache_time = opts[:cache_time] || 300 #5 minutes. end # Authenticate with the given username and password. Note that mongod # must be started with the --auth option for authentication to be enabled. # # @param [String] username # @param [String] password # @param [Boolean] save_auth # Save this authentication to the client object using MongoClient#add_auth. This # will ensure that the authentication will be applied to all sockets and upon # database reconnect. # @param source [String] Database with user credentials. This should be used to # authenticate against a database when the credentials exist elsewhere. # # @note save_auth must be true when using connection pooling or providing a source # for credentials. # # @return [Boolean] # # @raise [AuthenticationError] # # @core authenticate authenticate-instance_method def authenticate(username, password=nil, save_auth=true, source=nil) if (@connection.pool_size > 1 || source) && !save_auth raise MongoArgumentError, "If using connection pooling or delegated auth, " + ":save_auth must be set to true." 
end begin socket = @connection.checkout_reader(:mode => :primary_preferred) issue_authentication(username, password, save_auth, :socket => socket, :source => source) ensure socket.checkin if socket end @connection.authenticate_pools true end def issue_authentication(username, password, save_auth=true, opts={}) doc = command({:getnonce => 1}, :check_response => false, :socket => opts[:socket]) raise MongoDBError, "Error retrieving nonce: #{doc}" unless ok?(doc) nonce = doc['nonce'] # issue authentication against this database if source option not provided source = opts[:source] db = source ? @connection[source] : self auth = BSON::OrderedHash.new auth['authenticate'] = 1 auth['user'] = username auth['nonce'] = nonce auth['key'] = Mongo::Support.auth_key(username, password, nonce) if ok?(doc = db.command(auth, :check_response => false, :socket => opts[:socket])) @connection.add_auth(name, username, password, source) if save_auth else message = "Failed to authenticate user '#{username}' on db '#{db.name}'" raise Mongo::AuthenticationError.new(message, doc['code'], doc) end true end # Adds a stored Javascript function to the database which can be executed # server-side in map_reduce, db.eval and $where clauses. # # @param [String] function_name # @param [String] code # # @return [String] the function name saved to the database def add_stored_function(function_name, code) self[SYSTEM_JS_COLLECTION].save( { "_id" => function_name, :value => BSON::Code.new(code) } ) end # Removes a stored Javascript function from the database. Returns # false if the function does not exist. # # @param [String] function_name # # @return [Boolean] def remove_stored_function(function_name) return false unless self[SYSTEM_JS_COLLECTION].find_one({"_id" => function_name}) self[SYSTEM_JS_COLLECTION].remove({"_id" => function_name}, :w => 1) end # Adds a user to this database for use with authentication.
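The digests behind issue_authentication above can be sketched without a server, assuming the MONGODB-CR scheme implemented by `Mongo::Support.hash_password` and `Mongo::Support.auth_key`: the stored password digest is md5 of "user:mongo:password", and the per-login key is md5 of the nonce, the username, and that digest concatenated. Treat this as an illustrative reconstruction, not the driver's code.

```ruby
require 'digest/md5'

# Sketch of the MONGODB-CR digests (assumed to match Mongo::Support).
def hash_password(username, password)
  Digest::MD5.hexdigest("#{username}:mongo:#{password}")
end

def auth_key(username, password, nonce)
  # Key sent back in the 'authenticate' command for the server's nonce.
  Digest::MD5.hexdigest("#{nonce}#{username}#{hash_password(username, password)}")
end
```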
If the user already # exists in the system, the password and any additional fields provided in opts # will be updated. # # @param [String] username # @param [String] password # @param [Boolean] read_only # Create a read-only user. # # @param [Hash] opts # Optional fields for the user document (e.g. +userSource+, or +roles+) # # See {http://docs.mongodb.org/manual/reference/privilege-documents} # for more information. # # @note The use of the opts argument to provide or update additional fields # on the user document requires MongoDB >= 2.4.0 # # @return [Hash] an object representing the user. def add_user(username, password=nil, read_only=false, opts={}) users = self[SYSTEM_USER_COLLECTION] user = users.find_one({:user => username}) || {:user => username} user['pwd'] = Mongo::Support.hash_password(username, password) if password user['readOnly'] = true if read_only user.merge!(opts) begin users.save(user) rescue OperationFailure => ex # adding first admin user fails GLE in MongoDB 2.2 raise ex unless ex.message =~ /login/ end user end # Remove the given user from this database. Returns false if the user # doesn't exist in the system. # # @param [String] username # # @return [Boolean] def remove_user(username) if self[SYSTEM_USER_COLLECTION].find_one({:user => username}) self[SYSTEM_USER_COLLECTION].remove({:user => username}, :w => 1) else false end end # Deauthorizes use for this database for this client connection. Also removes # any saved authentication in the MongoClient class associated with this # database. # # @raise [MongoDBError] if logging out fails. # # @return [Boolean] def logout(opts={}) auth = @connection.auths.find { |a| a[:db_name] == name } db = auth && auth[:source] ? @connection[auth[:source]] : self auth ? 
@connection.logout_pools(db.name) : db.issue_logout(opts) @connection.remove_auth(db.name) end def issue_logout(opts={}) unless ok?(doc = command({:logout => 1}, :socket => opts[:socket])) raise MongoDBError, "Error logging out: #{doc.inspect}" end true end # Get an array of collection names in this database. # # @return [Array] def collection_names names = collections_info.collect { |doc| doc['name'] || '' } names = names.delete_if {|name| name.index(@name).nil? || name.index('$')} names.map {|name| name.sub(@name + '.', '')} end # Get an array of Collection instances, one for each collection in this database. # # @return [Array] def collections collection_names.map do |name| Collection.new(name, self) end end # Get info on system namespaces (collections). This method returns # a cursor which can be iterated over. For each collection, a hash # will be yielded containing a 'name' string and, optionally, an 'options' hash. # # @param [String] coll_name return info for the specified collection only. # # @return [Mongo::Cursor] def collections_info(coll_name=nil) selector = {} selector[:name] = full_collection_name(coll_name) if coll_name Cursor.new(Collection.new(SYSTEM_NAMESPACE_COLLECTION, self), :selector => selector) end # Create a new collection. If +strict+ is true, will raise an error if # collection +name+ already exists. # # @param [String, Symbol] name the name of the new collection. # # @option opts [Boolean] :capped (False) creates a capped collection. # # @option opts [Integer] :size (Nil) If +capped+ is +true+, # specifies the maximum number of bytes for the capped collection. # If +false+, specifies the number of bytes allocated # for the initial extent of the collection. # # @option opts [Integer] :max (Nil) If +capped+ is +true+, indicates # the maximum number of records in a capped collection.
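The filtering in DB#collection_names above keeps only namespaces that belong to this database and contain no '$' (index and other special namespaces), then strips the "db." prefix. A standalone sketch over plain name strings:

```ruby
# Standalone sketch of the namespace filtering in DB#collection_names.
def collection_names_from(db_name, namespaces)
  kept = namespaces.reject { |ns| ns.index(db_name).nil? || ns.index('$') }
  kept.map { |ns| ns.sub(db_name + '.', '') }
end
```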
# # @raise [MongoDBError] raised under two conditions: # either we're in +strict+ mode and the collection # already exists or collection creation fails on the server. # # @return [Mongo::Collection] def create_collection(name, opts={}) name = name.to_s if strict? && collection_names.include?(name) raise MongoDBError, "Collection '#{name}' already exists. (strict=true)" end begin cmd = BSON::OrderedHash.new cmd[:create] = name doc = command(cmd.merge(opts || {})) return Collection.new(name, self, :pk => @pk_factory) if ok?(doc) rescue OperationFailure => e return Collection.new(name, self, :pk => @pk_factory) if e.message =~ /exists/ raise e end raise MongoDBError, "Error creating collection: #{doc.inspect}" end # Get a collection by name. # # @param [String, Symbol] name the collection name. # @param [Hash] opts any valid options that can be passed to Collection#new. # # @raise [MongoDBError] if collection does not already exist and we're in # +strict+ mode. # # @return [Mongo::Collection] def collection(name, opts={}) if strict? && !collection_names.include?(name.to_s) raise MongoDBError, "Collection '#{name}' doesn't exist. (strict=true)" else opts = opts.dup opts.merge!(:pk => @pk_factory) unless opts[:pk] Collection.new(name, self, opts) end end alias_method :[], :collection # Drop a collection by +name+. # # @param [String, Symbol] name # # @return [Boolean] +true+ on success or +false+ if the collection name doesn't exist. def drop_collection(name) return false if strict? && !collection_names.include?(name.to_s) begin ok?(command(:drop => name)) rescue OperationFailure false end end # Run the getlasterror command with the specified replication options. # # @option opts [Boolean] :fsync (false) # @option opts [Integer] :w (nil) # @option opts [Integer] :wtimeout (nil) # @option opts [Boolean] :j (false) # # @return [Hash] the entire response to getlasterror. # # @raise [MongoDBError] if the operation fails. 
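The getlasterror command documented above is assembled with the command key first, followed by any write-concern options. As a rough standalone sketch of that ordering (the helper name is illustrative; a plain Hash, which preserves insertion order on Ruby >= 1.9, stands in for BSON::OrderedHash):

```ruby
# Sketch: assemble a getlasterror command document the way
# DB#get_last_error does. A plain Hash stands in for
# BSON::OrderedHash here; on Ruby >= 1.9 it preserves insertion
# order, so the command key stays first.
def build_gle_command(opts = {})
  cmd = { :getlasterror => 1 }  # command key must come first
  cmd.merge!(opts)              # e.g. :w, :wtimeout, :j, :fsync
  cmd
end

cmd = build_gle_command(:w => 2, :wtimeout => 200)
cmd.keys.first  # => :getlasterror
```

This mirrors why DB#command insists on an OrderedHash for multi-key selectors on Ruby 1.8: the server reads the first key as the command name.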
def get_last_error(opts={}) cmd = BSON::OrderedHash.new cmd[:getlasterror] = 1 cmd.merge!(opts) doc = command(cmd, :check_response => false) raise MongoDBError, "Error retrieving last error: #{doc.inspect}" unless ok?(doc) doc end # Return +true+ if an error was caused by the most recently executed # database operation. # # @return [Boolean] def error? get_last_error['err'] != nil end # Get the most recent error to have occurred on this database. # # This command only returns errors that have occurred since the last call to # DB#reset_error_history - returns +nil+ if there is no such error. # # @return [String, Nil] the text of the error or +nil+ if no error has occurred. def previous_error error = command(:getpreverror => 1) error["err"] ? error : nil end # Reset the error history of this database # # Calls to DB#previous_error will only return errors that have occurred # since the most recent call to this method. # # @return [Hash] def reset_error_history command(:reseterror => 1) end # Dereference a DBRef, returning the document it points to. # # @param [Mongo::DBRef] dbref # # @return [Hash] the document indicated by the db reference. # # @see http://www.mongodb.org/display/DOCS/DB+Ref MongoDB DBRef spec. def dereference(dbref) collection(dbref.namespace).find_one("_id" => dbref.object_id) end # Evaluate a JavaScript expression in MongoDB. # # @param [String, Code] code a JavaScript expression to evaluate server-side. # @param [Integer, Hash] args any additional argument to be passed to the +code+ expression when # it's run on the server. # # @return [String] the return value of the function. def eval(code, *args) unless code.is_a?(BSON::Code) code = BSON::Code.new(code) end cmd = BSON::OrderedHash.new cmd[:$eval] = code cmd[:args] = args doc = command(cmd) doc['retval'] end # Rename a collection. # # @param [String] from original collection name. # @param [String] to new collection name. # # @return [True] returns +true+ on success. 
# # @raise MongoDBError if there's an error renaming the collection. def rename_collection(from, to) cmd = BSON::OrderedHash.new cmd[:renameCollection] = "#{@name}.#{from}" cmd[:to] = "#{@name}.#{to}" doc = DB.new('admin', @connection).command(cmd, :check_response => false) ok?(doc) || raise(MongoDBError, "Error renaming collection: #{doc.inspect}") end # Drop an index from a given collection. Normally called from # Collection#drop_index or Collection#drop_indexes. # # @param [String] collection_name # @param [String] index_name # # @return [True] returns +true+ on success. # # @raise MongoDBError if there's an error dropping the index. def drop_index(collection_name, index_name) cmd = BSON::OrderedHash.new cmd[:deleteIndexes] = collection_name cmd[:index] = index_name.to_s doc = command(cmd, :check_response => false) ok?(doc) || raise(MongoDBError, "Error with drop_index command: #{doc.inspect}") end # Get information on the indexes for the given collection. # Normally called by Collection#index_information. # # @param [String] collection_name # # @return [Hash] keys are index names and the values are lists of [key, type] pairs # defining the index. def index_information(collection_name) sel = {:ns => full_collection_name(collection_name)} info = {} Cursor.new(Collection.new(SYSTEM_INDEX_COLLECTION, self), :selector => sel).each do |index| info[index['name']] = index end info end # Return stats on this database. Uses MongoDB's dbstats command. # # @return [Hash] def stats self.command({:dbstats => 1}) end # Return +true+ if the supplied +doc+ contains an 'ok' field with the value 1. # # @param [Hash] doc # # @return [Boolean] def ok?(doc) Mongo::Support.ok?(doc) end # Send a command to the database. # # Note: DB commands must start with the "command" key. For this reason, # any selector containing more than one key must be an OrderedHash. # # Note also that a command in MongoDB is just a kind of query # that occurs on the system command collection ($cmd). 
Examine this method's implementation # to see how it works. # # @param [OrderedHash, Hash] selector an OrderedHash, or a standard Hash with just one # key, specifying the command to be performed. In Ruby 1.9, OrderedHash isn't necessary since # hashes are ordered by default. # # @option opts [Boolean] :check_response (true) If +true+, raises an exception if the # command fails. # @option opts [Socket] :socket a socket to use for sending the command. This is mainly for internal use. # @option opts [:primary, :secondary] :read Read preference for this command. See Collection#find for # more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @return [Hash] # # @core commands command_instance-method def command(selector, opts={}) check_response = opts.fetch(:check_response, true) socket = opts[:socket] raise MongoArgumentError, "Command must be given a selector" unless selector.is_a?(Hash) && !selector.empty? if selector.keys.length > 1 && RUBY_VERSION < '1.9' && selector.class != BSON::OrderedHash raise MongoArgumentError, "DB#command requires an OrderedHash when hash contains multiple keys" end if read_pref = opts[:read] Mongo::ReadPreference::validate(read_pref) unless read_pref == :primary || Mongo::Support::secondary_ok?(selector) raise MongoArgumentError, "Command is not supported on secondaries: #{selector.keys.first}" end end begin result = Cursor.new( system_command_collection, :limit => -1, :selector => selector, :socket => socket, :read => read_pref, :comment => opts[:comment]).next_document rescue OperationFailure => ex raise OperationFailure, "Database command '#{selector.keys.first}' failed: #{ex.message}" end raise OperationFailure, "Database command '#{selector.keys.first}' failed: returned null." unless result if check_response && !ok?(result) message = "Database command '#{selector.keys.first}' failed: (" message << result.map do |key, value| "#{key}: '#{value}'" end.join('; ') message << ').' 
code = result['code'] || result['assertionCode'] raise OperationFailure.new(message, code, result) end result end # A shortcut returning db plus dot plus collection name. # # @param [String] collection_name # # @return [String] def full_collection_name(collection_name) "#{@name}.#{collection_name}" end # The primary key factory object (or +nil+). # # @return [Object, Nil] def pk_factory @pk_factory end # Specify a primary key factory if not already set. # # @raise [MongoArgumentError] if the primary key factory has already been set. def pk_factory=(pk_factory) raise MongoArgumentError, "Cannot change primary key factory once it's been set" if @pk_factory @pk_factory = pk_factory end # Return the current database profiling level. If profiling is enabled, you can # get the results using DB#profiling_info. # # @return [Symbol] :off, :slow_only, or :all # # @core profiling profiling_level-instance_method def profiling_level cmd = BSON::OrderedHash.new cmd[:profile] = -1 doc = command(cmd, :check_response => false) raise "Error with profile command: #{doc.inspect}" unless ok?(doc) level_sym = PROFILE_LEVEL.invert[doc['was'].to_i] raise "Error: illegal profiling level value #{doc['was']}" unless level_sym level_sym end # Set this database's profiling level. If profiling is enabled, you can # get the results using DB#profiling_info. # # @param [Symbol] level acceptable options are +:off+, +:slow_only+, or +:all+. def profiling_level=(level) cmd = BSON::OrderedHash.new cmd[:profile] = PROFILE_LEVEL[level] doc = command(cmd, :check_response => false) ok?(doc) || raise(MongoDBError, "Error with profile command: #{doc.inspect}") end # Get the current profiling information. # # @return [Array] a list of documents containing profiling information. def profiling_info Cursor.new(Collection.new(SYSTEM_PROFILE_COLLECTION, self), :selector => {}).to_a end # Validate a named collection. # # @param [String] name the collection name. # # @return [Hash] validation information. 
#
# @raise [MongoDBError] if the command fails or there's a problem with the validation
#   data, or if the collection is invalid.
def validate_collection(name)
  cmd = BSON::OrderedHash.new
  cmd[:validate] = name
  cmd[:full] = true
  doc = command(cmd, :check_response => false)
  raise MongoDBError, "Error with validate command: #{doc.inspect}" unless ok?(doc)
  if (doc.has_key?('valid') && !doc['valid']) || (doc['result'] =~ /\b(exception|corrupt)\b/i)
    raise MongoDBError, "Error: invalid collection #{name}: #{doc.inspect}"
  end
  doc
end

private

def system_command_collection
  Collection.new(SYSTEM_COMMAND_COLLECTION, self)
end
end
end

ruby-mongo-1.9.2/lib/mongo/exceptions.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo

  # Generic Mongo Ruby Driver exception class.
  class MongoRubyError < StandardError; end

  # Raised when MongoDB itself has returned an error.
  class MongoDBError < RuntimeError

    # @return The entire failed command's response object, if available.
    attr_reader :result

    # @return The failed command's error code, if available.
    attr_reader :error_code

    def initialize(message=nil, error_code=nil, result=nil)
      @error_code = error_code
      @result = result
      super(message)
    end
  end

  # Raised on fatal errors to GridFS.
  class GridError < MongoRubyError; end

  # Raised when a requested GridFS file cannot be found.
  class GridFileNotFound < GridError; end

  # Raised when md5 validation of a GridFS file fails.
class GridMD5Failure < GridError; end

  # Raised when invalid arguments are sent to Mongo Ruby methods.
  class MongoArgumentError < MongoRubyError; end

  # Raised on failures in connection to the database server.
  class ConnectionError < MongoRubyError; end

  # Raised on failures in connection to a replica set.
  class ReplicaSetConnectionError < ConnectionError; end

  # Raised when a connection to the database server times out.
  class ConnectionTimeoutError < MongoRubyError; end

  # Raised when no tags in a read preference map to a given connection.
  class NodeWithTagsNotFound < MongoRubyError; end

  # Raised when a connection operation fails.
  class ConnectionFailure < MongoDBError; end

  # Raised when authentication fails.
  class AuthenticationError < MongoDBError; end

  # Raised when a database operation fails.
  class OperationFailure < MongoDBError; end

  # Raised when a socket read operation times out.
  class OperationTimeout < SocketError; end

  # Raised when a client attempts to perform an invalid operation.
  class InvalidOperation < MongoDBError; end

  # Raised when an invalid collection or database name is used (invalid namespace name).
  class InvalidNSName < RuntimeError; end

  # Raised when the client supplies an invalid value to sort by.
  class InvalidSortValueError < MongoRubyError; end
end

ruby-mongo-1.9.2/lib/mongo/gridfs/grid.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo

  # Implementation of the MongoDB GridFS specification. A file store.
  class Grid
    include GridExt::InstanceMethods

    DEFAULT_FS_NAME = 'fs'

    # Initialize a new Grid instance, consisting of a MongoDB database
    # and a filesystem prefix if not using the default.
    #
    # @core gridfs
    #
    # @see GridFileSystem
    def initialize(db, fs_name=DEFAULT_FS_NAME)
      raise MongoArgumentError, "db must be a Mongo::DB." unless db.is_a?(Mongo::DB)

      @db      = db
      @files   = @db["#{fs_name}.files"]
      @chunks  = @db["#{fs_name}.chunks"]
      @fs_name = fs_name

      # This will create indexes only if we're connected to a primary node.
      begin
        @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true)
      rescue Mongo::ConnectionFailure
      end
    end

    # Store a file in the file store. This method is designed only for writing new files;
    # if you need to update a given file, first delete it using Grid#delete.
    #
    # Note that arbitrary metadata attributes can be saved to the file by passing
    # them in as options.
    #
    # @param [String, #read] data a string or io-like object to store.
    #
    # @option opts [String] :filename (nil) a name for the file.
    # @option opts [Hash] :metadata ({}) any additional data to store with the file.
    # @option opts [ObjectId] :_id (ObjectId) a unique id for
    #   the file to be used in lieu of an automatically generated one.
    # @option opts [String] :content_type ('binary/octet-stream') If no content type is specified,
    #   the content type may be inferred from the filename extension if the mime-types gem can be
    #   loaded. Otherwise, the content type 'binary/octet-stream' will be used.
    # @option opts [Integer] :chunk_size (262144) size of file chunks in bytes.
    # @option opts [String, Integer, Symbol] :w (1) Set write concern
    #
    #   Notes on write concern:
    #   When :w > 0, the chunks sent to the server are validated using an md5 hash.
# If validation fails, an exception will be raised. # # @return [BSON::ObjectId] the file's id. def put(data, opts={}) begin # Ensure there is an index on files_id and n, as state may have changed since instantiation of self. # Recall that index definitions are cached with ensure_index so this statement won't unneccesarily repeat index creation. @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true) opts = opts.dup filename = opts.delete(:filename) opts.merge!(default_grid_io_opts) file = GridIO.new(@files, @chunks, filename, 'w', opts) file.write(data) file.close file.files_id rescue Mongo::ConnectionFailure => e raise e, "Failed to create necessary index and write data." end end # Read a file from the file store. # # @param [] id the file's unique id. # # @return [Mongo::GridIO] def get(id) opts = {:query => {'_id' => id}}.merge!(default_grid_io_opts) GridIO.new(@files, @chunks, nil, 'r', opts) end # Delete a file from the store. # # Note that deleting a GridFS file can result in read errors if another process # is attempting to read a file while it's being deleted. While the odds for this # kind of race condition are small, it's important to be aware of. # # @param [] id # # @return [Boolean] def delete(id) @files.remove({"_id" => id}) @chunks.remove({"files_id" => id}) end private def default_grid_io_opts {:fs_name => @fs_name} end end end ruby-mongo-1.9.2/lib/mongo/gridfs/grid_ext.rb000066400000000000000000000041171221200727400211160ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module GridExt module InstanceMethods # Check the existence of a file matching the given query selector. # # Note that this method can be used with both the Grid and GridFileSystem classes. Also # keep in mind that if you're going to be performing lots of existence checks, you should # keep an instance of Grid or GridFileSystem handy rather than instantiating for each existence # check. Alternatively, simply keep a reference to the proper files collection and query that # as needed. That's exactly how this methods works. # # @param [Hash] selector a query selector. # # @example # # # Check for the existence of a given filename # @grid = Mongo::GridFileSystem.new(@db) # @grid.exist?(:filename => 'foo.txt') # # # Check for existence filename and content type # @grid = Mongo::GridFileSystem.new(@db) # @grid.exist?(:filename => 'foo.txt', :content_type => 'image/jpg') # # # Check for existence by _id # @grid = Mongo::Grid.new(@db) # @grid.exist?(:_id => BSON::ObjectId.from_string('4bddcd24beffd95a7db9b8c8')) # # # Check for existence by an arbitrary attribute. # @grid = Mongo::Grid.new(@db) # @grid.exist?(:tags => {'$in' => ['nature', 'zen', 'photography']}) # # @return [nil, Hash] either nil for the file's metadata as a hash. def exist?(selector) @files.find_one(selector) end end end end ruby-mongo-1.9.2/lib/mongo/gridfs/grid_file_system.rb000066400000000000000000000146651221200727400226520ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # A file store built on the GridFS specification featuring # an API and behavior similar to that of a traditional file system. class GridFileSystem include GridExt::InstanceMethods # Initialize a new GridFileSystem instance, consisting of a MongoDB database # and a filesystem prefix if not using the default. # # @param [Mongo::DB] db a MongoDB database. # @param [String] fs_name A name for the file system. The default name, based on # the specification, is 'fs'. def initialize(db, fs_name=Grid::DEFAULT_FS_NAME) raise MongoArgumentError, "db must be a Mongo::DB." unless db.is_a?(Mongo::DB) @db = db @files = @db["#{fs_name}.files"] @chunks = @db["#{fs_name}.chunks"] @fs_name = fs_name @default_query_opts = {:sort => [['filename', 1], ['uploadDate', -1]], :limit => 1} # This will create indexes only if we're connected to a primary node. begin @files.ensure_index([['filename', 1], ['uploadDate', -1]]) @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true) rescue Mongo::ConnectionFailure end end # Open a file for reading or writing. Note that the options for this method only apply # when opening in 'w' mode. # # Note that arbitrary metadata attributes can be saved to the file by passing # them is as options. # # @param [String] filename the name of the file. # @param [String] mode either 'r' or 'w' for reading from # or writing to the file. 
# @param [Hash] opts see GridIO#new # # @option opts [Hash] :metadata ({}) any additional data to store with the file. # @option opts [ObjectId] :_id (ObjectId) a unique id for # the file to be use in lieu of an automatically generated one. # @option opts [String] :content_type ('binary/octet-stream') If no content type is specified, # the content type will may be inferred from the filename extension if the mime-types gem can be # loaded. Otherwise, the content type 'binary/octet-stream' will be used. # @option opts [Integer] (262144) :chunk_size size of file chunks in bytes. # @option opts [Boolean] :delete_old (false) ensure that old versions of the file are deleted. This option # only work in 'w' mode. Certain precautions must be taken when deleting GridFS files. See the notes under # GridFileSystem#delete. # @option opts [String, Integer, Symbol] :w (1) Set write concern # # Notes on write concern: # When :w > 0, the chunks sent to the server # will be validated using an md5 hash. If validation fails, an exception will be raised. # @option opts [Integer] :versions (false) deletes all versions which exceed the number specified to # retain ordered by uploadDate. This option only works in 'w' mode. Certain precautions must be taken when # deleting GridFS files. See the notes under GridFileSystem#delete. # # @example # # # Store the text "Hello, world!" in the grid file system. # @grid = Mongo::GridFileSystem.new(@db) # @grid.open('filename', 'w') do |f| # f.write "Hello, world!" # end # # # Output "Hello, world!" 
#   @grid = Mongo::GridFileSystem.new(@db)
#   @grid.open('filename', 'r') do |f|
#     puts f.read
#   end
#
#   # Write a file on disk to the GridFileSystem
#   @file = File.open('image.jpg')
#   @grid = Mongo::GridFileSystem.new(@db)
#   @grid.open('image.jpg', 'w') do |f|
#     f.write @file
#   end
#
# @return [Mongo::GridIO]
def open(filename, mode, opts={})
  opts = opts.dup
  opts.merge!(default_grid_io_opts(filename))
  if mode == 'w'
    begin
      # Ensure there are the appropriate indexes, as state may have changed since instantiation of self.
      # Recall that index definitions are cached with ensure_index so this statement won't unnecessarily repeat index creation.
      @files.ensure_index([['filename', 1], ['uploadDate', -1]])
      @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true)
      versions = opts.delete(:versions)
      if opts.delete(:delete_old) || (versions && versions < 1)
        versions = 1
      end
    rescue Mongo::ConnectionFailure => e
      raise e, "Failed to create necessary indexes and write data."
    end
  end
  file = GridIO.new(@files, @chunks, filename, mode, opts)
  return file unless block_given?
  result = nil
  begin
    result = yield file
  ensure
    id = file.close
    if versions
      self.delete do
        @files.find({'filename' => filename, '_id' => {'$ne' => id}},
                    :fields => ['_id'],
                    :sort   => ['uploadDate', -1],
                    :skip   => (versions - 1))
      end
    end
  end
  result
end

# Delete the file with the given filename. Note that this will delete
# all versions of the file.
#
# Be careful with this. Deleting a GridFS file can result in read errors if another process
# is attempting to read a file while it's being deleted. While the odds for this
# kind of race condition are small, it's important to be aware of.
#
# @param [String] filename
#
# @yield [] pass a block that returns an array of documents to be deleted.
#
# @return [Boolean]
def delete(filename=nil)
  if block_given?
files = yield else files = @files.find({'filename' => filename}, :fields => ['_id']) end files.each do |file| @files.remove({'_id' => file['_id']}) @chunks.remove({'files_id' => file['_id']}) end end alias_method :unlink, :delete private def default_grid_io_opts(filename=nil) {:fs_name => @fs_name, :query => {'filename' => filename}, :query_opts => @default_query_opts} end end end ruby-mongo-1.9.2/lib/mongo/gridfs/grid_io.rb000066400000000000000000000366241221200727400207350ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'digest/md5' module Mongo # GridIO objects represent files in the GridFS specification. This class # manages the reading and writing of file chunks and metadata. class GridIO include Mongo::WriteConcern DEFAULT_CHUNK_SIZE = 256 * 1024 DEFAULT_CONTENT_TYPE = 'binary/octet-stream' PROTECTED_ATTRS = [:files_id, :file_length, :client_md5, :server_md5] attr_reader :content_type, :chunk_size, :upload_date, :files_id, :filename, :metadata, :server_md5, :client_md5, :file_length, :file_position # Create a new GridIO object. Note that most users will not need to use this class directly; # the Grid and GridFileSystem classes will instantiate this class # # @param [Mongo::Collection] files a collection for storing file metadata. # @param [Mongo::Collection] chunks a collection for storing file chunks. # @param [String] filename the name of the file to open or write. 
# @param [String] mode 'r' or 'w' or reading or creating a file. # # @option opts [Hash] :query a query selector used when opening the file in 'r' mode. # @option opts [Hash] :query_opts any query options to be used when opening the file in 'r' mode. # @option opts [String] :fs_name the file system prefix. # @option opts [Integer] (262144) :chunk_size size of file chunks in bytes. # @option opts [Hash] :metadata ({}) any additional data to store with the file. # @option opts [ObjectId] :_id (ObjectId) a unique id for # the file to be use in lieu of an automatically generated one. # @option opts [String] :content_type ('binary/octet-stream') If no content type is specified, # the content type will may be inferred from the filename extension if the mime-types gem can be # loaded. Otherwise, the content type 'binary/octet-stream' will be used. # @option opts [String, Integer, Symbol] :w (1) Set the write concern # # Notes on write concern: # When :w > 0, the chunks sent to the server # will be validated using an md5 hash. If validation fails, an exception will be raised. def initialize(files, chunks, filename, mode, opts={}) @files = files @chunks = chunks @filename = filename @mode = mode opts = opts.dup @query = opts.delete(:query) || {} @query_opts = opts.delete(:query_opts) || {} @fs_name = opts.delete(:fs_name) || Grid::DEFAULT_FS_NAME @write_concern = get_write_concern(opts) @local_md5 = Digest::MD5.new if Mongo::WriteConcern.gle?(@write_concern) @custom_attrs = {} case @mode when 'r' then init_read when 'w' then init_write(opts) else raise GridError, "Invalid file mode #{@mode}. Mode should be 'r' or 'w'." end end def [](key) @custom_attrs[key] || instance_variable_get("@#{key.to_s}") end def []=(key, value) if PROTECTED_ATTRS.include?(key.to_sym) warn "Attempting to overwrite protected value." return nil else @custom_attrs[key] = value end end # Read the data from the file. If a length if specified, will read from the # current file position. 
# # @param [Integer] length # # @return [String] # the data in the file def read(length=nil) return '' if @file_length.zero? if length == 0 return '' elsif length.nil? && @file_position.zero? read_all else read_length(length) end end alias_method :data, :read # Write the given string (binary) data to the file. # # @param [String] string # the data to write # # @return [Integer] # the number of bytes written. def write(io) raise GridError, "file not opened for write" unless @mode[0] == ?w if io.is_a? String if Mongo::WriteConcern.gle?(@write_concern) @local_md5.update(io) end write_string(io) else length = 0 if Mongo::WriteConcern.gle?(@write_concern) while(string = io.read(@chunk_size)) @local_md5.update(string) length += write_string(string) end else while(string = io.read(@chunk_size)) length += write_string(string) end end length end end # Position the file pointer at the provided location. # # @param [Integer] pos # the number of bytes to advance the file pointer. this can be a negative # number. # @param [Integer] whence # one of IO::SEEK_CUR, IO::SEEK_END, or IO::SEEK_SET # # @return [Integer] the new file position def seek(pos, whence=IO::SEEK_SET) raise GridError, "Seek is only allowed in read mode." unless @mode == 'r' target_pos = case whence when IO::SEEK_CUR @file_position + pos when IO::SEEK_END @file_length + pos when IO::SEEK_SET pos end new_chunk_number = (target_pos / @chunk_size).to_i if new_chunk_number != @current_chunk['n'] save_chunk(@current_chunk) if @mode[0] == ?w @current_chunk = get_chunk(new_chunk_number) end @file_position = target_pos @chunk_position = @file_position % @chunk_size @file_position end # The current position of the file. # # @return [Integer] def tell @file_position end alias :pos :tell # Rewind the file. This is equivalent to seeking to the zeroth position. # # @return [Integer] the position of the file after rewinding (always zero). 
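The seek logic above translates a (pos, whence) pair into an absolute file offset, then derives the chunk holding that offset. A standalone sketch of that arithmetic (the constant and helper names are illustrative, not driver API):

```ruby
# Sketch of GridIO#seek's positioning arithmetic: map (pos, whence)
# to an absolute file offset, then locate the chunk that offset
# falls into. Names here are illustrative, not part of the driver.
CHUNK_SIZE = 256 * 1024  # matches the driver's DEFAULT_CHUNK_SIZE

def target_position(pos, whence, file_position, file_length)
  case whence
  when IO::SEEK_CUR then file_position + pos  # relative to current position
  when IO::SEEK_END then file_length + pos    # relative to end (pos usually negative)
  when IO::SEEK_SET then pos                  # absolute offset
  end
end

target         = target_position(-10, IO::SEEK_END, 0, 600_000)
chunk_number   = target / CHUNK_SIZE  # which chunk document to load
chunk_position = target % CHUNK_SIZE  # offset within that chunk
```

Only when chunk_number differs from the current chunk's 'n' does the driver fetch a new chunk, which is why sequential small seeks within one chunk are cheap.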
def rewind raise GridError, "file not opened for read" unless @mode[0] == ?r seek(0) end # Return a boolean indicating whether the position pointer is # at the end of the file. # # @return [Boolean] def eof raise GridError, "file not opened for read #{@mode}" unless @mode[0] == ?r @file_position >= @file_length end alias :eof? :eof # Return the next line from a GridFS file. This probably # makes sense only if you're storing plain text. This method # has a somewhat tricky API, which it inherits from Ruby's # StringIO#gets. # # @param [String, Integer] separator or length. If a separator, # read up to the separator. If a length, read the +length+ number # of bytes. If nil, read the entire file. # @param [Integer] length If a separator is provided, then # read until either finding the separator or # passing over the +length+ number of bytes. # # @return [String] def gets(separator="\n", length=nil) if separator.nil? read_all elsif separator.is_a?(Integer) read_length(separator) elsif separator.length > 1 read_to_string(separator, length) else read_to_character(separator, length) end end # Return the next byte from the GridFS file. # # @return [String] def getc read_length(1) end # Creates or updates the document from the files collection that # stores the chunks' metadata. The file becomes available only after # this method has been called. # # This method will be invoked automatically when # on GridIO#open is passed a block. Otherwise, it must be called manually. # # @return [BSON::ObjectId] def close if @mode[0] == ?w if @current_chunk['n'].zero? && @chunk_position.zero? warn "Warning: Storing a file with zero length." end @upload_date = Time.now.utc id = @files.insert(to_mongo_object) end id end # Read a chunk of the data from the file and yield it to the given # block. # # Note that this method reads from the current file position. # # @yield Yields on chunk per iteration as defined by this file's # chunk size. 
  #
  # @return [Mongo::GridIO] self
  def each
    return read_all unless block_given?
    while chunk = read(chunk_size)
      yield chunk
      break if chunk.empty?
    end
    self
  end

  def inspect
    "#<GridIO _id: #{@files_id}>"
  end

  private

  def create_chunk(n)
    chunk = BSON::OrderedHash.new
    chunk['_id'] = BSON::ObjectId.new
    chunk['n'] = n
    chunk['files_id'] = @files_id
    chunk['data'] = ''
    @chunk_position = 0
    chunk
  end

  def save_chunk(chunk)
    @chunks.save(chunk)
  end

  def get_chunk(n)
    chunk = @chunks.find({'files_id' => @files_id, 'n' => n}).next_document
    @chunk_position = 0
    chunk
  end

  # Read a file in its entirety.
  def read_all
    buf = ''
    if @current_chunk
      buf << @current_chunk['data'].to_s
      while buf.size < @file_length
        @current_chunk = get_chunk(@current_chunk['n'] + 1)
        break if @current_chunk.nil?
        buf << @current_chunk['data'].to_s
      end
      @file_position = @file_length
    end
    buf
  end

  # Read a file incrementally.
  def read_length(length)
    cache_chunk_data
    remaining = (@file_length - @file_position)
    if length.nil?
      to_read = remaining
    else
      to_read = length > remaining ? remaining : length
    end
    return nil unless remaining > 0

    buf = ''
    while to_read > 0
      if @chunk_position == @chunk_data_length
        @current_chunk = get_chunk(@current_chunk['n'] + 1)
        cache_chunk_data
      end
      chunk_remainder = @chunk_data_length - @chunk_position
      size = (to_read >= chunk_remainder) ? chunk_remainder : to_read
      buf << @current_chunk_data[@chunk_position, size]
      to_read -= size
      @chunk_position += size
      @file_position += size
    end
    buf
  end

  def read_to_character(character="\n", length=nil)
    result = ''
    len = 0
    while char = getc
      result << char
      len += 1
      break if char == character || (length ? len >= length : false)
    end
    result.length > 0 ?
result : nil end def read_to_string(string="\n", length=nil) result = '' len = 0 match_idx = 0 match_num = string.length - 1 to_match = string[match_idx].chr if length matcher = lambda {|idx, num| idx < num && len < length } else matcher = lambda {|idx, num| idx < num} end while matcher.call(match_idx, match_num) && char = getc result << char len += 1 if char == to_match while match_idx < match_num do match_idx += 1 to_match = string[match_idx].chr char = getc result << char if char != to_match match_idx = 0 to_match = string[match_idx].chr break end end end end result.length > 0 ? result : nil end def cache_chunk_data @current_chunk_data = @current_chunk['data'].to_s if @current_chunk_data.respond_to?(:force_encoding) @current_chunk_data.force_encoding("binary") end @chunk_data_length = @current_chunk['data'].length end def write_string(string) # Since Ruby 1.9.1 doesn't necessarily store one character per byte. if string.respond_to?(:force_encoding) string.force_encoding("binary") end to_write = string.length while (to_write > 0) do if @current_chunk && @chunk_position == @chunk_size next_chunk_number = @current_chunk['n'] + 1 @current_chunk = create_chunk(next_chunk_number) end chunk_available = @chunk_size - @chunk_position step_size = (to_write > chunk_available) ? chunk_available : to_write @current_chunk['data'] = BSON::Binary.new((@current_chunk['data'].to_s << string[-to_write, step_size]).unpack("c*")) @chunk_position += step_size to_write -= step_size save_chunk(@current_chunk) end string.length - to_write end # Initialize the class for reading a file. 
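The write path above appends bytes to the current chunk, rolling over to a fresh chunk whenever @chunk_position reaches @chunk_size. The slicing itself can be sketched in isolation (split_into_chunks is a hypothetical helper, not the driver's):

```ruby
# Hypothetical helper showing how a byte string is carved into
# fixed-size pieces, the way write_string spreads data across chunk documents.
def split_into_chunks(data, chunk_size)
  chunks = []
  offset = 0
  while offset < data.bytesize
    chunks << data.byteslice(offset, chunk_size) # last slice may be short
    offset += chunk_size
  end
  chunks
end

p split_into_chunks("abcdefgh", 3) # last chunk holds the remainder
```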
def init_read doc = @files.find(@query, @query_opts).next_document raise GridFileNotFound, "Could not open file matching #{@query.inspect} #{@query_opts.inspect}" unless doc @files_id = doc['_id'] @content_type = doc['contentType'] @chunk_size = doc['chunkSize'] @upload_date = doc['uploadDate'] @aliases = doc['aliases'] @file_length = doc['length'] @metadata = doc['metadata'] @md5 = doc['md5'] @filename = doc['filename'] @custom_attrs = doc @current_chunk = get_chunk(0) @file_position = 0 end # Initialize the class for writing a file. def init_write(opts) opts = opts.dup @files_id = opts.delete(:_id) || BSON::ObjectId.new @content_type = opts.delete(:content_type) || (defined? MIME) && get_content_type || DEFAULT_CONTENT_TYPE @chunk_size = opts.delete(:chunk_size) || DEFAULT_CHUNK_SIZE @metadata = opts.delete(:metadata) @aliases = opts.delete(:aliases) @file_length = 0 opts.each {|k, v| self[k] = v} check_existing_file if Mongo::WriteConcern.gle?(@write_concern) @current_chunk = create_chunk(0) @file_position = 0 end def check_existing_file if @files.find_one('_id' => @files_id) raise GridError, "Attempting to overwrite with Grid#put. You must delete the file first." end end def to_mongo_object h = BSON::OrderedHash.new h['_id'] = @files_id h['filename'] = @filename if @filename h['contentType'] = @content_type h['length'] = @current_chunk ? 
                    @current_chunk['n'] * @chunk_size + @chunk_position : 0
    h['chunkSize'] = @chunk_size
    h['uploadDate'] = @upload_date
    h['aliases'] = @aliases if @aliases
    h['metadata'] = @metadata if @metadata
    h['md5'] = get_md5
    h.merge!(@custom_attrs)
    h
  end

  # Get a server-side md5 and validate against the client if running with acknowledged writes
  def get_md5
    md5_command = BSON::OrderedHash.new
    md5_command['filemd5'] = @files_id
    md5_command['root'] = @fs_name
    @server_md5 = @files.db.command(md5_command)['md5']
    if Mongo::WriteConcern.gle?(@write_concern)
      @client_md5 = @local_md5.hexdigest
      if @local_md5 == @server_md5
        @server_md5
      else
        raise GridMD5Failure, "File on server failed MD5 check"
      end
    else
      @server_md5
    end
  end

  # Determine the content type based on the filename.
  def get_content_type
    if @filename
      if types = MIME::Types.type_for(@filename)
        types.first.simplified unless types.empty?
      end
    end
  end
end
end

ruby-mongo-1.9.2/lib/mongo/legacy.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
  module LegacyWriteConcern
    @legacy_write_concern = true

    def safe=(value)
      @write_concern = value
    end

    def safe
      if @write_concern[:w] == 0
        return false
      elsif @write_concern[:w] == 1
        return true
      else
        return @write_concern
      end
    end

    def self.from_uri(uri = ENV['MONGODB_URI'], extra_opts={})
      parser = URIParser.new uri
      parser.connection(extra_opts, true)
    end
  end
end

module Mongo
  # @deprecated Use Mongo::MongoClient instead. Support will be removed after v2.0
  # Please see old documentation for the Connection class
  class Connection < MongoClient
    include Mongo::LegacyWriteConcern

    def initialize(*args)
      if args.last.is_a?(Hash)
        opts = args.pop
        write_concern_from_legacy(opts)
        args.push(opts)
      end
      super
    end
  end

  # @deprecated Use Mongo::MongoReplicaSetClient instead. Support will be removed after v2.0
  # Please see old documentation for the ReplSetConnection class
  class ReplSetConnection < MongoReplicaSetClient
    include Mongo::LegacyWriteConcern

    def initialize(*args)
      if args.last.is_a?(Hash)
        opts = args.pop
        write_concern_from_legacy(opts)
        args.push(opts)
      end
      super
    end
  end

  # @deprecated Use Mongo::MongoShardedClient instead. Support will be removed after v2.0
  # Please see old documentation for the ShardedConnection class
  class ShardedConnection < MongoShardedClient
    include Mongo::LegacyWriteConcern

    def initialize(*args)
      if args.last.is_a?(Hash)
        opts = args.pop
        write_concern_from_legacy(opts)
        args.push(opts)
      end
      super
    end
  end
end

ruby-mongo-1.9.2/lib/mongo/mongo_client.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
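The safe reader in LegacyWriteConcern maps the stored write concern back onto the legacy :safe API: w=0 means unacknowledged (false), w=1 means acknowledged (true), and any richer concern is returned as-is. As a standalone sketch (safe_from_write_concern is a hypothetical name, not driver API):

```ruby
# Hypothetical pure-function version of LegacyWriteConcern#safe's mapping.
def safe_from_write_concern(write_concern)
  case write_concern[:w]
  when 0 then false    # unacknowledged writes
  when 1 then true     # acknowledged by one node
  else write_concern   # richer concerns (w > 1, :j, :fsync) pass through
  end
end
```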
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'set' require 'socket' require 'thread' module Mongo # Instantiates and manages self.connections to MongoDB. class MongoClient include Mongo::Logging include Mongo::Networking include Mongo::WriteConcern Mutex = ::Mutex ConditionVariable = ::ConditionVariable DEFAULT_HOST = 'localhost' DEFAULT_PORT = 27017 DEFAULT_DB_NAME = 'test' GENERIC_OPTS = [:auths, :logger, :connect, :default_db] TIMEOUT_OPTS = [:timeout, :op_timeout, :connect_timeout] SSL_OPTS = [:ssl, :ssl_key, :ssl_cert, :ssl_verify, :ssl_ca_cert] POOL_OPTS = [:pool_size, :pool_timeout] READ_PREFERENCE_OPTS = [:read, :tag_sets, :secondary_acceptable_latency_ms] WRITE_CONCERN_OPTS = [:w, :j, :fsync, :wtimeout] CLIENT_ONLY_OPTS = [:slave_ok] mongo_thread_local_accessor :connections attr_reader :logger, :size, :auths, :primary, :write_concern, :host_to_try, :pool_size, :connect_timeout, :pool_timeout, :primary_pool, :socket_class, :socket_opts, :op_timeout, :tag_sets, :acceptable_latency, :read # Create a connection to single MongoDB instance. # # If no args are provided, it will check ENV["MONGODB_URI"]. # # You may specify whether connection to slave is permitted. # In all cases, the default host is "localhost" and the default port is 27017. # # If you're connecting to a replica set, you'll need to use MongoReplicaSetClient.new instead. # # Once connected to a replica set, you can find out which nodes are primary, secondary, and # arbiters with the corresponding accessors: MongoClient#primary, MongoClient#secondaries, and # MongoClient#arbiters. 
This is useful if your application needs to connect manually to nodes other # than the primary. # # @overload initialize(host, port, opts={}) # @param [String] host hostname for the target MongoDB server. # @param [Integer] port specify a port number here if only one host is being specified. # @param [Hash] opts hash of optional settings and configuration values. # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged # @option opts [Boolean] :j (false) Set journal acknowledgement # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout # @option opts [Boolean] :fsync (false) Set fsync acknowledgement. # # Notes about Write-Concern Options: # Write concern options are propagated to objects instantiated from this MongoClient. # These defaults can be overridden upon instantiation of any object by explicitly setting an options hash # on initialization. # # @option opts [Boolean] :ssl (false) If true, create the connection to the server using SSL. # @option opts [String] :ssl_cert (nil) The certificate file used to identify the local connection against MongoDB. # @option opts [String] :ssl_key (nil) The private keyfile used to identify the local connection against MongoDB. # If included with the :ssl_cert then only :ssl_cert is needed. # @option opts [Boolean] :ssl_verify (nil) Specifies whether or not peer certification validation should occur. # @option opts [String] :ssl_ca_cert (nil) The ca_certs file contains a set of concatenated "certification authority" # certificates, which are used to validate certificates passed from the other end of the connection. # Required for :ssl_verify. # @option opts [Boolean] :slave_ok (false) Must be set to +true+ when connecting # to a single, slave node. # @option opts [Logger, #debug] :logger (nil) A Logger instance for debugging driver ops. 
Note that
    #   logging negatively impacts performance; therefore, it should not be used for high-performance apps.
    # @option opts [Integer] :pool_size (1) The maximum number of socket connections allowed per
    #   connection pool. Note: this setting is relevant only for multi-threaded applications.
    # @option opts [Float] :pool_timeout (5.0) When all of the connections in a pool are checked out,
    #   this is the number of seconds to wait for a new connection to be released before throwing an exception.
    #   Note: this setting is relevant only for multi-threaded applications.
    # @option opts [Float] :op_timeout (nil) The number of seconds to wait for a read operation to time out.
    #   Disabled by default.
    # @option opts [Float] :connect_timeout (nil) The number of seconds to wait before timing out a
    #   connection attempt.
    #
    # @example localhost, 27017 (or ENV["MONGODB_URI"] if available)
    #   MongoClient.new
    #
    # @example localhost, 27017
    #   MongoClient.new("localhost")
    #
    # @example localhost, 3000, max 5 connections, with max 5 seconds of wait time.
    #   MongoClient.new("localhost", 3000, :pool_size => 5, :pool_timeout => 5)
    #
    # @example localhost, 3000, where this node may be a slave
    #   MongoClient.new("localhost", 3000, :slave_ok => true)
    #
    # @example Unix Domain Socket
    #   MongoClient.new("/var/run/mongodb.sock")
    #
    # @see http://api.mongodb.org/ruby/current/file.REPLICA_SETS.html Replica sets in Ruby
    #
    # @raise [ReplicaSetConnectionError] This is raised if a replica set name is specified and the
    #   driver fails to connect to a replica set with that name.
    #
    # @raise [MongoArgumentError] If called with no arguments and ENV["MONGODB_URI"] implies a replica set.
    #
    # @core connections
    def initialize(*args)
      opts = args.last.is_a?(Hash) ? args.pop : {}
      @host, @port = parse_init(args[0], args[1], opts)

      # Lock for request ids.
@id_lock = Mutex.new # Connection pool for primary node @primary = nil @primary_pool = nil @mongos = false # Not set for direct connection @tag_sets = [] @acceptable_latency = 15 @max_message_size = nil @max_bson_size = nil check_opts(opts) setup(opts.dup) end # DEPRECATED # # Initialize a connection to a MongoDB replica set using an array of seed nodes. # # The seed nodes specified will be used on the initial connection to the replica set, but note # that this list of nodes will be replaced by the list of canonical nodes returned by running the # is_master command on the replica set. # # @param nodes [Array] An array of arrays, each of which specifies a host and port. # @param opts [Hash] Any of the available options that can be passed to MongoClient.new. # # @option opts [String] :rs_name (nil) The name of the replica set to connect to. An exception will be # raised if unable to connect to a replica set with this name. # @option opts [Boolean] :read_secondary (false) When true, this connection object will pick a random slave # to send reads to. # # @example # Mongo::MongoClient.multi([["db1.example.com", 27017], ["db2.example.com", 27017]]) # # @example This connection will read from a random secondary node. # Mongo::MongoClient.multi([["db1.example.com", 27017], ["db2.example.com", 27017], ["db3.example.com", 27017]], # :read_secondary => true) # # @return [Mongo::MongoClient] # # @deprecated def self.multi(nodes, opts={}) warn "MongoClient.multi is now deprecated and will be removed in v2.0. Please use MongoReplicaSetClient.new instead." MongoReplicaSetClient.new(nodes, opts) end # Initialize a connection to MongoDB using the MongoDB URI spec. # # Since MongoClient.new cannot be used with any ENV["MONGODB_URI"] that has multiple hosts (implying a replicaset), # you may use this when the type of your connection varies by environment and should be determined solely from ENV["MONGODB_URI"]. 
# # @param uri [String] # A string of the format mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database] # # @param opts Any of the options available for MongoClient.new # # @return [Mongo::MongoClient, Mongo::MongoReplicaSetClient] def self.from_uri(uri = ENV['MONGODB_URI'], extra_opts={}) parser = URIParser.new(uri) parser.connection(extra_opts) end def parse_init(host, port, opts) if host.nil? && port.nil? && ENV.has_key?('MONGODB_URI') parser = URIParser.new(ENV['MONGODB_URI']) if parser.replicaset? raise MongoArgumentError, "ENV['MONGODB_URI'] implies a replica set." end opts.merge!(parser.connection_options) [parser.host, parser.port] else [host || DEFAULT_HOST, port || DEFAULT_PORT] end end # The host name used for this connection. # # @return [String] def host @primary_pool.host end # The port used for this connection. # # @return [Integer] def port @primary_pool.port end def host_port [@host, @port] end # Fsync, then lock the mongod process against writes. Use this to get # the datafiles in a state safe for snapshotting, backing up, etc. # # @return [BSON::OrderedHash] the command response def lock! cmd = BSON::OrderedHash.new cmd[:fsync] = 1 cmd[:lock] = true self['admin'].command(cmd) end # Is this database locked against writes? # # @return [Boolean] def locked? [1, true].include? self['admin']['$cmd.sys.inprog'].find_one['fsyncLock'] end # Unlock a previously fsync-locked mongod process. # # @return [BSON::OrderedHash] command response def unlock! self['admin']['$cmd.sys.unlock'].find_one end # Apply each of the saved database authentications. # # @return [Boolean] returns true if authentications exist and succeeds, false # if none exists. # # @raise [AuthenticationError] raises an exception if any one # authentication fails. def apply_saved_authentication(opts={}) return false if @auths.empty? 
@auths.each do |auth| self[auth[:db_name]].issue_authentication(auth[:username], auth[:password], false, :source => auth[:source], :socket => opts[:socket]) end true end # Save an authentication to this connection. When connecting, # the connection will attempt to re-authenticate on every db # specified in the list of auths. This method is called automatically # by DB#authenticate. # # Note: this method will not actually issue an authentication command. To do that, # either run MongoClient#apply_saved_authentication or DB#authenticate. # # @param [String] db_name # @param [String] username # @param [String] password # # @return [Hash] a hash representing the authentication just added. def add_auth(db_name, username, password, source) if @auths.any? {|a| a[:db_name] == db_name} raise MongoArgumentError, "Cannot apply multiple authentications to database '#{db_name}'" end auth = { :db_name => db_name, :username => username, :password => password, :source => source } @auths << auth auth end # Remove a saved authentication for this connection. # # @param [String] db_name # # @return [Boolean] def remove_auth(db_name) return unless @auths @auths.reject! { |a| a[:db_name] == db_name } ? true : false end # Remove all authentication information stored in this connection. # # @return [true] this operation return true because it always succeeds. def clear_auths @auths = [] true end def authenticate_pools @primary_pool.authenticate_existing end def logout_pools(db) @primary_pool.logout_existing(db) end # Return a hash with all database names # and their respective sizes on disk. # # @return [Hash] def database_info doc = self['admin'].command({:listDatabases => 1}) doc['databases'].inject({}) do |info, db| info[db['name']] = db['sizeOnDisk'].to_i info end end # Return an array of database names. # # @return [Array] def database_names database_info.keys end # Return a database with the given name. # See DB#new for valid options hash parameters. 
# # @param [String] db_name a valid database name. # @param [Hash] opts options to be passed to the DB constructor. # # @return [Mongo::DB] # # @core databases db-instance_method def db(db_name = @default_db, opts = {}) DB.new(db_name, self, opts) end # Shortcut for returning a database. Use DB#db to accept options. # # @param [String] db_name a valid database name. # # @return [Mongo::DB] # # @core databases []-instance_method def [](db_name) DB.new(db_name, self) end def refresh end def pinned_pool @primary_pool end def pin_pool(pool, read_prefs) end def unpin_pool end # Drop a database. # # @param [String] name name of an existing database. def drop_database(name) self[name].command(:dropDatabase => 1) end # Copy the database +from+ to +to+ on localhost. The +from+ database is # assumed to be on localhost, but an alternate host can be specified. # # @param [String] from name of the database to copy from. # @param [String] to name of the database to copy to. # @param [String] from_host host of the 'from' database. # @param [String] username username for authentication against from_db (>=1.3.x). # @param [String] password password for authentication against from_db (>=1.3.x). def copy_database(from, to, from_host=DEFAULT_HOST, username=nil, password=nil) oh = BSON::OrderedHash.new oh[:copydb] = 1 oh[:fromhost] = from_host oh[:fromdb] = from oh[:todb] = to if username || password unless username && password raise MongoArgumentError, "Both username and password must be supplied for authentication." end nonce_cmd = BSON::OrderedHash.new nonce_cmd[:copydbgetnonce] = 1 nonce_cmd[:fromhost] = from_host result = self["admin"].command(nonce_cmd) oh[:nonce] = result["nonce"] oh[:username] = username oh[:key] = Mongo::Support.auth_key(username, password, oh[:nonce]) end self["admin"].command(oh) end # Checks if a server is alive. This command will return immediately # even if the server is in a lock. 
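copy_database builds its :key from the server-issued nonce via Mongo::Support.auth_key. For illustration, a MONGODB-CR style digest presumably looks like the sketch below; the helper's exact internals are an assumption here, so verify against the driver before relying on it:

```ruby
require 'digest/md5'

# Assumed shape of a MONGODB-CR auth key: MD5 over nonce + username + the
# hex MD5 of "username:mongo:password". Hypothetical stand-in for
# Mongo::Support.auth_key, not the driver's actual code.
def auth_key(username, password, nonce)
  hashed_password = Digest::MD5.hexdigest("#{username}:mongo:#{password}")
  Digest::MD5.hexdigest(nonce + username + hashed_password)
end
```

The key is deterministic for a given nonce, which is why the nonce must be fetched fresh from the target host immediately before issuing the copydb command.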
# # @return [Hash] def ping self["admin"].command({:ping => 1}) end # Get the build information for the current connection. # # @return [Hash] def server_info self["admin"].command({:buildinfo => 1}) end # Get the build version of the current server. # # @return [Mongo::ServerVersion] # object allowing easy comparability of version. def server_version ServerVersion.new(server_info["version"]) end # Is it okay to connect to a slave? # # @return [Boolean] def slave_ok? @slave_ok end def mongos? @mongos end # Create a new socket and attempt to connect to master. # If successful, sets host and port to master and returns the socket. # # If connecting to a replica set, this method will replace the # initially-provided seed list with any nodes known to the set. # # @raise [ConnectionFailure] if unable to connect to any host or port. def connect close config = check_is_master(host_port) if config if config['ismaster'] == 1 || config['ismaster'] == true @read_primary = true elsif @slave_ok @read_primary = false end if config.has_key?('msg') && config['msg'] == 'isdbgrid' @mongos = true end @max_bson_size = config['maxBsonObjectSize'] @max_message_size = config['maxMessageSizeBytes'] set_primary(host_port) end if !connected? raise ConnectionFailure, "Failed to connect to a master node at #{host_port.join(":")}" end true end alias :reconnect :connect # It's possible that we defined connected as all nodes being connected??? # NOTE: Do check if this needs to be more stringent. # Probably not since if any node raises a connection failure, all nodes will be closed. def connected? !!(@primary_pool && !@primary_pool.closed?) end # Determine if the connection is active. In a normal case the *server_info* operation # will be performed without issues, but if the connection was dropped by the server or # for some reason the sockets are unsynchronized, a ConnectionFailure will be raised and # the return will be false. # # @return [Boolean] def active? return false unless connected? 
ping true rescue ConnectionFailure false end # Determine whether we're reading from a primary node. If false, # this connection connects to a secondary node and @slave_ok is true. # # @return [Boolean] def read_primary? @read_primary end alias :primary? :read_primary? # The socket pool that this connection reads from. # # @return [Mongo::Pool] def read_pool @primary_pool end # Close the connection to the database. def close @primary_pool.close if @primary_pool @primary_pool = nil @primary = nil end # Returns the maximum BSON object size as returned by the core server. # Use the 4MB default when the server doesn't report this. # # @return [Integer] def max_bson_size @max_bson_size || DEFAULT_MAX_BSON_SIZE end def max_message_size @max_message_size || max_bson_size * MESSAGE_SIZE_FACTOR end # Checkout a socket for reading (i.e., a secondary node). # Note: this is overridden in MongoReplicaSetClient. def checkout_reader(read_preference) connect unless connected? @primary_pool.checkout end # Checkout a socket for writing (i.e., a primary node). # Note: this is overridden in MongoReplicaSetClient. def checkout_writer connect unless connected? @primary_pool.checkout end # Check a socket back into its pool. # Note: this is overridden in MongoReplicaSetClient. def checkin(socket) if @primary_pool && socket && socket.pool socket.checkin end end # Internal method for checking isMaster() on a given node. 
# # @param node [Array] Port and host for the target node # @return [Hash] Response from isMaster() # # @private def check_is_master(node) begin host, port = *node config = nil socket = @socket_class.new(host, port, @op_timeout, @connect_timeout, @socket_opts) if @connect_timeout Timeout::timeout(@connect_timeout, OperationTimeout) do config = self['admin'].command({:ismaster => 1}, :socket => socket) end else config = self['admin'].command({:ismaster => 1}, :socket => socket) end rescue OperationFailure, SocketError, SystemCallError, IOError close ensure socket.close unless socket.nil? || socket.closed? end config end protected def valid_opts GENERIC_OPTS + CLIENT_ONLY_OPTS + POOL_OPTS + READ_PREFERENCE_OPTS + WRITE_CONCERN_OPTS + TIMEOUT_OPTS + SSL_OPTS end def check_opts(opts) bad_opts = opts.keys.reject { |opt| valid_opts.include?(opt) } unless bad_opts.empty? bad_opts.each {|opt| warn "#{opt} is not a valid option for #{self.class}"} end end # Parse option hash def setup(opts) @slave_ok = opts.delete(:slave_ok) @ssl = opts.delete(:ssl) @unix = @host ? @host.end_with?('.sock') : false # if ssl options are present, but ssl is nil/false raise for misconfig ssl_opts = opts.keys.select { |k| k.to_s.start_with?('ssl') } if ssl_opts.size > 0 && !@ssl raise MongoArgumentError, "SSL has not been enabled (:ssl=false) " + "but the following SSL related options were " + "specified: #{ssl_opts.join(', ')}" end @socket_opts = {} if @ssl # construct ssl socket opts @socket_opts[:key] = opts.delete(:ssl_key) @socket_opts[:cert] = opts.delete(:ssl_cert) @socket_opts[:verify] = opts.delete(:ssl_verify) @socket_opts[:ca_cert] = opts.delete(:ssl_ca_cert) # verify peer requires ca_cert, raise if only one is present if @socket_opts[:verify] && !@socket_opts[:ca_cert] raise MongoArgumentError, "If :ssl_verify_mode has been specified, then you must include " + ":ssl_ca_cert in order to perform server validation." 
        end
        @socket_class = Mongo::SSLSocket
      elsif @unix
        @socket_class = Mongo::UNIXSocket
      else
        @socket_class = Mongo::TCPSocket
      end

      # Authentication objects
      @auths = opts.delete(:auths) || []

      # Pool size and timeout.
      @pool_size = opts.delete(:pool_size) || 1
      if opts[:timeout]
        warn "The :timeout option has been deprecated " +
             "and will be removed in the 2.0 release. " +
             "Use :pool_timeout instead."
      end
      @pool_timeout = opts.delete(:pool_timeout) || opts.delete(:timeout) || 5.0

      # Timeout on socket read operation.
      @op_timeout = opts.delete(:op_timeout) || nil

      # Timeout on socket connect.
      @connect_timeout = opts.delete(:connect_timeout) || 30

      @logger = opts.delete(:logger) || nil
      if @logger
        write_logging_startup_message
      end

      # Determine read preference
      if defined?(@slave_ok) && (@slave_ok) || defined?(@read_secondary) && @read_secondary
        @read = :secondary_preferred
      else
        @read = opts.delete(:read) || :primary
      end
      Mongo::ReadPreference::validate(@read)

      @default_db = opts.delete(:default_db) || DEFAULT_DB_NAME
      @tag_sets = opts.delete(:tag_sets) || []
      @acceptable_latency = opts.delete(:secondary_acceptable_latency_ms) || 15

      # Connection level write concern options.
      @write_concern = get_write_concern(opts)

      connect if opts.fetch(:connect, true)
    end

    private

    # Set the specified node as primary.
    def set_primary(node)
      host, port = *node
      @primary = [host, port]
      @primary_pool = Pool.new(self, host, port, :size => @pool_size, :timeout => @pool_timeout)
    end
  end
end

ruby-mongo-1.9.2/lib/mongo/mongo_replica_set_client.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
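check_is_master in MongoClient wraps the isMaster call in Timeout::timeout only when a connect timeout was configured. That guard pattern in isolation (with_connect_timeout is a made-up name; the driver raises its own OperationTimeout class, whereas this sketch uses the default Timeout::Error):

```ruby
require 'timeout'

# Run a block under an optional time limit, the way check_is_master does
# when @connect_timeout is set; a nil limit means run unguarded.
def with_connect_timeout(seconds, &block)
  seconds ? Timeout.timeout(seconds, &block) : block.call
end
```

Usage mirrors the driver: `with_connect_timeout(@connect_timeout) { run_ismaster }`.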
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Instantiates and manages connections to a MongoDB replica set. class MongoReplicaSetClient < MongoClient include ReadPreference include ThreadLocalVariableManager REPL_SET_OPTS = [ :refresh_mode, :refresh_interval, :read_secondary, :rs_name, :name ] attr_reader :replica_set_name, :seeds, :refresh_interval, :refresh_mode, :refresh_version, :manager # Create a connection to a MongoDB replica set. # # If no args are provided, it will check ENV["MONGODB_URI"]. # # Once connected to a replica set, you can find out which nodes are primary, secondary, and # arbiters with the corresponding accessors: MongoClient#primary, MongoClient#secondaries, and # MongoClient#arbiters. This is useful if your application needs to connect manually to nodes other # than the primary. # # @overload initialize(seeds=ENV["MONGODB_URI"], opts={}) # @param [Array, Array] seeds # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged # @option opts [Boolean] :j (false) Set journal acknowledgement # @option opts [Integer] :wtimeout (nil) Set acknowledgement timeout # @option opts [Boolean] :fsync (false) Set fsync acknowledgement. # # Notes about write concern options: # Write concern options are propagated to objects instantiated from this MongoReplicaSetClient. # These defaults can be overridden upon instantiation of any object by explicitly setting an options hash # on initialization. 
    # @option opts [:primary, :primary_preferred, :secondary, :secondary_preferred, :nearest] :read (:primary)
    #   A "read preference" determines the candidate replica set members to which a query or command can be sent.
    #   [:primary]
    #     * Read from primary only.
    #     * Cannot be combined with tags.
    #   [:primary_preferred]
    #     * Read from primary if available, otherwise read from a secondary.
    #   [:secondary]
    #     * Read from secondary if available.
    #   [:secondary_preferred]
    #     * Read from a secondary if available, otherwise read from the primary.
    #   [:nearest]
    #     * Read from any member.
    # @option opts [Array<Hash{ String, Symbol => Tag Value }>] :tag_sets ([])
    #   Read from replica-set members with these tags.
    # @option opts [Integer] :secondary_acceptable_latency_ms (15) The acceptable latency, in milliseconds,
    #   beyond that of the nearest available member, for a member to be considered "near".
    # @option opts [Logger] :logger (nil) Logger instance to receive driver operation log.
    # @option opts [Integer] :pool_size (1) The maximum number of socket connections allowed per
    #   connection pool. Note: this setting is relevant only for multi-threaded applications.
    # @option opts [Float] :pool_timeout (5.0) When all of the connections in a pool are checked out,
    #   this is the number of seconds to wait for a new connection to be released before throwing an exception.
    #   Note: this setting is relevant only for multi-threaded applications.
    # @option opts [Float] :op_timeout (nil) The number of seconds to wait for a read operation to time out.
    # @option opts [Float] :connect_timeout (30) The number of seconds to wait before timing out a
    #   connection attempt.
    # @option opts [Boolean] :ssl (false) If true, create the connection to the server using SSL.
    # @option opts [String] :ssl_cert (nil) The certificate file used to identify the local connection against MongoDB.
    # @option opts [String] :ssl_key (nil) The private keyfile used to identify the local connection against MongoDB.
    #   If included with the :ssl_cert then only :ssl_cert is needed.
# @option opts [Boolean] :ssl_verify (nil) Specifies whether or not peer certification validation should occur. # @option opts [String] :ssl_ca_cert (nil) The ca_certs file contains a set of concatenated "certification authority" # certificates, which are used to validate certificates passed from the other end of the connection. # Required for :ssl_verify. # @option opts [Boolean] :refresh_mode (false) Set this to :sync to periodically update the # state of the connection every :refresh_interval seconds. Replica set connection failures # will always trigger a complete refresh. This option is useful when you want to add new nodes # or remove replica set nodes not currently in use by the driver. # @option opts [Integer] :refresh_interval (90) If :refresh_mode is enabled, this is the number of seconds # between calls to check the replica set's state. # @note the number of seed nodes does not have to be equal to the number of replica set members. # The purpose of seed nodes is to permit the driver to find at least one replica set member even if a member is down. # # @example Connect to a replica set and provide two seed nodes. # MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001']) # # @example Connect to a replica set providing two seed nodes and ensuring a connection to the replica set named 'prod': # MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'], :name => 'prod') # # @example Connect to a replica set providing two seed nodes and allowing reads from a secondary node: # MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'], :read => :secondary) # # @see http://api.mongodb.org/ruby/current/file.REPLICA_SETS.html Replica sets in Ruby # # @raise [MongoArgumentError] This is raised for usage errors. # # @raise [ConnectionFailure] This is raised for the various connection failures. def initialize(*args) opts = args.last.is_a?(Hash) ? 
args.pop : {} nodes = args.shift || [] raise MongoArgumentError, "Too many arguments" unless args.empty? # This is temporary until support for the old format is dropped @seeds = nodes.collect do |node| if node.is_a?(Array) warn "Initiating a MongoReplicaSetClient with seeds passed as individual [host, port] array arguments is deprecated." warn "Please specify hosts as an array of 'host:port' strings; the old format will be removed in v2.0" node elsif node.is_a?(String) Support.normalize_seeds(node) else raise MongoArgumentError, "Bad seed format!" end end if @seeds.empty? && ENV.has_key?('MONGODB_URI') parser = URIParser.new ENV['MONGODB_URI'] if parser.direct? raise MongoArgumentError, "ENV['MONGODB_URI'] implies a direct connection." end opts = parser.connection_options.merge! opts @seeds = parser.nodes end if @seeds.length.zero? raise MongoArgumentError, "A MongoReplicaSetClient requires at least one seed node." end @seeds.freeze # Refresh @last_refresh = Time.now @refresh_version = 0 # No connection manager by default. @manager = nil # Lock for request ids. @id_lock = Mutex.new @connected = false @connect_mutex = Mutex.new @mongos = false check_opts(opts) setup(opts.dup) end def valid_opts super + REPL_SET_OPTS - CLIENT_ONLY_OPTS end def inspect "<Mongo::MongoReplicaSetClient:0x#{self.object_id.to_s(16)} @seeds=#{@seeds.inspect} @connected=#{@connected}>" end # Initiate a connection to the replica set. def connect(force = !connected?) return unless force log(:info, "Connecting...") # Prevent recursive connection attempts from the same thread. # This is done rather than using a Monitor to prevent potentially recursing # infinitely while attempting to connect and continually failing. Instead, fail fast. raise ConnectionFailure, "Failed to get node data."
if thread_local[:locks][:connecting] == true current_version = @refresh_version @connect_mutex.synchronize do # don't try to connect if another thread has done so while we were waiting for the lock return unless current_version == @refresh_version begin thread_local[:locks][:connecting] = true if @manager ensure_manager @manager.refresh!(@seeds) else @manager = PoolManager.new(self, @seeds) ensure_manager @manager.connect end ensure thread_local[:locks][:connecting] = false end @refresh_version += 1 if @manager.pools.empty? close raise ConnectionFailure, "Failed to connect to any node." else @connected = true end end end # Determine whether a replica set refresh is # required. If so, run a hard refresh. You can # force a hard refresh by running # MongoReplicaSetClient#hard_refresh! # # @return [Boolean] +true+ unless a hard refresh # is run and the refresh lock can't be acquired. def refresh(opts={}) if !connected? log(:info, "Trying to check replica set health but not " + "connected...") return hard_refresh! end log(:debug, "Checking replica set connection health...") ensure_manager @manager.check_connection_health if @manager.refresh_required? return hard_refresh! end return true end # Force a hard refresh of this connection's view # of the replica set. # # @return [Boolean] +true+ if hard refresh # occurred. +false+ is returned when unable # to get the refresh lock. def hard_refresh! log(:info, "Initiating hard refresh...") connect(true) return true end def connected? @connected && !@manager.pools.empty? end # @deprecated def connecting? warn "MongoReplicaSetClient#connecting? is deprecated and will be removed in v2.0." false end # The replica set primary's host name. # # @return [String] def host @manager.primary_pool.host end # The replica set primary's port. # # @return [Integer] def port @manager.primary_pool.port end def nodes warn "MongoReplicaSetClient#nodes is DEPRECATED and will be removed in v2.0. " + "Please use MongoReplicaSetClient#seeds instead." 
@seeds end # Determine whether we're reading from a primary node. If false, # this connection connects to a secondary node and @read_secondaries is true. # # @return [Boolean] def read_primary? read_pool == primary_pool end alias :primary? :read_primary? # Close the connection to the database. def close(opts={}) if opts[:soft] @manager.close(:soft => true) if @manager else @manager.close if @manager end # Clear the reference to this object. thread_local[:managers].delete(self) unpin_pool @connected = false end # If a ConnectionFailure is raised, this method will be called # to close the connection and reset connection values. # @deprecated def reset_connection close warn "MongoReplicaSetClient#reset_connection is now deprecated and will be removed in v2.0. " + "Use MongoReplicaSetClient#close instead." end # Returns +true+ if it's okay to read from a secondary node. # # This method exists primarily so that Cursor objects will # generate query messages with the correct slaveOkay value. # # @return [Boolean] +true+ unless the read preference is :primary def slave_ok? @read != :primary end def authenticate_pools @manager.pools.each { |pool| pool.authenticate_existing } end def logout_pools(db) @manager.pools.each { |pool| pool.logout_existing(db) } end # Generic socket checkout # Takes a block that returns a socket from pool def checkout ensure_manager connected? ? sync_refresh : connect begin socket = yield rescue => ex checkin(socket) if socket raise ex end if socket return socket else @connected = false raise ConnectionFailure.new("Could not checkout a socket.") end end def checkout_reader(read_pref={}) checkout do pool = read_pool(read_pref) get_socket_from_pool(pool) end end # Checkout a socket for writing (i.e., a primary node). def checkout_writer checkout do get_socket_from_pool(primary_pool) end end # Checkin a socket used for reading.
def checkin(socket) if socket && socket.pool socket.checkin end sync_refresh end def ensure_manager thread_local[:managers][self] = @manager end def pinned_pool thread_local[:pinned_pools][@manager.object_id] if @manager end def pin_pool(pool, read_preference) if @manager thread_local[:pinned_pools][@manager.object_id] = { :pool => pool, :read_preference => read_preference } end end def unpin_pool thread_local[:pinned_pools].delete @manager.object_id if @manager end def get_socket_from_pool(pool) begin pool.checkout if pool rescue ConnectionFailure nil end end def local_manager thread_local[:managers][self] end def arbiters local_manager.arbiters.nil? ? [] : local_manager.arbiters end def primary local_manager ? local_manager.primary : nil end # Note: might want to freeze these after connecting. def secondaries local_manager ? local_manager.secondaries : [] end def hosts local_manager ? local_manager.hosts : [] end def primary_pool local_manager ? local_manager.primary_pool : nil end def secondary_pool local_manager ? local_manager.secondary_pool : nil end def secondary_pools local_manager ? local_manager.secondary_pools : [] end def pools local_manager ? local_manager.pools : [] end def tag_map local_manager ? local_manager.tag_map : {} end def max_bson_size return local_manager.max_bson_size if local_manager DEFAULT_MAX_BSON_SIZE end def max_message_size return local_manager.max_message_size if local_manager max_bson_size * MESSAGE_SIZE_FACTOR end private # Parse option hash def setup(opts) # Refresh @refresh_mode = opts.delete(:refresh_mode) || false @refresh_interval = opts.delete(:refresh_interval) || 90 if @refresh_mode && @refresh_interval < 60 @refresh_interval = 60 unless ENV['TEST_MODE'] == 'TRUE' end if @refresh_mode == :async warn ":async refresh mode has been deprecated. Refresh mode will be disabled." elsif ![:sync, false].include?(@refresh_mode) raise MongoArgumentError, "Refresh mode must be either :sync or false."
end if opts[:read_secondary] warn ":read_secondary option has been deprecated and will " + "be removed in driver v2.0. Use the :read option instead." @read_secondary = opts.delete(:read_secondary) || false end # Replica set name if opts[:rs_name] warn ":rs_name option has been deprecated and will be removed in v2.0. " + "Please use :name instead." @replica_set_name = opts.delete(:rs_name) else @replica_set_name = opts.delete(:name) end super opts end def sync_refresh if @refresh_mode == :sync && ((Time.now - @last_refresh) > @refresh_interval) @last_refresh = Time.now refresh end end end end ruby-mongo-1.9.2/lib/mongo/mongo_sharded_client.rb # Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Instantiates and manages connections to a MongoDB sharded cluster for high availability. class MongoShardedClient < MongoReplicaSetClient include ThreadLocalVariableManager SHARDED_CLUSTER_OPTS = [:refresh_mode, :refresh_interval, :tag_sets, :read] attr_reader :seeds, :refresh_interval, :refresh_mode, :refresh_version, :manager def initialize(*args) opts = args.last.is_a?(Hash) ? args.pop : {} nodes = args.flatten if nodes.empty? and ENV.has_key?('MONGODB_URI') parser = URIParser.new ENV['MONGODB_URI'] opts = parser.connection_options.merge!
opts nodes = parser.node_strings end unless nodes.length > 0 raise MongoArgumentError, "A MongoShardedClient requires at least one seed node." end @seeds = nodes.map do |host_port| Support.normalize_seeds(host_port) end # TODO: add a method for replacing this list of nodes. @seeds.freeze # Refresh @last_refresh = Time.now @refresh_version = 0 # No connection manager by default. @manager = nil # Lock for request ids. @id_lock = Mutex.new @connected = false @connect_mutex = Mutex.new @mongos = true check_opts(opts) setup(opts) end def valid_opts super + SHARDED_CLUSTER_OPTS end def inspect "<Mongo::MongoShardedClient:0x#{self.object_id.to_s(16)} @seeds=#{@seeds.inspect} @connected=#{@connected}>" end # Initiate a connection to the sharded cluster. def connect(force = !connected?) return unless force log(:info, "Connecting...") # Prevent recursive connection attempts from the same thread. # This is done rather than using a Monitor to prevent potentially recursing # infinitely while attempting to connect and continually failing. Instead, fail fast. raise ConnectionFailure, "Failed to get node data." if thread_local[:locks][:connecting] @connect_mutex.synchronize do begin thread_local[:locks][:connecting] = true if @manager thread_local[:managers][self] = @manager @manager.refresh! @seeds else @manager = ShardingPoolManager.new(self, @seeds) ensure_manager @manager.connect end ensure thread_local[:locks][:connecting] = false end @refresh_version += 1 @last_refresh = Time.now @connected = true end end # Force a hard refresh of this connection's view # of the sharded cluster. # # @return [Boolean] +true+ if hard refresh # occurred. +false+ is returned when unable # to get the refresh lock. def hard_refresh! log(:info, "Initiating hard refresh...") connect(true) return true end def connected? !!(@connected && @manager.primary_pool) end # Returns +true+ if it's okay to read from a secondary node. # Since this is a sharded cluster, this must always be false. # # This method exists primarily so that Cursor objects will # generate query messages with the correct slaveOkay value.
# # @return [Boolean] +false+ def slave_ok? false end def checkout(&block) tries = 0 begin super(&block) rescue ConnectionFailure tries += 1 tries < 2 ? retry : raise end end # Initialize a connection to MongoDB using the MongoDB URI spec. # # @param uri [ String ] string of the format: # mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database] # # @param opts [ Hash ] Any of the options available for MongoShardedClient.new # # @return [ Mongo::MongoShardedClient ] The sharded client. def self.from_uri(uri, options = {}) uri ||= ENV['MONGODB_URI'] URIParser.new(uri).connection(options, false, true) end end end ruby-mongo-1.9.2/lib/mongo/networking.rb # Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Networking STANDARD_HEADER_SIZE = 16 RESPONSE_HEADER_SIZE = 20 # Counter for generating unique request ids. @@current_request_id = 0 # Send a message to MongoDB, adding the necessary headers. # # @param [Integer] operation a MongoDB opcode. # @param [BSON::ByteBuffer] message a message to send to the database. # # @option opts [Symbol] :connection (:writer) The connection to which # this message should be sent. Valid options are :writer and :reader. # # @return [Integer] number of bytes sent def send_message(operation, message, opts={}) if opts.is_a?(String) warn "MongoClient#send_message no longer takes a string log message. 
" + "Logging is now handled within the Collection and Cursor classes." opts = {} end add_message_headers(message, operation) packed_message = message.to_s sock = nil pool = opts.fetch(:pool, nil) begin if pool #puts "send_message pool.port:#{pool.port}" sock = pool.checkout else sock ||= checkout_writer end send_message_on_socket(packed_message, sock) rescue SystemStackError, NoMemoryError, SystemCallError => ex close raise ex ensure if sock sock.checkin end end end # Sends a message to the database, waits for a response, and raises # an exception if the operation has failed. # # @param [Integer] operation a MongoDB opcode. # @param [BSON::ByteBuffer] message a message to send to the database. # @param [String] db_name the name of the database. used on call to get_last_error. # @param [Hash] last_error_params parameters to be sent to getLastError. See DB#error for # available options. # # @see DB#get_last_error for valid last error params. # # @return [Hash] The document returned by the call to getlasterror. 
def send_message_with_gle(operation, message, db_name, log_message=nil, write_concern=false) docs = num_received = cursor_id = '' add_message_headers(message, operation) last_error_message = build_get_last_error_message(db_name, write_concern) last_error_id = add_message_headers(last_error_message, Mongo::Constants::OP_QUERY) packed_message = message.append!(last_error_message).to_s sock = nil begin sock = checkout_writer send_message_on_socket(packed_message, sock) docs, num_received, cursor_id = receive(sock, last_error_id) checkin(sock) rescue ConnectionFailure, OperationFailure, OperationTimeout => ex checkin(sock) raise ex rescue SystemStackError, NoMemoryError, SystemCallError => ex close raise ex end if num_received == 1 && (error = docs[0]['err'] || docs[0]['errmsg']) if error.include?("not master") close raise ConnectionFailure.new(docs[0]['code'].to_s + ': ' + error, docs[0]['code'], docs[0]) else error = "wtimeout" if error == "timeout" raise OperationFailure.new(docs[0]['code'].to_s + ': ' + error, docs[0]['code'], docs[0]) end end docs[0] end # Sends a message to the database and waits for the response. # # @param [Integer] operation a MongoDB opcode. # @param [BSON::ByteBuffer] message a message to send to the database. # @param [String] log_message this is currently a no-op and will be removed. # @param [Socket] socket a socket to use in lieu of checking out a new one. # @param [Boolean] command (false) indicates whether this is a command. If this is a command, # the message will be sent to the primary node. # @param [Boolean] exhaust (false) indicates whether the cursor should be exhausted. Set # this to true only when the OP_QUERY_EXHAUST flag is set. # # @return [Array] # An array whose indexes include [0] documents returned, [1] the number of documents received, # and [2] the cursor_id.
def receive_message(operation, message, log_message=nil, socket=nil, command=false, read=:primary, exhaust=false) request_id = add_message_headers(message, operation) packed_message = message.to_s result = '' begin send_message_on_socket(packed_message, socket) result = receive(socket, request_id, exhaust) rescue ConnectionFailure => ex socket.close checkin(socket) raise ex rescue SystemStackError, NoMemoryError, SystemCallError => ex close raise ex rescue Exception => ex if defined?(IRB) close if ex.class == IRB::Abort end raise ex end result end private def receive(sock, cursor_id, exhaust=false) if exhaust docs = [] num_received = 0 while(cursor_id != 0) do receive_header(sock, cursor_id, exhaust) number_received, cursor_id = receive_response_header(sock) new_docs, n = read_documents(number_received, sock) docs += new_docs num_received += n end return [docs, num_received, cursor_id] else receive_header(sock, cursor_id, exhaust) number_received, cursor_id = receive_response_header(sock) docs, num_received = read_documents(number_received, sock) return [docs, num_received, cursor_id] end end def receive_header(sock, expected_response, exhaust=false) header = receive_message_on_socket(16, sock) # unpacks to size, request_id, response_to response_to = header.unpack('VVV')[2] if !exhaust && expected_response != response_to raise Mongo::ConnectionFailure, "Expected response #{expected_response} but got #{response_to}" end unless header.size == STANDARD_HEADER_SIZE raise "Short read for DB response header: " + "expected #{STANDARD_HEADER_SIZE} bytes, saw #{header.size}" end nil end def receive_response_header(sock) header_buf = receive_message_on_socket(RESPONSE_HEADER_SIZE, sock) if header_buf.length != RESPONSE_HEADER_SIZE raise "Short read for DB response header; " + "expected #{RESPONSE_HEADER_SIZE} bytes, saw #{header_buf.length}" end # unpacks to flags, cursor_id_a, cursor_id_b, starting_from, number_remaining flags, cursor_id_a, cursor_id_b, _, number_remaining 
= header_buf.unpack('VVVVV') check_response_flags(flags) cursor_id = (cursor_id_b << 32) + cursor_id_a [number_remaining, cursor_id] end def check_response_flags(flags) if flags & Mongo::Constants::REPLY_CURSOR_NOT_FOUND != 0 raise Mongo::OperationFailure, "Query response returned CURSOR_NOT_FOUND. " + "Either an invalid cursor was specified, or the cursor may have timed out on the server." elsif flags & Mongo::Constants::REPLY_QUERY_FAILURE != 0 # Mongo query reply failures are handled in Cursor#next. end end def read_documents(number_received, sock) docs = [] number_remaining = number_received while number_remaining > 0 do buf = receive_message_on_socket(4, sock) size = buf.unpack('V')[0] buf << receive_message_on_socket(size - 4, sock) number_remaining -= 1 docs << BSON::BSON_CODER.deserialize(buf) end [docs, number_received] end def build_command_message(db_name, query, projection=nil, skip=0, limit=-1) message = BSON::ByteBuffer.new("", max_message_size) message.put_int(0) BSON::BSON_RUBY.serialize_cstr(message, "#{db_name}.$cmd") message.put_int(skip) message.put_int(limit) message.put_binary(BSON::BSON_CODER.serialize(query, false, false, max_bson_size).to_s) message.put_binary(BSON::BSON_CODER.serialize(projection, false, false, max_bson_size).to_s) if projection message end # Constructs a getlasterror message. This method is used exclusively by # MongoClient#send_message_with_gle. def build_get_last_error_message(db_name, write_concern) gle = BSON::OrderedHash.new gle[:getlasterror] = 1 if write_concern.is_a?(Hash) write_concern.assert_valid_keys(:w, :wtimeout, :fsync, :j) gle.merge!(write_concern) gle.delete(:w) if gle[:w] == 1 end gle[:w] = gle[:w].to_s if gle[:w].is_a?(Symbol) build_command_message(db_name, gle) end # Prepares a message for transmission to MongoDB by # constructing a valid message header. # # Note: this method modifies message by reference. 
# # @return [Integer] the request id used in the header def add_message_headers(message, operation) headers = [ # Message size. 16 + message.size, # Unique request id. request_id = get_request_id, # Response id. 0, # Opcode. operation ].pack('VVVV') message.prepend!(headers) request_id end # Increment and return the next available request id. # # return [Integer] def get_request_id request_id = '' @id_lock.synchronize do request_id = @@current_request_id += 1 end request_id end # Low-level method for sending a message on a socket. # Requires a packed message and an available socket, # # @return [Integer] number of bytes sent def send_message_on_socket(packed_message, socket) begin total_bytes_sent = socket.send(packed_message) if total_bytes_sent != packed_message.size packed_message.slice!(0, total_bytes_sent) while packed_message.size > 0 byte_sent = socket.send(packed_message) total_bytes_sent += byte_sent packed_message.slice!(0, byte_sent) end end total_bytes_sent rescue => ex socket.close raise ConnectionFailure, "Operation failed with the following exception: #{ex}:#{ex.message}" end end # Low-level method for receiving data from socket. # Requires length and an available socket. def receive_message_on_socket(length, socket) begin message = receive_data(length, socket) rescue OperationTimeout, ConnectionFailure => ex socket.close if ex.class == OperationTimeout raise OperationTimeout, "Timed out waiting on socket read." 
else raise ConnectionFailure, "Operation failed with the following exception: #{ex}" end end message end def receive_data(length, socket) message = new_binary_string socket.read(length, message) raise ConnectionFailure, "connection closed" unless message && message.length > 0 if message.length < length chunk = new_binary_string while message.length < length socket.read(length - message.length, chunk) raise ConnectionFailure, "connection closed" unless chunk.length > 0 message << chunk end end message end if defined?(Encoding) BINARY_ENCODING = Encoding.find("binary") def new_binary_string "".force_encoding(BINARY_ENCODING) end else def new_binary_string "" end end end end ruby-mongo-1.9.2/lib/mongo/util/conversions.rb # Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo #:nodoc: # Utility module to include when needing to convert certain types of # objects to mongo-friendly parameters. module Conversions ASCENDING_CONVERSION = ["ascending", "asc", "1"] DESCENDING_CONVERSION = ["descending", "desc", "-1"] # Allows sort parameters to be defined as a Hash. # Does not allow usage of un-ordered hashes, therefore # Ruby 1.8.x users must use BSON::OrderedHash.
# # Example: # # hash_as_sort_parameters({:field1 => :asc, "field2" => :desc}) => # { "field1" => 1, "field2" => -1} def hash_as_sort_parameters(value) if RUBY_VERSION < '1.9' && !value.is_a?(BSON::OrderedHash) raise InvalidSortValueError.new( "Hashes used to supply sort order must maintain ordering. " + "Use BSON::OrderedHash." ) else order_by = value.inject({}) do |memo, (key, direction)| memo[key.to_s] = sort_value(direction.to_s.downcase) memo end end order_by end # Converts the supplied +Array+ to a +Hash+ to pass to mongo as # sorting parameters. The returned +Hash+ will vary depending # on whether the passed +Array+ is one or two dimensional. # # Example: # # array_as_sort_parameters([["field1", :asc], ["field2", :desc]]) => # { "field1" => 1, "field2" => -1} def array_as_sort_parameters(value) order_by = BSON::OrderedHash.new if value.first.is_a? Array value.each do |param| if (param.class.name == "String") order_by[param] = 1 else order_by[param[0]] = sort_value(param[1]) unless param[1].nil? end end elsif !value.empty? if value.size == 1 order_by[value.first] = 1 else order_by[value.first] = sort_value(value[1]) end end order_by end # Converts the supplied +String+ or +Symbol+ to a +Hash+ to pass to mongo as # a sorting parameter with ascending order. If the +String+ # is empty then an empty +Hash+ will be returned. # # Example: # # *DEPRECATED # # string_as_sort_parameters("field") => { "field" => 1 } # string_as_sort_parameters("") => {} def string_as_sort_parameters(value) return {} if (str = value.to_s).empty? { str => 1 } end # Converts the +String+, +Symbol+, or +Integer+ to the # corresponding sort value in MongoDB. # # Valid conversions (case-insensitive): # # ascending, asc, :ascending, :asc, 1 => 1 # descending, desc, :descending, :desc, -1 => -1 # # If the value is invalid then an error will be raised.
def sort_value(value) val = value.to_s.downcase return 1 if ASCENDING_CONVERSION.include?(val) return -1 if DESCENDING_CONVERSION.include?(val) raise InvalidSortValueError.new( "#{self} was supplied as a sort direction when acceptable values are: " + "Mongo::ASCENDING, 'ascending', 'asc', :ascending, :asc, 1, Mongo::DESCENDING, " + "'descending', 'desc', :descending, :desc, -1.") end end end ruby-mongo-1.9.2/lib/mongo/util/core_ext.rb # Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #:nodoc: class Object #:nodoc: def tap yield self self end unless respond_to? :tap end #:nodoc: class Hash #:nodoc: def assert_valid_keys(*valid_keys) unknown_keys = keys - [valid_keys].flatten raise(ArgumentError, "Unknown key(s): #{unknown_keys.join(", ")}") unless unknown_keys.empty? end end #:nodoc: class String #:nodoc: def to_bson_code BSON::Code.new(self) end end #:nodoc: class Class def mongo_thread_local_accessor name, options = {} m = Module.new m.module_eval do class_variable_set :"@@#{name}", Hash.new {|h,k| h[k] = options[:default] } end m.module_eval %{ def #{name} @@#{name}[Thread.current.object_id] end def #{name}=(val) @@#{name}[Thread.current.object_id] = val end } class_eval do include m extend m end end end ruby-mongo-1.9.2/lib/mongo/util/logging.rb # Copyright (C) 2013 10gen Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Logging module Instrumenter def self.instrument(name, payload = {}) yield end end @instrumenter = Instrumenter def write_logging_startup_message log(:debug, "Logging level is currently :debug which could negatively impact " + "client-side performance. You should set your logging level no lower than " + ":info in production.") end # Log a message with the given level. def log(level, msg) return unless @logger case level when :fatal then @logger.fatal "MONGODB [FATAL] #{msg}" when :error then @logger.error "MONGODB [ERROR] #{msg}" when :warn then @logger.warn "MONGODB [WARNING] #{msg}" when :info then @logger.info "MONGODB [INFO] #{msg}" when :debug then @logger.debug "MONGODB [DEBUG] #{msg}" else @logger.debug "MONGODB [DEBUG] #{msg}" end end # Execute the block and log the operation described by name and payload. 
def instrument(name, payload = {}) start_time = Time.now res = Logging.instrumenter.instrument(name, payload) do yield end duration = Time.now - start_time log_operation(name, payload, duration) res end def self.instrumenter @instrumenter end def self.instrumenter=(instrumenter) @instrumenter = instrumenter end protected def log_operation(name, payload, duration) @logger && @logger.debug do msg = "MONGODB " msg << "(%.1fms) " % (duration * 1000) msg << "#{payload[:database]}['#{payload[:collection]}'].#{name}(" msg << payload.values_at(:selector, :document, :documents, :fields ).compact.map(&:inspect).join(', ') + ")" msg << ".skip(#{payload[:skip]})" if payload[:skip] msg << ".limit(#{payload[:limit]})" if payload[:limit] msg << ".sort(#{payload[:order]})" if payload[:order] msg end end end end ruby-mongo-1.9.2/lib/mongo/util/node.rb # Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Node attr_accessor :host, :port, :address, :client, :socket, :last_state def initialize(client, host_port) @client = client @manager = @client.local_manager @host, @port = Support.normalize_seeds(host_port) @address = "#{@host}:#{@port}" @config = nil @socket = nil @node_mutex = Mutex.new end def eql?(other) (other.is_a?(Node) && @address == other.address) end alias :== :eql?
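The `instrument` method above delegates to a pluggable module exposing a single `instrument(name, payload) { ... }` entry point, installed via `Logging.instrumenter=`. A minimal stand-in that records timings can be sketched as follows; `TimingInstrumenter` is a hypothetical name, and only the `instrument(name, payload)` block-yielding contract comes from the source:

```ruby
# Sketch of a custom instrumenter compatible with the hook above.
# TimingInstrumenter is hypothetical; only the instrument(name, payload)
# signature mirrors Mongo::Logging::Instrumenter.
module TimingInstrumenter
  def self.events
    @events ||= []
  end

  # Run the block, record [name, payload, duration], return the block's result.
  def self.instrument(name, payload = {})
    started = Time.now
    result = yield
    events << [name, payload, Time.now - started]
    result
  end
end

# It would be installed with: Mongo::Logging.instrumenter = TimingInstrumenter
result = TimingInstrumenter.instrument(:find, :collection => 'users') { 21 * 2 }
# result == 42, and one timing event has been recorded
```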
    def =~(other)
      if other.is_a?(String)
        h, p = Support.normalize_seeds(other)
        h == @host && p == @port
      else
        false
      end
    end

    def host_string
      address
    end

    def config
      connect unless connected?
      set_config unless @config || !connected?
      @config
    end

    def inspect
      "<Mongo::Node:0x#{self.object_id.to_s(16)} @host=#{@host} @port=#{@port}>"
    end

    # Create a connection to the provided node,
    # and, if successful, return the socket. Otherwise,
    # return nil.
    def connect
      @node_mutex.synchronize do
        begin
          @socket = @client.socket_class.new(@host, @port, @client.op_timeout,
                                             @client.connect_timeout,
                                             @client.socket_opts)
        rescue ConnectionTimeoutError, OperationTimeout, ConnectionFailure, OperationFailure,
               SocketError, SystemCallError, IOError => ex
          @client.log(:debug, "Failed connection to #{host_string} with #{ex.class}, #{ex.message}.")
          close
        end
      end
    end

    # This should only be called within a mutex
    def close
      if @socket && !@socket.closed?
        @socket.close
      end
      @socket = nil
      @config = nil
    end

    def connected?
      @socket != nil && !@socket.closed?
    end

    def active?
      begin
        result = @client['admin'].command({:ping => 1}, :socket => @socket)
      rescue OperationFailure, SocketError, SystemCallError, IOError
        return nil
      end
      result['ok'] == 1
    end

    # Get the configuration for the provided node as returned by the
    # ismaster command. Additionally, check that the replica set name
    # matches with the name provided.
    def set_config
      @node_mutex.synchronize do
        begin
          if @config
            @last_state = @config['ismaster'] ? :primary : :other
          end

          if @client.connect_timeout
            Timeout::timeout(@client.connect_timeout, OperationTimeout) do
              @config = @client['admin'].command({:ismaster => 1}, :socket => @socket)
            end
          else
            @config = @client['admin'].command({:ismaster => 1}, :socket => @socket)
          end

          update_max_sizes

          if @config['msg']
            @client.log(:warn, "#{config['msg']}")
          end

          unless @client.mongos?
            check_set_membership(@config)
            check_set_name(@config)
          end
        rescue ConnectionFailure, OperationFailure, OperationTimeout,
               SocketError, SystemCallError, IOError => ex
          @client.log(:warn, "Attempted connection to node #{host_string} raised " +
                      "#{ex.class}: #{ex.message}")
          # Socket may already be nil from issuing command
          close
        end
      end
    end

    # Return a list of replica set nodes from the config.
    # Note: this excludes arbiters.
    def node_list
      nodes = []
      nodes += config['hosts'] if config['hosts']
      nodes += config['passives'] if config['passives']
      nodes += ["#{@host}:#{@port}"] if @client.mongos?
      nodes
    end

    def arbiters
      return [] unless config['arbiters']
      config['arbiters'].map do |arbiter|
        Support.normalize_seeds(arbiter)
      end
    end

    def primary?
      config['ismaster'] == true || config['ismaster'] == 1
    end

    def secondary?
      config['secondary'] == true || config['secondary'] == 1
    end

    def tags
      config['tags'] || {}
    end

    def host_port
      [@host, @port]
    end

    def hash
      address.hash
    end

    def healthy?
      connected? && config
    end

    def max_bson_size
      @max_bson_size || DEFAULT_MAX_BSON_SIZE
    end

    def max_message_size
      @max_message_size || max_bson_size * MESSAGE_SIZE_FACTOR
    end

    protected

    # Ensure that this node is a healthy member of a replica set.
    def check_set_membership(config)
      if !config.has_key?('hosts')
        message = "Will not connect to #{host_string} because it's not a member " +
                  "of a replica set."
        raise ConnectionFailure, message
      elsif config['hosts'].length == 1 && !config['ismaster'] && !config['secondary']
        message = "Attempting to connect to an unhealthy, single-node replica set."
        raise ConnectionFailure, message
      end
    end

    # Ensure that this node is part of a replica set of the expected name.
    def check_set_name(config)
      if @client.replica_set_name
        if !config['setName']
          @client.log(:warn, "Could not verify replica set name for member #{host_string} " +
                      "because ismaster does not return name in this version of MongoDB")
        elsif @client.replica_set_name != config['setName']
          message = "Attempting to connect to replica set '#{config['setName']}' on member #{host_string} " +
                    "but expected '#{@client.replica_set_name}'"
          raise ReplicaSetConnectionError, message
        end
      end
    end

    private

    def update_max_sizes
      @max_bson_size = config['maxBsonObjectSize'] || DEFAULT_MAX_BSON_SIZE
      @max_message_size = config['maxMessageSizeBytes'] || @max_bson_size * MESSAGE_SIZE_FACTOR
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/pool.rb

# Copyright (C) 2013 10gen Inc.

module Mongo
  class Pool
    PING_ATTEMPTS  = 6
    MAX_PING_TIME  = 1_000_000
    PRUNE_INTERVAL = 10_000

    attr_accessor :host, :port, :address, :size, :timeout,
                  :checked_out, :client, :node

    # Create a new pool of connections.
    def initialize(client, host, port, opts={})
      @client = client
      @host, @port = host, port

      # A Mongo::Node object.
      @node = opts[:node]

      # The string address
      @address = "#{@host}:#{@port}"

      # Pool size and timeout.
      @size    = opts.fetch(:size, 20)
      @timeout = opts.fetch(:timeout, 30)

      # Mutex for synchronizing pool access
      @connection_mutex = Mutex.new

      # Mutex for synchronizing pings
      @ping_mutex = Mutex.new

      # Condition variable for signal and wait
      @queue = ConditionVariable.new

      # Operations to perform on a socket
      @socket_ops = Hash.new { |h, k| h[k] = [] }

      @sockets     = []
      @checked_out = []
      @ping_time   = nil
      @last_ping   = nil
      @closed      = false
      @thread_ids_to_sockets = {}
      @checkout_counter = 0
    end

    # Close this pool.
    #
    # @option opts [Boolean] :soft (false) If true,
    #   close only those sockets that are not checked out.
    def close(opts={})
      @connection_mutex.synchronize do
        if opts[:soft] && !@checked_out.empty?
          @closing = true
          close_sockets(@sockets - @checked_out)
        else
          close_sockets(@sockets)
          @closed = true
        end
        @node.close if @node
      end
      true
    end

    def tags
      @node.tags
    end

    def healthy?
      close if @sockets.all?(&:closed?)
      !closed? && node.healthy?
    end

    def closed?
      @closed
    end

    def up?
      !@closed
    end

    def inspect
      "#<Mongo::Pool:0x#{self.object_id.to_s(16)} @host=#{@host} @port=#{@port}>"
    end

    def host_string
      "#{@host}:#{@port}"
    end

    def host_port
      [@host, @port]
    end

    # Refresh ping time only if we haven't
    # checked within the last five minutes.
    def ping_time
      @ping_mutex.synchronize do
        if !@last_ping || (Time.now - @last_ping) > 300
          @ping_time = refresh_ping_time
          @last_ping = Time.now
        end
      end
      @ping_time
    end

    # Return the time it takes on average
    # to do a round-trip against this node.
    def refresh_ping_time
      trials = []
      PING_ATTEMPTS.times do
        t1 = Time.now
        if !self.ping
          return MAX_PING_TIME
        end
        trials << (Time.now - t1) * 1000
      end

      trials.sort!

      # Delete shortest and longest times
      trials.delete_at(trials.length-1)
      trials.delete_at(0)

      total = 0.0
      trials.each { |t| total += t }

      (total / trials.length).ceil
    end

    def ping
      begin
        return self.client['admin'].command({:ping => 1}, :socket => @node.socket, :timeout => MAX_PING_TIME)
      rescue ConnectionFailure, OperationFailure, SocketError, SystemCallError, IOError
        return false
      end
    end

    # Return a socket to the pool.
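`refresh_ping_time` computes a trimmed mean: it takes `PING_ATTEMPTS` timing samples, discards the fastest and slowest, and averages the rest (rounding up). The arithmetic, extracted for illustration with made-up sample values:

```ruby
# Trimmed-mean calculation matching refresh_ping_time's post-processing:
# sort, drop min and max, average the remainder, round up.
def trimmed_mean_ms(trials)
  trials = trials.sort
  trials = trials[1..-2] # drop shortest and longest
  (trials.reduce(0.0) { |sum, t| sum + t } / trials.length).ceil
end

result = trimmed_mean_ms([5.0, 100.0, 7.0, 6.0, 8.0, 9.0])
# the 100.0 outlier and the 5.0 minimum are discarded;
# mean of [6.0, 7.0, 8.0, 9.0] is 7.5, which ceils to 8
```

Trimming both extremes keeps a single slow round trip (GC pause, transient network blip) from skewing the latency estimate used for `:nearest` reads.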
    def checkin(socket)
      @connection_mutex.synchronize do
        if @checked_out.delete(socket)
          @queue.broadcast
        else
          return false
        end
      end
      true
    end

    # Adds a new socket to the pool and checks it out.
    #
    # This method is called exclusively from #checkout;
    # therefore, it runs within a mutex.
    def checkout_new_socket
      begin
        socket = @client.socket_class.new(@host, @port, @client.op_timeout,
                                          @client.connect_timeout,
                                          @client.socket_opts)
        socket.pool = self
      rescue => ex
        socket.close if socket
        @node.close if @node
        raise ConnectionFailure, "Failed to connect to host #{@host} and port #{@port}: #{ex}"
      end

      # If any saved authentications exist, we want to apply those
      # when creating new sockets.
      @client.apply_saved_authentication(:socket => socket)

      @sockets << socket
      @checked_out << socket
      @thread_ids_to_sockets[Thread.current.object_id] = socket
      socket
    end

    # If a user calls DB#authenticate, and several sockets exist,
    # then we need a way to apply the authentication on each socket.
    # So we store the apply_authentication method, and this will be
    # applied right before the next use of each socket.
    def authenticate_existing
      @connection_mutex.synchronize do
        @sockets.each do |socket|
          @socket_ops[socket] << Proc.new do
            @client.apply_saved_authentication(:socket => socket)
          end
        end
      end
    end

    # Store the logout op for each existing socket to be applied before
    # the next use of each socket.
    def logout_existing(db)
      @connection_mutex.synchronize do
        @sockets.each do |socket|
          @socket_ops[socket] << Proc.new do
            @client.db(db).issue_logout(:socket => socket)
          end
        end
      end
    end

    # Checks out the first available socket from the pool.
    #
    # If the pid has changed, remove the socket and check out
    # new one.
    #
    # This method is called exclusively from #checkout;
    # therefore, it runs within a mutex.
    def checkout_existing_socket(socket=nil)
      if !socket
        available = @sockets - @checked_out
        socket = available[rand(available.length)]
      end

      if socket.pid != Process.pid
        @sockets.delete(socket)
        if socket
          socket.close unless socket.closed?
        end
        checkout_new_socket
      else
        @checked_out << socket
        @thread_ids_to_sockets[Thread.current.object_id] = socket
        socket
      end
    end

    def prune_threads
      live_threads = Thread.list.map(&:object_id)
      @thread_ids_to_sockets.reject! do |key, value|
        !live_threads.include?(key)
      end
    end

    def check_prune
      if @checkout_counter > PRUNE_INTERVAL
        @checkout_counter = 0
        prune_threads
      else
        @checkout_counter += 1
      end
    end

    # Check out an existing socket or create a new socket if the maximum
    # pool size has not been exceeded. Otherwise, wait for the next
    # available socket.
    def checkout
      @client.connect if !@client.connected?
      start_time = Time.now
      loop do
        if (Time.now - start_time) > @timeout
          raise ConnectionTimeoutError, "could not obtain connection within " +
            "#{@timeout} seconds. The max pool size is currently #{@size}; " +
            "consider increasing the pool size or timeout."
        end

        @connection_mutex.synchronize do
          check_prune
          socket = nil
          if socket_for_thread = @thread_ids_to_sockets[Thread.current.object_id]
            if !@checked_out.include?(socket_for_thread)
              socket = checkout_existing_socket(socket_for_thread)
            end
          else
            if @sockets.size < @size
              socket = checkout_new_socket
            elsif @checked_out.size < @sockets.size
              socket = checkout_existing_socket
            end
          end

          if socket
            # This calls all procs, in order, scoped to existing sockets.
            # At the moment, we use this to lazily authenticate and
            # logout existing socket connections.
            @socket_ops[socket].reject! do |op|
              op.call
            end

            if socket.closed?
              @checked_out.delete(socket)
              @sockets.delete(socket)
              @thread_ids_to_sockets.delete(Thread.current.object_id)
              socket = checkout_new_socket
            end

            return socket
          else
            # Otherwise, wait
            @queue.wait(@connection_mutex)
          end
        end
      end
    end

    private

    def close_sockets(sockets)
      sockets.each do |socket|
        @sockets.delete(socket)
        begin
          socket.close unless socket.closed?
        rescue IOError => ex
          warn "IOError when attempting to close socket connected to #{@host}:#{@port}: #{ex.inspect}"
        end
      end
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/pool_manager.rb

# Copyright (C) 2013 10gen Inc.

module Mongo
  class PoolManager
    include ThreadLocalVariableManager

    attr_reader :client,
                :arbiters,
                :primary,
                :secondaries,
                :primary_pool,
                :secondary_pools,
                :hosts,
                :seeds,
                :pools,
                :max_bson_size,
                :max_message_size

    # Create a new set of connection pools.
    #
    # The pool manager will by default use the original seed list passed
    # to the connection objects, accessible via connection.seeds. In addition,
    # the user may pass an additional list of seeds nodes discovered in real
    # time. The union of these lists will be used when attempting to connect,
    # with the newly-discovered nodes being used first.
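`Pool#checkout` and `Pool#checkin` follow the classic mutex plus condition-variable pattern: a checkout either reserves a free resource under the mutex or waits on the condition variable until a checkin broadcasts. A self-contained sketch of that pattern — `ToyPool` and its integer "resources" are illustrative stand-ins for the driver's socket pool:

```ruby
require 'thread'

# Minimal checkout/checkin skeleton following Pool#checkout's structure.
class ToyPool
  def initialize(size)
    @resources   = (1..size).to_a
    @checked_out = []
    @mutex       = Mutex.new
    @cond        = ConditionVariable.new
  end

  def checkout
    @mutex.synchronize do
      loop do
        available = @resources - @checked_out
        unless available.empty?
          res = available.first
          @checked_out << res
          return res
        end
        # Mutex is released while waiting and reacquired on wakeup,
        # then the loop re-checks availability (guards spurious wakeups).
        @cond.wait(@mutex)
      end
    end
  end

  def checkin(res)
    @mutex.synchronize do
      @checked_out.delete(res)
      @cond.broadcast # wake any threads blocked in checkout
    end
  end
end

pool   = ToyPool.new(1)
r      = pool.checkout
waiter = Thread.new { pool.checkout } # blocks: the only resource is out
sleep 0.05
pool.checkin(r)
got = waiter.value # resumes once the broadcast wakes the waiter
```

The real pool adds per-thread socket affinity, fork (pid) detection, and a deadline that raises `ConnectionTimeoutError`, but the wait/broadcast core is the same.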
    def initialize(client, seeds=[])
      @client           = client
      @seeds            = seeds

      @pools            = Set.new
      @primary          = nil
      @primary_pool     = nil
      @secondaries      = Set.new
      @secondary_pools  = []
      @hosts            = Set.new
      @members          = Set.new
      @refresh_required = false
      @max_bson_size    = DEFAULT_MAX_BSON_SIZE
      @max_message_size = @max_bson_size * MESSAGE_SIZE_FACTOR
      @connect_mutex    = Mutex.new
      thread_local[:locks][:connecting_manager] = false
    end

    def inspect
      "<Mongo::PoolManager:0x#{self.object_id.to_s(16)} @seeds=#{@seeds}>"
    end

    def connect
      @connect_mutex.synchronize do
        begin
          thread_local[:locks][:connecting_manager] = true
          @refresh_required = false
          disconnect_old_members
          connect_to_members
          initialize_pools(@members)
          update_max_sizes
          @seeds = discovered_seeds
        ensure
          thread_local[:locks][:connecting_manager] = false
        end
      end
    end

    def refresh!(additional_seeds)
      @seeds |= additional_seeds
      connect
    end

    # We're healthy if all members are pingable and if the view
    # of the replica set returned by isMaster is equivalent
    # to our view. If any of these isn't the case,
    # set @refresh_required to true, and return.
    def check_connection_health
      return if thread_local[:locks][:connecting_manager]
      members = copy_members
      begin
        seed = get_valid_seed_node
      rescue ConnectionFailure
        @refresh_required = true
        return
      end

      unless current_config = seed.config
        @refresh_required = true
        seed.close
        return
      end

      if current_config['hosts'].length != members.length
        @refresh_required = true
        seed.close
        return
      end

      current_config['hosts'].each do |host|
        member = members.detect do |m|
          m.address == host
        end

        if member && validate_existing_member(current_config, member)
          next
        else
          @refresh_required = true
          seed.close
          return
        end
      end

      seed.close
    end

    # The replica set connection should initiate a full refresh.
    def refresh_required?
      @refresh_required
    end

    def closed?
      pools.all? { |pool| pool.closed? }
    end

    def close(opts={})
      begin
        pools.each { |pool| pool.close(opts) }
      rescue ConnectionFailure
      end
    end

    def read
      read_pool.host_port
    end

    private

    def update_max_sizes
      unless @members.size == 0
        @max_bson_size    = @members.map(&:max_bson_size).min
        @max_message_size = @members.map(&:max_message_size).min
      end
    end

    def validate_existing_member(current_config, member)
      if current_config['ismaster'] && member.last_state != :primary
        return false
      elsif member.last_state != :other
        return false
      end
      return true
    end

    # For any existing members, close and remove any that are unhealthy or already closed.
    def disconnect_old_members
      @pools.reject!   { |pool| !pool.healthy? }
      @members.reject! { |node| !node.healthy? }
    end

    # Connect to each member of the replica set
    # as reported by the given seed node.
    def connect_to_members
      seed = get_valid_seed_node

      seed.node_list.each do |host|
        if existing = @members.detect { |node| node =~ host }
          if existing.healthy?
            # Refresh this node's configuration
            existing.set_config
            # If we are unhealthy after refreshing our config, drop from the set.
            if !existing.healthy?
              @members.delete(existing)
            else
              next
            end
          else
            existing.close
            @members.delete(existing)
          end
        end

        node = Mongo::Node.new(self.client, host)
        node.connect
        @members << node if node.healthy?
      end
      seed.close

      if @members.empty?
        raise ConnectionFailure, "Failed to connect to any given member."
      end
    end

    # Initialize the connection pools for the primary and secondary nodes.
    def initialize_pools(members)
      @primary_pool = nil
      @primary      = nil
      @secondaries.clear
      @secondary_pools.clear
      @hosts.clear

      members.each do |member|
        member.last_state = nil
        @hosts << member.host_string
        if member.primary?
          assign_primary(member)
        elsif member.secondary?
          # member could be not primary but secondary still is false
          assign_secondary(member)
        end
      end

      @arbiters = members.first.arbiters
    end

    def assign_primary(member)
      member.last_state = :primary
      @primary = member.host_port
      if existing = @pools.detect { |pool| pool.node == member }
        @primary_pool = existing
      else
        @primary_pool = Pool.new(self.client, member.host, member.port,
          :size    => self.client.pool_size,
          :timeout => self.client.pool_timeout,
          :node    => member
        )
        @pools << @primary_pool
      end
    end

    def assign_secondary(member)
      member.last_state = :secondary
      @secondaries << member.host_port
      if existing = @pools.detect { |pool| pool.node == member }
        @secondary_pools << existing
      else
        pool = Pool.new(self.client, member.host, member.port,
          :size    => self.client.pool_size,
          :timeout => self.client.pool_timeout,
          :node    => member
        )
        @secondary_pools << pool
        @pools << pool
      end
    end

    # Iterate through the list of provided seed
    # nodes until we've gotten a response from the
    # replica set we're trying to connect to.
    #
    # If we don't get a response, raise an exception.
    def get_valid_seed_node
      @seeds.each do |seed|
        node = Mongo::Node.new(self.client, seed)
        node.connect
        return node if node.healthy?
      end

      raise ConnectionFailure, "Cannot connect to a replica set using seeds " +
        "#{@seeds.map {|s| "#{s[0]}:#{s[1]}" }.join(', ')}"
    end

    def discovered_seeds
      @members.map(&:host_port)
    end

    def copy_members
      members = Set.new
      @connect_mutex.synchronize do
        @members.map do |m|
          members << m.dup
        end
      end
      members
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/read_preference.rb

# Copyright (C) 2013 10gen Inc.
module Mongo
  module ReadPreference

    READ_PREFERENCES = [
      :primary,
      :primary_preferred,
      :secondary,
      :secondary_preferred,
      :nearest
    ]

    MONGOS_MODES = {
      :primary             => 'primary',
      :primary_preferred   => 'primaryPreferred',
      :secondary           => 'secondary',
      :secondary_preferred => 'secondaryPreferred',
      :nearest             => 'nearest'
    }

    def self.mongos(mode, tag_sets)
      if mode != :secondary_preferred || !tag_sets.empty?
        mongos_read_preference = BSON::OrderedHash[:mode => MONGOS_MODES[mode]]
        mongos_read_preference[:tags] = tag_sets if !tag_sets.empty?
      end
      mongos_read_preference
    end

    def self.validate(value)
      if READ_PREFERENCES.include?(value)
        return true
      else
        raise MongoArgumentError, "#{value} is not a valid read preference. " +
          "Please specify one of the following read preferences as a symbol: #{READ_PREFERENCES}"
      end
    end

    def read_preference
      {
        :mode    => @read,
        :tags    => @tag_sets,
        :latency => @acceptable_latency
      }
    end

    def read_pool(read_preference_override={})
      return primary_pool if mongos?

      read_pref = read_preference.merge(read_preference_override)

      if pinned_pool && pinned_pool[:read_preference] == read_pref
        pool = pinned_pool[:pool]
      else
        unpin_pool
        pool = select_pool(read_pref)
      end

      unless pool
        raise ConnectionFailure, "No replica set member available for query " +
          "with read preference matching mode #{read_pref[:mode]} and tags " +
          "matching #{read_pref[:tags]}."
      end

      pool
    end

    def select_pool(read_pref)
      if read_pref[:mode] == :primary && !read_pref[:tags].empty?
        raise MongoArgumentError, "Read preference :primary cannot be combined with tags"
      end

      case read_pref[:mode]
        when :primary
          primary_pool
        when :primary_preferred
          primary_pool || select_secondary_pool(secondary_pools, read_pref)
        when :secondary
          select_secondary_pool(secondary_pools, read_pref)
        when :secondary_preferred
          select_secondary_pool(secondary_pools, read_pref) || primary_pool
        when :nearest
          select_near_pool(pools, read_pref)
      end
    end

    def select_secondary_pool(candidates, read_pref)
      tag_sets = read_pref[:tags]

      if !tag_sets.empty?
        matches = []
        tag_sets.detect do |tag_set|
          matches = candidates.select do |candidate|
            tag_set.none? { |k,v| candidate.tags[k.to_s] != v } &&
            candidate.ping_time
          end
          !matches.empty?
        end
      else
        matches = candidates
      end

      matches.empty? ? nil : select_near_pool(matches, read_pref)
    end

    def select_near_pool(candidates, read_pref)
      latency = read_pref[:latency]
      nearest_pool = candidates.min_by { |candidate| candidate.ping_time }
      near_pools = candidates.select do |candidate|
        (candidate.ping_time - nearest_pool.ping_time) <= latency
      end
      near_pools[ rand(near_pools.length) ]
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/server_version.rb

# Copyright (C) 2013 10gen Inc.

module Mongo

  # Simple class for comparing server versions.
  class ServerVersion
    include Comparable

    def initialize(version)
      @version = version
    end

    # Implements comparable.
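`select_near_pool` implements a latency window: find the candidate with the lowest ping time, keep every candidate within `:latency` milliseconds of it, and pick one of those at random. A standalone sketch of that logic, using `Struct` stubs in place of real `Pool` objects (the names and ping times are made up):

```ruby
# Stub standing in for a Pool that responds to #ping_time.
Candidate = Struct.new(:name, :ping_time)

# Same selection logic as select_near_pool above.
def select_near(candidates, latency)
  nearest = candidates.min_by(&:ping_time)
  near = candidates.select { |c| (c.ping_time - nearest.ping_time) <= latency }
  near[rand(near.length)]
end

pools = [Candidate.new('a', 5), Candidate.new('b', 9), Candidate.new('c', 40)]
# With a 15 ms window, only 'a' (5 ms) and 'b' (9 ms) qualify;
# 'c' at 40 ms is outside the window. The pick is random between the two.
pick = select_near(pools, 15)
```

Randomizing within the window spreads reads across comparably close members instead of pinning every query to the single fastest one.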
    def <=>(new)
      local, new = self.to_a, to_array(new)
      for n in 0...local.size do
        break if elements_include_mods?(local[n], new[n])
        if local[n] < new[n].to_i
          result = -1
          break
        elsif local[n] > new[n].to_i
          result = 1
          break
        end
      end
      result || 0
    end

    # Return an array representation of this server version.
    def to_a
      to_array(@version)
    end

    # Return a string representation of this server version.
    def to_s
      @version
    end

    private

    # Returns true if any elements include mod symbols (-, +)
    def elements_include_mods?(*elements)
      elements.any? { |n| n =~ /[\-\+]/ }
    end

    # Converts argument to an array of integers,
    # appending any mods as the final element.
    def to_array(version)
      array = version.split(".").map {|n| (n =~ /^\d+$/) ? n.to_i : n }
      if array.last =~ /(\d+)([\-\+])/
        array[array.length-1] = $1.to_i
        array << $2
      end
      array
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/sharding_pool_manager.rb

# Copyright (C) 2013 10gen Inc.
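The comparison walks version segments left to right and returns at the first segment that differs numerically. A simplified, standalone sketch of those semantics (it omits the `-`/`+` mod handling that `elements_include_mods?` adds):

```ruby
# Segment-by-segment version comparison, as in ServerVersion#<=>,
# minus the release-modifier handling.
def version_to_a(v)
  v.split('.').map { |n| n =~ /^\d+$/ ? n.to_i : n }
end

def compare_versions(a, b)
  a, b = version_to_a(a), version_to_a(b)
  a.each_index do |i|
    return -1 if b[i] && a[i] < b[i]
    return  1 if b[i] && a[i] > b[i]
  end
  0
end

compare_versions('2.4.9', '2.6')   # => -1 (2.4.x predates 2.6)
compare_versions('2.4.9', '2.2.0') # => 1
```

Note that, like the original, a shorter version that matches every compared segment ties (`'2.4'` compares equal to `'2.4'` and, by prefix, to `'2.4.1'`), since only the segments both sides have are examined.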
module Mongo
  class ShardingPoolManager < PoolManager

    def inspect
      "<Mongo::ShardingPoolManager:0x#{self.object_id.to_s(16)} @seeds=#{@seeds}>"
    end

    # "Best" should be the member with the fastest ping time
    # but connect/connect_to_members reinitializes @members
    def best(members)
      Array(members.first)
    end

    def connect
      @connect_mutex.synchronize do
        begin
          thread_local[:locks][:connecting_manager] = true
          @refresh_required = false
          disconnect_old_members
          connect_to_members
          initialize_pools best(@members)
          update_max_sizes
          @seeds = discovered_seeds
        ensure
          thread_local[:locks][:connecting_manager] = false
        end
      end
    end

    # Checks that each node is healthy (via check_is_master) and that each
    # node is in fact a mongos. If either criterion is not met, a refresh is
    # set to be triggered and close() is called on the node.
    #
    # @return [Boolean] indicating if a refresh is required.
    def check_connection_health
      @refresh_required = false
      @members.each do |member|
        begin
          config = @client.check_is_master([member.host, member.port])
          unless config && config.has_key?('msg')
            @refresh_required = true
            member.close
          end
        rescue OperationTimeout
          @refresh_required = true
          member.close
        end
        break if @refresh_required
      end
      @refresh_required
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/socket_util.rb

# Copyright (C) 2013 10gen Inc.
module SocketUtil

  attr_accessor :pool, :pid

  def checkout
    @pool.checkout if @pool
  end

  def checkin
    @pool.checkin(self) if @pool
  end

  def close
    @socket.close unless closed?
  end

  def closed?
    @socket.closed?
  end
end

ruby-mongo-1.9.2/lib/mongo/util/ssl_socket.rb

# Copyright (C) 2013 10gen Inc.

require 'socket'
require 'openssl'
require 'timeout'

module Mongo

  # A basic wrapper over Ruby's SSLSocket that initiates
  # a TCP connection over SSL and then provides a basic interface
  # mirroring Ruby's TCPSocket, viz., TCPSocket#send and TCPSocket#read.
  class SSLSocket
    include SocketUtil

    def initialize(host, port, op_timeout=nil, connect_timeout=nil, opts={})
      @pid             = Process.pid
      @op_timeout      = op_timeout
      @connect_timeout = connect_timeout

      @tcp_socket = ::TCPSocket.new(host, port)
      @tcp_socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)

      @context = OpenSSL::SSL::SSLContext.new

      if opts[:cert]
        @context.cert = OpenSSL::X509::Certificate.new(File.open(opts[:cert]))
      end

      if opts[:key]
        @context.key = OpenSSL::PKey::RSA.new(File.open(opts[:key]))
      end

      if opts[:verify]
        @context.ca_file     = opts[:ca_cert]
        @context.verify_mode = OpenSSL::SSL::VERIFY_PEER
      end

      begin
        @socket = OpenSSL::SSL::SSLSocket.new(@tcp_socket, @context)
        @socket.sync_close = true
        connect
      rescue OpenSSL::SSL::SSLError
        raise ConnectionFailure, "SSL handshake failed. MongoDB may " +
                                 "not be configured with SSL support."
      end

      if opts[:verify]
        unless OpenSSL::SSL.verify_certificate_identity(@socket.peer_cert, host)
          raise ConnectionFailure, "SSL handshake failed. Hostname mismatch."
        end
      end

      self
    end

    def connect
      if @connect_timeout
        Timeout::timeout(@connect_timeout, ConnectionTimeoutError) do
          @socket.connect
        end
      else
        @socket.connect
      end
    end

    def send(data)
      @socket.syswrite(data)
    end

    def read(length, buffer)
      if @op_timeout
        Timeout::timeout(@op_timeout, OperationTimeout) do
          @socket.sysread(length, buffer)
        end
      else
        @socket.sysread(length, buffer)
      end
    end
  end
end

ruby-mongo-1.9.2/lib/mongo/util/support.rb

# Copyright (C) 2013 10gen Inc.

require 'digest/md5'

module Mongo
  module Support
    include Mongo::Conversions
    extend self

    # Commands that may be sent to replica-set secondaries, depending on
    # read preference and tags. All other commands are always run on the primary.
    SECONDARY_OK_COMMANDS = [
      'group',
      'aggregate',
      'collstats',
      'dbstats',
      'count',
      'distinct',
      'geonear',
      'geosearch',
      'geowalk',
      'mapreduce',
      'replsetgetstatus',
      'ismaster',
    ]

    # Generate an MD5 for authentication.
    #
    # @param [String] username
    # @param [String] password
    # @param [String] nonce
    #
    # @return [String] a key for db authentication.
    def auth_key(username, password, nonce)
      Digest::MD5.hexdigest("#{nonce}#{username}#{hash_password(username, password)}")
    end

    # Return a hashed password for auth.
    #
    # @param [String] username
    # @param [String] plaintext
    #
    # @return [String]
    def hash_password(username, plaintext)
      Digest::MD5.hexdigest("#{username}:mongo:#{plaintext}")
    end

    def validate_db_name(db_name)
      unless [String, Symbol].include?(db_name.class)
        raise TypeError, "db_name must be a string or symbol"
      end

      [" ", ".", "$", "/", "\\"].each do |invalid_char|
        if db_name.include? invalid_char
          raise Mongo::InvalidNSName, "database names cannot contain the character '#{invalid_char}'"
        end
      end
      raise Mongo::InvalidNSName, "database name cannot be the empty string" if db_name.empty?
      db_name
    end

    def secondary_ok?(selector)
      command = selector.keys.first.to_s.downcase

      if command == 'mapreduce'
        out = selector.select { |k, v| k.to_s.downcase == 'out' }.first.last
        # mongo looks at the first key in the out object, and doesn't
        # look at the value
        out.is_a?(Hash) && out.keys.first.to_s.downcase == 'inline' ? true : false
      else
        SECONDARY_OK_COMMANDS.member?(command)
      end
    end

    def format_order_clause(order)
      case order
        when Hash, BSON::OrderedHash then hash_as_sort_parameters(order)
        when String, Symbol then string_as_sort_parameters(order)
        when Array then array_as_sort_parameters(order)
        else
          raise InvalidSortValueError, "Illegal sort clause, '#{order.class.name}'; must be of the form " +
            "[['field1', '(ascending|descending)'], ['field2', '(ascending|descending)']]"
      end
    end

    def normalize_seeds(seeds)
      pairs = Array(seeds)
      pairs = [ seeds ] if pairs.last.is_a?(Fixnum)
      pairs = pairs.collect do |hostport|
        if hostport.is_a?(String)
          host, port = hostport.split(':')
          [ host, port && port.to_i || MongoClient::DEFAULT_PORT ]
        else
          hostport
        end
      end
      pairs.length > 1 ? pairs : pairs.first
    end

    def is_i?(value)
      return !!(value =~ /^\d+$/)
    end

    # Determine if a database command has succeeded by
    # checking the document response.
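The two helpers above implement the MONGODB-CR key derivation: the stored password digest is `MD5("user:mongo:password")`, and the per-handshake key is `MD5(nonce + username + password_digest)`. Runnable on its own (the username, password, and nonce below are made-up sample values):

```ruby
require 'digest/md5'

# Same derivation as Support#hash_password and Support#auth_key.
def hash_password(username, plaintext)
  Digest::MD5.hexdigest("#{username}:mongo:#{plaintext}")
end

def auth_key(username, password, nonce)
  Digest::MD5.hexdigest("#{nonce}#{username}#{hash_password(username, password)}")
end

key = auth_key('bob', 's3cret', 'abc123')
key.length # => 32, since MD5 hex digests are always 32 characters
```

Because only the digest enters the key, the server can verify the response using its stored `MD5("user:mongo:password")` value without ever seeing the plaintext password on the wire.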
    #
    # @param [Hash] doc
    #
    # @return [Boolean] true if the 'ok' key is either 1 or *true*.
    def ok?(doc)
      doc['ok'] == 1.0 || doc['ok'] == true
    end

  end
end

ruby-mongo-1.9.2/lib/mongo/util/tcp_socket.rb

# Copyright (C) 2013 10gen Inc.

require 'socket'
require 'timeout'

module Mongo

  # Wrapper class for Socket
  #
  # Emulates TCPSocket with operation and connection timeout
  # sans Timeout::timeout
  #
  class TCPSocket
    include SocketUtil

    def initialize(host, port, op_timeout=nil, connect_timeout=nil, opts={})
      @op_timeout      = op_timeout
      @connect_timeout = connect_timeout
      @pid             = Process.pid

      # TODO: Prefer ipv6 if server is ipv6 enabled
      @address = Socket.getaddrinfo(host, nil, Socket::AF_INET).first[3]
      @port    = port

      @socket_address = Socket.pack_sockaddr_in(@port, @address)
      @socket = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
      @socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
      connect
    end

    def connect
      if @connect_timeout
        Timeout::timeout(@connect_timeout, ConnectionTimeoutError) do
          @socket.connect(@socket_address)
        end
      else
        @socket.connect(@socket_address)
      end
    end

    def send(data)
      @socket.write(data)
    end

    def read(maxlen, buffer)
      # Block on data to read for @op_timeout seconds
      begin
        ready = IO.select([@socket], nil, [@socket], @op_timeout)
        unless ready
          raise OperationTimeout
        end
      rescue IOError
        raise ConnectionFailure
      end

      # Read data from socket
      begin
@socket.sysread(maxlen, buffer) rescue SystemCallError, IOError => ex raise ConnectionFailure, ex end end end end ruby-mongo-1.9.2/lib/mongo/util/thread_local_variable_manager.rb000066400000000000000000000014661221200727400247740ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #:nodoc: module Mongo module ThreadLocalVariableManager def thread_local Thread.current[:mongo_thread_locals] ||= Hash.new do |hash, key| hash[key] = Hash.new unless hash.key? key hash[key] end end end endruby-mongo-1.9.2/lib/mongo/util/unix_socket.rb000066400000000000000000000022611221200727400213410ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
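The `ThreadLocalVariableManager#thread_local` helper above gives each thread a lazily created Hash whose entries default to empty Hashes. A standalone sketch of that pattern (plain Ruby, illustrative names only — `ThreadLocalSketch` is not part of the driver):

```ruby
# Minimal sketch of the thread-local storage pattern used by
# ThreadLocalVariableManager: state kept in Thread.current is private
# to each thread, and missing keys default to fresh empty Hashes.
module ThreadLocalSketch
  def self.thread_local
    Thread.current[:sketch_locals] ||= Hash.new { |hash, key| hash[key] = {} }
  end
end

ThreadLocalSketch.thread_local[:sockets][:primary] = 'conn-1'
main_view = ThreadLocalSketch.thread_local[:sockets]

# A different thread sees its own, initially empty, Hash.
other_view = Thread.new { ThreadLocalSketch.thread_local[:sockets] }.value
```

The driver version assigns inside the default block with `hash[key] = Hash.new unless hash.key? key`; the direct `hash[key] = {}` assignment above is the equivalent idiomatic form.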
require 'socket' module Mongo # Wrapper class for Socket # # Emulates UNIXSocket with operation and connection timeout # sans Timeout::timeout # class UNIXSocket < TCPSocket def initialize(socket_path, port=:socket, op_timeout=nil, connect_timeout=nil, opts={}) @op_timeout = op_timeout @connect_timeout = connect_timeout @address = socket_path @port = :socket # purposely override input @socket_address = Socket.pack_sockaddr_un(@address) @socket = Socket.new(Socket::AF_UNIX, Socket::SOCK_STREAM, 0) connect end end end ruby-mongo-1.9.2/lib/mongo/util/uri_parser.rb000066400000000000000000000264011221200727400211630ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
require 'cgi'
require 'uri'

module Mongo
  class URIParser

    USER_REGEX = /(.+)/
    PASS_REGEX = /([^@,]+)/
    AUTH_REGEX = /(#{USER_REGEX}:#{PASS_REGEX}@)?/

    HOST_REGEX = /([-.\w]+)/
    PORT_REGEX = /(?::(\w+))?/
    NODE_REGEX = /((#{HOST_REGEX}#{PORT_REGEX},?)+)/

    PATH_REGEX = /(?:\/([-\w]+))?/

    MONGODB_URI_MATCHER = /#{AUTH_REGEX}#{NODE_REGEX}#{PATH_REGEX}/
    MONGODB_URI_SPEC = "mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]"

    SPEC_ATTRS = [:nodes, :auths]

    READ_PREFERENCES = {
      "primary"            => :primary,
      "primarypreferred"   => :primary_preferred,
      "secondary"          => :secondary,
      "secondarypreferred" => :secondary_preferred,
      "nearest"            => :nearest
    }

    OPT_ATTRS = [
      :connect,
      :connecttimeoutms,
      :fsync,
      :journal,
      :pool_size,
      :readpreference,
      :replicaset,
      :safe,
      :slaveok,
      :sockettimeoutms,
      :ssl,
      :w,
      :wtimeout,
      :wtimeoutms
    ]

    OPT_VALID = {
      :connect          => lambda { |arg| [ 'direct', 'replicaset', 'true', 'false', true, false ].include?(arg) },
      :connecttimeoutms => lambda { |arg| arg =~ /^\d+$/ },
      :fsync            => lambda { |arg| ['true', 'false'].include?(arg) },
      :journal          => lambda { |arg| ['true', 'false'].include?(arg) },
      :pool_size        => lambda { |arg| arg.to_i > 0 },
      :readpreference   => lambda { |arg| READ_PREFERENCES.keys.include?(arg) },
      :replicaset       => lambda { |arg| arg.length > 0 },
      :safe             => lambda { |arg| ['true', 'false'].include?(arg) },
      :slaveok          => lambda { |arg| ['true', 'false'].include?(arg) },
      :sockettimeoutms  => lambda { |arg| arg =~ /^\d+$/ },
      :ssl              => lambda { |arg| ['true', 'false'].include?(arg) },
      :w                => lambda { |arg| arg =~ /^\w+$/ },
      :wtimeout         => lambda { |arg| arg =~ /^\d+$/ },
      :wtimeoutms       => lambda { |arg| arg =~ /^\d+$/ }
    }

    OPT_ERR = {
      :connect          => "must be 'direct', 'replicaset', 'true', or 'false'",
      :connecttimeoutms => "must be an integer specifying milliseconds",
      :fsync            => "must be 'true' or 'false'",
      :journal          => "must be 'true' or 'false'",
      :pool_size        => "must be an integer greater than zero",
      :readpreference   => "must be one of #{READ_PREFERENCES.keys.map(&:inspect).join(",")}",
      :replicaset       => "must be a string containing the name of the replica set to connect to",
      :safe             => "must be 'true' or 'false'",
      :slaveok          => "must be 'true' or 'false'",
      :sockettimeoutms  => "must be an integer specifying milliseconds",
      :ssl              => "must be 'true' or 'false'",
      :w                => "must be an integer indicating number of nodes to replicate to or a string " +
                          "specifying that replication is required to the majority or nodes with a " +
                          "particular getLastErrorMode.",
      :wtimeout         => "must be an integer specifying milliseconds",
      :wtimeoutms       => "must be an integer specifying milliseconds"
    }

    OPT_CONV = {
      :connect          => lambda { |arg| arg == 'false' ? false : arg }, # convert 'false' to FalseClass
      :connecttimeoutms => lambda { |arg| arg.to_f / 1000 }, # stored as seconds
      :fsync            => lambda { |arg| arg == 'true' ? true : false },
      :journal          => lambda { |arg| arg == 'true' ? true : false },
      :pool_size        => lambda { |arg| arg.to_i },
      :readpreference   => lambda { |arg| READ_PREFERENCES[arg] },
      :replicaset       => lambda { |arg| arg },
      :safe             => lambda { |arg| arg == 'true' ? true : false },
      :slaveok          => lambda { |arg| arg == 'true' ? true : false },
      :sockettimeoutms  => lambda { |arg| arg.to_f / 1000 }, # stored as seconds
      :ssl              => lambda { |arg| arg == 'true' ? true : false },
      :w                => lambda { |arg| Mongo::Support.is_i?(arg) ? arg.to_i : arg.to_sym },
      :wtimeout         => lambda { |arg| arg.to_i },
      :wtimeoutms       => lambda { |arg| arg.to_i }
    }

    attr_reader :auths, :connect, :connecttimeoutms, :fsync, :journal, :nodes,
                :pool_size, :readpreference, :replicaset, :safe, :slaveok,
                :sockettimeoutms, :ssl, :w, :wtimeout, :wtimeoutms

    # Parse a MongoDB URI. This method is used by MongoClient.from_uri.
    # Returns an array of nodes and an array of db authorizations, if applicable.
    #
    # @note Passwords can contain any character except for ','
    #
    # @param [String] uri The MongoDB URI string.
    # @param [Hash,nil] extra_opts Extra options.
Will override anything already specified in the URI. # # @core connections def initialize(uri) if uri.start_with?('mongodb://') uri = uri[10..-1] else raise MongoArgumentError, "MongoDB URI must match this spec: #{MONGODB_URI_SPEC}" end hosts, opts = uri.split('?') parse_hosts(hosts) parse_options(opts) validate_connect end # Create a Mongo::MongoClient or a Mongo::MongoReplicaSetClient based on the URI. # # @note Don't confuse this with attribute getter method #connect. # # @return [MongoClient,MongoReplicaSetClient] def connection(extra_opts, legacy = false, sharded = false) opts = connection_options.merge!(extra_opts) if(legacy) if replicaset? ReplSetConnection.new(node_strings, opts) else Connection.new(host, port, opts) end else if sharded MongoShardedClient.new(node_strings, opts) elsif replicaset? MongoReplicaSetClient.new(node_strings, opts) else MongoClient.new(host, port, opts) end end end # Whether this represents a replica set. # @return [true,false] def replicaset? replicaset.is_a?(String) || nodes.length > 1 end # Whether to immediately connect to the MongoDB node[s]. Defaults to true. # @return [true, false] def connect? connect != false end # Whether this represents a direct connection. # # @note Specifying :connect => 'direct' has no effect... other than to raise an exception if other variables suggest a replicaset. # # @return [true,false] def direct? !replicaset? end # For direct connections, the host of the (only) node. # @return [String] def host nodes[0][0] end # For direct connections, the port of the (only) node. # @return [Integer] def port nodes[0][1].to_i end # Options that can be passed to MongoClient.new or MongoReplicaSetClient.new # @return [Hash] def connection_options opts = {} if @wtimeout warn "Using wtimeout in a URI is deprecated, please use wtimeoutMS. It will be removed in v2.0." 
opts[:wtimeout] = @wtimeout end opts[:wtimeout] = @wtimeoutms opts[:w] = 1 if @safe opts[:w] = @w if @w opts[:j] = @journal opts[:fsync] = @fsync if @connecttimeoutms opts[:connect_timeout] = @connecttimeoutms end if @sockettimeoutms opts[:op_timeout] = @sockettimeoutms end if @pool_size opts[:pool_size] = @pool_size end if @readpreference opts[:read] = @readpreference end if @slaveok && !@readpreference unless replicaset? opts[:slave_ok] = true else opts[:read] = :secondary_preferred end end opts[:ssl] = @ssl opts[:auths] = auths if replicaset.is_a?(String) opts[:name] = replicaset end opts[:default_db] = @db opts[:connect] = connect? opts end def node_strings nodes.map { |node| node.join(':') } end private def parse_hosts(uri_without_proto) @nodes = [] @auths = [] matches = MONGODB_URI_MATCHER.match(uri_without_proto) if !matches raise MongoArgumentError, "MongoDB URI must match this spec: #{MONGODB_URI_SPEC}" end uname = matches[2] pwd = matches[3] hosturis = matches[4].split(',') @db = matches[8] hosturis.each do |hosturi| # If port is present, use it, otherwise use default port host, port = hosturi.split(':') + [MongoClient::DEFAULT_PORT] if !(port.to_s =~ /^\d+$/) raise MongoArgumentError, "Invalid port #{port}; port must be specified as digits." end port = port.to_i @nodes << [host, port] end if @nodes.empty? raise MongoArgumentError, "No nodes specified. Please ensure that you've provided at least one node." end if uname && pwd && @db auths << { :db_name => @db, :username => URI.unescape(uname), :password => URI.unescape(pwd) } elsif uname || pwd raise MongoArgumentError, 'MongoDB URI must include username, ' + 'password, and db if username and ' + 'password are specified.' end end # This method uses the lambdas defined in OPT_VALID and OPT_CONV to validate # and convert the given options. 
def parse_options(string_opts) # initialize instance variables for available options OPT_VALID.keys.each { |k| instance_variable_set("@#{k}", nil) } string_opts ||= '' return if string_opts.empty? if string_opts.include?(';') and string_opts.include?('&') raise MongoArgumentError, "must not mix URL separators ; and &" end opts = CGI.parse(string_opts).inject({}) do |memo, (key, value)| value = value.first memo[key.downcase.to_sym] = value.strip.downcase memo end opts.each do |key, value| if !OPT_ATTRS.include?(key) raise MongoArgumentError, "Invalid Mongo URI option #{key}" end if OPT_VALID[key].call(value) instance_variable_set("@#{key}", OPT_CONV[key].call(value)) else raise MongoArgumentError, "Invalid value #{value.inspect} for #{key}: #{OPT_ERR[key]}" end end end def validate_connect if replicaset? and @connect == 'direct' # Make sure the user doesn't specify something contradictory raise MongoArgumentError, "connect=direct conflicts with setting a replicaset name" end end end end ruby-mongo-1.9.2/lib/mongo/util/write_concern.rb000066400000000000000000000041001221200727400216410ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
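`parse_options` above normalizes query options by CGI-parsing the option string and folding each key to a downcased Symbol before validation and conversion. A standalone sketch of that normalization step (stdlib `cgi` only; the method name is illustrative, not part of the driver):

```ruby
require 'cgi'

# Fold a MongoDB-URI option string into a Hash keyed by downcased
# Symbols with downcased String values, mirroring the normalization
# step in parse_options above.
def normalize_uri_options(string_opts)
  CGI.parse(string_opts).inject({}) do |memo, (key, value)|
    memo[key.downcase.to_sym] = value.first.strip.downcase
    memo
  end
end

opts = normalize_uri_options('replicaSet=rs0&w=2&readPreference=secondaryPreferred')
```

After this step the driver checks each key against `OPT_ATTRS` and runs the matching `OPT_VALID`/`OPT_CONV` lambdas on the value.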
module Mongo
  module WriteConcern

    attr_reader :legacy_write_concern

    @@safe_warn = nil

    def write_concern_from_legacy(opts)
      # Warn if the deprecated 'safe' parameter is being used.
      if opts.key?(:safe) && !@@safe_warn && !ENV['TEST_MODE']
        warn "[DEPRECATED] The 'safe' write concern option has been deprecated in favor of 'w'."
        @@safe_warn = true
      end

      # nil:   set :w => 0
      # false: set :w => 0
      # true:  set :w => 1
      # hash:  set :w => 0 and merge with opts
      unless opts.has_key?(:w)
        opts[:w] = 0 # legacy default, unacknowledged
        safe = opts.delete(:safe)
        if(safe && safe.is_a?(Hash))
          opts.merge!(safe)
        elsif(safe == true)
          opts[:w] = 1
        end
      end
    end

    # todo: throw exception for conflicting write concern options
    def get_write_concern(opts, parent=nil)
      write_concern_from_legacy(opts) if opts.key?(:safe) || legacy_write_concern
      write_concern = {
        :w        => 1,
        :j        => false,
        :fsync    => false,
        :wtimeout => nil
      }
      write_concern.merge!(parent.write_concern) if parent
      write_concern.merge!(opts.reject {|k,v| !write_concern.keys.include?(k)})
      write_concern
    end

    def self.gle?(write_concern)
      (write_concern[:w].is_a? Symbol) ||
        (write_concern[:w].is_a? String) ||
        write_concern[:w] > 0 ||
        write_concern[:j] ||
        write_concern[:fsync] ||
        write_concern[:wtimeout]
    end

  end
end
ruby-mongo-1.9.2/metadata.gz.sig
ruby-mongo-1.9.2/metadata.yml
--- !ruby/object:Gem::Specification
name: mongo
version: !ruby/object:Gem::Version
  version: 1.9.2
platform: ruby
authors:
- Tyler Brock
- Gary Murakami
- Emily Stolfo
- Brandon Black
- Durran Jordan
autorequire:
bindir: bin
cert_chain:
- |
  -----BEGIN CERTIFICATE-----
  MIIDODCCAiCgAwIBAgIBADANBgkqhkiG9w0BAQUFADBCMRQwEgYDVQQDDAtkcml2
  ZXItcnVieTEVMBMGCgmSJomT8ixkARkWBTEwZ2VuMRMwEQYKCZImiZPyLGQBGRYD
  Y29tMB4XDTEzMDIwMTE0MTEzN1oXDTE0MDIwMTE0MTEzN1owQjEUMBIGA1UEAwwL
  ZHJpdmVyLXJ1YnkxFTATBgoJkiaJk/IsZAEZFgUxMGdlbjETMBEGCgmSJomT8ixk
  ARkWA2NvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANFdSAa8fRm1
  bAM9za6Z0fAH4g02bqM1NGnw8zJQrE/PFrFfY6IFCT2AsLfOwr1maVm7iU1+kdVI
  IQ+iI/9+E+ArJ+rbGV3dDPQ+SLl3mLT+vXjfjcxMqI2IW6UuVtt2U3Rxd4QU0kdT
  JxmcPYs5fDN6BgYc6XXgUjy3m+Kwha2pGctdciUOwEfOZ4RmNRlEZKCMLRHdFP8j
  4WTnJSGfXDiuoXICJb5yOPOZPuaapPSNXp93QkUdsqdKC32I+KMpKKYGBQ6yisfA
  5MyVPPCzLR1lP5qXVGJPnOqUAkvEUfCahg7EP9tI20qxiXrR6TSEraYhIFXL0EGY
  u8KAcPHm5KkCAwEAAaM5MDcwCQYDVR0TBAIwADAdBgNVHQ4EFgQUW3dZsX70mlSM
  CiPrZxAGA1vwfNcwCwYDVR0PBAQDAgSwMA0GCSqGSIb3DQEBBQUAA4IBAQCIa/Y6
  xS7YWBxkn9WP0EMnJ3pY9vef9DTmLSi/2jz8PzwlKQ89zNTrqSUD8LoQZmBqCJBt
  dKSQ/RUnaHJuxh8HWvWubP8EBYTuf+I1DFnRv648IF3MR1tCQumVL0XcYMvZcxBj
  a/p+8DomWTQqUdNbNoGywwjtVBWfDdwFV8Po1XcN/AtpILOJQd9J77INIGGCHxZo
  6SOHHaNknlE9H0w6q0SVxZKZI8/+2c447V0NrHIw1Qhe0tAGJ9V1u3ky8gyxe0SM
  8v7zLF2XliYbfurYIwkcXs8yPn8ggApBIy9bX6VJxRs/l2+UvqzaHIFaFy/F8/GP
  RNTuXsVG5NDACo7Q
  -----END CERTIFICATE-----
date: 2013-08-21 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: bson
  requirement:
!ruby/object:Gem::Requirement requirements: - - ~> - !ruby/object:Gem::Version version: 1.9.2 type: :runtime prerelease: false version_requirements: !ruby/object:Gem::Requirement requirements: - - ~> - !ruby/object:Gem::Version version: 1.9.2 description: A Ruby driver for MongoDB. For more information about Mongo, see http://www.mongodb.org. email: mongodb-dev@googlegroups.com executables: - mongo_console extensions: [] extra_rdoc_files: [] files: - mongo.gemspec - LICENSE - VERSION - README.md - Rakefile - bin/mongo_console - lib/mongo.rb - lib/mongo/collection.rb - lib/mongo/cursor.rb - lib/mongo/db.rb - lib/mongo/exceptions.rb - lib/mongo/gridfs/grid.rb - lib/mongo/gridfs/grid_ext.rb - lib/mongo/gridfs/grid_file_system.rb - lib/mongo/gridfs/grid_io.rb - lib/mongo/legacy.rb - lib/mongo/mongo_client.rb - lib/mongo/mongo_replica_set_client.rb - lib/mongo/mongo_sharded_client.rb - lib/mongo/networking.rb - lib/mongo/util/conversions.rb - lib/mongo/util/core_ext.rb - lib/mongo/util/logging.rb - lib/mongo/util/node.rb - lib/mongo/util/pool.rb - lib/mongo/util/pool_manager.rb - lib/mongo/util/read_preference.rb - lib/mongo/util/server_version.rb - lib/mongo/util/sharding_pool_manager.rb - lib/mongo/util/socket_util.rb - lib/mongo/util/ssl_socket.rb - lib/mongo/util/support.rb - lib/mongo/util/tcp_socket.rb - lib/mongo/util/thread_local_variable_manager.rb - lib/mongo/util/unix_socket.rb - lib/mongo/util/uri_parser.rb - lib/mongo/util/write_concern.rb - test/functional/authentication_test.rb - test/functional/collection_test.rb - test/functional/connection_test.rb - test/functional/conversions_test.rb - test/functional/cursor_fail_test.rb - test/functional/cursor_message_test.rb - test/functional/cursor_test.rb - test/functional/db_api_test.rb - test/functional/db_connection_test.rb - test/functional/db_test.rb - test/functional/grid_file_system_test.rb - test/functional/grid_io_test.rb - test/functional/grid_test.rb - test/functional/pool_test.rb - 
test/functional/safe_test.rb - test/functional/ssl_test.rb - test/functional/support_test.rb - test/functional/threading_test.rb - test/functional/timeout_test.rb - test/functional/uri_test.rb - test/functional/write_concern_test.rb - test/replica_set/authentication_test.rb - test/replica_set/basic_test.rb - test/replica_set/client_test.rb - test/replica_set/complex_connect_test.rb - test/replica_set/connection_test.rb - test/replica_set/count_test.rb - test/replica_set/cursor_test.rb - test/replica_set/insert_test.rb - test/replica_set/max_values_test.rb - test/replica_set/pinning_test.rb - test/replica_set/query_test.rb - test/replica_set/read_preference_test.rb - test/replica_set/refresh_test.rb - test/replica_set/replication_ack_test.rb - test/replica_set/ssl_test.rb - test/sharded_cluster/basic_test.rb - test/shared/authentication.rb - test/test_helper.rb - test/threading/basic_test.rb - test/tools/mongo_config.rb - test/tools/mongo_config_test.rb - test/unit/client_test.rb - test/unit/collection_test.rb - test/unit/connection_test.rb - test/unit/cursor_test.rb - test/unit/db_test.rb - test/unit/grid_test.rb - test/unit/mongo_sharded_client_test.rb - test/unit/node_test.rb - test/unit/pool_manager_test.rb - test/unit/pool_test.rb - test/unit/read_pref_test.rb - test/unit/read_test.rb - test/unit/safe_test.rb - test/unit/sharding_pool_manager_test.rb - test/unit/util_test.rb - test/unit/write_concern_test.rb homepage: http://www.mongodb.org licenses: - Apache License Version 2.0 metadata: {} post_install_message: rdoc_options: [] require_paths: - lib required_ruby_version: !ruby/object:Gem::Requirement requirements: - - '>=' - !ruby/object:Gem::Version version: '0' required_rubygems_version: !ruby/object:Gem::Requirement requirements: - - '>=' - !ruby/object:Gem::Version version: '0' requirements: [] rubyforge_project: mongo rubygems_version: 2.0.7 signing_key: specification_version: 4 summary: Ruby driver for MongoDB test_files: - 
test/functional/authentication_test.rb - test/functional/collection_test.rb - test/functional/connection_test.rb - test/functional/conversions_test.rb - test/functional/cursor_fail_test.rb - test/functional/cursor_message_test.rb - test/functional/cursor_test.rb - test/functional/db_api_test.rb - test/functional/db_connection_test.rb - test/functional/db_test.rb - test/functional/grid_file_system_test.rb - test/functional/grid_io_test.rb - test/functional/grid_test.rb - test/functional/pool_test.rb - test/functional/safe_test.rb - test/functional/ssl_test.rb - test/functional/support_test.rb - test/functional/threading_test.rb - test/functional/timeout_test.rb - test/functional/uri_test.rb - test/functional/write_concern_test.rb - test/replica_set/authentication_test.rb - test/replica_set/basic_test.rb - test/replica_set/client_test.rb - test/replica_set/complex_connect_test.rb - test/replica_set/connection_test.rb - test/replica_set/count_test.rb - test/replica_set/cursor_test.rb - test/replica_set/insert_test.rb - test/replica_set/max_values_test.rb - test/replica_set/pinning_test.rb - test/replica_set/query_test.rb - test/replica_set/read_preference_test.rb - test/replica_set/refresh_test.rb - test/replica_set/replication_ack_test.rb - test/replica_set/ssl_test.rb - test/sharded_cluster/basic_test.rb - test/shared/authentication.rb - test/test_helper.rb - test/threading/basic_test.rb - test/tools/mongo_config.rb - test/tools/mongo_config_test.rb - test/unit/client_test.rb - test/unit/collection_test.rb - test/unit/connection_test.rb - test/unit/cursor_test.rb - test/unit/db_test.rb - test/unit/grid_test.rb - test/unit/mongo_sharded_client_test.rb - test/unit/node_test.rb - test/unit/pool_manager_test.rb - test/unit/pool_test.rb - test/unit/read_pref_test.rb - test/unit/read_test.rb - test/unit/safe_test.rb - test/unit/sharding_pool_manager_test.rb - test/unit/util_test.rb - test/unit/write_concern_test.rb has_rdoc: yard 
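metadata.yml above declares a runtime dependency on bson with the pessimistic constraint `~> 1.9.2` (mirrored by `s.add_dependency('bson', "~> #{s.version}")` in the gemspec). A quick stdlib illustration of what that operator admits:

```ruby
require 'rubygems'

# '~> 1.9.2' means '>= 1.9.2 and < 1.10': patch releases within the
# 1.9 series satisfy the constraint, the next minor release does not.
req = Gem::Requirement.new('~> 1.9.2')

same_patch = req.satisfied_by?(Gem::Version.new('1.9.2'))
later_patch = req.satisfied_by?(Gem::Version.new('1.9.9'))
next_minor = req.satisfied_by?(Gem::Version.new('1.10.0'))
```

Pinning bson this way keeps the driver and its serialization gem moving in lockstep across patch releases.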
ruby-mongo-1.9.2/mongo.gemspec000066400000000000000000000024041221200727400163020ustar00rootroot00000000000000Gem::Specification.new do |s| s.name = 'mongo' s.version = File.read(File.join(File.dirname(__FILE__), 'VERSION')) s.platform = Gem::Platform::RUBY s.authors = ['Tyler Brock', 'Gary Murakami', 'Emily Stolfo', 'Brandon Black', 'Durran Jordan'] s.email = 'mongodb-dev@googlegroups.com' s.homepage = 'http://www.mongodb.org' s.summary = 'Ruby driver for MongoDB' s.description = 'A Ruby driver for MongoDB. For more information about Mongo, see http://www.mongodb.org.' s.rubyforge_project = 'mongo' s.license = 'Apache License Version 2.0' if File.exists?('gem-private_key.pem') s.signing_key = 'gem-private_key.pem' s.cert_chain = ['gem-public_cert.pem'] else warn 'Warning: No private key present, creating unsigned gem.' end s.files = ['mongo.gemspec', 'LICENSE', 'VERSION'] s.files += ['README.md', 'Rakefile', 'bin/mongo_console'] s.files += ['lib/mongo.rb'] + Dir['lib/mongo/**/*.rb'] s.test_files = Dir['test/**/*.rb'] - Dir['test/bson/*'] s.executables = ['mongo_console'] s.require_paths = ['lib'] s.has_rdoc = 'yard' s.add_dependency('bson', "~> #{s.version}") end ruby-mongo-1.9.2/test/000077500000000000000000000000001221200727400145755ustar00rootroot00000000000000ruby-mongo-1.9.2/test/functional/000077500000000000000000000000001221200727400167375ustar00rootroot00000000000000ruby-mongo-1.9.2/test/functional/authentication_test.rb000066400000000000000000000024021221200727400233400ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' require 'shared/authentication' class AuthenticationTest < Test::Unit::TestCase include Mongo include AuthenticationTests def setup @client = MongoClient.new @db = @client[MONGO_TEST_DB] init_auth end def test_authenticate_with_connection_uri @db.add_user('eunice', 'uritest') client = MongoClient.from_uri("mongodb://eunice:uritest@#{host_port}/#{@db.name}") assert client assert_equal client.auths.size, 1 assert client[MONGO_TEST_DB]['auth_test'].count auth = client.auths.first assert_equal @db.name, auth[:db_name] assert_equal 'eunice', auth[:username] assert_equal 'uritest', auth[:password] end end ruby-mongo-1.9.2/test/functional/collection_test.rb000066400000000000000000001360111221200727400224600ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
require 'rbconfig' require 'test_helper' class TestCollection < Test::Unit::TestCase @@client ||= standard_connection(:op_timeout => 10) @@db = @@client.db(MONGO_TEST_DB) @@test = @@db.collection("test") @@version = @@client.server_version def setup @@test.remove end def test_capped_method @@db.create_collection('normal') assert !@@db['normal'].capped? @@db.drop_collection('normal') @@db.create_collection('c', :capped => true, :size => 100_000) assert @@db['c'].capped? @@db.drop_collection('c') end def test_optional_pk_factory @coll_default_pk = @@db.collection('stuff') assert_equal BSON::ObjectId, @coll_default_pk.pk_factory @coll_default_pk = @@db.create_collection('more-stuff') assert_equal BSON::ObjectId, @coll_default_pk.pk_factory # Create a db with a pk_factory. @db = MongoClient.new(ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost', ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT).db(MONGO_TEST_DB, :pk => Object.new) @coll = @db.collection('coll-with-pk') assert @coll.pk_factory.is_a?(Object) @coll = @db.create_collection('created_coll_with_pk') assert @coll.pk_factory.is_a?(Object) end class TestPK def self.create_pk end end def test_pk_factory_on_collection silently do @coll = Collection.new('foo', @@db, TestPK) assert_equal TestPK, @coll.pk_factory end @coll2 = Collection.new('foo', @@db, :pk => TestPK) assert_equal TestPK, @coll2.pk_factory end def test_valid_names assert_raise Mongo::InvalidNSName do @@db["te$t"] end assert_raise Mongo::InvalidNSName do @@db['$main'] end assert @@db['$cmd'] assert @@db['oplog.$main'] end def test_collection assert_kind_of Collection, @@db["test"] assert_equal @@db["test"].name(), @@db.collection("test").name() assert_equal @@db["test"].name(), @@db[:test].name() assert_kind_of Collection, @@db["test"]["foo"] assert_equal @@db["test"]["foo"].name(), @@db.collection("test.foo").name() assert_equal @@db["test"]["foo"].name(), @@db["test.foo"].name() @@db["test"]["foo"].remove @@db["test"]["foo"].insert("x" => 5) 
assert_equal 5, @@db.collection("test.foo").find_one()["x"] end def test_rename_collection @@db.drop_collection('foo1') @@db.drop_collection('bar1') @col = @@db.create_collection('foo1') assert_equal 'foo1', @col.name @col.rename('bar1') assert_equal 'bar1', @col.name end def test_nil_id assert_equal 5, @@test.insert({"_id" => 5, "foo" => "bar"}) assert_equal 5, @@test.save({"_id" => 5, "foo" => "baz"}) assert_equal nil, @@test.find_one("foo" => "bar") assert_equal "baz", @@test.find_one(:_id => 5)["foo"] assert_raise OperationFailure do @@test.insert({"_id" => 5, "foo" => "bar"}) end assert_equal nil, @@test.insert({"_id" => nil, "foo" => "bar"}) assert_equal nil, @@test.save({"_id" => nil, "foo" => "baz"}) assert_equal nil, @@test.find_one("foo" => "bar") assert_equal "baz", @@test.find_one(:_id => nil)["foo"] assert_raise OperationFailure do @@test.insert({"_id" => nil, "foo" => "bar"}) end assert_raise OperationFailure do @@test.insert({:_id => nil, "foo" => "bar"}) end end if @@version > "1.1" def setup_for_distinct @@test.remove @@test.insert([{:a => 0, :b => {:c => "a"}}, {:a => 1, :b => {:c => "b"}}, {:a => 1, :b => {:c => "c"}}, {:a => 2, :b => {:c => "a"}}, {:a => 3}, {:a => 3}]) end def test_distinct_queries setup_for_distinct assert_equal [0, 1, 2, 3], @@test.distinct(:a).sort assert_equal ["a", "b", "c"], @@test.distinct("b.c").sort end if @@version >= "1.2" def test_filter_collection_with_query setup_for_distinct assert_equal [2, 3], @@test.distinct(:a, {:a => {"$gt" => 1}}).sort end def test_filter_nested_objects setup_for_distinct assert_equal ["a", "b"], @@test.distinct("b.c", {"b.c" => {"$ne" => "c"}}).sort end end end def test_safe_insert @@test.create_index("hello", :unique => true) a = {"hello" => "world"} @@test.insert(a) @@test.insert(a, :w => 0) assert(@@db.get_last_error['err'].include?("11000")) assert_raise OperationFailure do @@test.insert(a) end end def test_bulk_insert docs = [] docs << {:foo => 1} docs << {:foo => 2} docs << {:foo => 
3} response = @@test.insert(docs) assert_equal 3, response.length assert response.all? {|id| id.is_a?(BSON::ObjectId)} assert_equal 3, @@test.count end def test_bulk_insert_with_continue_on_error if @@version >= "2.0" @@test.create_index([["foo", 1]], :unique => true) docs = [] docs << {:foo => 1} docs << {:foo => 1} docs << {:foo => 2} docs << {:foo => 3} assert_raise OperationFailure do @@test.insert(docs) end assert_equal 1, @@test.count @@test.remove docs = [] docs << {:foo => 1} docs << {:foo => 1} docs << {:foo => 2} docs << {:foo => 3} assert_raise OperationFailure do @@test.insert(docs, :continue_on_error => true) end assert_equal 3, @@test.count @@test.remove @@test.drop_index("foo_1") end end def test_bson_valid_with_collect_on_error docs = [] docs << {:foo => 1} docs << {:bar => 1} doc_ids, error_docs = @@test.insert(docs, :collect_on_error => true) assert_equal 2, @@test.count assert_equal 2, doc_ids.count assert_equal error_docs, [] end def test_bson_invalid_key_serialize_error_with_collect_on_error docs = [] docs << {:foo => 1} docs << {:bar => 1} invalid_docs = [] invalid_docs << {'$invalid-key' => 1} invalid_docs << {'invalid.key' => 1} docs += invalid_docs assert_raise BSON::InvalidKeyName do @@test.insert(docs, :collect_on_error => false) end assert_equal 0, @@test.count doc_ids, error_docs = @@test.insert(docs, :collect_on_error => true) assert_equal 2, @@test.count assert_equal 2, doc_ids.count assert_equal error_docs, invalid_docs end def test_bson_invalid_encoding_serialize_error_with_collect_on_error # Broken for current JRuby if RUBY_PLATFORM == 'java' then return end docs = [] docs << {:foo => 1} docs << {:bar => 1} invalid_docs = [] invalid_docs << {"\223\372\226}" => 1} # non utf8 encoding docs += invalid_docs assert_raise BSON::InvalidStringEncoding do @@test.insert(docs, :collect_on_error => false) end assert_equal 0, @@test.count doc_ids, error_docs = @@test.insert(docs, :collect_on_error => true) assert_equal 2, @@test.count 
assert_equal 2, doc_ids.count
    assert_equal error_docs, invalid_docs
  end

  def test_insert_one_error_doc_with_collect_on_error
    invalid_doc = {'$invalid-key' => 1}
    invalid_docs = [invalid_doc]
    doc_ids, error_docs = @@test.insert(invalid_docs, :collect_on_error => true)
    assert_equal [], doc_ids
    assert_equal [invalid_doc], error_docs
  end

  def test_insert_empty_docs_raises_exception
    assert_raise OperationFailure do
      @@test.insert([])
    end
  end

  def test_insert_empty_docs_with_collect_on_error_raises_exception
    assert_raise OperationFailure do
      @@test.insert([], :collect_on_error => true)
    end
  end

  def limited_collection
    conn = standard_connection(:connect => false)
    admin_db = Object.new
    admin_db.expects(:command).returns({
      'ok'                  => 1,
      'ismaster'            => 1,
      'maxBsonObjectSize'   => 1024,
      'maxMessageSizeBytes' => 3 * 1024
    })
    conn.expects(:[]).with('admin').returns(admin_db)
    conn.connect
    return conn.db(MONGO_TEST_DB)["test"]
  end

  def test_non_operation_failure_halts_insertion_with_continue_on_error
    coll = limited_collection
    coll.stubs(:send_insert_message).raises(OperationTimeout).times(1)
    docs = []
    10.times do
      docs << {'foo' => 'a' * 950}
    end
    assert_raise OperationTimeout do
      coll.insert(docs, :continue_on_error => true)
    end
  end

  def test_chunking_batch_insert
    docs = []
    10.times do
      docs << {'foo' => 'a' * 950}
    end
    limited_collection.insert(docs)
    assert_equal 10, limited_collection.count
  end

  def test_chunking_batch_insert_without_collect_on_error
    docs = []
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    invalid_docs = []
    invalid_docs << {'$invalid-key' => 1} # invalid key name (keys may not start with '$')
    docs += invalid_docs
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    assert_raise BSON::InvalidKeyName do
      limited_collection.insert(docs, :collect_on_error => false)
    end
  end

  def test_chunking_batch_insert_with_collect_on_error
    # Broken for current JRuby
    if RUBY_PLATFORM == 'java' then return end
    docs = []
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    invalid_docs = []
    invalid_docs << {'$invalid-key' => 1} # invalid key name (keys may not start with '$')
    docs += invalid_docs
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    doc_ids, error_docs = limited_collection.insert(docs, :collect_on_error => true)
    assert_equal 8, doc_ids.count
    assert_equal doc_ids.count, limited_collection.count
    assert_equal error_docs, invalid_docs
  end

  def test_chunking_batch_insert_with_continue_on_error
    docs = []
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    docs << {'_id' => 'b', 'foo' => 'a'}
    docs << {'_id' => 'b', 'foo' => 'c'}
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    assert_raise OperationFailure do
      limited_collection.insert(docs, :continue_on_error => true)
    end
    assert_equal 9, limited_collection.count
  end

  def test_chunking_batch_insert_without_continue_on_error
    docs = []
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    docs << {'_id' => 'b', 'foo' => 'a'}
    docs << {'_id' => 'b', 'foo' => 'c'}
    4.times do
      docs << {'foo' => 'a' * 950}
    end
    assert_raise OperationFailure do
      limited_collection.insert(docs, :continue_on_error => false)
    end
    assert_equal 5, limited_collection.count
  end

  def test_maximum_insert_size
    docs = []
    3.times do
      docs << {'foo' => 'a' * 950}
    end
    assert_equal limited_collection.insert(docs).length, 3
  end

  def test_maximum_document_size
    assert_raise InvalidDocument do
      limited_collection.insert({'foo' => 'a' * 1024})
    end
  end

  def test_maximum_save_size
    assert limited_collection.save({'foo' => 'a' * 950})
    assert_raise InvalidDocument do
      limited_collection.save({'foo' => 'a' * 1024})
    end
  end

  def test_maximum_remove_size
    assert limited_collection.remove({'foo' => 'a' * 950})
    assert_raise InvalidDocument do
      limited_collection.remove({'foo' => 'a' * 1024})
    end
  end

  def test_maximum_update_size
    assert_raise InvalidDocument do
      limited_collection.update(
        {'foo' => 'a' * 1024},
        {'foo' => 'a' * 950}
      )
    end
    assert_raise InvalidDocument do
      limited_collection.update(
        {'foo' => 'a' * 950},
        {'foo' => 'a' * 1024}
      )
    end
    assert_raise InvalidDocument do
      limited_collection.update(
        {'foo' => 'a' * 1024},
        {'foo' => 'a' * 1024}
      )
    end
    assert limited_collection.update(
      {'foo' => 'a' * 950},
      {'foo' => 'a' * 950}
    )
  end

  def test_maximum_query_size
    assert limited_collection.find({'foo' => 'a' * 950}).to_a
    assert limited_collection.find(
      {'foo' => 'a' * 950},
      {:fields => {'foo' => 'a' * 950}}
    ).to_a
    assert_raise InvalidDocument do
      limited_collection.find({'foo' => 'a' * 1024}).to_a
    end
    assert_raise InvalidDocument do
      limited_collection.find(
        {'foo' => 'a' * 950},
        {:fields => {'foo' => 'a' * 1024}}
      ).to_a
    end
  end

  #if @@version >= "1.5.1"
  #  def test_safe_mode_with_advanced_safe_with_invalid_options
  #    assert_raise_error ArgumentError, "Unknown key(s): wtime" do
  #      @@test.insert({:foo => 1}, :w => 2, :wtime => 1, :fsync => true)
  #    end
  #    assert_raise_error ArgumentError, "Unknown key(s): wtime" do
  #      @@test.update({:foo => 1}, {:foo => 2}, :w => 2, :wtime => 1, :fsync => true)
  #    end
  #
  #    assert_raise_error ArgumentError, "Unknown key(s): wtime" do
  #      @@test.remove({:foo => 2}, :w => 2, :wtime => 1, :fsync => true)
  #    end
  #  end
  #end

  if @@version >= "2.0.0"
    def test_safe_mode_with_journal_commit_option
      @@test.insert({:foo => 1}, :j => true)
      @@test.update({:foo => 1}, {:foo => 2}, :j => true)
      @@test.remove({:foo => 2}, :j => true)
    end
  end

  def test_update
    id1 = @@test.save("x" => 5)
    @@test.update({}, {"$inc" => {"x" => 1}})
    assert_equal 1, @@test.count()
    assert_equal 6, @@test.find_one(:_id => id1)["x"]

    id2 = @@test.save("x" => 1)
    @@test.update({"x" => 6}, {"$inc" => {"x" => 1}})
    assert_equal 7, @@test.find_one(:_id => id1)["x"]
    assert_equal 1, @@test.find_one(:_id => id2)["x"]
  end

  def test_update_check_keys
    @@test.save("x" => 1)
    @@test.update({"x" => 1}, {"$set" => {"a.b" => 2}})
    assert_equal 2, @@test.find_one("x" => 1)["a"]["b"]

    assert_raise_error BSON::InvalidKeyName do
      @@test.update({"x" => 1}, {"a.b" => 3})
    end
  end

  if @@version >= "1.1.3"
    def test_multi_update
      @@test.save("num" => 10)
      @@test.save("num" => 10)
      @@test.save("num" => 10)
      assert_equal 3, @@test.count

      @@test.update({"num" => 10}, {"$set" => {"num" => 100}}, :multi => true)
      @@test.find.each do |doc|
        assert_equal 100, doc["num"]
      end
    end
  end

  def test_upsert
    @@test.update({"page" => "/"}, {"$inc" => {"count" => 1}}, :upsert => true)
    @@test.update({"page" => "/"}, {"$inc" => {"count" => 1}}, :upsert => true)

    assert_equal 1, @@test.count()
    assert_equal 2, @@test.find_one()["count"]
  end

  if @@version < "1.1.3"
    def test_safe_update
      @@test.create_index("x")
      @@test.insert("x" => 5)

      @@test.update({}, {"$inc" => {"x" => 1}})
      assert @@db.error?

      # Can't change an index.
      assert_raise OperationFailure do
        @@test.update({}, {"$inc" => {"x" => 1}})
      end
      @@test.drop
    end
  else
    def test_safe_update
      @@test.create_index("x", :unique => true)
      @@test.insert("x" => 5)
      @@test.insert("x" => 10)

      # Can update an indexed collection.
      @@test.update({}, {"$inc" => {"x" => 1}})
      assert !@@db.error?

      # Can't duplicate an index.
      assert_raise OperationFailure do
        @@test.update({}, {"x" => 10})
      end
      @@test.drop
    end
  end

  def test_safe_save
    @@test.create_index("hello", :unique => true)

    @@test.save("hello" => "world")
    @@test.save({"hello" => "world"}, :w => 0)

    assert_raise OperationFailure do
      @@test.save({"hello" => "world"})
    end
    @@test.drop
  end

  def test_mocked_safe_remove
    @client = standard_connection
    @db = @client[MONGO_TEST_DB]
    @test = @db['test-safe-remove']
    @test.save({:a => 20})
    @client.stubs(:receive).returns([[{'ok' => 0, 'err' => 'failed'}], 1, 0])

    assert_raise OperationFailure do
      @test.remove({})
    end
    @test.drop
  end

  def test_safe_remove
    @client = standard_connection
    @db = @client[MONGO_TEST_DB]
    @test = @db['test-safe-remove']
    @test.remove
    @test.save({:a => 50})
    assert_equal 1, @test.remove({})["n"]
    @test.drop
  end

  def test_remove_return_value
    assert_equal true, @@test.remove({}, :w => 0)
  end

  def test_count
    @@test.drop

    assert_equal 0, @@test.count
    @@test.save(:x => 1)
    @@test.save(:x => 2)
    assert_equal 2, @@test.count

    assert_equal 1, @@test.count(:query => {:x => 1})
    assert_equal 1, @@test.count(:limit => 1)
    assert_equal 0, @@test.count(:skip => 2)
  end

  # Note: #size is just an alias for #count.
  def test_size
    @@test.drop

    assert_equal 0, @@test.count
    assert_equal @@test.size, @@test.count
    @@test.save("x" => 1)
    @@test.save("x" => 2)
    assert_equal @@test.size, @@test.count
  end

  def test_no_timeout_option
    @@test.drop

    assert_raise ArgumentError, "Timeout can be set to false only when #find is invoked with a block." do
      @@test.find({}, :timeout => false)
    end

    @@test.find({}, :timeout => false) do |cursor|
      assert_equal 0, cursor.count
    end

    @@test.save("x" => 1)
    @@test.save("x" => 2)
    @@test.find({}, :timeout => false) do |cursor|
      assert_equal 2, cursor.count
    end
  end

  def test_default_timeout
    cursor = @@test.find
    assert_equal true, cursor.timeout
  end

  def test_fields_as_hash
    @@test.save(:a => 1, :b => 1, :c => 1)

    doc = @@test.find_one({:a => 1}, :fields => {:b => 0})
    assert_nil doc['b']
    assert doc['a']
    assert doc['c']

    doc = @@test.find_one({:a => 1}, :fields => {:a => 1, :b => 1})
    assert_nil doc['c']
    assert doc['a']
    assert doc['b']

    assert_raise Mongo::OperationFailure do
      @@test.find_one({:a => 1}, :fields => {:a => 1, :b => 0})
    end
  end

  if @@version >= "1.5.1"
    def test_fields_with_slice
      @@test.save({:foo => [1, 2, 3, 4, 5, 6], :test => 'slice'})

      doc = @@test.find_one({:test => 'slice'}, :fields => {'foo' => {'$slice' => [0, 3]}})
      assert_equal [1, 2, 3], doc['foo']
      @@test.remove
    end
  end

  def test_find_one
    id = @@test.save("hello" => "world", "foo" => "bar")

    assert_equal "world", @@test.find_one()["hello"]
    assert_equal @@test.find_one(id), @@test.find_one()
    assert_equal @@test.find_one(nil), @@test.find_one()
    assert_equal @@test.find_one({}), @@test.find_one()
    assert_equal @@test.find_one("hello" => "world"), @@test.find_one()
    assert_equal @@test.find_one(BSON::OrderedHash["hello", "world"]), @@test.find_one()

    assert @@test.find_one(nil, :fields => ["hello"]).include?("hello")
    assert !@@test.find_one(nil, :fields => ["foo"]).include?("hello")
    assert_equal ["_id"], @@test.find_one(nil, :fields => []).keys()

    assert_equal nil, @@test.find_one("hello" => "foo")
assert_equal nil, @@test.find_one(BSON::OrderedHash["hello", "foo"]) assert_equal nil, @@test.find_one(ObjectId.new) assert_raise TypeError do @@test.find_one(6) end end def test_insert_adds_id doc = {"hello" => "world"} @@test.insert(doc) assert(doc.include?(:_id)) docs = [{"hello" => "world"}, {"hello" => "world"}] @@test.insert(docs) docs.each do |d| assert(d.include?(:_id)) end end def test_save_adds_id doc = {"hello" => "world"} @@test.save(doc) assert(doc.include?(:_id)) end def test_optional_find_block 10.times do |i| @@test.save("i" => i) end x = nil @@test.find("i" => 2) { |cursor| x = cursor.count() } assert_equal 1, x i = 0 @@test.find({}, :skip => 5) do |cursor| cursor.each do |doc| i = i + 1 end end assert_equal 5, i c = nil @@test.find() do |cursor| c = cursor end assert c.closed? end def setup_aggregate_data # save some data @@test.save( { "_id" => 1, "title" => "this is my title", "author" => "bob", "posted" => Time.utc(2000), "pageViews" => 5 , "tags" => [ "fun" , "good" , "fun" ], "comments" => [ { "author" => "joe", "text" => "this is cool" }, { "author" => "sam", "text" => "this is bad" } ], "other" => { "foo" => 5 } } ) @@test.save( { "_id" => 2, "title" => "this is your title", "author" => "dave", "posted" => Time.utc(2001), "pageViews" => 7, "tags" => [ "fun" , "nasty" ], "comments" => [ { "author" => "barbara" , "text" => "this is interesting" }, { "author" => "jenny", "text" => "i like to play pinball", "votes" => 10 } ], "other" => { "bar" => 14 } }) @@test.save( { "_id" => 3, "title" => "this is some other title", "author" => "jane", "posted" => Time.utc(2002), "pageViews" => 6 , "tags" => [ "nasty", "filthy" ], "comments" => [ { "author" => "will" , "text" => "i don't like the color" } , { "author" => "jenny" , "text" => "can i get that in green?" 
        } ],
        "other" => { "bar" => 14 } })
  end

  if @@version > '2.1.1'
    def test_responds_to_aggregate
      assert_respond_to @@test, :aggregate
    end

    def test_aggregate_requires_arguments
      assert_raise MongoArgumentError do
        @@test.aggregate()
      end
    end

    def test_aggregate_requires_valid_arguments
      assert_raise MongoArgumentError do
        @@test.aggregate({})
      end
    end

    def test_aggregate_pipeline_operator_format
      assert_raise Mongo::OperationFailure do
        @@test.aggregate([{"$project" => "_id"}])
      end
    end

    def test_aggregate_pipeline_operators_using_strings
      setup_aggregate_data
      desired_results = [
        {"_id"=>1, "pageViews"=>5, "tags"=>["fun", "good", "fun"]},
        {"_id"=>2, "pageViews"=>7, "tags"=>["fun", "nasty"]},
        {"_id"=>3, "pageViews"=>6, "tags"=>["nasty", "filthy"]}
      ]
      results = @@test.aggregate([{"$project" => {"tags" => 1, "pageViews" => 1}}])
      assert_equal desired_results, results
    end

    def test_aggregate_pipeline_operators_using_symbols
      setup_aggregate_data
      desired_results = [
        {"_id"=>1, "pageViews"=>5, "tags"=>["fun", "good", "fun"]},
        {"_id"=>2, "pageViews"=>7, "tags"=>["fun", "nasty"]},
        {"_id"=>3, "pageViews"=>6, "tags"=>["nasty", "filthy"]}
      ]
      results = @@test.aggregate([{"$project" => {:tags => 1, :pageViews => 1}}])
      assert_equal desired_results, results
    end

    def test_aggregate_pipeline_multiple_operators
      setup_aggregate_data
      results = @@test.aggregate([{"$project" => {"tags" => 1, "pageViews" => 1}},
                                  {"$match" => {"pageViews" => 7}}])
      assert_equal 1, results.length
    end

    def test_aggregate_pipeline_unwind
      setup_aggregate_data
      desired_results = [
        {"_id"=>1, "title"=>"this is my title", "author"=>"bob",
         "posted"=>Time.utc(2000), "pageViews"=>5, "tags"=>"fun",
         "comments"=>[{"author"=>"joe", "text"=>"this is cool"},
                      {"author"=>"sam", "text"=>"this is bad"}],
         "other"=>{"foo"=>5 } },
        {"_id"=>1, "title"=>"this is my title", "author"=>"bob",
         "posted"=>Time.utc(2000), "pageViews"=>5, "tags"=>"good",
         "comments"=>[{"author"=>"joe", "text"=>"this is cool"},
                      {"author"=>"sam", "text"=>"this is bad"}],
"other"=>{"foo"=>5 } }, {"_id"=>1, "title"=>"this is my title", "author"=>"bob", "posted"=>Time.utc(2000), "pageViews"=>5, "tags"=>"fun", "comments"=>[{"author"=>"joe", "text"=>"this is cool"}, {"author"=>"sam", "text"=>"this is bad"}], "other"=>{"foo"=>5 } }, {"_id"=>2, "title"=>"this is your title", "author"=>"dave", "posted"=>Time.utc(2001), "pageViews"=>7, "tags"=>"fun", "comments"=>[{"author"=>"barbara", "text"=>"this is interesting"}, {"author"=>"jenny", "text"=>"i like to play pinball", "votes"=>10 }], "other"=>{"bar"=>14 } }, {"_id"=>2, "title"=>"this is your title", "author"=>"dave", "posted"=>Time.utc(2001), "pageViews"=>7, "tags"=>"nasty", "comments"=>[{"author"=>"barbara", "text"=>"this is interesting"}, {"author"=>"jenny", "text"=>"i like to play pinball", "votes"=>10 }], "other"=>{"bar"=>14 } }, {"_id"=>3, "title"=>"this is some other title", "author"=>"jane", "posted"=>Time.utc(2002), "pageViews"=>6, "tags"=>"nasty", "comments"=>[{"author"=>"will", "text"=>"i don't like the color"}, {"author"=>"jenny", "text"=>"can i get that in green?"}], "other"=>{"bar"=>14 } }, {"_id"=>3, "title"=>"this is some other title", "author"=>"jane", "posted"=>Time.utc(2002), "pageViews"=>6, "tags"=>"filthy", "comments"=>[{"author"=>"will", "text"=>"i don't like the color"}, {"author"=>"jenny", "text"=>"can i get that in green?"}], "other"=>{"bar"=>14 } } ] results = @@test.aggregate([{"$unwind"=> "$tags"}]) assert_equal desired_results, results end end if @@version > "1.1.1" def test_map_reduce @@test << { "user_id" => 1 } @@test << { "user_id" => 2 } m = "function() { emit(this.user_id, 1); }" r = "function(k,vals) { return 1; }" res = @@test.map_reduce(m, r, :out => 'foo'); assert res.find_one({"_id" => 1}) assert res.find_one({"_id" => 2}) end def test_map_reduce_with_code_objects @@test << { "user_id" => 1 } @@test << { "user_id" => 2 } m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, 
:out => 'foo'); assert res.find_one({"_id" => 1}) assert res.find_one({"_id" => 2}) end def test_map_reduce_with_options @@test.remove @@test << { "user_id" => 1 } @@test << { "user_id" => 2 } @@test << { "user_id" => 3 } m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :query => {"user_id" => {"$gt" => 1}}, :out => 'foo'); assert_equal 2, res.count assert res.find_one({"_id" => 2}) assert res.find_one({"_id" => 3}) end def test_map_reduce_with_raw_response m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :raw => true, :out => 'foo') assert res["result"] assert res["counts"] assert res["timeMillis"] end def test_map_reduce_with_output_collection output_collection = "test-map-coll" m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :raw => true, :out => output_collection) assert_equal output_collection, res["result"] assert res["counts"] assert res["timeMillis"] end if @@version >= "1.8.0" def test_map_reduce_with_collection_merge @@test << {:user_id => 1} @@test << {:user_id => 2} output_collection = "test-map-coll" m = Code.new("function() { emit(this.user_id, {count: 1}); }") r = Code.new("function(k,vals) { var sum = 0;" + " vals.forEach(function(v) { sum += v.count;} ); return {count: sum}; }") res = @@test.map_reduce(m, r, :out => output_collection) @@test.remove @@test << {:user_id => 3} res = @@test.map_reduce(m, r, :out => {:merge => output_collection}) assert res.find.to_a.any? {|doc| doc["_id"] == 3 && doc["value"]["count"] == 1} @@test.remove @@test << {:user_id => 3} res = @@test.map_reduce(m, r, :out => {:reduce => output_collection}) assert res.find.to_a.any? 
{|doc| doc["_id"] == 3 && doc["value"]["count"] == 2} assert_raise ArgumentError do @@test.map_reduce(m, r, :out => {:inline => 1}) end @@test.map_reduce(m, r, :raw => true, :out => {:inline => 1}) assert res["results"] end def test_map_reduce_with_collection_output_to_other_db @@test << {:user_id => 1} @@test << {:user_id => 2} m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") oh = BSON::OrderedHash.new oh[:replace] = 'foo' oh[:db] = MONGO_TEST_DB res = @@test.map_reduce(m, r, :out => (oh)) assert res["result"] assert res["counts"] assert res["timeMillis"] assert res.find.to_a.any? {|doc| doc["_id"] == 2 && doc["value"] == 1} end end end if @@version > "1.3.0" def test_find_and_modify @@test << { :a => 1, :processed => false } @@test << { :a => 2, :processed => false } @@test << { :a => 3, :processed => false } @@test.find_and_modify(:query => {}, :sort => [['a', -1]], :update => {"$set" => {:processed => true}}) assert @@test.find_one({:a => 3})['processed'] end def test_find_and_modify_with_invalid_options @@test << { :a => 1, :processed => false } @@test << { :a => 2, :processed => false } @@test << { :a => 3, :processed => false } assert_raise Mongo::OperationFailure do @@test.find_and_modify(:blimey => {}) end end def test_find_and_modify_with_full_response @@test << { :a => 1, :processed => false } @@test << { :a => 2, :processed => false } @@test << { :a => 3, :processed => false } doc = @@test.find_and_modify(:query => {}, :sort => [['a', -1]], :update => {"$set" => {:processed => true}}, :full_response => true, :new => true) assert doc['value']['processed'] assert ['ok', 'value', 'lastErrorObject'].all? 
{ |key| doc.key?(key) } end end if @@version >= "1.3.5" def test_coll_stats @@test << {:n => 1} @@test.create_index("n") assert_equal "#{MONGO_TEST_DB}.test", @@test.stats['ns'] end end def test_saving_dates_pre_epoch if RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/ then return true end begin @@test.save({'date' => Time.utc(1600)}) assert_in_delta Time.utc(1600), @@test.find_one()["date"], 2 rescue ArgumentError # See note in test_date_before_epoch (BSONTest) end end def test_save_symbol_find_string @@test.save(:foo => :mike) assert_equal :mike, @@test.find_one(:foo => :mike)["foo"] assert_equal :mike, @@test.find_one("foo" => :mike)["foo"] # TODO enable these tests conditionally based on server version (if >1.0) # assert_equal :mike, @@test.find_one(:foo => "mike")["foo"] # assert_equal :mike, @@test.find_one("foo" => "mike")["foo"] end def test_batch_size n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.save(:foo => i) end doc_count = 0 cursor = @@test.find({}, :batch_size => batch_size) cursor.next assert_equal batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size batch_size.times { cursor.next } assert_equal doc_count + batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size assert_equal n_docs, doc_count end def test_batch_size_with_smaller_limit n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.insert(:foo => i) end cursor = @@test.find({}, :batch_size => batch_size, :limit => 2) cursor.next assert_equal 2, cursor.instance_variable_get(:@returned) end def test_batch_size_with_larger_limit n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.insert(:foo => i) end doc_count = 0 cursor = @@test.find({}, :batch_size => batch_size, :limit => n_docs + 5) cursor.next assert_equal batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size batch_size.times { cursor.next } assert_equal doc_count + batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size 
assert_equal n_docs, doc_count end def test_batch_size_with_negative_limit n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.insert(:foo => i) end cursor = @@test.find({}, :batch_size => batch_size, :limit => -7) cursor.next assert_equal n_docs, cursor.instance_variable_get(:@returned) end def test_limit_and_skip 10.times do |i| @@test.save(:foo => i) end assert_equal 5, @@test.find({}, :skip => 5).next_document()["foo"] assert_equal nil, @@test.find({}, :skip => 10).next_document() assert_equal 5, @@test.find({}, :limit => 5).to_a.length assert_equal 3, @@test.find({}, :skip => 3, :limit => 5).next_document()["foo"] assert_equal 5, @@test.find({}, :skip => 3, :limit => 5).to_a.length end def test_large_limit 2000.times do |i| @@test.insert("x" => i, "y" => "mongomongo" * 1000) end assert_equal 2000, @@test.count i = 0 y = 0 @@test.find({}, :limit => 1900).each do |doc| i += 1 y += doc["x"] end assert_equal 1900, i assert_equal 1804050, y end def test_small_limit @@test.insert("x" => "hello world") @@test.insert("x" => "goodbye world") assert_equal 2, @@test.count x = 0 @@test.find({}, :limit => 1).each do |doc| x += 1 assert_equal "hello world", doc["x"] end assert_equal 1, x end def test_find_with_transformer klass = Struct.new(:id, :a) transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) } cursor = @@test.find({}, :transformer => transformer) assert_equal(transformer, cursor.transformer) end def test_find_one_with_transformer klass = Struct.new(:id, :a) transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) } id = @@test.insert('a' => 1) doc = @@test.find_one(id, :transformer => transformer) assert_instance_of(klass, doc) end def test_ensure_index @@test.drop_indexes @@test.insert("x" => "hello world") assert_equal 1, @@test.index_information.keys.count #default index @@test.ensure_index([["x", Mongo::DESCENDING]], {}) assert_equal 2, @@test.index_information.keys.count assert @@test.index_information.keys.include?("x_-1") 
@@test.ensure_index([["x", Mongo::ASCENDING]]) assert @@test.index_information.keys.include?("x_1") @@test.ensure_index([["type", 1], ["date", -1]]) assert @@test.index_information.keys.include?("type_1_date_-1") @@test.drop_index("x_1") assert_equal 3, @@test.index_information.keys.count @@test.drop_index("x_-1") assert_equal 2, @@test.index_information.keys.count @@test.ensure_index([["x", Mongo::DESCENDING]], {}) assert_equal 3, @@test.index_information.keys.count assert @@test.index_information.keys.include?("x_-1") # Make sure that drop_index expires cache properly @@test.ensure_index([['a', 1]]) assert @@test.index_information.keys.include?("a_1") @@test.drop_index("a_1") assert !@@test.index_information.keys.include?("a_1") @@test.ensure_index([['a', 1]]) assert @@test.index_information.keys.include?("a_1") @@test.drop_index("a_1") end def test_ensure_index_timeout @@db.cache_time = 1 coll = @@db['ensure_test'] coll.expects(:generate_indexes).twice coll.ensure_index([['a', 1]]) # These will be cached coll.ensure_index([['a', 1]]) coll.ensure_index([['a', 1]]) coll.ensure_index([['a', 1]]) coll.ensure_index([['a', 1]]) sleep(1) # This won't be, so generate_indexes will be called twice coll.ensure_index([['a', 1]]) end if @@version > '2.0.0' def test_show_disk_loc @@test.save({:a => 1}) @@test.save({:a => 2}) assert @@test.find({:a => 1}, :show_disk_loc => true).show_disk_loc assert @@test.find({:a => 1}, :show_disk_loc => true).next['$diskLoc'] @@test.remove end def test_max_scan 1000.times do |n| @@test.save({:a => n}) end assert @@test.find({:a => 999}).next assert !@@test.find({:a => 999}, :max_scan => 500).next @@test.remove end end context "Grouping" do setup do @@test.remove @@test.save("a" => 1) @@test.save("b" => 1) @initial = {"count" => 0} @reduce_function = "function (obj, prev) { prev.count += inc_value; }" end should "fail if missing required options" do assert_raise MongoArgumentError do @@test.group(:initial => {}) end assert_raise 
MongoArgumentError do @@test.group(:reduce => "foo") end end should "group results using eval form" do assert_equal 1, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 0.5}))[0]["count"] assert_equal 2, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 1}))[0]["count"] assert_equal 4, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 2}))[0]["count"] end should "finalize grouped results" do @finalize = "function(doc) {doc.f = doc.count + 200; }" assert_equal 202, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 1}), :finalize => @finalize)[0]["f"] end end context "Grouping with key" do setup do @@test.remove @@test.save("a" => 1, "pop" => 100) @@test.save("a" => 1, "pop" => 100) @@test.save("a" => 2, "pop" => 100) @@test.save("a" => 2, "pop" => 100) @initial = {"count" => 0, "foo" => 1} @reduce_function = "function (obj, prev) { prev.count += obj.pop; }" end should "group" do result = @@test.group(:key => :a, :initial => @initial, :reduce => @reduce_function) assert result.all? 
{ |r| r['count'] == 200 } end end context "Grouping with a key function" do setup do @@test.remove @@test.save("a" => 1) @@test.save("a" => 2) @@test.save("a" => 3) @@test.save("a" => 4) @@test.save("a" => 5) @initial = {"count" => 0} @keyf = "function (doc) { if(doc.a % 2 == 0) { return {even: true}; } else {return {odd: true}} };" @reduce = "function (obj, prev) { prev.count += 1; }" end should "group results" do results = @@test.group(:keyf => @keyf, :initial => @initial, :reduce => @reduce).sort {|a, b| a['count'] <=> b['count']} assert results[0]['even'] && results[0]['count'] == 2.0 assert results[1]['odd'] && results[1]['count'] == 3.0 end should "group filtered results" do results = @@test.group(:keyf => @keyf, :cond => {:a => {'$ne' => 2}}, :initial => @initial, :reduce => @reduce).sort {|a, b| a['count'] <=> b['count']} assert results[0]['even'] && results[0]['count'] == 1.0 assert results[1]['odd'] && results[1]['count'] == 3.0 end end context "A collection with two records" do setup do @collection = @@db.collection('test-collection') @collection.remove @collection.insert({:name => "Jones"}) @collection.insert({:name => "Smith"}) end should "have two records" do assert_equal 2, @collection.size end should "remove the two records" do @collection.remove() assert_equal 0, @collection.size end should "remove all records if an empty document is specified" do @collection.remove({}) assert_equal 0, @collection.find.count end should "remove only matching records" do @collection.remove({:name => "Jones"}) assert_equal 1, @collection.size end end context "Drop index " do setup do @@db.drop_collection('test-collection') @collection = @@db.collection('test-collection') end should "drop an index" do @collection.create_index([['a', Mongo::ASCENDING]]) assert @collection.index_information['a_1'] @collection.drop_index([['a', Mongo::ASCENDING]]) assert_nil @collection.index_information['a_1'] end should "drop an index which was given a specific name" do 
      @collection.create_index([['a', Mongo::DESCENDING]], {:name => 'i_will_not_fear'})
      assert @collection.index_information['i_will_not_fear']
      @collection.drop_index([['a', Mongo::DESCENDING]])
      assert_nil @collection.index_information['i_will_not_fear']
    end

    should "drop a composite index" do
      @collection.create_index([['a', Mongo::DESCENDING], ['b', Mongo::ASCENDING]])
      assert @collection.index_information['a_-1_b_1']
      @collection.drop_index([['a', Mongo::DESCENDING], ['b', Mongo::ASCENDING]])
      assert_nil @collection.index_information['a_-1_b_1']
    end

    should "drop an index specified with symbols" do
      @collection.create_index([['a', Mongo::DESCENDING], [:b, Mongo::ASCENDING]])
      assert @collection.index_information['a_-1_b_1']
      @collection.drop_index([['a', Mongo::DESCENDING], [:b, Mongo::ASCENDING]])
      assert_nil @collection.index_information['a_-1_b_1']
    end
  end

  context "Creating indexes " do
    setup do
      @@db.drop_collection('geo')
      @@db.drop_collection('test-collection')
      @collection = @@db.collection('test-collection')
      @geo = @@db.collection('geo')
    end

    should "create index using symbols" do
      @collection.create_index :foo, :name => :bar
      @geo.create_index :goo, :name => :baz
      assert @collection.index_information['bar']
      @collection.drop_index :bar
      assert_nil @collection.index_information['bar']
      assert @geo.index_information['baz']
      @geo.drop_index(:baz)
      assert_nil @geo.index_information['baz']
    end

    #should "create a text index" do
    #  @geo.save({'title' => "some text"})
    #  @geo.create_index([['title', Mongo::TEXT]])
    #  assert @geo.index_information['title_text']
    #end

    should "create a hashed index" do
      @geo.save({'a' => 1})
      @geo.create_index([['a', Mongo::HASHED]])
      assert @geo.index_information['a_hashed']
    end

    should "create a geospatial index" do
      @geo.save({'loc' => [-100, 100]})
      @geo.create_index([['loc', Mongo::GEO2D]])
      assert @geo.index_information['loc_2d']
    end

    should "create a geoHaystack index" do
      @geo.save({ "_id" => 100, "pos" => { "long" => 126.9, "lat" => 35.2 }, "type" => "restaurant"})
@geo.create_index([['pos', Mongo::GEOHAYSTACK], ['type', Mongo::ASCENDING]], :bucket_size => 1) end should "create a geo 2dsphere index" do @collection.insert({"coordinates" => [ 5 , 5 ], "type" => "Point"}) @geo.create_index([['coordinates', Mongo::GEO2DSPHERE]]) assert @geo.index_information['coordinates_2dsphere'] end should "create a unique index" do @collection.create_index([['a', Mongo::ASCENDING]], :unique => true) assert @collection.index_information['a_1']['unique'] == true end should "drop duplicates" do @collection.insert({:a => 1}) @collection.insert({:a => 1}) assert_equal 2, @collection.find({:a => 1}).count @collection.create_index([['a', Mongo::ASCENDING]], :unique => true, :dropDups => true) assert_equal 1, @collection.find({:a => 1}).count end should "drop duplicates with ruby-like drop_dups key" do @collection.insert({:a => 1}) @collection.insert({:a => 1}) assert_equal 2, @collection.find({:a => 1}).count @collection.create_index([['a', Mongo::ASCENDING]], :unique => true, :drop_dups => true) assert_equal 1, @collection.find({:a => 1}).count end should "drop duplicates with ensure_index and drop_dups key" do @collection.insert({:a => 1}) @collection.insert({:a => 1}) assert_equal 2, @collection.find({:a => 1}).count @collection.ensure_index([['a', Mongo::ASCENDING]], :unique => true, :drop_dups => true) assert_equal 1, @collection.find({:a => 1}).count end should "create an index in the background" do if @@version > '1.3.1' @collection.create_index([['b', Mongo::ASCENDING]], :background => true) assert @collection.index_information['b_1']['background'] == true else assert true end end should "require an array of arrays" do assert_raise MongoArgumentError do @collection.create_index(['c', Mongo::ASCENDING]) end end should "enforce proper index types" do assert_raise MongoArgumentError do @collection.create_index([['c', 'blah']]) end end should "raise an error if index name is greater than 128" do assert_raise Mongo::OperationFailure do 
        @collection.create_index([['a' * 25, 1], ['b' * 25, 1],
                                  ['c' * 25, 1], ['d' * 25, 1],
                                  ['e' * 25, 1]])
      end
    end

    should "allow for an alternate name to be specified" do
      @collection.create_index([['a' * 25, 1], ['b' * 25, 1],
                                ['c' * 25, 1], ['d' * 25, 1],
                                ['e' * 25, 1]], :name => 'foo_index')
      assert @collection.index_information['foo_index']
    end

    should "generate indexes in the proper order" do
      @collection.expects(:insert_documents) do |sel, coll, safe|
        assert_equal 'b_1_a_1', sel[:name]
      end
      @collection.create_index([['b', 1], ['a', 1]])
    end

    should "allow multiple calls to create_index" do
    end

    should "allow creation of multiple indexes" do
      assert @collection.create_index([['a', 1]])
      assert @collection.create_index([['a', 1]])
    end

    context "with an index created" do
      setup do
        @collection.create_index([['b', 1], ['a', 1]])
      end

      should "return properly ordered index information" do
        assert @collection.index_information['b_1_a_1']
      end
    end
  end

  context "Capped collections" do
    setup do
      @@db.drop_collection('log')
      @capped = @@db.create_collection('log', :capped => true, :size => 1024)

      10.times { |n| @capped.insert({:n => n}) }
    end

    should "find using a standard cursor" do
      cursor = @capped.find
      10.times do
        assert cursor.next_document
      end
      assert_nil cursor.next_document

      @capped.insert({:n => 100})
      assert_nil cursor.next_document
    end

    should "fail tailable cursor on a non-capped collection" do
      col = @@db['regular-collection']
      col.insert({:a => 1000})
      tail = Cursor.new(col, :tailable => true, :order => [['$natural', 1]])
      assert_raise OperationFailure do
        tail.next_document
      end
    end

    should "find using a tailable cursor" do
      tail = Cursor.new(@capped, :tailable => true, :order => [['$natural', 1]])
      10.times do
        assert tail.next_document
      end
      assert_nil tail.next_document

      @capped.insert({:n => 100})
      assert tail.next_document
    end
  end
end

ruby-mongo-1.9.2/test/functional/connection_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'
require 'logger'
require 'stringio'
require 'thread'

class TestConnection < Test::Unit::TestCase

  include Mongo
  include BSON

  def setup
    @client = standard_connection
  end

  def teardown
    @client.close
  end

  def test_connection_failure
    assert_raise Mongo::ConnectionFailure do
      MongoClient.new('localhost', 27347)
    end
  end

  def test_host_port_accessors
    assert_equal @client.host, TEST_HOST
    assert_equal @client.port, TEST_PORT
  end

  def test_server_info
    server_info = @client.server_info
    assert server_info.keys.include?("version")
    assert Mongo::Support.ok?(server_info)
  end

  def test_ping
    ping = @client.ping
    assert ping['ok']
  end

  def test_connection_uri
    con = MongoClient.from_uri("mongodb://#{host_port}")
    assert_equal mongo_host, con.primary_pool.host
    assert_equal mongo_port, con.primary_pool.port
  end

  def test_uri_with_extra_opts
    con = MongoClient.from_uri("mongodb://#{host_port}", :pool_size => 10, :slave_ok => true)
    assert_equal 10, con.pool_size
    assert con.slave_ok?
  end

  def test_env_mongodb_uri
    begin
      old_mongodb_uri = ENV['MONGODB_URI']
      ENV['MONGODB_URI'] = "mongodb://#{host_port}"
      con = MongoClient.new
      assert_equal mongo_host, con.primary_pool.host
      assert_equal mongo_port, con.primary_pool.port
    ensure
      ENV['MONGODB_URI'] = old_mongodb_uri
    end
  end

  def test_from_uri_implicit_mongodb_uri
    begin
      old_mongodb_uri = ENV['MONGODB_URI']
      ENV['MONGODB_URI'] = "mongodb://#{host_port}"
      con = MongoClient.from_uri
      assert_equal mongo_host, con.primary_pool.host
      assert_equal mongo_port, con.primary_pool.port
    ensure
      ENV['MONGODB_URI'] = old_mongodb_uri
    end
  end

  def test_db_from_uri_exists_no_options
    begin
      db_name = "_database"
      old_mongodb_uri = ENV['MONGODB_URI']
      ENV['MONGODB_URI'] = "mongodb://#{host_port}/#{db_name}"
      con = MongoClient.from_uri
      db = con.db
      assert_equal db.name, db_name
    ensure
      ENV['MONGODB_URI'] = old_mongodb_uri
    end
  end

  def test_db_from_uri_exists_options
    begin
      db_name = "_database"
      old_mongodb_uri = ENV['MONGODB_URI']
      ENV['MONGODB_URI'] = "mongodb://#{host_port}/#{db_name}?"
      con = MongoClient.from_uri
      db = con.db
      assert_equal db.name, db_name
    ensure
      ENV['MONGODB_URI'] = old_mongodb_uri
    end
  end

  def test_db_from_uri_exists_no_db_name
    begin
      old_mongodb_uri = ENV['MONGODB_URI']
      ENV['MONGODB_URI'] = "mongodb://#{host_port}/"
      con = MongoClient.from_uri
      db = con.db
      assert_equal db.name, MongoClient::DEFAULT_DB_NAME
    ensure
      ENV['MONGODB_URI'] = old_mongodb_uri
    end
  end

  def test_db_from_uri_from_string_param
    db_name = "_database"
    db = MongoClient.from_uri("mongodb://#{host_port}/#{db_name}").db
    assert_equal db.name, db_name
  end

  def test_db_from_uri_from_string_param_no_db_name
    db = MongoClient.from_uri("mongodb://#{host_port}").db
    assert_equal db.name, MongoClient::DEFAULT_DB_NAME
  end

  def test_server_version
    assert_match(/\d\.\d+(\.\d+)?/, @client.server_version.to_s)
  end

  def test_invalid_database_names
    assert_raise TypeError do
      @client.db(4)
    end

    assert_raise Mongo::InvalidNSName do
      @client.db('')
    end

    assert_raise Mongo::InvalidNSName do
      @client.db('te$t')
    end

    assert_raise Mongo::InvalidNSName do
      @client.db('te.t')
    end

    assert_raise Mongo::InvalidNSName do
      @client.db('te\\t')
    end

    assert_raise Mongo::InvalidNSName do
      @client.db('te/t')
    end

    assert_raise Mongo::InvalidNSName do
      @client.db('te st')
    end
  end

  def test_options_passed_to_db
    @pk_mock = Object.new
    db = @client.db('test', :pk => @pk_mock, :strict => true)
    assert_equal @pk_mock, db.pk_factory
    assert db.strict?
  end

  def test_database_info
    @client.drop_database(MONGO_TEST_DB)
    @client.db(MONGO_TEST_DB).collection('info-test').insert('a' => 1)

    info = @client.database_info
    assert_not_nil info
    assert_kind_of Hash, info
    assert_not_nil info[MONGO_TEST_DB]
    assert info[MONGO_TEST_DB] > 0

    @client.drop_database(MONGO_TEST_DB)
  end

  def test_copy_database
    @client.db('old').collection('copy-test').insert('a' => 1)
    @client.copy_database('old', 'new', host_port)

    old_object = @client.db('old').collection('copy-test').find.next_document
    new_object = @client.db('new').collection('copy-test').find.next_document
    assert_equal old_object, new_object

    @client.drop_database('old')
    @client.drop_database('new')
  end

  def test_copy_database_with_auth
    @client.db('old').collection('copy-test').insert('a' => 1)
    @client.db('old').add_user('bob', 'secret')

    assert_raise Mongo::OperationFailure do
      @client.copy_database('old', 'new', host_port, 'bob', 'badpassword')
    end

    result = @client.copy_database('old', 'new', host_port, 'bob', 'secret')
    assert Mongo::Support.ok?(result)

    @client.drop_database('old')
    @client.drop_database('new')
  end

  def test_database_names
    @client.drop_database(MONGO_TEST_DB)
    @client.db(MONGO_TEST_DB).collection('info-test').insert('a' => 1)

    names = @client.database_names
    assert_not_nil names
    assert_kind_of Array, names
    assert names.length >= 1
    assert names.include?(MONGO_TEST_DB)
  end

  def test_logging
    output = StringIO.new
    logger = Logger.new(output)
    logger.level = Logger::DEBUG
    standard_connection(:logger => logger).db(MONGO_TEST_DB)
    assert output.string.include?("admin['$cmd'].find")
  end

  def test_logging_duration
    output = StringIO.new
    logger = Logger.new(output)
    logger.level = Logger::DEBUG
    standard_connection(:logger => logger).db(MONGO_TEST_DB)
    assert_match(/\(\d+.\d{1}ms\)/, output.string)
    assert output.string.include?("admin['$cmd'].find")
  end

  def test_connection_logger
    output = StringIO.new
    logger = Logger.new(output)
    logger.level = Logger::DEBUG
    connection = standard_connection(:logger => logger)
    assert_equal logger, connection.logger

    connection.logger.debug 'testing'
    assert output.string.include?('testing')
  end

  def test_drop_database
    db = @client.db('ruby-mongo-will-be-deleted')
    coll = db.collection('temp')
    coll.remove
    coll.insert(:name => 'temp')
    assert_equal 1, coll.count()
    assert @client.database_names.include?('ruby-mongo-will-be-deleted')

    @client.drop_database('ruby-mongo-will-be-deleted')
    assert !@client.database_names.include?('ruby-mongo-will-be-deleted')
  end

  def test_nodes
    silently do
      @client = MongoClient.multi([['foo', 27017], ['bar', 27018]], :connect => false)
    end
    seeds = @client.seeds
    assert_equal 2, seeds.length
    assert_equal ['foo', 27017], seeds[0]
    assert_equal ['bar', 27018], seeds[1]
  end

  def test_fsync_lock
    assert !@client.locked?
    @client.lock!
    assert @client.locked?
    assert [1, true].include?(@client['admin']['$cmd.sys.inprog'].find_one['fsyncLock'])
    assert_match(/unlock/, @client.unlock!['info'])
    unlocked = false
    counter  = 0
    while counter < 100
      if @client['admin']['$cmd.sys.inprog'].find_one['fsyncLock'].nil?
        unlocked = true
        break
      else
        counter += 1
      end
    end
    assert !@client.locked?
    assert unlocked, "mongod failed to unlock"
  end

  def test_max_bson_size_value
    conn = standard_connection(:connect => false)
    admin_db = Object.new
    admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1, 'maxBsonObjectSize' => 15_000_000})
    conn.expects(:[]).with('admin').returns(admin_db)
    conn.connect
    assert_equal 15_000_000, conn.max_bson_size

    conn = standard_connection
    if conn.server_version > "1.7.2"
      assert_equal conn['admin'].command({:ismaster => 1})['maxBsonObjectSize'], conn.max_bson_size
    end
  end

  def test_max_message_size_value
    conn = standard_connection(:connect => false)
    admin_db = Object.new
    admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1, 'maxMessageSizeBytes' => 20_000_000})
    conn.expects(:[]).with('admin').returns(admin_db)
    conn.connect
    assert_equal 20_000_000, conn.max_message_size

    conn = standard_connection
    maxMessageSizeBytes = conn['admin'].command({:ismaster => 1})['maxMessageSizeBytes']
    if conn.server_version.to_s[/([^-]+)/,1] >= "2.4.0"
      assert_equal 48_000_000, maxMessageSizeBytes
    elsif conn.server_version > "2.3.2"
      assert_equal conn.max_bson_size, maxMessageSizeBytes
    end
  end

  def test_max_bson_size_with_no_reported_max_size
    conn = standard_connection(:connect => false)
    admin_db = Object.new
    admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
    conn.expects(:[]).with('admin').returns(admin_db)
    conn.connect
    assert_equal Mongo::DEFAULT_MAX_BSON_SIZE, conn.max_bson_size
  end

  def test_max_message_size_with_no_reported_max_size
    conn = standard_connection(:connect => false)
    admin_db = Object.new
    admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
    conn.expects(:[]).with('admin').returns(admin_db)
    conn.connect
    assert_equal Mongo::DEFAULT_MAX_BSON_SIZE * Mongo::MESSAGE_SIZE_FACTOR, conn.max_message_size
  end

  def test_connection_activity
    conn = standard_connection
    assert conn.active?

    conn.primary_pool.close
    assert !conn.active?

    # Simulate a dropped connection.
    dropped_socket = mock('dropped_socket')
    dropped_socket.stubs(:read).raises(Errno::ECONNRESET)
    dropped_socket.stubs(:send).raises(Errno::ECONNRESET)
    dropped_socket.stub_everything

    conn.primary_pool.host = 'localhost'
    conn.primary_pool.port = Mongo::MongoClient::DEFAULT_PORT
    conn.primary_pool.instance_variable_set("@pids", {dropped_socket => Process.pid})
    conn.primary_pool.instance_variable_set("@sockets", [dropped_socket])

    assert !conn.active?
  end

  context "Saved authentications" do
    setup do
      @client = standard_connection
      @auth = {:db_name => 'test', :username => 'bob', :password => 'secret', :source => nil}
      @client.add_auth(@auth[:db_name], @auth[:username], @auth[:password], @auth[:source])
    end

    teardown do
      @client.clear_auths
    end

    should "save the authentication" do
      assert_equal @auth, @client.auths[0]
    end

    should "not allow multiple authentications for the same db" do
      auth = {:db_name => 'test', :username => 'mickey', :password => 'm0u53', :source => nil}
      assert_raise Mongo::MongoArgumentError do
        @client.add_auth(auth[:db_name], auth[:username], auth[:password], auth[:source])
      end
    end

    should "remove auths by database" do
      @client.remove_auth('non-existent database')
      assert_equal 1, @client.auths.length

      @client.remove_auth('test')
      assert_equal 0, @client.auths.length
    end

    should "remove all auths" do
      @client.clear_auths
      assert_equal 0, @client.auths.length
    end
  end

  context "Socket pools" do
    context "checking out writers" do
      setup do
        @con = standard_connection(:pool_size => 10, :pool_timeout => 10)
        @coll = @con[MONGO_TEST_DB]['test-connection-exceptions']
      end

      should "close the connection on send_message for major exceptions" do
        @con.expects(:checkout_writer).raises(SystemStackError)
        @con.expects(:close)
        begin
          @coll.insert({:foo => "bar"})
        rescue SystemStackError
        end
      end

      should "close the connection on send_message_with_gle for major exceptions" do
        @con.expects(:checkout_writer).raises(SystemStackError)
        @con.expects(:close)
        begin
          @coll.insert({:foo => "bar"}, :w => 1)
        rescue SystemStackError
        end
      end

      should "close the connection on receive_message for major exceptions" do
        @con.expects(:checkout_reader).raises(SystemStackError)
        @con.expects(:close)
        begin
          @coll.find.next
        rescue SystemStackError
        end
      end
    end
  end

  context "Connection exceptions" do
    setup do
      @con = standard_connection(:pool_size => 10, :pool_timeout => 10)
      @coll = @con[MONGO_TEST_DB]['test-connection-exceptions']
    end

    should "release connection if an exception is raised on send_message" do
      @con.stubs(:send_message_on_socket).raises(ConnectionFailure)
      assert_equal 0, @con.primary_pool.checked_out.size
      assert_raise ConnectionFailure do
        @coll.insert({:test => "insert"})
      end
      assert_equal 0, @con.primary_pool.checked_out.size
    end

    should "release connection if an exception is raised on write concern :w => 1" do
      @con.stubs(:receive).raises(ConnectionFailure)
      assert_equal 0, @con.primary_pool.checked_out.size
      assert_raise ConnectionFailure do
        @coll.insert({:test => "insert"}, :w => 1)
      end
      assert_equal 0, @con.primary_pool.checked_out.size
    end

    should "release connection if an exception is raised on receive_message" do
      @con.stubs(:receive).raises(ConnectionFailure)
      assert_equal 0, @con.read_pool.checked_out.size
      assert_raise ConnectionFailure do
        @coll.find.to_a
      end
      assert_equal 0, @con.read_pool.checked_out.size
    end

    should "show a proper exception message if an IOError is raised while closing a socket" do
      TCPSocket.any_instance.stubs(:close).raises(IOError.new)
      @con.primary_pool.checkout_new_socket
      @con.primary_pool.expects(:warn)
      assert @con.primary_pool.close
    end
  end
end

ruby-mongo-1.9.2/test/functional/conversions_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'
require 'mongo/exceptions'
require 'mongo/util/conversions'

class ConversionsTest < Test::Unit::TestCase
  include Mongo::Conversions

  def test_array_as_sort_parameters_with_array_of_key_and_value
    params = array_as_sort_parameters(["field1", "asc"])
    assert_equal({"field1" => 1}, params)
  end

  def test_array_as_sort_parameters_with_array_of_string_and_values
    params = array_as_sort_parameters([["field1", :asc], ["field2", :desc]])
    assert_equal({ "field1" => 1, "field2" => -1 }, params)
  end

  def test_string_as_sort_parameters_with_string
    params = string_as_sort_parameters("field")
    assert_equal({ "field" => 1 }, params)
  end

  def test_string_as_sort_parameters_with_empty_string
    params = string_as_sort_parameters("")
    assert_equal({}, params)
  end

  def test_symbol_as_sort_parameters
    params = string_as_sort_parameters(:field)
    assert_equal({ "field" => 1 }, params)
  end

  def test_sort_value_when_value_is_one
    assert_equal 1, sort_value(1)
  end

  def test_sort_value_when_value_is_one_as_a_string
    assert_equal 1, sort_value("1")
  end

  def test_sort_value_when_value_is_negative_one
    assert_equal(-1, sort_value(-1))
  end

  def test_sort_value_when_value_is_negative_one_as_a_string
    assert_equal(-1, sort_value("-1"))
  end

  def test_sort_value_when_value_is_ascending
    assert_equal 1, sort_value("ascending")
  end

  def test_sort_value_when_value_is_asc
    assert_equal 1, sort_value("asc")
  end

  def test_sort_value_when_value_is_uppercase_ascending
    assert_equal 1, sort_value("ASCENDING")
  end

  def test_sort_value_when_value_is_uppercase_asc
    assert_equal 1, sort_value("ASC")
  end

  def test_sort_value_when_value_is_symbol_ascending
    assert_equal 1, sort_value(:ascending)
  end

  def test_sort_value_when_value_is_symbol_asc
    assert_equal 1, sort_value(:asc)
  end

  def test_sort_value_when_value_is_symbol_uppercase_ascending
    assert_equal 1, sort_value(:ASCENDING)
  end

  def test_sort_value_when_value_is_symbol_uppercase_asc
    assert_equal 1, sort_value(:ASC)
  end

  def test_sort_value_when_value_is_descending
    assert_equal(-1, sort_value("descending"))
  end

  def test_sort_value_when_value_is_desc
    assert_equal(-1, sort_value("desc"))
  end

  def test_sort_value_when_value_is_uppercase_descending
    assert_equal(-1, sort_value("DESCENDING"))
  end

  def test_sort_value_when_value_is_uppercase_desc
    assert_equal(-1, sort_value("DESC"))
  end

  def test_sort_value_when_value_is_symbol_descending
    assert_equal(-1, sort_value(:descending))
  end

  def test_sort_value_when_value_is_symbol_desc
    assert_equal(-1, sort_value(:desc))
  end

  def test_sort_value_when_value_is_uppercase_symbol_descending
    assert_equal(-1, sort_value(:DESCENDING))
  end

  def test_sort_value_when_value_is_uppercase_symbol_desc
    assert_equal(-1, sort_value(:DESC))
  end

  def test_sort_value_when_value_is_invalid
    assert_raise Mongo::InvalidSortValueError do
      sort_value(2)
    end
  end
end

ruby-mongo-1.9.2/test/functional/cursor_fail_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
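The refill tests in the cursor test files below insert one seed document with `'a' => 1` and then documents with `'a' => 0` through `'a' => 999`, so the total they repeatedly assert is `1 + (0 + 1 + ... + 999) = 499501`. A quick self-contained check of that arithmetic (the method name here is illustrative, not part of the driver):

```ruby
# Sum of the seed document's value (1) plus 'a' => 0 .. 'a' => n - 1,
# matching the 499501 the refill tests assert after 1000 inserts.
def expected_refill_total(n)
  1 + (0...n).reduce(0, :+)
end

puts expected_refill_total(1000)  # => 499501
```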
require 'test_helper'
require 'logger'

class CursorFailTest < Test::Unit::TestCase
  include Mongo

  @@connection = standard_connection
  @@db   = @@connection.db(MONGO_TEST_DB)
  @@coll = @@db.collection('test')
  @@version = @@connection.server_version

  def setup
    @@coll.remove({})
    @@coll.insert({'a' => 1}) # collection not created until it's used
    @@coll_full_name = "#{MONGO_TEST_DB}.test"
  end

  def test_refill_via_get_more_alt_coll
    coll = @@db.collection('test-alt-coll')
    coll.remove
    coll.insert('a' => 1) # collection not created until it's used
    assert_equal 1, coll.count

    1000.times { |i|
      assert_equal 1 + i, coll.count
      coll.insert('a' => i)
    }

    assert_equal 1001, coll.count

    count = 0
    coll.find.each { |obj|
      count += obj['a']
    }
    assert_equal 1001, coll.count

    # do the same thing again for debugging
    assert_equal 1001, coll.count
    count2 = 0
    coll.find.each { |obj|
      count2 += obj['a']
    }
    assert_equal 1001, coll.count

    assert_equal count, count2
    assert_equal 499501, count
  end
end

ruby-mongo-1.9.2/test/functional/cursor_message_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
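The batch-size tests in the next file assert that `:batch_size => 1` and negative values raise `ArgumentError` while `0` (meaning "let the server choose") is accepted. A minimal sketch of that validation rule, assuming only what the assertions themselves exercise (this is not the driver's internal implementation):

```ruby
# Mirrors the rule the batch_size tests assert on: a batch size of exactly 1
# or any negative value is rejected; 0 and values > 1 pass through unchanged.
def check_batch_size(batch_size)
  if batch_size == 1 || batch_size < 0
    raise ArgumentError, "invalid value for batch_size: #{batch_size}"
  end
  batch_size
end
```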
require 'test_helper'
require 'logger'

class CursorMessageTest < Test::Unit::TestCase
  include Mongo

  @@connection = standard_connection
  @@db   = @@connection.db(MONGO_TEST_DB)
  @@coll = @@db.collection('test')
  @@version = @@connection.server_version

  def setup
    @@coll.remove
    @@coll.insert('a' => 1) # collection not created until it's used
    @@coll_full_name = "#{MONGO_TEST_DB}.test"
  end

  def test_valid_batch_sizes
    assert_raise ArgumentError do
      @@coll.find({}, :batch_size => 1, :limit => 5)
    end

    assert_raise ArgumentError do
      @@coll.find({}, :batch_size => -1, :limit => 5)
    end

    assert @@coll.find({}, :batch_size => 0, :limit => 5)
  end

  def test_batch_size
    @@coll.remove
    200.times do |n|
      @@coll.insert({:a => n})
    end

    list = @@coll.find({}, :batch_size => 2, :limit => 6).to_a
    assert_equal 6, list.length

    list = @@coll.find({}, :batch_size => 100, :limit => 101).to_a
    assert_equal 101, list.length
  end
end

ruby-mongo-1.9.2/test/functional/cursor_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
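The `add_option`/`remove_option` tests in the next file treat cursor options as an OP_QUERY bit field: adding a flag is a bitwise OR, removing it is AND with the complement. A self-contained sketch of that bit arithmetic; the flag value used here (exhaust = bit 6 of the OP_QUERY flags, i.e. 64) comes from the MongoDB wire protocol, not from this file:

```ruby
OP_QUERY_EXHAUST = 1 << 6  # wire-protocol exhaust flag, shown for illustration

options = 0
options |= OP_QUERY_EXHAUST           # what add_option does
puts options & OP_QUERY_EXHAUST       # => 64 (flag is set)

options &= ~OP_QUERY_EXHAUST          # what remove_option does
puts options & OP_QUERY_EXHAUST       # => 0 (flag is cleared)
```

Masking with `options & FLAG` rather than comparing `options == FLAG` is what lets several flags (e.g. exhaust plus slave-ok) coexist in the same integer.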
require 'test_helper'
require 'logger'

class CursorTest < Test::Unit::TestCase
  include Mongo
  include Mongo::Constants

  @@connection = standard_connection
  @@db   = @@connection.db(MONGO_TEST_DB)
  @@coll = @@db.collection('test')
  @@version = @@connection.server_version

  def setup
    @@coll.remove
    @@coll.insert('a' => 1) # collection not created until it's used
    @@coll_full_name = "#{MONGO_TEST_DB}.test"
  end

  def test_alive
    batch = []
    5000.times do |n|
      batch << {:a => n}
    end

    @@coll.insert(batch)
    cursor = @@coll.find
    assert !cursor.alive?
    cursor.next
    assert cursor.alive?
    cursor.close
    assert !cursor.alive?
    @@coll.remove
  end

  def test_add_and_remove_options
    c = @@coll.find
    assert_equal 0, c.options & OP_QUERY_EXHAUST
    c.add_option(OP_QUERY_EXHAUST)
    assert_equal OP_QUERY_EXHAUST, c.options & OP_QUERY_EXHAUST
    c.remove_option(OP_QUERY_EXHAUST)
    assert_equal 0, c.options & OP_QUERY_EXHAUST

    c.next
    assert_raise Mongo::InvalidOperation do
      c.add_option(OP_QUERY_EXHAUST)
    end

    assert_raise Mongo::InvalidOperation do
      c.add_option(OP_QUERY_EXHAUST)
    end
  end

  def test_exhaust
    if @@version >= "2.0"
      @@coll.remove
      data = "1" * 10_000
      5000.times do |n|
        @@coll.insert({:n => n, :data => data})
      end

      c = Cursor.new(@@coll)
      c.add_option(OP_QUERY_EXHAUST)
      assert_equal @@coll.count, c.to_a.size
      assert c.closed?

      c = Cursor.new(@@coll)
      c.add_option(OP_QUERY_EXHAUST)
      4999.times do
        c.next
      end
      assert c.has_next?
      assert c.next
      assert !c.has_next?
      assert c.closed?
      @@coll.remove
    end
  end

  def test_exhaust_after_limit_error
    c = Cursor.new(@@coll, :limit => 17)
    assert_raise MongoArgumentError do
      c.add_option(OP_QUERY_EXHAUST)
    end

    assert_raise MongoArgumentError do
      c.add_option(OP_QUERY_EXHAUST + OP_QUERY_SLAVE_OK)
    end
  end

  def test_limit_after_exhaust_error
    c = Cursor.new(@@coll)
    c.add_option(OP_QUERY_EXHAUST)
    assert_raise MongoArgumentError do
      c.limit(17)
    end
  end

  def test_exhaust_with_mongos
    @@connection.expects(:mongos?).returns(:true)
    c = Cursor.new(@@coll)
    assert_raise MongoArgumentError do
      c.add_option(OP_QUERY_EXHAUST)
    end
  end

  def test_inspect
    selector = {:a => 1}
    cursor = @@coll.find(selector)
    assert_equal "", cursor.inspect
  end

  def test_explain
    cursor = @@coll.find('a' => 1)
    explaination = cursor.explain
    assert_not_nil explaination['cursor']
    assert_kind_of Numeric, explaination['n']
    assert_kind_of Numeric, explaination['millis']
    assert_kind_of Numeric, explaination['nscanned']
  end

  def test_each_with_no_block
    assert_kind_of(Enumerator, @@coll.find().each) if defined? Enumerator
  end

  def test_count
    @@coll.remove
    assert_equal 0, @@coll.find().count()

    10.times do |i|
      @@coll.save("x" => i)
    end

    assert_equal 10, @@coll.find().count()
    assert_kind_of Integer, @@coll.find().count()
    assert_equal 10, @@coll.find({}, :limit => 5).count()
    assert_equal 10, @@coll.find({}, :skip => 5).count()

    assert_equal 5, @@coll.find({}, :limit => 5).count(true)
    assert_equal 5, @@coll.find({}, :skip => 5).count(true)
    assert_equal 2, @@coll.find({}, :skip => 5, :limit => 2).count(true)

    assert_equal 1, @@coll.find({"x" => 1}).count()
    assert_equal 5, @@coll.find({"x" => {"$lt" => 5}}).count()

    a = @@coll.find()
    b = a.count()
    a.each do |doc|
      break
    end
    assert_equal b, a.count()

    assert_equal 0, @@db['acollectionthatdoesn'].count()
  end

  def test_sort
    @@coll.remove
    5.times{|x| @@coll.insert({"age" => x}) }

    assert_kind_of Cursor, @@coll.find().sort(:age, 1)

    assert_equal 0, @@coll.find().sort(:age, 1).next_document["age"]
    assert_equal 4, @@coll.find().sort(:age, -1).next_document["age"]
    assert_equal 0, @@coll.find().sort([["age", :asc]]).next_document["age"]

    assert_kind_of Cursor, @@coll.find().sort([[:age, -1], [:b, 1]])

    assert_equal 4, @@coll.find().sort(:age, 1).sort(:age, -1).next_document["age"]
    assert_equal 0, @@coll.find().sort(:age, -1).sort(:age, 1).next_document["age"]

    assert_equal 4, @@coll.find().sort([:age, :asc]).sort(:age, -1).next_document["age"]
    assert_equal 0, @@coll.find().sort([:age, :desc]).sort(:age, 1).next_document["age"]

    cursor = @@coll.find()
    cursor.next_document
    assert_raise InvalidOperation do
      cursor.sort(["age"])
    end

    assert_raise InvalidSortValueError do
      @@coll.find().sort(:age, 25).next_document
    end

    assert_raise InvalidSortValueError do
      @@coll.find().sort(25).next_document
    end
  end

  def test_sort_date
    @@coll.remove
    5.times{|x| @@coll.insert({"created_at" => Time.utc(2000 + x)}) }

    assert_equal 2000, @@coll.find().sort(:created_at, :asc).next_document["created_at"].year
    assert_equal 2004, @@coll.find().sort(:created_at, :desc).next_document["created_at"].year

    assert_equal 2000, @@coll.find().sort([:created_at, :asc]).next_document["created_at"].year
    assert_equal 2004, @@coll.find().sort([:created_at, :desc]).next_document["created_at"].year

    assert_equal 2000, @@coll.find().sort([[:created_at, :asc]]).next_document["created_at"].year
    assert_equal 2004, @@coll.find().sort([[:created_at, :desc]]).next_document["created_at"].year
  end

  def test_sort_min_max_keys
    @@coll.remove
    @@coll.insert({"n" => 1000000})
    @@coll.insert({"n" => -1000000})
    @@coll.insert({"n" => MaxKey.new})
    @@coll.insert({"n" => MinKey.new})

    results = @@coll.find.sort([:n, :asc]).to_a

    assert_equal MinKey.new, results[0]['n']
    assert_equal(-1000000,   results[1]['n'])
    assert_equal 1000000,    results[2]['n']
    assert_equal MaxKey.new, results[3]['n']
  end

  def test_id_range_queries
    @@coll.remove

    t1 = Time.now
    t1_id = ObjectId.from_time(t1)
    @@coll.save({:t => 't1'})
    @@coll.save({:t => 't1'})
    @@coll.save({:t => 't1'})
    sleep(1)
    t2 = Time.now
    t2_id = ObjectId.from_time(t2)
    @@coll.save({:t => 't2'})
    @@coll.save({:t => 't2'})
    @@coll.save({:t => 't2'})

    assert_equal 3, @@coll.find({'_id' => {'$gt' => t1_id, '$lt' => t2_id}}).count
    @@coll.find({'_id' => {'$gt' => t2_id}}).each do |doc|
      assert_equal 't2', doc['t']
    end
  end

  def test_limit
    @@coll.remove

    10.times do |i|
      @@coll.save("x" => i)
    end
    assert_equal 10, @@coll.find().count()

    results = @@coll.find().limit(5).to_a
    assert_equal 5, results.length
  end

  def test_timeout_options
    cursor = Cursor.new(@@coll)
    assert_equal true, cursor.timeout

    cursor = @@coll.find
    assert_equal true, cursor.timeout

    cursor = @@coll.find({}, :timeout => nil)
    assert_equal true, cursor.timeout

    cursor = Cursor.new(@@coll, :timeout => false)
    assert_equal false, cursor.timeout

    @@coll.find({}, :timeout => false) do |c|
      assert_equal false, c.timeout
    end
  end

  def test_timeout
    opts = Cursor.new(@@coll).options
    assert_equal 0, opts & Mongo::Constants::OP_QUERY_NO_CURSOR_TIMEOUT

    opts = Cursor.new(@@coll, :timeout => false).options
    assert_equal Mongo::Constants::OP_QUERY_NO_CURSOR_TIMEOUT,
                 opts & Mongo::Constants::OP_QUERY_NO_CURSOR_TIMEOUT
  end

  def test_limit_exceptions
    cursor = @@coll.find()
    cursor.next_document
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.limit(1)
    end

    cursor = @@coll.find()
    cursor.close
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.limit(1)
    end
  end

  def test_skip
    @@coll.remove

    10.times do |i|
      @@coll.save("x" => i)
    end
    assert_equal 10, @@coll.find().count()

    all_results  = @@coll.find().to_a
    skip_results = @@coll.find().skip(2).to_a
    assert_equal 10, all_results.length
    assert_equal 8,  skip_results.length

    assert_equal all_results.slice(2...10), skip_results
  end

  def test_skip_exceptions
    cursor = @@coll.find()
    cursor.next_document
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.skip(1)
    end

    cursor = @@coll.find()
    cursor.close
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.skip(1)
    end
  end

  def test_limit_skip_chaining
    @@coll.remove
    10.times do |i|
      @@coll.save("x" => i)
    end

    all_results = @@coll.find().to_a
    limited_skip_results = @@coll.find().limit(5).skip(3).to_a

    assert_equal all_results.slice(3...8), limited_skip_results
  end

  def test_close_no_query_sent
    begin
      cursor = @@coll.find('a' => 1)
      cursor.close
      assert cursor.closed?
    rescue => ex
      fail ex.to_s
    end
  end

  def test_refill_via_get_more
    assert_equal 1, @@coll.count
    1000.times { |i|
      assert_equal 1 + i, @@coll.count
      @@coll.insert('a' => i)
    }

    assert_equal 1001, @@coll.count

    count = 0
    @@coll.find.each { |obj|
      count += obj['a']
    }
    assert_equal 1001, @@coll.count

    # do the same thing again for debugging
    assert_equal 1001, @@coll.count
    count2 = 0
    @@coll.find.each { |obj|
      count2 += obj['a']
    }
    assert_equal 1001, @@coll.count

    assert_equal count, count2
    assert_equal 499501, count
  end

  def test_refill_via_get_more_alt_coll
    coll = @@db.collection('test-alt-coll')
    coll.remove
    coll.insert('a' => 1) # collection not created until it's used
    assert_equal 1, coll.count

    1000.times { |i|
      assert_equal 1 + i, coll.count
      coll.insert('a' => i)
    }

    assert_equal 1001, coll.count

    count = 0
    coll.find.each { |obj|
      count += obj['a']
    }
    assert_equal 1001, coll.count

    # do the same thing again for debugging
    assert_equal 1001, coll.count
    count2 = 0
    coll.find.each { |obj|
      count2 += obj['a']
    }
    assert_equal 1001, coll.count

    assert_equal count, count2
    assert_equal 499501, count
  end

  def test_close_after_query_sent
    begin
      cursor = @@coll.find('a' => 1)
      cursor.next_document
      cursor.close
      assert cursor.closed?
    rescue => ex
      fail ex.to_s
    end
  end

  def test_kill_cursors
    @@coll.drop

    client_cursors = @@db.command("cursorInfo" => 1)["clientCursors_size"]

    10000.times do |i|
      @@coll.insert("i" => i)
    end

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])

    10.times do |i|
      @@coll.find_one()
    end

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])

    10.times do |i|
      a = @@coll.find()
      a.next_document
      a.close()
    end

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])

    a = @@coll.find()
    a.next_document

    assert_not_equal(client_cursors,
                     @@db.command("cursorInfo" => 1)["clientCursors_size"])

    a.close()

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])

    a = @@coll.find({}, :limit => 10).next_document

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])

    @@coll.find() do |cursor|
      cursor.next_document
    end

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])

    @@coll.find() { |cursor| cursor.next_document }

    assert_equal(client_cursors,
                 @@db.command("cursorInfo" => 1)["clientCursors_size"])
  end

  def test_count_with_fields
    @@coll.remove
    @@coll.save("x" => 1)

    if @@version < "1.1.3"
      assert_equal(0, @@coll.find({}, :fields => ["a"]).count())
    else
      assert_equal(1, @@coll.find({}, :fields => ["a"]).count())
    end
  end

  def test_has_next
    @@coll.remove
    200.times do |n|
      @@coll.save("x" => n)
    end

    cursor = @@coll.find
    n = 0
    while cursor.has_next?
      assert cursor.next
      n += 1
    end

    assert_equal n, 200
    assert_equal false, cursor.has_next?
  end

  def test_cursor_invalid
    @@coll.remove
    10000.times do |n|
      @@coll.insert({:a => n})
    end

    cursor = @@coll.find({})

    assert_raise_error Mongo::OperationFailure, "CURSOR_NOT_FOUND" do
      9999.times do
        cursor.next_document
        cursor.instance_variable_set(:@cursor_id, 1234567890)
      end
    end
  end

  def test_enumberables
    @@coll.remove
    100.times do |n|
      @@coll.insert({:a => n})
    end

    assert_equal 100, @@coll.find.to_a.length
    assert_equal 100, @@coll.find.to_set.length

    cursor = @@coll.find
    50.times { |n| cursor.next_document }
    assert_equal 50, cursor.to_a.length
  end

  def test_rewind
    @@coll.remove
    100.times do |n|
      @@coll.insert({:a => n})
    end

    cursor = @@coll.find
    cursor.to_a
    assert_equal [], cursor.map {|doc| doc }

    cursor.rewind!
    assert_equal 100, cursor.map {|doc| doc }.length

    cursor.rewind!
    5.times { cursor.next_document }
    cursor.rewind!
    assert_equal 100, cursor.map {|doc| doc }.length
  end

  def test_transformer
    transformer = Proc.new { |doc| doc }
    cursor = Cursor.new(@@coll, :transformer => transformer)
    assert_equal(transformer, cursor.transformer)
  end

  def test_instance_transformation_with_next
    klass = Struct.new(:id, :a)
    transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) }
    cursor = Cursor.new(@@coll, :transformer => transformer)
    instance = cursor.next

    assert_instance_of(klass, instance)
    assert_instance_of(BSON::ObjectId, instance.id)
    assert_equal(1, instance.a)
  end

  def test_instance_transformation_with_each
    klass = Struct.new(:id, :a)
    transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) }
    cursor = Cursor.new(@@coll, :transformer => transformer)

    cursor.each do |instance|
      assert_instance_of(klass, instance)
    end
  end
end

ruby-mongo-1.9.2/test/functional/db_api_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class DBAPITest < Test::Unit::TestCase
  include Mongo
  include BSON

  @@client  = standard_connection
  @@db      = @@client.db(MONGO_TEST_DB)
  @@coll    = @@db.collection('test')
  @@version = @@client.server_version

  def setup
    @@coll.remove
    @r1 = {'a' => 1}
    @@coll.insert(@r1) # collection not created until it's used
    @@coll_full_name = "#{MONGO_TEST_DB}.test"
  end

  def teardown
    @@coll.remove
    @@db.get_last_error
  end

  def test_clear
    assert_equal 1, @@coll.count
    @@coll.remove
    assert_equal 0, @@coll.count
  end

  def test_insert
    assert_kind_of BSON::ObjectId, @@coll.insert('a' => 2)
    assert_kind_of BSON::ObjectId, @@coll.insert('b' => 3)

    assert_equal 3, @@coll.count
    docs = @@coll.find().to_a
    assert_equal 3, docs.length
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }
    assert docs.detect { |row| row['b'] == 3 }

    @@coll << {'b' => 4}
    docs = @@coll.find().to_a
    assert_equal 4, docs.length
    assert docs.detect { |row| row['b'] == 4 }
  end

  def test_save_ordered_hash
    oh = BSON::OrderedHash.new
    oh['a'] = -1
    oh['b'] = 'foo'

    oid = @@coll.save(oh)
    assert_equal 'foo', @@coll.find_one(oid)['b']

    oh = BSON::OrderedHash['a' => 1, 'b' => 'foo']
    oid = @@coll.save(oh)
    assert_equal 'foo', @@coll.find_one(oid)['b']
  end

  def test_insert_multiple
    ids = @@coll.insert([{'a' => 2}, {'b' => 3}])

    ids.each do |i|
      assert_kind_of BSON::ObjectId, i
    end

    assert_equal 3, @@coll.count
    docs = @@coll.find().to_a
    assert_equal 3, docs.length
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }
    assert docs.detect { |row| row['b'] == 3 }
  end

  def test_count_on_nonexisting
    @@db.drop_collection('foo')
    assert_equal 0, @@db.collection('foo').count()
  end

  def test_find_simple
    @r2 = @@coll.insert('a' => 2)
    @r3 = @@coll.insert('b' => 3)

    # Check sizes
    docs = @@coll.find().to_a
    assert_equal 3, docs.size
    assert_equal 3, @@coll.count

    # Find by other value
    docs = @@coll.find('a' => @r1['a']).to_a
    assert_equal 1, docs.size
    doc = docs.first

    # Can't compare _id values because at insert, an _id was added to @r1 by
    # the database but we don't know what it is without re-reading the record
    # (which is what we are doing right now).
    # assert_equal doc['_id'], @r1['_id']
    assert_equal doc['a'], @r1['a']
  end

  def test_find_advanced
    @@coll.insert('a' => 2)
    @@coll.insert('b' => 3)

    # Find by advanced query (less than)
    docs = @@coll.find('a' => { '$lt' => 10 }).to_a
    assert_equal 2, docs.size
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (greater than)
    docs = @@coll.find('a' => { '$gt' => 1 }).to_a
    assert_equal 1, docs.size
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (less than or equal to)
    docs = @@coll.find('a' => { '$lte' => 1 }).to_a
    assert_equal 1, docs.size
    assert docs.detect { |row| row['a'] == 1 }

    # Find by advanced query (greater than or equal to)
    docs = @@coll.find('a' => { '$gte' => 1 }).to_a
    assert_equal 2, docs.size
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (between)
    docs = @@coll.find('a' => { '$gt' => 1, '$lt' => 3 }).to_a
    assert_equal 1, docs.size
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (in clause)
    docs = @@coll.find('a' => {'$in' => [1, 2]}).to_a
    assert_equal 2, docs.size
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }
  end

  def test_find_sorting
    @@coll.remove
    @@coll.insert('a' => 1, 'b' => 2)
    @@coll.insert('a' => 2, 'b' => 1)
    @@coll.insert('a' => 3, 'b' => 2)
    @@coll.insert('a' => 4, 'b' => 1)

    # Sorting (ascending)
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => [['a', 1]]).to_a
    assert_equal 4, docs.size
    assert_equal 1, docs[0]['a']
    assert_equal 2, docs[1]['a']
    assert_equal 3, docs[2]['a']
    assert_equal 4, docs[3]['a']

    # Sorting (descending)
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => [['a', -1]]).to_a
    assert_equal 4, docs.size
    assert_equal 4, docs[0]['a']
    assert_equal 3, docs[1]['a']
    assert_equal 2, docs[2]['a']
    assert_equal 1, docs[3]['a']

    # Sorting using array of names; assumes ascending order.
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => 'a').to_a
    assert_equal 4, docs.size
    assert_equal 1, docs[0]['a']
    assert_equal 2, docs[1]['a']
    assert_equal 3, docs[2]['a']
    assert_equal 4, docs[3]['a']

    # Sorting using single name; assumes ascending order.
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => 'a').to_a
    assert_equal 4, docs.size
    assert_equal 1, docs[0]['a']
    assert_equal 2, docs[1]['a']
    assert_equal 3, docs[2]['a']
    assert_equal 4, docs[3]['a']

    docs = @@coll.find({'a' => { '$lt' => 10 }},
                       :sort => [['b', 'asc'], ['a', 'asc']]).to_a
    assert_equal 4, docs.size
    assert_equal 2, docs[0]['a']
    assert_equal 4, docs[1]['a']
    assert_equal 1, docs[2]['a']
    assert_equal 3, docs[3]['a']

    # Sorting using empty array; no order guarantee should not blow up.
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => []).to_a
    assert_equal 4, docs.size
  end

  def test_find_sorting_with_hash
    # Sorting using ordered hash. You can use an unordered one, but then the
    # order of the keys won't be guaranteed thus your sort won't make sense.
    @@coll.remove
    @@coll.insert('a' => 1, 'b' => 2)
    @@coll.insert('a' => 2, 'b' => 1)
    @@coll.insert('a' => 3, 'b' => 2)
    @@coll.insert('a' => 4, 'b' => 1)

    oh = BSON::OrderedHash.new
    oh['a'] = -1

    # Sort as a method
    docs = @@coll.find.sort(oh).to_a
    assert_equal 4, docs.size
    assert_equal 4, docs[0]['a']
    assert_equal 3, docs[1]['a']
    assert_equal 2, docs[2]['a']
    assert_equal 1, docs[3]['a']

    # Sort as an option
    docs = @@coll.find({}, :sort => oh).to_a
    assert_equal 4, docs.size
    assert_equal 4, docs[0]['a']
    assert_equal 3, docs[1]['a']
    assert_equal 2, docs[2]['a']
    assert_equal 1, docs[3]['a']

    if RUBY_VERSION > '1.9'
      docs = @@coll.find({}, :sort => {:a => -1}).to_a
      assert_equal 4, docs.size
      assert_equal 4, docs[0]['a']
      assert_equal 3, docs[1]['a']
      assert_equal 2, docs[2]['a']
      assert_equal 1, docs[3]['a']

      docs = @@coll.find.sort(:a => -1).to_a
      assert_equal 4, docs.size
      assert_equal 4, docs[0]['a']
      assert_equal 3, docs[1]['a']
      assert_equal 2, docs[2]['a']
      assert_equal 1, docs[3]['a']

      docs = @@coll.find.sort(:b => -1, :a => 1).to_a
      assert_equal 4, docs.size
      assert_equal 1, docs[0]['a']
      assert_equal 3, docs[1]['a']
      assert_equal 2, docs[2]['a']
      assert_equal 4, docs[3]['a']
    else
      # Sort as an option
      assert_raise InvalidSortValueError do
        @@coll.find({}, :sort => {:a => -1}).to_a
      end

      # Sort as a method
      assert_raise InvalidSortValueError do
        @@coll.find.sort(:a => -1).to_a
      end
    end
  end

  def test_find_limits
    @@coll.insert('b' => 2)
    @@coll.insert('c' => 3)
    @@coll.insert('d' => 4)

    docs = @@coll.find({}, :limit => 1).to_a
    assert_equal 1, docs.size
    docs = @@coll.find({}, :limit => 2).to_a
    assert_equal 2, docs.size
    docs = @@coll.find({}, :limit => 3).to_a
    assert_equal 3, docs.size
    docs = @@coll.find({}, :limit => 4).to_a
    assert_equal 4, docs.size
    docs = @@coll.find({}).to_a
    assert_equal 4, docs.size
    docs = @@coll.find({}, :limit => 99).to_a
    assert_equal 4, docs.size
  end

  def test_find_one_no_records
    @@coll.remove
    x = @@coll.find_one('a' => 1)
    assert_nil x
  end

  def test_drop_collection
    assert @@db.drop_collection(@@coll.name),
           "drop of collection #{@@coll.name} failed"
    assert !@@db.collection_names.include?(@@coll.name)
  end

  def test_other_drop
    assert @@db.collection_names.include?(@@coll.name)
    @@coll.drop
    assert !@@db.collection_names.include?(@@coll.name)
  end

  def test_collection_names
    names = @@db.collection_names
    assert names.length >= 1
    assert names.include?(@@coll.name)

    coll2 = @@db.collection('test2')
    coll2.insert('a' => 1) # collection not created until it's used
    names = @@db.collection_names
    assert names.length >= 2
    assert names.include?(@@coll.name)
    assert names.include?('test2')
  ensure
    @@db.drop_collection('test2')
  end

  def test_collections_info
    cursor = @@db.collections_info
    rows = cursor.to_a
    assert rows.length >= 1
    row = rows.detect { |r| r['name'] == @@coll_full_name }
    assert_not_nil row
  end

  def test_collection_options
    @@db.drop_collection('foobar')
    @@db.strict = true

    begin
      coll = @@db.create_collection('foobar', :capped => true, :size => 1024)
      options = coll.options
      assert_equal 'foobar', options['create']
      assert_equal true, options['capped']
      assert_equal 1024, options['size']
    rescue => ex
      @@db.drop_collection('foobar')
      fail "did not expect exception \"#{ex}\""
    ensure
      @@db.strict = false
    end
  end

  def test_collection_options_are_passed_to_the_existing_ones
    @@db.drop_collection('foobar')

    @@db.create_collection('foobar')

    coll = @@db.create_collection('foobar')
    assert_equal true, Mongo::WriteConcern.gle?(coll.write_concern)
  end

  def test_index_information
    assert_equal @@coll.index_information.length, 1

    name = @@coll.create_index('a')
    info = @@db.index_information(@@coll.name)
    assert_equal name, "a_1"
    assert_equal @@coll.index_information, info
    assert_equal 2, info.length

    assert info.has_key?(name)
    assert_equal info[name]["key"], {"a" => 1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_index_create_with_symbol
    assert_equal @@coll.index_information.length, 1

    name = @@coll.create_index([['a', 1]])
    info = @@db.index_information(@@coll.name)
    assert_equal name, "a_1"
    assert_equal @@coll.index_information, info
    assert_equal 2, info.length

    assert info.has_key?(name)
    assert_equal info[name]['key'], {"a" => 1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_multiple_index_cols
    name = @@coll.create_index([['a', DESCENDING], ['b', ASCENDING], ['c', DESCENDING]])
    info = @@db.index_information(@@coll.name)
    assert_equal 2, info.length

    assert_equal name, 'a_-1_b_1_c_-1'
    assert info.has_key?(name)
    assert_equal info[name]['key'], {"a" => -1, "b" => 1, "c" => -1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_multiple_index_cols_with_symbols
    name = @@coll.create_index([[:a, DESCENDING], [:b, ASCENDING], [:c, DESCENDING]])
    info = @@db.index_information(@@coll.name)
    assert_equal 2, info.length

    assert_equal name, 'a_-1_b_1_c_-1'
    assert info.has_key?(name)
    assert_equal info[name]['key'], {"a" => -1, "b" => 1, "c" => -1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_unique_index
    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.create_index("hello")

    test.insert("hello" => "world")
    test.insert("hello" => "mike")
    test.insert("hello" => "world")
    assert !@@db.error?

    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.create_index("hello", :unique => true)

    test.insert("hello" => "world")
    test.insert("hello" => "mike")
    assert_raise OperationFailure do
      test.insert("hello" => "world")
    end
  end

  def test_index_on_subfield
    @@db.drop_collection("blah")
    test = @@db.collection("blah")

    test.insert("hello" => {"a" => 4, "b" => 5})
    test.insert("hello" => {"a" => 7, "b" => 2})
    test.insert("hello" => {"a" => 4, "b" => 10})
    assert !@@db.error?
    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.create_index("hello.a", :unique => true)

    test.insert("hello" => {"a" => 4, "b" => 5})
    test.insert("hello" => {"a" => 7, "b" => 2})
    assert_raise OperationFailure do
      test.insert("hello" => {"a" => 4, "b" => 10})
    end
  end

  def test_array
    @@coll.remove({'$atomic' => true})
    @@coll.insert({'b' => [1, 2, 3]})
    @@coll.insert({'b' => [1, 2, 3]})
    rows = @@coll.find({}, {:fields => ['b']}).to_a
    assert_equal 2, rows.length
    assert_equal [1, 2, 3], rows[1]['b']
  end

  def test_regex
    regex = /foobar/i
    @@coll << {'b' => regex}
    rows = @@coll.find({}, {:fields => ['b']}).to_a
    if @@version < "1.1.3"
      assert_equal 1, rows.length
      assert_equal regex, rows[0]['b']
    else
      assert_equal 2, rows.length
      assert_equal regex, rows[1]['b']
    end
  end

  def test_regex_multi_line
    if @@version >= "1.9.1"
      # Placeholder heredoc body: any multi-line text matching /n.*x/m
      # across line breaks satisfies the assertion below.
      doc = <<HERE
the quick
brown
fox
HERE
      @@coll.save({:doc => doc})
      assert @@coll.find_one({:doc => /n.*x/m})
      @@coll.remove
    end
  end

  def test_non_oid_id
    # Note: can't use Time.new because that will include fractional seconds,
    # which Mongo does not store.
    t = Time.at(1234567890)
    @@coll << {'_id' => t}
    rows = @@coll.find({'_id' => t}).to_a
    assert_equal 1, rows.length
    assert_equal t, rows[0]['_id']
  end

  def test_strict
    assert !@@db.strict?
    @@db.strict = true
    assert @@db.strict?
  ensure
    @@db.strict = false
  end

  def test_strict_access_collection
    @@db.strict = true
    begin
      @@db.collection('does-not-exist')
      fail "expected exception"
    rescue => ex
      assert_equal Mongo::MongoDBError, ex.class
      assert_equal "Collection 'does-not-exist' doesn't exist. (strict=true)", ex.to_s
    ensure
      @@db.strict = false
      @@db.drop_collection('does-not-exist')
    end
  end

  def test_strict_create_collection
    @@db.drop_collection('foobar')
    @@db.strict = true

    begin
      assert @@db.create_collection('foobar')
    rescue => ex
      fail "did not expect exception \"#{ex}\""
    end

    # Now the collection exists. This time we should see an exception.
    assert_raise Mongo::MongoDBError do
      @@db.create_collection('foobar')
    end
    @@db.strict = false
    @@db.drop_collection('foobar')

    # Now we're not in strict mode - should succeed
    @@db.create_collection('foobar')
    @@db.create_collection('foobar')
    @@db.drop_collection('foobar')
  end

  def test_where
    @@coll.insert('a' => 2)
    @@coll.insert('a' => 3)

    assert_equal 3, @@coll.count
    assert_equal 1, @@coll.find('$where' => BSON::Code.new('this.a > 2')).count()
    assert_equal 2, @@coll.find('$where' => BSON::Code.new('this.a > i', {'i' => 1})).count()
  end

  def test_eval
    assert_equal 3, @@db.eval('function (x) {return x;}', 3)

    assert_equal nil, @@db.eval("function (x) {db.test_eval.save({y:x});}", 5)
    assert_equal 5, @@db.collection('test_eval').find_one['y']

    assert_equal 5, @@db.eval("function (x, y) {return x + y;}", 2, 3)
    assert_equal 5, @@db.eval("function () {return 5;}")
    assert_equal 5, @@db.eval("2 + 3;")

    assert_equal 5, @@db.eval(Code.new("2 + 3;"))
    assert_equal 2, @@db.eval(Code.new("return i;", {"i" => 2}))
    assert_equal 5, @@db.eval(Code.new("i + 3;", {"i" => 2}))

    assert_raise OperationFailure do
      @@db.eval("5 ++ 5;")
    end
  end

  def test_hint
    name = @@coll.create_index('a')
    begin
      assert_nil @@coll.hint
      assert_equal 1, @@coll.find({'a' => 1}, :hint => 'a').to_a.size
      assert_equal 1, @@coll.find({'a' => 1}, :hint => ['a']).to_a.size
      assert_equal 1, @@coll.find({'a' => 1}, :hint => {'a' => 1}).to_a.size

      @@coll.hint = 'a'
      assert_equal({'a' => 1}, @@coll.hint)
      assert_equal 1, @@coll.find('a' => 1).to_a.size

      @@coll.hint = ['a']
      assert_equal({'a' => 1}, @@coll.hint)
      assert_equal 1, @@coll.find('a' => 1).to_a.size

      @@coll.hint = {'a' => 1}
      assert_equal({'a' => 1}, @@coll.hint)
      assert_equal 1, @@coll.find('a' => 1).to_a.size

      @@coll.hint = nil
      assert_nil @@coll.hint
      assert_equal 1, @@coll.find('a' => 1).to_a.size
    ensure
      @@coll.drop_index(name)
    end
  end

  def test_named_hint
    name = @@coll.create_index('a', :name => 'named_index')
    begin
      assert_nil @@coll.hint
      assert_equal 1, @@coll.find({'a' => 1}, :named_hint => 'named_index').to_a.size
      assert_equal 1, @@coll.find({'a' => 1}, :hint => 'a', :named_hint => "bad_hint").to_a.size
    ensure
      @@coll.drop_index('named_index')
    end
  end

  def test_hash_default_value_id
    val = Hash.new(0)
    val["x"] = 5
    @@coll.insert val
    id = @@coll.find_one("x" => 5)["_id"]
    assert id != 0
  end

  def test_group
    @@db.drop_collection("test")
    test = @@db.collection("test")

    assert_equal [], test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")
    assert_equal [], test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")

    test.insert("a" => 2)
    test.insert("b" => 5)
    test.insert("a" => 1)

    assert_equal 3, test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]
    assert_equal 3, test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]
    assert_equal 1, test.group(:cond => {"a" => {"$gt" => 1}}, :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]
    assert_equal 1, test.group(:cond => {"a" => {"$gt" => 1}}, :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]

    finalize = "function (obj) { obj.f = obj.count - 1; }"
    assert_equal 2, test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }", :finalize => finalize)[0]["f"]

    test.insert("a" => 2, "b" => 3)
    expected = [
      {"a" => 2, "count" => 2},
      {"a" => nil, "count" => 1},
      {"a" => 1, "count" => 1}
    ]
    assert_equal expected, test.group(:key => ["a"], :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")
    assert_equal expected, test.group(:key => :a, :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")

    assert_raise OperationFailure do
      test.group(:initial => {}, :reduce => "5 ++ 5")
    end
  end

  def test_deref
    @@coll.remove
    assert_equal nil, @@db.dereference(DBRef.new("test", ObjectId.new))

    @@coll.insert({"x" => "hello"})
    key = @@coll.find_one()["_id"]
    assert_equal "hello", @@db.dereference(DBRef.new("test", key))["x"]

    assert_equal nil, @@db.dereference(DBRef.new("test", 4))
    obj = {"_id" => 4}
    @@coll.insert(obj)
    assert_equal obj, @@db.dereference(DBRef.new("test", 4))

    @@coll.remove
    @@coll.insert({"x" => "hello"})
    assert_equal nil, @@db.dereference(DBRef.new("test", nil))
  end

  def test_save
    @@coll.remove

    a = {"hello" => "world"}

    id = @@coll.save(a)
    assert_kind_of ObjectId, id
    assert_equal 1, @@coll.count

    assert_equal id, @@coll.save(a)
    assert_equal 1, @@coll.count

    assert_equal "world", @@coll.find_one()["hello"]

    a["hello"] = "mike"
    @@coll.save(a)
    assert_equal 1, @@coll.count

    assert_equal "mike", @@coll.find_one()["hello"]

    @@coll.save({"hello" => "world"})
    assert_equal 2, @@coll.count
  end

  def test_save_long
    @@coll.remove
    @@coll.insert("x" => 9223372036854775807)
    assert_equal 9223372036854775807, @@coll.find_one()["x"]
  end

  def test_find_by_oid
    @@coll.remove

    @@coll.save("hello" => "mike")
    id = @@coll.save("hello" => "world")
    assert_kind_of ObjectId, id

    assert_equal "world", @@coll.find_one(:_id => id)["hello"]
    @@coll.find(:_id => id).to_a.each do |doc|
      assert_equal "world", doc["hello"]
    end

    id = ObjectId.from_string(id.to_s)
    assert_equal "world", @@coll.find_one(:_id => id)["hello"]
  end

  def test_save_with_object_that_has_id_but_does_not_actually_exist_in_collection
    @@coll.remove

    a = {'_id' => '1', 'hello' => 'world'}
    @@coll.save(a)
    assert_equal(1, @@coll.count)
    assert_equal("world", @@coll.find_one()["hello"])

    a["hello"] = "mike"
    @@coll.save(a)
    assert_equal(1, @@coll.count)
    assert_equal("mike", @@coll.find_one()["hello"])
  end

  def test_collection_names_errors
    assert_raise TypeError do
      @@db.collection(5)
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("te$t")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection(".test")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("test.")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("tes..t")
    end
  end

  def test_rename_collection
    @@db.drop_collection("foo")
    @@db.drop_collection("bar")
    a = @@db.collection("foo")
    b = @@db.collection("bar")

    assert_raise TypeError do
      a.rename(5)
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("te$t")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename(".test")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("test.")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("tes..t")
    end

    assert_equal 0, a.count()
    assert_equal 0, b.count()

    a.insert("x" => 1)
    a.insert("x" => 2)
    assert_equal 2, a.count()

    a.rename("bar")
    assert_equal 2, a.count()
  end

  # doesn't really test functionality, just that the option is set correctly
  def test_snapshot
    @@db.collection("test").find({}, :snapshot => true).to_a
    assert_raise OperationFailure do
      @@db.collection("test").find({}, :snapshot => true, :sort => 'a').to_a
    end
  end

  def test_encodings
    if RUBY_VERSION >= '1.9'
      default = "hello world"
      utf8    = "hello world".encode("UTF-8")
      iso8859 = "hello world".encode("ISO-8859-1")

      if RUBY_PLATFORM =~ /jruby/
        assert_equal "ASCII-8BIT", default.encoding.name
      elsif RUBY_VERSION >= '2.0'
        assert_equal "UTF-8", default.encoding.name
      else
        assert_equal "US-ASCII", default.encoding.name
      end
      assert_equal "UTF-8", utf8.encoding.name
      assert_equal "ISO-8859-1", iso8859.encoding.name

      @@coll.remove
      @@coll.save("default" => default, "utf8" => utf8, "iso8859" => iso8859)
      doc = @@coll.find_one()

      assert_equal "UTF-8", doc["default"].encoding.name
      assert_equal "UTF-8", doc["utf8"].encoding.name
      assert_equal "UTF-8", doc["iso8859"].encoding.name
    end
  end
end

# ruby-mongo-1.9.2/test/functional/db_connection_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class DBConnectionTest < Test::Unit::TestCase

  def test_no_exceptions
    host = ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost'
    port = ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT
    db = MongoClient.new(host, port).db(MONGO_TEST_DB)
    coll = db.collection('test')
    coll.remove
    db.get_last_error
  end
end

# ruby-mongo-1.9.2/test/functional/db_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'
require 'digest/md5'
require 'stringio'
require 'logger'

class TestPKFactory
  def create_pk(row)
    row['_id'] ||= BSON::ObjectId.new
    row
  end
end

class DBTest < Test::Unit::TestCase
  include Mongo

  @@client  = standard_connection
  @@db      = @@client.db(MONGO_TEST_DB)
  @@users   = @@db.collection('system.users')
  @@version = @@client.server_version

  def test_close
    @@client.close
    assert !@@client.connected?
    begin
      @@db.collection('test').insert('a' => 1)
      fail "expected 'NilClass' exception"
    rescue => ex
      assert_match(/NilClass/, ex.to_s)
    ensure
      @@db = standard_connection.db(MONGO_TEST_DB)
      @@users = @@db.collection('system.users')
    end
  end

  def test_create_collection
    col = @@db.create_collection('foo')
    assert_equal @@db['foo'].name, col.name

    col = @@db.create_collection(:foo)
    assert_equal @@db['foo'].name, col.name

    @@db.drop_collection('foo')
  end

  def test_get_and_drop_collection
    db = @@client.db(MONGO_TEST_DB, :strict => true)
    db.create_collection('foo')
    assert db.collection('foo')
    assert db.drop_collection('foo')

    db.create_collection(:foo)
    assert db.collection(:foo)
    assert db.drop_collection(:foo)
  end

  def test_logger
    output = StringIO.new
    logger = Logger.new(output)
    logger.level = Logger::DEBUG
    conn = standard_connection(:logger => logger)
    assert_equal logger, conn.logger

    conn.logger.debug 'testing'
    assert output.string.include?('testing')
  end

  def test_full_coll_name
    coll = @@db.collection('test')
    assert_equal "#{MONGO_TEST_DB}.test", @@db.full_collection_name(coll.name)
  end

  def test_collection_names
    @@db.collection("test").insert("foo" => 5)
    @@db.collection("test.mike").insert("bar" => 0)

    colls = @@db.collection_names()
    assert colls.include?("test")
    assert colls.include?("test.mike")
    colls.each { |name| assert !name.include?("$") }
  end

  def test_collections
    @@db.collection("test.durran").insert("foo" => 5)
    @@db.collection("test.les").insert("bar" => 0)

    colls = @@db.collections()
    assert_not_nil colls.select { |coll| coll.name == "test.durran" }
    assert_not_nil colls.select { |coll| coll.name == "test.les" }
    assert_equal [], colls.select { |coll| coll.name == "does_not_exist" }

    assert_kind_of Collection, colls[0]
  end

  def test_pk_factory
    db = standard_connection.db(MONGO_TEST_DB, :pk => TestPKFactory.new)
    coll = db.collection('test')
    coll.remove

    insert_id = coll.insert('name' => 'Fred', 'age' => 42)
    # new id gets added to returned object
    row = coll.find_one({'name' => 'Fred'})
    oid = row['_id']
    assert_not_nil oid
    assert_equal insert_id, oid

    oid = BSON::ObjectId.new
    data = {'_id' => oid, 'name' => 'Barney', 'age' => 41}
    coll.insert(data)
    row = coll.find_one({'name' => data['name']})
    db_oid = row['_id']
    assert_equal oid, db_oid
    assert_equal data, row

    coll.remove
  end

  def test_pk_factory_reset
    conn = standard_connection
    db   = conn.db(MONGO_TEST_DB)
    db.pk_factory = Object.new # first time
    begin
      db.pk_factory = Object.new
      fail "error: expected exception"
    rescue => ex
      assert_match(/Cannot change/, ex.to_s)
    ensure
      conn.close
    end
  end

  def test_command
    assert_raise OperationFailure do
      @@db.command({:non_command => 1}, :check_response => true)
    end

    result = @@db.command({:non_command => 1}, :check_response => false)
    assert !Mongo::Support.ok?(result)
  end

  def test_error
    @@db.reset_error_history
    assert_nil @@db.get_last_error['err']
    assert !@@db.error?
    assert_nil @@db.previous_error

    @@db.command({:forceerror => 1}, :check_response => false)
    assert @@db.error?
    assert_not_nil @@db.get_last_error['err']
    assert_not_nil @@db.previous_error

    @@db.command({:forceerror => 1}, :check_response => false)
    assert @@db.error?
    assert @@db.get_last_error['err']
    prev_error = @@db.previous_error
    assert_equal 1, prev_error['nPrev']
    assert_equal prev_error["err"], @@db.get_last_error['err']

    @@db.collection('test').find_one
    assert_nil @@db.get_last_error['err']
    assert !@@db.error?
    assert @@db.previous_error
    assert_equal 2, @@db.previous_error['nPrev']

    @@db.reset_error_history
    assert_nil @@db.get_last_error['err']
    assert !@@db.error?
    assert_nil @@db.previous_error
  end

  def test_check_command_response
    command = {:forceerror => 1}
    raised = false
    begin
      @@db.command(command)
    rescue => ex
      raised = true
      assert ex.message.include?("forced error"),
             "error message does not contain 'forced error'"
      assert_equal 10038, ex.error_code

      if @@version >= "2.1.0"
        assert_equal 10038, ex.result['code']
      else
        assert_equal 10038, ex.result['assertionCode']
      end
    ensure
      assert raised, "No assertion raised!"
    end
  end

  def test_last_status
    @@db['test'].remove
    @@db['test'].save("i" => 1)

    @@db['test'].update({"i" => 1}, {"$set" => {"i" => 2}})
    assert @@db.get_last_error()["updatedExisting"]

    @@db['test'].update({"i" => 1}, {"$set" => {"i" => 500}})
    assert !@@db.get_last_error()["updatedExisting"]
  end

  def test_text_port_number_raises_no_errors
    client = standard_connection
    db     = client[MONGO_TEST_DB]
    db.collection('users').remove
  end

  def test_stored_function_management
    @@db.add_stored_function("sum", "function (x, y) { return x + y; }")
    assert_equal @@db.eval("return sum(2,3);"), 5
    assert @@db.remove_stored_function("sum")
    assert_raise OperationFailure do
      @@db.eval("return sum(2,3);")
    end
  end

  def test_eval
    @@db.eval("db.system.save({_id:'hello', value: function() { print('hello'); } })")
    assert_equal 'hello', @@db['system'].find_one['_id']
  end

  if @@version >= "1.3.5"
    def test_db_stats
      stats = @@db.stats
      assert stats.has_key?('collections')
      assert stats.has_key?('dataSize')
    end
  end

  context "database profiling" do
    setup do
      @db   = @@client[MONGO_TEST_DB]
      @coll = @db['test']
      @coll.remove
      @r1 = @coll.insert('a' => 1) # collection not created until it's used
    end

    should "set default profiling level" do
      assert_equal :off, @db.profiling_level
    end

    should "change profiling level" do
      @db.profiling_level = :slow_only
      assert_equal :slow_only, @db.profiling_level
      @db.profiling_level = :off
      assert_equal :off, @db.profiling_level
      @db.profiling_level = :all
      assert_equal :all, @db.profiling_level
      begin
        @db.profiling_level = :medium
        fail "shouldn't be able to do this"
      rescue
      end
    end

    should "return profiling info" do
      @db.profiling_level = :all
      @coll.find()
      @db.profiling_level = :off

      info = @db.profiling_info
      assert_kind_of Array, info
      assert info.length >= 1
      first = info.first
      assert_kind_of Time, first['ts']
      assert_kind_of Numeric, first['millis']
    end

    should "validate collection" do
      doc = @db.validate_collection(@coll.name)
      if @@version >= "1.9.1"
        assert doc['valid']
      else
        assert doc['result']
      end
    end
  end
end

# ruby-mongo-1.9.2/test/functional/grid_file_system_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

class GridFileSystemTest < Test::Unit::TestCase
  context "GridFileSystem:" do
    setup do
      @con = standard_connection
      @db  = @con.db(MONGO_TEST_DB)
    end

    teardown do
      @db.drop_collection('fs.files')
      @db.drop_collection('fs.chunks')
    end

    context "Initialization" do
      setup do
        @chunks_data = "CHUNKS" * 50000
        @grid = GridFileSystem.new(@db)
        @opts = {:w => 1}
        @original_opts = @opts.dup
        @grid.open('sample.file', 'w', @opts) do |f|
          f.write @chunks_data
        end
      end

      should "not modify original opts" do
        assert_equal @original_opts, @opts
      end
    end

    context "When reading:" do
      setup do
        @chunks_data = "CHUNKS" * 50000
        @grid = GridFileSystem.new(@db)
        @grid.open('sample.file', 'w') do |f|
          f.write @chunks_data
        end
        @grid = GridFileSystem.new(@db)
      end

      should "return existence of the file" do
        file = @grid.exist?(:filename => 'sample.file')
        assert_equal 'sample.file', file['filename']
      end

      should "return nil if the file doesn't exist" do
        assert_nil @grid.exist?(:filename => 'foo.file')
      end

      should "read sample data" do
        data = @grid.open('sample.file', 'r') { |f| f.read }
        assert_equal data.length, @chunks_data.length
      end

      should "have a unique index on chunks" do
        assert @db['fs.chunks'].index_information['files_id_1_n_1']['unique']
      end

      should "have an index on filename" do
        assert @db['fs.files'].index_information['filename_1_uploadDate_-1']
      end

      should "return an empty string if length is zero" do
        data = @grid.open('sample.file', 'r') { |f| f.read(0) }
        assert_equal '', data
      end

      should "return the first n bytes" do
        data = @grid.open('sample.file', 'r') {|f| f.read(288888) }
        assert_equal 288888, data.length
        assert_equal @chunks_data[0...288888], data
      end

      should "return the first n bytes even with an offset" do
        data = @grid.open('sample.file', 'r') do |f|
          f.seek(1000)
          f.read(288888)
        end
        assert_equal 288888, data.length
        assert_equal @chunks_data[1000...289888], data
      end
    end

    context "When writing:" do
      setup do
        @data = "BYTES" * 50
        @grid = GridFileSystem.new(@db)
        @grid.open('sample', 'w') do |f|
          f.write @data
        end
      end

      should "read sample data" do
        data = @grid.open('sample', 'r') { |f| f.read }
        assert_equal data.length, @data.length
      end

      should "return the total number of bytes written" do
        data = 'a' * 300000
        assert_equal 300000, @grid.open('sample', 'w') {|f| f.write(data) }
      end

      should "more read sample data" do
        data = @grid.open('sample', 'r') { |f| f.read }
        assert_equal data.length, @data.length
      end

      should "raise exception if file not found" do
        assert_raise GridFileNotFound do
          @grid.open('io', 'r') { |f| f.write('hello') }
        end
      end

      should "raise exception if not opened for write" do
        assert_raise GridError do
          @grid.open('sample', 'r') { |f| f.write('hello') }
        end
      end

      context "and when overwriting the file" do
        setup do
          @old = @grid.open('sample', 'r')

          @new_data = "DATA" * 10
          @grid.open('sample', 'w') do |f|
            f.write @new_data
          end

          @new = @grid.open('sample', 'r')
        end

        should "have a newer upload date" do
          assert @new.upload_date > @old.upload_date,
                 "New data is not greater than old date."
end should "have a different files_id" do assert_not_equal @new.files_id, @old.files_id end should "contain the new data" do assert_equal @new_data, @new.read, "Expected DATA" end context "and on a second overwrite" do setup do @new_data = "NEW" * 1000 @grid.open('sample', 'w') do |f| f.write @new_data end @ids = @db['fs.files'].find({'filename' => 'sample'}).map {|file| file['_id']} end should "write a third version of the file" do assert_equal 3, @db['fs.files'].find({'filename' => 'sample'}).count assert_equal 3, @db['fs.chunks'].find({'files_id' => {'$in' => @ids}}).count end should "remove all versions and their data on delete" do @grid.delete('sample') assert_equal 0, @db['fs.files'].find({'filename' => 'sample'}).count assert_equal 0, @db['fs.chunks'].find({'files_id' => {'$in' => @ids}}).count end should "delete all versions which exceed the number of versions to keep specified by the option :versions" do @versions = 1 + rand(4-1) @grid.open('sample', 'w', :versions => @versions) do |f| f.write @new_data end @new_ids = @db['fs.files'].find({'filename' => 'sample'}).map {|file| file['_id']} assert_equal @versions, @new_ids.length id = @new_ids.first assert !@ids.include?(id) assert_equal @versions, @db['fs.files'].find({'filename' => 'sample'}).count end should "delete old versions on write when :delete_old is passed in" do @grid.open('sample', 'w', :delete_old => true) do |f| f.write @new_data end @new_ids = @db['fs.files'].find({'filename' => 'sample'}).map {|file| file['_id']} assert_equal 1, @new_ids.length id = @new_ids.first assert !@ids.include?(id) assert_equal 1, @db['fs.files'].find({'filename' => 'sample'}).count assert_equal 1, @db['fs.chunks'].find({'files_id' => id}).count end end end end context "When writing chunks:" do setup do data = "B" * 50000 @grid = GridFileSystem.new(@db) @grid.open('sample', 'w', :chunk_size => 1000) do |f| f.write data end end should "write the correct number of chunks" do file = @db['fs.files'].find_one({:filename
=> 'sample'}) chunks = @db['fs.chunks'].find({'files_id' => file['_id']}).to_a assert_equal 50, chunks.length end end context "Positioning:" do setup do data = 'hello, world' + '1' * 5000 + 'goodbye!' + '2' * 1000 + '!' @grid = GridFileSystem.new(@db) @grid.open('hello', 'w', :chunk_size => 1000) do |f| f.write data end end should "seek within chunks" do @grid.open('hello', 'r') do |f| f.seek(0) assert_equal 'h', f.read(1) f.seek(7) assert_equal 'w', f.read(1) f.seek(4) assert_equal 'o', f.read(1) f.seek(0) f.seek(7, IO::SEEK_CUR) assert_equal 'w', f.read(1) f.seek(-2, IO::SEEK_CUR) assert_equal ' ', f.read(1) f.seek(-4, IO::SEEK_CUR) assert_equal 'l', f.read(1) f.seek(3, IO::SEEK_CUR) assert_equal 'w', f.read(1) end end should "seek between chunks" do @grid.open('hello', 'r') do |f| f.seek(1000) assert_equal '11111', f.read(5) f.seek(5009) assert_equal '111goodbye!222', f.read(14) f.seek(-1, IO::SEEK_END) assert_equal '!', f.read(1) f.seek(-6, IO::SEEK_END) assert_equal '2', f.read(1) end end should "tell the current position" do @grid.open('hello', 'r') do |f| assert_equal 0, f.tell f.seek(999) assert_equal 999, f.tell end end should "seek only in read mode" do assert_raise GridError do silently do @grid.open('hello', 'w') { |f| f.seek(0) } end end end end end end

# ruby-mongo-1.9.2/test/functional/grid_io_test.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
require 'test_helper' class GridIOTest < Test::Unit::TestCase context "GridIO" do setup do @db = standard_connection.db(MONGO_TEST_DB) @files = @db.collection('fs.files') @chunks = @db.collection('fs.chunks') @chunks.create_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]]) end teardown do @files.remove @chunks.remove end context "Options" do setup do @filename = 'test' @mode = 'w' end should "set default 256k chunk size" do file = GridIO.new(@files, @chunks, @filename, @mode) assert_equal 256 * 1024, file.chunk_size end should "set chunk size" do file = GridIO.new(@files, @chunks, @filename, @mode, :chunk_size => 1000) assert_equal 1000, file.chunk_size end end context "StringIO methods" do setup do @filename = 'test' @mode = 'w' @data = "012345678\n" * 100000 @file = GridIO.new(@files, @chunks, @filename, @mode) @file.write(@data) @file.close end should "read data character by character using getc" do bytes = 0 file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) while file.getc bytes += 1 end assert_equal bytes, 1_000_000 end should "read a given length when a length is given" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets(1000) assert_equal string.length, 1000 bytes = 0 bytes += string.length while string = file.gets(1000) bytes += string.length end assert_equal bytes, 1_000_000 end should "read to the end of the line by default and assign to $_" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets assert_equal 10, string.length end should "read to the end of the file one line at a time" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) bytes = 0 while string = file.gets bytes += string.length end assert_equal 1_000_000, bytes end should "read to the end of the file one multi-character separator at a time" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) bytes =
0 while string = file.gets("45") bytes += string.length end assert_equal 1_000_000, bytes end should "read to a given separator" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets("5") assert_equal 6, string.length end should "read a multi-character separator" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets("45") assert_equal 6, string.length string = file.gets("45") assert_equal "678\n012345", string string = file.gets("\n01") assert_equal "678\n01", string end should "read a multi-character separator with a length" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets("45", 3) assert_equal 3, string.length end should "tell position, eof, and rewind" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) file.read(1000) assert_equal 1000, file.pos assert !file.eof? file.read assert file.eof? file.rewind assert_equal 0, file.pos assert_equal 1_000_000, file.read.length end end context "Writing" do setup do @filename = 'test' @length = 50000 @times = 10 end should "correctly write multiple chunks from multiple writes" do file = GridIO.new(@files, @chunks, @filename, 'w') @times.times do file.write("1" * @length) end file.close file = GridIO.new(@files, @chunks, @filename, 'r') total_size = 0 while !file.eof?
total_size += file.read(@length).length end file.close assert_equal total_size, @times * @length end end context "Seeking" do setup do @filename = 'test' @mode = 'w' @data = "1" * 1024 * 1024 @file = GridIO.new(@files, @chunks, @filename, @mode) @file.write(@data) @file.close end should "read all data using read_length and then be able to seek" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) assert_equal @data, file.read(1024 * 1024) file.seek(0) assert_equal @data, file.read end should "read all data using read_all and then be able to seek" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) assert_equal @data, file.read file.seek(0) assert_equal @data, file.read file.seek(1024 * 512) assert_equal 524288, file.file_position assert_equal @data.length / 2, file.read.length assert_equal 1048576, file.file_position assert_nil file.read file.seek(1024 * 512) assert_equal 524288, file.file_position end end context "Grid MD5 check" do should "run in safe mode" do file = GridIO.new(@files, @chunks, 'smallfile', 'w') file.write("DATA" * 100) assert file.close assert_equal file.server_md5, file.client_md5 end should "validate with a large file" do io = File.open(File.join(TEST_DATA, 'sample_file.pdf'), 'r') file = GridIO.new(@files, @chunks, 'bigfile', 'w') file.write(io) assert file.close assert_equal file.server_md5, file.client_md5 end should "raise an exception when check fails" do io = File.open(File.join(TEST_DATA, 'sample_file.pdf'), 'r') @db.stubs(:command).returns({'md5' => '12345'}) file = GridIO.new(@files, @chunks, 'bigfile', 'w') file.write(io) assert_raise GridMD5Failure do assert file.close end assert_not_equal file.server_md5, file.client_md5 end end context "Content types" do if defined?(MIME) should "determine common content types from the extension" do file = GridIO.new(@files, @chunks, 'sample.pdf', 'w') assert_equal 'application/pdf', file.content_type file = GridIO.new(@files, @chunks, 
'sample.txt', 'w') assert_equal 'text/plain', file.content_type end end should "default to binary/octet-stream when type is unknown" do file = GridIO.new(@files, @chunks, 'sample.l33t', 'w') assert_equal 'binary/octet-stream', file.content_type end should "use any provided content type by default" do file = GridIO.new(@files, @chunks, 'sample.l33t', 'w', :content_type => 'image/jpg') assert_equal 'image/jpg', file.content_type end end end end

# ruby-mongo-1.9.2/test/functional/grid_test.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' include Mongo def read_and_write_stream(filename, read_length, opts={}) io = File.open(File.join(TEST_DATA, filename), 'r+b') id = @grid.put(io, opts.merge!(:filename => filename + read_length.to_s)) file = @grid.get(id) io.rewind data = io.read if data.respond_to?(:force_encoding) data.force_encoding("binary") end read_data = "" while(chunk = file.read(read_length)) read_data << chunk break if chunk.empty?
end assert_equal data.length, read_data.length end class GridTest < Test::Unit::TestCase context "Tests:" do setup do @db = standard_connection.db(MONGO_TEST_DB) @files = @db.collection('test-fs.files') @chunks = @db.collection('test-fs.chunks') end teardown do @files.remove @chunks.remove end context "A one-chunk grid-stored file" do setup do @data = "GRIDDATA" * 5 @grid = Grid.new(@db, 'test-fs') @id = @grid.put(@data, :filename => 'sample', :metadata => {'app' => 'photos'}) end should "retrieve the file" do data = @grid.get(@id).data assert_equal @data, data end end context "A basic grid-stored file" do setup do @data = "GRIDDATA" * 50000 @grid = Grid.new(@db, 'test-fs') @id = @grid.put(@data, :filename => 'sample', :metadata => {'app' => 'photos'}) end should "check existence" do file = @grid.exist?(:filename => 'sample') assert_equal 'sample', file['filename'] end should "return nil if it doesn't exist" do assert_nil @grid.exist?(:metadata => 'foo') end should "retrieve the stored data" do data = @grid.get(@id).data assert_equal @data.length, data.length end should "have a unique index on chunks" do assert @chunks.index_information['files_id_1_n_1']['unique'] end should "store the filename" do file = @grid.get(@id) assert_equal 'sample', file.filename end should "store any relevant metadata" do file = @grid.get(@id) assert_equal 'photos', file.metadata['app'] end should "delete the file and any chunks" do @grid.delete(@id) assert_raise GridFileNotFound do @grid.get(@id) end assert_equal nil, @db['test-fs']['chunks'].find_one({:files_id => @id}) end end context "Filename not required" do setup do @data = "GRIDDATA" * 50000 @grid = Grid.new(@db, 'test-fs') @metadata = {'app' => 'photos'} end should "store the file with the old filename api" do id = @grid.put(@data, :filename => 'sample', :metadata => @metadata) file = @grid.get(id) assert_equal 'sample', file.filename assert_equal @metadata, file.metadata end should "store without a filename" do id = 
@grid.put(@data, :metadata => @metadata) file = @grid.get(id) assert_nil file.filename file_doc = @files.find_one({'_id' => id}) assert !file_doc.has_key?('filename') assert_equal @metadata, file.metadata end should "store with filename and metadata with the new api" do id = @grid.put(@data, :filename => 'sample', :metadata => @metadata) file = @grid.get(id) assert_equal 'sample', file.filename assert_equal @metadata, file.metadata end end context "Writing arbitrary data fields" do setup do @data = "GRIDDATA" * 50000 @grid = Grid.new(@db, 'test-fs') end should "write random keys to the files collection" do id = @grid.put(@data, :phrases => ["blimey", "ahoy!"]) file = @grid.get(id) assert_equal ["blimey", "ahoy!"], file['phrases'] end should "ignore special keys" do id = silently do @grid.put(@data, :file_length => 100, :phrase => "blimey") end file = @grid.get(id) assert_equal "blimey", file['phrase'] assert_equal 400_000, file.file_length end end context "Storing data with a length of zero" do setup do @grid = Grid.new(@db, 'test-fs') @id = silently do @grid.put('', :filename => 'sample', :metadata => {'app' => 'photos'}) end end should "return the zero length" do data = @grid.get(@id) assert_equal 0, data.read.length end end context "Grid streaming: " do setup do @grid = Grid.new(@db, 'test-fs') filename = 'sample_data' @io = File.open(File.join(TEST_DATA, filename), 'r') id = @grid.put(@io, :filename => filename) @file = @grid.get(id) @io.rewind @data = @io.read if @data.respond_to?(:force_encoding) @data.force_encoding("binary") end end should "be equal in length" do @io.rewind assert_equal @io.read.length, @file.read.length end should "read the file" do read_data = "" @file.each do |chunk| read_data << chunk end assert_equal @data.length, read_data.length end should "read the file if no block is given" do read_data = @file.each assert_equal @data.length, read_data.length end end context "Grid streaming an empty file: " do setup do @grid = Grid.new(@db, 
'test-fs') filename = 'empty_data' @io = File.open(File.join(TEST_DATA, filename), 'r') id = silently do @grid.put(@io, :filename => filename) end @file = @grid.get(id) @io.rewind @data = @io.read if @data.respond_to?(:force_encoding) @data.force_encoding("binary") end end should "be equal in length" do @io.rewind assert_equal @io.read.length, @file.read.length end should "read the file" do read_data = "" @file.each do |chunk| read_data << chunk end assert_equal @data.length, read_data.length end should "read the file if no block is given" do read_data = @file.each assert_equal @data.length, read_data.length end end context "Streaming: " do setup do @grid = Grid.new(@db, 'test-fs') end should "put and get a small io object with a small chunk size" do read_and_write_stream('small_data.txt', 1, :chunk_size => 2) end should "put and get an empty io object" do silently do read_and_write_stream('empty_data', 1) end end should "put and get a small io object" do read_and_write_stream('small_data.txt', 1) end should "put and get a large io object if reading less than the chunk size" do read_and_write_stream('sample_data', 256 * 1024) end should "put and get a large io object if reading more than the chunk size" do read_and_write_stream('sample_data', 300 * 1024) end end end end

# ruby-mongo-1.9.2/test/functional/pool_test.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
require 'test_helper' require 'thread' class PoolTest < Test::Unit::TestCase include Mongo def setup @client ||= standard_connection({:pool_size => 15, :pool_timeout => 5}) @db = @client.db(MONGO_TEST_DB) @collection = @db.collection("pool_test") end def test_pool_affinity pool = Pool.new(@client, TEST_HOST, TEST_PORT, :size => 5) threads = [] 10.times do threads << Thread.new do original_socket = pool.checkout pool.checkin(original_socket) 500.times do socket = pool.checkout assert_equal original_socket, socket pool.checkin(socket) end end end threads.each { |t| t.join } end def test_pool_affinity_max_size docs = [] 8000.times {|x| docs << {:value => x}} @collection.insert(docs) threads = [] threads << Thread.new do @collection.find({"value" => {"$lt" => 100}}).each {|e| e} Thread.pass sleep(0.125) @collection.find({"value" => {"$gt" => 100}}).each {|e| e} end threads << Thread.new do @collection.find({'$where' => "function() {for(i=0;i<1000;i++) {this.value};}"}).each {|e| e} end threads.each(&:join) end end

# ruby-mongo-1.9.2/test/functional/safe_test.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
require 'test_helper' include Mongo class SafeTest < Test::Unit::TestCase context "Safe mode propagation: " do setup do @connection = standard_connection({:safe => true}, true) # Legacy @db = @connection[MONGO_TEST_DB] @collection = @db['test-safe'] @collection.create_index([[:a, 1]], :unique => true) @collection.remove end should "propagate safe option on insert" do @collection.insert({:a => 1}) assert_raise_error(OperationFailure, "duplicate key") do @collection.insert({:a => 1}) end end should "allow safe override on insert" do @collection.insert({:a => 1}) @collection.insert({:a => 1}, :safe => false) end should "allow safe override on save" do @collection.insert({:a => 1}) id = @collection.insert({:a => 2}) assert_nothing_raised do @collection.save({:_id => id.to_s, :a => 1}, :safe => false) end end should "propagate safe option on save" do @collection.insert({:a => 1}) id = @collection.insert({:a => 2}) assert_raise(OperationFailure) do @collection.save({:_id => id.to_s, :a => 1}) end end should "propagate safe option on update" do @collection.insert({:a => 1}) @collection.insert({:a => 2}) assert_raise_error(OperationFailure, "duplicate key") do @collection.update({:a => 2}, {:a => 1}) end end should "allow safe override on update" do @collection.insert({:a => 1}) @collection.insert({:a => 2}) @collection.update({:a => 2}, {:a => 1}, :safe => false) end end context "Safe error objects" do setup do @connection = standard_connection({:safe => true}, true) # Legacy @db = @connection[MONGO_TEST_DB] @collection = @db['test'] @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 1}) @collection.insert({:a => 1}) end should "return object on update" do response = @collection.update({:a => 1}, {"$set" => {:a => 2}}, :multi => true) assert response['updatedExisting'] assert_equal 3, response['n'] end should "return object on remove" do response = @collection.remove({}) assert_equal 3, response['n'] end end
end

# ruby-mongo-1.9.2/test/functional/ssl_test.rb
require 'test_helper' class SSLCertValidationTest < Test::Unit::TestCase include Mongo CERT_PATH = "#{Dir.pwd}/test/fixtures/certificates/" CLIENT_CERT = "#{CERT_PATH}client.pem" CA_CERT = "#{CERT_PATH}ca.pem" # This test doesn't connect, no server config required def test_ssl_configuration # raises when ssl=false and ssl opts specified assert_raise MongoArgumentError do MongoClient.new('server', 27017, :connect => false, :ssl => false, :ssl_cert => CLIENT_CERT) end # raises when ssl=nil and ssl opts specified assert_raise MongoArgumentError do MongoClient.new('server', 27017, :connect => false, :ssl_key => CLIENT_CERT) end # raises when verify=true and no ca_cert assert_raise MongoArgumentError do MongoClient.new('server', 27017, :connect => false, :ssl => true, :ssl_key => CLIENT_CERT, :ssl_cert => CLIENT_CERT, :ssl_verify => true) end end # Requires MongoDB built with SSL and the following options: # # mongod --dbpath /path/to/data/directory --sslOnNormalPorts \ # --sslPEMKeyFile /path/to/server.pem \ # --sslCAFile /path/to/ca.pem \ # --sslCRLFile /path/to/crl.pem \ # --sslWeakCertificateValidation # # Make sure you have 'server' as an alias for localhost in /etc/hosts # def test_ssl_basic client = MongoClient.new('server', 27017, :connect => false, :ssl => true) assert client.connect end # Requires MongoDB built with SSL and the following options: # # mongod --dbpath /path/to/data/directory --sslOnNormalPorts \ # --sslPEMKeyFile /path/to/server.pem \ # --sslCAFile /path/to/ca.pem \ # --sslCRLFile /path/to/crl.pem # # Make sure you have 'server' as an alias for localhost in /etc/hosts # def test_ssl_with_cert client = MongoClient.new('server', 27017, :connect => false, :ssl => true, :ssl_cert => CLIENT_CERT, :ssl_key => CLIENT_CERT) assert client.connect end def test_ssl_with_peer_cert_validation client = MongoClient.new('server', 27017,
:connect => false, :ssl => true, :ssl_key => CLIENT_CERT, :ssl_cert => CLIENT_CERT, :ssl_verify => true, :ssl_ca_cert => CA_CERT) assert client.connect end def test_ssl_peer_cert_validation_hostname_fail client = MongoClient.new('localhost', 27017, :connect => false, :ssl => true, :ssl_key => CLIENT_CERT, :ssl_cert => CLIENT_CERT, :ssl_verify => true, :ssl_ca_cert => CA_CERT) assert_raise ConnectionFailure do client.connect end end # Requires mongod built with SSL and the following options: # # mongod --dbpath /path/to/data/directory --sslOnNormalPorts \ # --sslPEMKeyFile /path/to/server.pem \ # --sslCAFile /path/to/ca.pem \ # --sslCRLFile /path/to/crl_client_revoked.pem # # Make sure you have 'server' as an alias for localhost in /etc/hosts # def test_ssl_with_invalid_cert assert_raise ConnectionFailure do MongoClient.new('server', 27017, :ssl => true, :ssl_key => CLIENT_CERT, :ssl_cert => CLIENT_CERT, :ssl_verify => true, :ssl_ca_cert => CA_CERT) end end end

# ruby-mongo-1.9.2/test/functional/support_test.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
require 'test_helper' class SupportTest < Test::Unit::TestCase def test_command_response_succeeds assert Support.ok?('ok' => 1) assert Support.ok?('ok' => 1.0) assert Support.ok?('ok' => true) end def test_command_response_fails assert !Support.ok?('ok' => 0) assert !Support.ok?('ok' => 0.0) assert !Support.ok?('ok' => 'str') assert !Support.ok?('ok' => false) end def test_array_of_pairs hps = [["localhost", 27017], ["localhost", 27018], ["localhost", 27019]] assert_equal [["localhost", 27017], ["localhost", 27018], ["localhost", 27019]], Support.normalize_seeds(hps) end def test_array_of_strings hps = ["localhost:27017", "localhost:27018", "localhost:27019"] assert_equal [["localhost", 27017], ["localhost", 27018], ["localhost", 27019]], Support.normalize_seeds(hps) end def test_single_string_with_host_port hps = "localhost:27017" assert_equal ["localhost", 27017], Support.normalize_seeds(hps) end def test_single_string_missing_port hps = "localhost" assert_equal ["localhost", 27017], Support.normalize_seeds(hps) end def test_single_element_array_missing_port hps = ["localhost"] assert_equal ["localhost", 27017], Support.normalize_seeds(hps) end def test_pair_doesnt_get_converted hps = ["localhost", 27017] assert_equal ["localhost", 27017], Support.normalize_seeds(hps) end end

# ruby-mongo-1.9.2/test/functional/threading_test.rb
# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class TestThreading < Test::Unit::TestCase include Mongo @@client = standard_connection(:pool_size => 2, :pool_timeout => 30) @@db = @@client[MONGO_TEST_DB] @@coll = @@db.collection('thread-test-collection') def set_up_safe_data @@db.drop_collection('duplicate') @@db.drop_collection('unique') @duplicate = @@db.collection('duplicate') @unique = @@db.collection('unique') @duplicate.insert("test" => "insert") @duplicate.insert("test" => "update") @unique.insert("test" => "insert") @unique.insert("test" => "update") @unique.create_index("test", :unique => true) end def test_safe_update times = [] set_up_safe_data threads = [] 25.times do |i| threads[i] = Thread.new do 100.times do if i % 2 == 0 assert_raise Mongo::OperationFailure do t1 = Time.now @unique.update({"test" => "insert"}, {"$set" => {"test" => "update"}}) times << Time.now - t1 end else t1 = Time.now @duplicate.update({"test" => "insert"}, {"$set" => {"test" => "update"}}) times << Time.now - t1 end end end end 25.times do |i| threads[i].join end end def test_safe_insert set_up_safe_data threads = [] 25.times do |i| threads[i] = Thread.new do if i % 2 == 0 assert_raise Mongo::OperationFailure do @unique.insert({"test" => "insert"}) end else @duplicate.insert({"test" => "insert"}) end end end 25.times do |i| threads[i].join end end def test_threading @@coll.drop @@coll = @@db.collection('thread-test-collection') docs = [] 1000.times {|i| docs << {:x => i}} @@coll.insert(docs) threads = [] 10.times do |i| threads[i] = Thread.new do sum = 0 @@coll.find().each do |document| sum += document["x"] end assert_equal 499500, sum end end 10.times do |i| threads[i].join end end end

# ruby-mongo-1.9.2/test/functional/timeout_test.rb
# Copyright (C) 2013 10gen Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class TestTimeout < Test::Unit::TestCase def test_op_timeout connection = standard_connection(:op_timeout => 0.5) admin = connection.db('admin') command = {:eval => "sleep(100)"} # Should not timeout assert admin.command(command) # Should timeout command = {:eval => "sleep(1000)"} assert_raise Mongo::OperationTimeout do admin.command(command) end end def test_external_timeout_does_not_leave_socket_in_bad_state client = Mongo::MongoClient.new db = client[MONGO_TEST_DB] coll = db['timeout-tests'] # prepare the database coll.drop coll.insert({:a => 1}) # use external timeout to mangle socket begin Timeout::timeout(0.5) do db.command({:eval => "sleep(1000)"}) end rescue Timeout::Error #puts "Thread timed out and has now mangled the socket" end assert_nothing_raised do coll.find_one end end =begin def test_ssl_op_timeout connection = standard_connection(:op_timeout => 1, :ssl => true) coll = connection.db(MONGO_TEST_DB).collection("test") coll.insert({:a => 1}) # Should not timeout assert coll.find_one({"$where" => "sleep(100); return true;"}) # Should timeout assert_raise Mongo::OperationTimeout do coll.find_one({"$where" => "sleep(5 * 1000); return true;"}) end coll.remove end =end end

# ruby-mongo-1.9.2/test/functional/uri_test.rb
# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class URITest < Test::Unit::TestCase
  include Mongo

  def test_uri_without_port
    parser = Mongo::URIParser.new('mongodb://localhost')
    assert_equal 1, parser.nodes.length
    assert_equal 'localhost', parser.nodes[0][0]
    assert_equal 27017, parser.nodes[0][1]
  end

  def test_basic_uri
    parser = Mongo::URIParser.new('mongodb://localhost:27018')
    assert_equal 1, parser.nodes.length
    assert_equal 'localhost', parser.nodes[0][0]
    assert_equal 27018, parser.nodes[0][1]
  end

  def test_multiple_uris
    parser = Mongo::URIParser.new('mongodb://a.example.com:27018,b.example.com')
    assert_equal 2, parser.nodes.length
    assert_equal ['a.example.com', 27018], parser.nodes[0]
    assert_equal ['b.example.com', 27017], parser.nodes[1]
  end

  def test_complex_passwords
    parser = Mongo::URIParser.new('mongodb://bob:secret.word@a.example.com:27018/test')
    assert_equal "bob", parser.auths[0][:username]
    assert_equal "secret.word", parser.auths[0][:password]

    parser = Mongo::URIParser.new('mongodb://bob:s-_3#%R.t@a.example.com:27018/test')
    assert_equal "bob", parser.auths[0][:username]
    assert_equal "s-_3#%R.t", parser.auths[0][:password]
  end

  def test_complex_usernames
    parser = Mongo::URIParser.new('mongodb://b:ob:secret.word@a.example.com:27018/test')
    assert_equal "b:ob", parser.auths[0][:username]
  end

  def test_username_with_encoded_symbol
    parser = Mongo::URIParser.new('mongodb://f%40o:bar@localhost/admin')
    username = parser.auths.first[:username]
    assert_equal 'f@o', username
  end

  def test_password_with_encoded_symbol
    parser = Mongo::URIParser.new('mongodb://foo:b%40r@localhost/admin')
    password = parser.auths.first[:password]
    assert_equal 'b@r', password
  end

  def test_passwords_contain_no_commas
    assert_raise MongoArgumentError do
      Mongo::URIParser.new('mongodb://bob:a,b@a.example.com:27018/test')
    end
  end

  def test_opts_with_semicolon_separator
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=direct;slaveok=true;safe=true')
    assert_equal 'direct', parser.connect
    assert parser.direct?
    assert parser.slaveok
    assert parser.safe
  end

  def test_opts_with_amp_separator
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=direct&slaveok=true&safe=true')
    assert_equal 'direct', parser.connect
    assert parser.direct?
    assert parser.slaveok
    assert parser.safe
  end

  def test_opts_with_uri_encoded_stuff
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=%64%69%72%65%63%74&slaveok=%74%72%75%65&safe=true')
    assert_equal 'direct', parser.connect
    assert parser.direct?
    assert parser.slaveok
    assert parser.safe
  end

  def test_opts_made_invalid_by_mixed_separators
    assert_raise_error MongoArgumentError, "must not mix URL separators ; and &" do
      Mongo::URIParser.new('mongodb://localhost:27018?replicaset=foo;bar&slaveok=true&safe=true')
    end
  end

  def test_opts_safe
    parser = Mongo::URIParser.new('mongodb://localhost:27018?safe=true;w=2;journal=true;fsync=true;wtimeoutMS=200')
    assert parser.safe
    assert_equal 2, parser.w
    assert parser.fsync
    assert parser.journal
    assert_equal 200, parser.wtimeoutms
  end

  def test_opts_ssl
    parser = Mongo::URIParser.new('mongodb://localhost:27018?ssl=true;w=2;journal=true;fsync=true;wtimeoutMS=200')
    assert parser.ssl
  end

  def test_opts_nonsafe_timeout
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connectTimeoutMS=5500&socketTimeoutMS=500')
    assert_equal 5.5, parser.connecttimeoutms
    assert_equal 0.5, parser.sockettimeoutms
  end

  def test_opts_replica_set
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=replicaset;replicaset=foo')
    assert_equal 'foo', parser.replicaset
    assert_equal 'replicaset', parser.connect
    assert parser.replicaset?
  end

  def test_opts_conflicting_replica_set
    assert_raise_error MongoArgumentError, "connect=direct conflicts with setting a replicaset name" do
      Mongo::URIParser.new('mongodb://localhost:27018?connect=direct;replicaset=foo')
    end
  end

  def test_case_insensitivity
    parser = Mongo::URIParser.new('mongodb://localhost:27018?wtimeoutms=500&JOURNAL=true&SaFe=true')
    assert_equal 500, parser.wtimeoutms
    assert_equal true, parser.journal
    assert_equal true, parser.safe
  end

  def test_read_preference_option_primary
    parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=primary")
    assert_equal :primary, parser.readpreference
  end

  def test_read_preference_option_primary_preferred
    parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=primaryPreferred")
    assert_equal :primary_preferred, parser.readpreference
  end

  def test_read_preference_option_secondary
    parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=secondary")
    assert_equal :secondary, parser.readpreference
  end

  def test_read_preference_option_secondary_preferred
    parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=secondaryPreferred")
    assert_equal :secondary_preferred, parser.readpreference
  end

  def test_read_preference_option_nearest
    parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=nearest")
    assert_equal :nearest, parser.readpreference
  end

  def test_read_preference_option_with_invalid
    assert_raise_error MongoArgumentError do
      Mongo::URIParser.new("mongodb://localhost:27018?readPreference=invalid")
    end
  end

  def test_read_preference_connection_options
    parser = Mongo::URIParser.new("mongodb://localhost:27018?replicaset=test&readPreference=nearest")
    assert_equal :nearest, parser.connection_options[:read]
  end

  def test_read_preference_connection_options_with_no_replica_set
    parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=nearest")
    assert_equal :nearest, parser.connection_options[:read]
  end

  def test_read_preference_connection_options_prefers_preference_over_slaveok
    parser = Mongo::URIParser.new("mongodb://localhost:27018?replicaset=test&readPreference=nearest&slaveok=true")
    assert_equal :nearest, parser.connection_options[:read]
  end

  def test_connection_when_sharded_with_no_options
    parser = Mongo::URIParser.new("mongodb://localhost:27017,localhost:27018")
    client = parser.connection({}, false, true)
    assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds
    assert_true client.mongos?
  end

  def test_connection_when_sharded_with_options
    parser = Mongo::URIParser.new("mongodb://localhost:27017,localhost:27018")
    client = parser.connection({ :refresh_interval => 10 }, false, true)
    assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds
    assert_equal 10, client.refresh_interval
    assert_true client.mongos?
  end

  def test_connection_when_sharded_with_uri_options
    parser = Mongo::URIParser.new("mongodb://localhost:27017,localhost:27018?readPreference=nearest")
    client = parser.connection({}, false, true)
    assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds
    assert_equal :nearest, client.read
    assert_true client.mongos?
  end
end

# ---- ruby-mongo-1.9.2/test/functional/write_concern_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
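The write-concern tests that follow exercise one core rule: per-operation options override client-level defaults, and `:w => 0` makes a write unacknowledged. A minimal pure-Ruby sketch of that merge logic (illustrative only — `DEFAULT_WRITE_CONCERN` and `effective_write_concern` are hypothetical names, not the driver's internals):

```ruby
# Hypothetical sketch of write-concern merging; not the driver's actual code.
DEFAULT_WRITE_CONCERN = {:w => 1, :fsync => false, :j => false}

def effective_write_concern(client_opts, op_opts = {})
  # Per-operation options win over client options, which win over defaults.
  merged = DEFAULT_WRITE_CONCERN.merge(client_opts).merge(op_opts)
  # :w => 0 means the server sends no acknowledgement for the write.
  merged[:acknowledged] = merged[:w].is_a?(Integer) ? merged[:w] > 0 : true
  merged
end
```

With a client configured for `:w => 2`, `effective_write_concern({:w => 2}, {:w => 0})` yields an unacknowledged concern, which is the behavior the "allow write concern override" tests below assert against a live server.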
require 'test_helper'
include Mongo

class WriteConcernTest < Test::Unit::TestCase

  context "Write concern propagation: " do
    setup do
      @con = standard_connection
      @db = @con[MONGO_TEST_DB]
      @col = @db['test-safe']
      @col.create_index([[:a, 1]], :unique => true)
      @col.remove
    end

    # TODO: add write concern tests for remove

    should "propagate write concern options on insert" do
      @col.insert({:a => 1})

      assert_raise_error(OperationFailure, "duplicate key") do
        @col.insert({:a => 1})
      end
    end

    should "allow write concern override on insert" do
      @col.insert({:a => 1})
      @col.insert({:a => 1}, :w => 0)
    end

    should "propagate write concern option on update" do
      @col.insert({:a => 1})
      @col.insert({:a => 2})

      assert_raise_error(OperationFailure, "duplicate key") do
        @col.update({:a => 2}, {:a => 1})
      end
    end

    should "allow write concern override on update" do
      @col.insert({:a => 1})
      @col.insert({:a => 2})
      @col.update({:a => 2}, {:a => 1}, :w => 0)
    end
  end

  context "Write concern error objects" do
    setup do
      @con = standard_connection
      @db = @con[MONGO_TEST_DB]
      @col = @db['test']
      @col.remove
      @col.insert({:a => 1})
      @col.insert({:a => 1})
      @col.insert({:a => 1})
    end

    should "return object on update" do
      response = @col.update({:a => 1}, {"$set" => {:a => 2}}, :multi => true)
      assert response['updatedExisting']
      assert_equal 3, response['n']
    end

    should "return object on remove" do
      response = @col.remove({})
      assert_equal 3, response['n']
    end
  end

  context "Write concern in gridfs" do
    setup do
      @db = standard_connection.db(MONGO_TEST_DB)
      @grid = Mongo::GridFileSystem.new(@db)
      @filename = 'sample'
    end

    teardown do
      @grid.delete(@filename)
    end

    should "acknowledge writes by default using md5" do
      file = @grid.open(@filename, 'w')
      file.write "Hello world!"
      file.close
      assert_equal file.client_md5, file.server_md5
    end

    should "allow for unacknowledged writes" do
      file = @grid.open(@filename, 'w', {:w => 0})
      file.write "Hello world!"
      file.close
      assert_nil file.client_md5, file.server_md5
    end

    should "support legacy write concern api" do
      file = @grid.open(@filename, 'w', {:safe => false})
      file.write "Hello world!"
      file.close
      assert_nil file.client_md5, file.server_md5
    end
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/authentication_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
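The authentication test below builds a `mongodb://user:password@...` URI by string interpolation, and the URI tests earlier verify that percent-encoded credentials (`f%40o` → `f@o`) round-trip. A small pure-Ruby sketch of assembling such a URI safely (a hypothetical helper, not part of the driver):

```ruby
require 'cgi'

# Hypothetical helper: build a MongoDB connection URI from credentials and
# seed addresses, percent-encoding reserved characters so that a literal
# '@' or ':' in the username/password survives parsing.
def build_mongodb_uri(user, password, seeds, db_name, opts = {})
  query = opts.map { |k, v| "#{k}=#{v}" }.join('&')
  uri = "mongodb://#{CGI.escape(user)}:#{CGI.escape(password)}@#{seeds.join(',')}/#{db_name}"
  uri += "?#{query}" unless query.empty?
  uri
end
```

For example, `build_mongodb_uri('f@o', 'bar', ['localhost:27017'], 'admin')` produces the `mongodb://f%40o:bar@localhost:27017/admin` form exercised by `test_username_with_encoded_symbol`.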
require 'test_helper'
require 'shared/authentication'

class ReplicaSetAuthenticationTest < Test::Unit::TestCase
  include Mongo
  include AuthenticationTests

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name, :connect_timeout => 60)
    @db = @client[MONGO_TEST_DB]
    init_auth
  end

  def test_authenticate_with_connection_uri
    @db.add_user('eunice', 'uritest')

    client = MongoReplicaSetClient.from_uri(
      "mongodb://eunice:uritest@#{@rs.repl_set_seeds.join(',')}/#{@db.name}" +
      "?replicaSet=#{@rs.repl_set_name}")

    assert client
    assert_equal client.auths.size, 1
    assert client[MONGO_TEST_DB]['auth_test'].count

    auth = client.auths.first
    assert_equal @db.name, auth[:db_name]
    assert_equal 'eunice', auth[:username]
    assert_equal 'uritest', auth[:password]
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/basic_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class BasicTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
  end

  def test_connect
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    assert client.connected?
    assert_equal @rs.primary_name, client.primary.join(':')
    assert_equal @rs.secondary_names.sort, client.secondaries.collect{|s| s.join(':')}.sort
    assert_equal @rs.arbiter_names.sort, client.arbiters.collect{|s| s.join(':')}.sort
    client.close

    silently do
      client = MongoReplicaSetClient.new(@rs.repl_set_seeds_old, :name => @rs.repl_set_name)
    end
    assert client.connected?
    client.close
  end

  def test_safe_option
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    assert client.connected?
    assert client.write_concern[:w] > 0
    client.close

    client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name, :w => 0)
    assert client.connected?
    assert client.write_concern[:w] < 1
    client.close

    client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name, :w => 2)
    assert client.connected?
    assert client.write_concern[:w] > 0
    client.close
  end

  def test_multiple_concurrent_replica_set_connection
    client1 = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    client2 = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    assert client1.connected?
    assert client2.connected?

    assert client1.manager != client2.manager
    assert client1.local_manager != client2.local_manager

    client1.close
    client2.close
  end

  def test_cache_original_seed_nodes
    host = @rs.servers.first.host
    seeds = @rs.repl_set_seeds << "#{host}:19356"
    client = MongoReplicaSetClient.new(seeds, :name => @rs.repl_set_name)
    assert client.connected?
    assert client.seeds.include?([host, 19356]), "Original seed nodes not cached!"
    assert_equal [host, 19356], client.seeds.last, "Original seed nodes not cached!"
    client.close
  end

  def test_accessors
    seeds = @rs.repl_set_seeds
    args = {:name => @rs.repl_set_name}
    client = MongoReplicaSetClient.new(seeds, args)
    assert_equal @rs.primary_name, [client.host, client.port].join(':')
    assert_equal client.host, client.primary_pool.host
    assert_equal client.port, client.primary_pool.port
    assert_equal 2, client.secondaries.length
    assert_equal 2, client.secondary_pools.length
    assert_equal @rs.repl_set_name, client.replica_set_name
    assert client.secondary_pools.include?(client.read_pool({:mode => :secondary}))
    assert_equal 90, client.refresh_interval
    assert_equal client.refresh_mode, false
    client.close
  end

  context "Socket pools" do
    context "checking out writers" do
      setup do
        seeds = @rs.repl_set_seeds
        args = {:name => @rs.repl_set_name}
        @client = MongoReplicaSetClient.new(seeds, args)
        @coll = @client[MONGO_TEST_DB]['test-connection-exceptions']
      end

      should "close the connection on send_message for major exceptions" do
        @client.expects(:checkout_writer).raises(SystemStackError)
        @client.expects(:close)
        begin
          @coll.insert({:foo => "bar"})
        rescue SystemStackError
        end
      end

      should "close the connection on send_message_with_gle for major exceptions" do
        @client.expects(:checkout_writer).raises(SystemStackError)
        @client.expects(:close)
        begin
          @coll.insert({:foo => "bar"})
        rescue SystemStackError
        end
      end

      should "close the connection on receive_message for major exceptions" do
        @client.expects(:checkout_reader).raises(SystemStackError)
        @client.expects(:close)
        begin
          @coll.find({}, :read => :primary).next
        rescue SystemStackError
        end
      end
    end

    context "checking out readers" do
      setup do
        seeds = @rs.repl_set_seeds
        args = {:name => @rs.repl_set_name}
        @client = MongoReplicaSetClient.new(seeds, args)
        @coll = @client[MONGO_TEST_DB]['test-connection-exceptions']
      end

      should "close the connection on receive_message for major exceptions" do
        @client.expects(:checkout_reader).raises(SystemStackError)
        @client.expects(:close)
        begin
          @coll.find({}, :read => :secondary).next
        rescue SystemStackError
        end
      end
    end
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/client_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ClientTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
    @client = nil
  end

  def teardown
    @client.close if @client
  end

  def test_reconnection
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    assert @client.connected?

    manager = @client.local_manager

    @client.close
    assert !@client.connected?
    assert !@client.local_manager

    @client.connect
    assert @client.connected?
    assert_equal @client.local_manager, manager
  end

  # TODO: test connect timeout.

  def test_connect_with_deprecated_multi
    silently do
      # guaranteed to have one data-holding member
      @client = MongoClient.multi(@rs.repl_set_seeds_old, :name => @rs.repl_set_name)
    end
    assert !@client.nil?
    assert @client.connected?
  end

  def test_connect_bad_name
    assert_raise_error(ReplicaSetConnectionError, "-wrong") do
      @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name + "-wrong")
    end
  end

  def test_connect_with_first_secondary_node_terminated
    @rs.secondaries.first.stop

    rescue_connection_failure do
      @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    end
    assert @client.connected?
  end

  def test_connect_with_last_secondary_node_terminated
    @rs.secondaries.last.stop

    rescue_connection_failure do
      @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    end
    assert @client.connected?
  end

  def test_connect_with_primary_stepped_down
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    @client[MONGO_TEST_DB]['bar'].save({:a => 1}, {:w => 3})
    assert @client[MONGO_TEST_DB]['bar'].find_one

    primary = Mongo::MongoClient.new(*@client.primary)
    assert_raise Mongo::ConnectionFailure do
      primary['admin'].command(step_down_command)
    end
    assert @client.connected?

    rescue_connection_failure do
      @client[MONGO_TEST_DB]['bar'].find_one
    end
    @client[MONGO_TEST_DB]['bar'].find_one
  end

  def test_connect_with_primary_killed
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    assert @client.connected?
    @client[MONGO_TEST_DB]['bar'].save({:a => 1}, {:w => 3})
    assert @client[MONGO_TEST_DB]['bar'].find_one

    @rs.primary.kill(Signal.list['KILL'])

    sleep(3)

    rescue_connection_failure do
      @client[MONGO_TEST_DB]['bar'].find_one
    end
    @client[MONGO_TEST_DB]['bar'].find_one
  end

  def test_save_with_primary_stepped_down
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    assert @client.connected?
    primary = Mongo::MongoClient.new(*@client.primary)
    assert_raise Mongo::ConnectionFailure do
      primary['admin'].command(step_down_command)
    end

    rescue_connection_failure do
      @client[MONGO_TEST_DB]['bar'].save({:a => 1}, {:w => 2})
    end
    @client[MONGO_TEST_DB]['bar'].find_one
  end

  #def test_connect_with_first_node_removed
  #  @client = MongoReplicaSetClient.new @rs.repl_set_seeds
  #  @client[MONGO_TEST_DB]['bar'].save({:a => 1}, {:w => 3})
  #
  #  old_primary = [@client.primary_pool.host, @client.primary_pool.port]
  #  old_primary_conn = Mongo::MongoClient.new(*old_primary)
  #  assert_raise Mongo::ConnectionFailure do
  #    old_primary_conn['admin'].command(step_down_command)
  #  end
  #
  #  # Wait for new primary
  #  rescue_connection_failure do
  #    sleep 1 until @rs.get_node_with_state(1)
  #  end
  #
  #  new_primary = @rs.get_all_host_pairs_with_state(1).first
  #  new_primary_conn = Mongo::MongoClient.new(*new_primary)
  #
  #  config = nil
  #
  #  # Remove old primary from replset
  #  rescue_connection_failure do
  #    config = @client['local']['system.replset'].find_one
  #  end
  #
  #  old_member = config['members'].select {|m| m['host'] == old_primary.join(':')}.first
  #  config['members'].reject! {|m| m['host'] == old_primary.join(':')}
  #  config['version'] += 1
  #
  #  begin
  #    new_primary_conn['admin'].command({'replSetReconfig' => config})
  #  rescue Mongo::ConnectionFailure
  #  end
  #
  #  # Wait for the dust to settle
  #  rescue_connection_failure do
  #    assert @client[MONGO_TEST_DB]['bar'].find_one
  #  end
  #
  #  # Make sure a new connection skips the old primary
  #  @new_conn = MongoReplicaSetClient.new @rs.repl_set_seeds
  #  @new_conn.connect
  #  new_nodes = [@new_conn.primary] + @new_conn.secondaries
  #  assert !(new_nodes).include?(old_primary)
  #
  #  # Add the old primary back
  #  config['members'] << old_member
  #  config['version'] += 1
  #
  #  begin
  #    new_primary_conn['admin'].command({'replSetReconfig' => config})
  #  rescue Mongo::ConnectionFailure
  #  end
  #end

  #def test_connect_with_hung_first_node
  #  hung_node = nil
  #  begin
  #    hung_node = IO.popen('nc -lk 127.0.0.1 29999 >/dev/null 2>&1')
  #
  #    @client = MongoReplicaSetClient.new(['localhost:29999'] + @rs.repl_set_seeds,
  #      :connect_timeout => 2)
  #    @client.connect
  #    assert ['localhost:29999'] != @client.primary
  #    assert !@client.secondaries.include?('localhost:29999')
  #  ensure
  #    Process.kill("KILL", hung_node.pid) if hung_node
  #  end
  #end

  def test_connect_with_connection_string
    @client = MongoClient.from_uri("mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name}")
    assert !@client.nil?
    assert @client.connected?
  end

  def test_connect_with_connection_string_in_env_var
    ENV['MONGODB_URI'] = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name}"
    @client = MongoReplicaSetClient.new
    assert !@client.nil?
    assert_equal 2, @client.seeds.length
    assert_equal @rs.replicas[0].host, @client.seeds[0][0]
    assert_equal @rs.replicas[1].host, @client.seeds[1][0]
    assert_equal @rs.replicas[0].port, @client.seeds[0][1]
    assert_equal @rs.replicas[1].port, @client.seeds[1][1]
    assert_equal @rs.repl_set_name, @client.replica_set_name
    assert @client.connected?
  end

  def test_connect_with_connection_string_in_implicit_mongodb_uri
    ENV['MONGODB_URI'] = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name}"
    @client = MongoClient.from_uri
    assert !@client.nil?
    assert_equal 2, @client.seeds.length
    assert_equal @rs.replicas[0].host, @client.seeds[0][0]
    assert_equal @rs.replicas[1].host, @client.seeds[1][0]
    assert_equal @rs.replicas[0].port, @client.seeds[0][1]
    assert_equal @rs.replicas[1].port, @client.seeds[1][1]
    assert_equal @rs.repl_set_name, @client.replica_set_name
    assert @client.connected?
  end

  def test_connect_with_new_seed_format
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    assert @client.connected?
  end

  def test_connect_with_old_seed_format
    silently do
      @client = MongoReplicaSetClient.new(@rs.repl_set_seeds_old)
    end
    assert @client.connected?
  end

  def test_connect_with_full_connection_string
    @client = MongoClient.from_uri("mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name};w=2;fsync=true;slaveok=true")
    assert !@client.nil?
    assert @client.connected?
    assert_equal 2, @client.write_concern[:w]
    assert @client.write_concern[:fsync]
    assert @client.read_pool
  end

  def test_connect_with_full_connection_string_in_env_var
    ENV['MONGODB_URI'] = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name};w=2;fsync=true;slaveok=true"
    @client = MongoReplicaSetClient.new
    assert !@client.nil?
    assert @client.connected?
    assert_equal 2, @client.write_concern[:w]
    assert @client.write_concern[:fsync]
    assert @client.read_pool
  end

  def test_connect_options_override_env_var
    ENV['MONGODB_URI'] = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name};w=2;fsync=true;slaveok=true"
    @client = MongoReplicaSetClient.new({:w => 0})
    assert !@client.nil?
    assert @client.connected?
    assert_equal 0, @client.write_concern[:w]
  end

  def test_find_and_modify_with_secondary_read_preference
    @client = MongoReplicaSetClient.new
    collection = @client[MONGO_TEST_DB].collection('test', :read => :secondary)
    collection << { :a => 1, :processed => false }

    collection.find_and_modify(
      :query => {},
      :update => {"$set" => {:processed => true}}
    )
    assert_equal collection.find_one({}, :fields => {:_id => 0}, :read => :primary), {'a' => 1, 'processed' => true}
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/complex_connect_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
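The seed-list tests above (`test_multiple_uris`, `test_cache_original_seed_nodes`, the `MONGODB_URI` tests) all depend on one convention: a `"host:port"` seed string becomes a `[host, port]` pair, with the port defaulting to 27017 when omitted. A minimal sketch of that normalization (`normalize_seeds` is a hypothetical name, not the driver's parser):

```ruby
# Illustrative sketch of seed-list normalization; not the driver's code.
DEFAULT_PORT = 27017

def normalize_seeds(seeds)
  seeds.map do |seed|
    host, port = seed.split(':')
    # Missing port falls back to MongoDB's default, 27017.
    [host, (port || DEFAULT_PORT).to_i]
  end
end
```

So `normalize_seeds(['a.example.com:27018', 'b.example.com'])` yields `[['a.example.com', 27018], ['b.example.com', 27017]]`, matching the pairs the assertions above compare against `client.seeds`.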
require 'test_helper'

class ComplexConnectTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
  end

  def teardown
    @client.close if defined?(@client) && @client
  end

  def test_complex_connect
    host = @rs.servers.first.host
    primary = MongoClient.new(host, @rs.primary.port)

    @client = MongoReplicaSetClient.new([
      @rs.servers[2].host_port,
      @rs.servers[1].host_port,
      @rs.servers[0].host_port
    ])

    version = @client.server_version

    @client['test']['foo'].insert({:a => 1})
    assert @client['test']['foo'].find_one

    config = primary['local']['system.replset'].find_one
    old_config = config.dup
    config['version'] += 1

    # eliminate exception: can't find self in new replset config
    port_to_delete = @rs.servers.collect(&:port).find{|port| port != primary.port}.to_s
    config['members'].delete_if do |member|
      member['host'].include?(port_to_delete)
    end

    assert_raise ConnectionFailure do
      primary['admin'].command({:replSetReconfig => config})
    end
    @rs.start

    assert_raise ConnectionFailure do
      primary['admin'].command(step_down_command)
    end

    # isMaster is currently broken in 2.1+ when called on removed nodes
    puts version
    if version < "2.1"
      rescue_connection_failure do
        assert @client['test']['foo'].find_one
      end

      assert @client['test']['foo'].find_one
    end

    primary = MongoClient.new(host, @rs.primary.port)
    assert_raise ConnectionFailure do
      primary['admin'].command({:replSetReconfig => old_config})
    end
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/connection_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ConnectionTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
  end

  def test_connect_with_deprecated_multi
    silently do
      @connection = Connection.multi(@rs.repl_set_seeds_old, :name => @rs.repl_set_name)
    end
    assert !@connection.nil?
    assert @connection.connected?
  end

  def test_connect_bad_name
    assert_raise_error(ReplicaSetConnectionError, "-wrong") do
      @connection = ReplSetConnection.new(@rs.repl_set_seeds, :safe => true, :name => @rs.repl_set_name + "-wrong")
    end
  end

  def test_connect_with_first_secondary_node_terminated
    @rs.secondaries.first.stop

    rescue_connection_failure do
      @connection = ReplSetConnection.new @rs.repl_set_seeds
    end
    assert @connection.connected?
  end

  def test_connect_with_last_secondary_node_terminated
    @rs.secondaries.last.stop

    rescue_connection_failure do
      @connection = ReplSetConnection.new @rs.repl_set_seeds
    end
    assert @connection.connected?
  end

  def test_connect_with_connection_string
    @connection = Connection.from_uri("mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name}")
    assert !@connection.nil?
    assert @connection.connected?
  end

  def test_connect_with_connection_string_in_env_var
    ENV['MONGODB_URI'] = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name}"
    @connection = ReplSetConnection.new
    assert !@connection.nil?
    assert_equal 3, @connection.seeds.length
    assert_equal @rs.replicas[0].host, @connection.seeds[0][0]
    assert_equal @rs.replicas[1].host, @connection.seeds[1][0]
    assert_equal @rs.replicas[2].host, @connection.seeds[2][0]
    assert_equal @rs.replicas[0].port, @connection.seeds[0][1]
    assert_equal @rs.replicas[1].port, @connection.seeds[1][1]
    assert_equal @rs.replicas[2].port, @connection.seeds[2][1]
    assert_equal @rs.repl_set_name, @connection.replica_set_name
    assert @connection.connected?
  end

  def test_connect_with_connection_string_in_implicit_mongodb_uri
    ENV['MONGODB_URI'] = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name}"
    @connection = Connection.from_uri
    assert !@connection.nil?
    assert_equal 3, @connection.seeds.length
    assert_equal @rs.replicas[0].host, @connection.seeds[0][0]
    assert_equal @rs.replicas[1].host, @connection.seeds[1][0]
    assert_equal @rs.replicas[2].host, @connection.seeds[2][0]
    assert_equal @rs.replicas[0].port, @connection.seeds[0][1]
    assert_equal @rs.replicas[1].port, @connection.seeds[1][1]
    assert_equal @rs.replicas[2].port, @connection.seeds[2][1]
    assert_equal @rs.repl_set_name, @connection.replica_set_name
    assert @connection.connected?
  end

  def test_connect_with_new_seed_format
    @connection = ReplSetConnection.new @rs.repl_set_seeds
    assert @connection.connected?
  end

  def test_connect_with_old_seed_format
    silently do
      @connection = ReplSetConnection.new(@rs.repl_set_seeds_old)
    end
    assert @connection.connected?
  end

  def test_connect_with_full_connection_string
    @connection = Connection.from_uri("mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name};safe=true;w=2;fsync=true;slaveok=true")
    assert !@connection.nil?
    assert @connection.connected?
    assert_equal 2, @connection.write_concern[:w]
    assert @connection.write_concern[:fsync]
    assert @connection.read_pool
  end

  def test_connect_with_full_connection_string_in_env_var
    ENV['MONGODB_URI'] = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name};safe=true;w=2;fsync=true;slaveok=true"
    @connection = ReplSetConnection.new
    assert !@connection.nil?
    assert @connection.connected?
    assert_equal 2, @connection.write_concern[:w]
    assert @connection.write_concern[:fsync]
    assert @connection.read_pool
  end

  def test_connect_options_override_env_var
    ENV['MONGODB_URI'] = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name};safe=true;w=2;fsync=true;slaveok=true"
    @connection = ReplSetConnection.new({:safe => {:w => 1}})
    assert !@connection.nil?
    assert @connection.connected?
    assert_equal 1, @connection.write_concern[:w]
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/count_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReplicaSetCountTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :read => :primary_preferred)
    assert @client.primary_pool
    @primary = MongoClient.new(@client.primary_pool.host, @client.primary_pool.port)
    @db = @client.db(MONGO_TEST_DB)
    @db.drop_collection("test-sets")
    @coll = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_correct_count_after_insertion_reconnect
    @coll.insert({:a => 20}, :w => 3, :wtimeout => 10000)
    assert_equal 1, @coll.count

    # Kill the current master node
    @rs.primary.stop

    rescue_connection_failure do
      @coll.insert({:a => 30})
    end

    @coll.insert({:a => 40})
    assert_equal 3, @coll.count, "Second count failed"
  end

  def test_count_command_sent_to_primary
    @coll.insert({:a => 20}, :w => 3, :wtimeout => 10000)
    count_before = @primary['admin'].command({:serverStatus => 1})['opcounters']['command']
    assert_equal 1, @coll.count
    count_after = @primary['admin'].command({:serverStatus => 1})['opcounters']['command']
    assert_equal 2, count_after - count_before
  end

  def test_count_with_read
    @coll.insert({:a => 20}, :w => 3, :wtimeout => 10000)
    count_before =
      @primary['admin'].command({:serverStatus => 1})['opcounters']['command']
    assert_equal 1, @coll.count(:read => :secondary)
    assert_equal 1, @coll.find({}, :read => :secondary).count()
    count_after = @primary['admin'].command({:serverStatus => 1})['opcounters']['command']
    assert_equal 1, count_after - count_before
  end
end

# ---- ruby-mongo-1.9.2/test/replica_set/cursor_test.rb ----

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
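The cursor tests that follow insert 102 documents precisely because the initial query batch holds 101, forcing at least one OP_GET_MORE round trip that must hit the same member as the initial query. A pure-Ruby model of that batching (a simulation for illustration — `FakeCursor` is not driver code, and the 101-document first batch is the behavior noted in the tests' own comments):

```ruby
# Simulation of batched cursor iteration: an initial batch of up to 101
# documents, then get_more rounds for the remainder. Illustrative only.
class FakeCursor
  FIRST_BATCH = 101

  def initialize(docs, batch_size = FIRST_BATCH)
    docs = docs.dup                       # don't mutate the caller's array
    @batches = [docs.shift(FIRST_BATCH)]  # initial query returns one batch
    @batches << docs.shift(batch_size) until docs.empty?
  end

  # Number of get_more round trips needed after the initial batch.
  def get_more_count
    @batches.length - 1
  end

  def each(&blk)
    @batches.each { |batch| batch.each(&blk) }
  end
end
```

With 102 documents, iteration yields all 102 but requires exactly one get_more, which is why `insert_docs` below uses `@n_docs = 102` to exercise the get_more path.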
require 'test_helper'

class ReplicaSetCursorTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
  end

  def test_get_more_primary
    setup_client(:primary)
    cursor_get_more_test(:primary)
  end

  def test_get_more_secondary
    setup_client(:secondary)
    cursor_get_more_test(:secondary)
  end

  def test_close_primary
    setup_client(:primary)
    kill_cursor_test(:primary)
  end

  def test_close_secondary
    setup_client(:secondary)
    kill_cursor_test(:secondary)
  end

  def test_cursors_get_closed
    setup_client
    assert_cursors_on_members
  end

  def test_cursors_get_closed_secondary
    setup_client(:secondary)
    assert_cursors_on_members(:secondary)
  end

  def test_cursors_get_closed_secondary_query
    setup_client(:primary)
    assert_cursors_on_members(:secondary)
  end

  def test_intervening_query_secondary
    setup_client(:primary)
    refresh_while_iterating(:secondary)
  end

  private

  def setup_client(read=:primary)
    # Setup ReplicaSet Connection
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :read => read)
    @db = @client.db(MONGO_TEST_DB)
    @db.drop_collection("cursor_tests")
    @coll = @db.collection("cursor_tests")
    insert_docs

    # Setup Direct Connections
    @primary = Mongo::MongoClient.new(*@client.manager.primary)
  end

  def insert_docs
    @n_docs = 102 # batch size is 101
    @n_docs.times do |i|
      @coll.insert({ "x" => i }, :w => 3)
    end
  end

  def set_read_client_and_tag(read)
    read_opts = {:read => read}
    @tag = (0...3).map{|i| i.to_s}.detect do |tag|
      begin
        read_opts[:tag_sets] = [{:node => tag}] unless read == :primary
        cursor = @coll.find({}, read_opts)
        cursor.next
        pool = cursor.instance_variable_get(:@pool)
        cursor.close
        @read = Mongo::MongoClient.new(pool.host, pool.port, :slave_ok => true)
        tag
      rescue Mongo::ConnectionFailure
        false
      end
    end
  end

  def route_query(read)
    read_opts = {:read => read}
    read_opts[:tag_sets] = [{:node => @tag}] unless read == :primary
    object_id = BSON::ObjectId.new
    read_opts[:comment] = object_id

    # set profiling level to 2 on client and member to which the query will be routed
    @client.db(MONGO_TEST_DB).profiling_level = :all
    @client.secondaries.each do |node|
      node = Mongo::MongoClient.new(node[0], node[1], :slave_ok => true)
      node.db(MONGO_TEST_DB).profiling_level = :all
    end

    @cursor = @coll.find({}, read_opts)
    @cursor.next

    # on client and other members set profiling level to 0
    @client.db(MONGO_TEST_DB).profiling_level = :off
    @client.secondaries.each do |node|
      node = Mongo::MongoClient.new(node[0], node[1], :slave_ok => true)
      node.db(MONGO_TEST_DB).profiling_level = :off
    end

    # do a query on system.profile of the reader to see if it was used for the query
    profiled_queries = @read.db(MONGO_TEST_DB).collection('system.profile').find(
      { 'ns' => "#{MONGO_TEST_DB}.cursor_tests", "query.$comment" => object_id })
    assert_equal 1, profiled_queries.count
  end

  # batch from send_initial_query is 101 documents
  # check that you get n_docs back from the query, with the same port
  def cursor_get_more_test(read=:primary)
    set_read_client_and_tag(read)
    10.times do
      # assert that the query went to the correct member
      route_query(read)
      docs_count = 1
      port = @cursor.instance_variable_get(:@pool).port
      assert @cursor.alive?
      while @cursor.has_next?
        docs_count += 1
        @cursor.next
        assert_equal port, @cursor.instance_variable_get(:@pool).port
      end
      assert !@cursor.alive?
      assert_equal @n_docs, docs_count
      @cursor.close # cursor is already closed
    end
  end

  # batch from get_more can be huge, so close after send_initial_query
  def kill_cursor_test(read=:primary)
    set_read_client_and_tag(read)
    10.times do
      # assert that the query went to the correct member
      route_query(read)
      cursor_id = @cursor.cursor_id
      cursor_clone = @cursor.clone
      assert_equal cursor_id, cursor_clone.cursor_id
      assert @cursor.instance_variable_get(:@pool)
      # .next was called once already and leave one for get more
      (@n_docs - 2).times { @cursor.next }
      @cursor.close
      # an exception confirms the cursor has indeed been closed
      assert_raise Mongo::OperationFailure do
        cursor_clone.next
      end
    end
  end

  def assert_cursors_on_members(read=:primary)
    set_read_client_and_tag(read)
    # assert that the query went to the correct member
    route_query(read)
    cursor_id = @cursor.cursor_id
    cursor_clone = @cursor.clone
    assert_equal cursor_id, cursor_clone.cursor_id
    assert @cursor.instance_variable_get(:@pool)
    port = @cursor.instance_variable_get(:@pool).port
    while @cursor.has_next?
      @cursor.next
      assert_equal port, @cursor.instance_variable_get(:@pool).port
    end
    # an exception confirms the cursor has indeed been closed after query
    assert_raise Mongo::OperationFailure do
      cursor_clone.next
    end
  end

  def refresh_while_iterating(read)
    set_read_client_and_tag(read)
    read_opts = {:read => read}
    read_opts[:tag_sets] = [{:node => @tag}]
    read_opts[:batch_size] = 2
    cursor = @coll.find({}, read_opts)
    2.times { cursor.next }
    port = cursor.instance_variable_get(:@pool).port
    host = cursor.instance_variable_get(:@pool).host
    # Refresh connection
    @client.refresh
    assert_nothing_raised do
      cursor.next
    end
    assert_equal port, cursor.instance_variable_get(:@pool).port
    assert_equal host, cursor.instance_variable_get(:@pool).host
  end
end
ruby-mongo-1.9.2/test/replica_set/insert_test.rb
# Copyright (C) 2013 10gen Inc.
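The insert test below, like the count test earlier, wraps writes issued during a primary failover in `rescue_connection_failure`, a helper defined in this gem's `test_helper.rb` and not shown in this archive. A plausible minimal sketch of such a retry loop — the method name matches the tests, but the body here is an assumption, and `ConnectionFailure` is a stand-in for `Mongo::ConnectionFailure`:

```ruby
# Stand-in for Mongo::ConnectionFailure so this sketch is self-contained.
class ConnectionFailure < StandardError; end

# Retries the block while it raises ConnectionFailure, up to a limit,
# pausing between attempts so the replica set has time to elect a new primary.
def rescue_connection_failure(max_retries = 30, delay = 0.01)
  retries = 0
  begin
    yield
  rescue ConnectionFailure
    retries += 1
    raise if retries > max_retries
    sleep(delay)
    retry
  end
end

# Simulate a write that fails twice during an election, then succeeds.
attempts = 0
result = rescue_connection_failure do
  attempts += 1
  raise ConnectionFailure, "no primary yet" if attempts < 3
  :ok
end
```

The pattern matters because a stopped primary makes the very next write raise; the tests rely on the helper to keep retrying until the client reconnects to the newly elected primary.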
require 'test_helper'

class ReplicaSetInsertTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    @db = @client.db(MONGO_TEST_DB)
    @db.drop_collection("test-sets")
    @coll = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_insert
    @coll.save({:a => 20}, :w => 3)
    @rs.primary.stop

    rescue_connection_failure do
      @coll.save({:a => 30}, :w => 1)
    end

    @coll.save({:a => 40}, :w => 1)
    @coll.save({:a => 50}, :w => 1)
    @coll.save({:a => 60}, :w => 1)
    @coll.save({:a => 70}, :w => 1)

    # Restart the old master and wait for sync
    @rs.start
    sleep(5)
    results = []

    rescue_connection_failure do
      @coll.find.each {|r| results << r}
      [20, 30, 40, 50, 60, 70].each do |a|
        assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a}"
      end
    end

    @coll.save({:a => 80}, :w => 3)
    @coll.find.each {|r| results << r}
    [20, 30, 40, 50, 60, 70, 80].each do |a|
      assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a} on second find"
    end
  end
end
ruby-mongo-1.9.2/test/replica_set/max_values_test.rb
# Copyright (C) 2013 10gen Inc.

require 'test_helper'

class MaxValuesTest < Test::Unit::TestCase

  include Mongo

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    @db = new_mock_db
    @client.stubs(:[]).returns(@db)
    @ismaster = {
      'hosts'    => @client.local_manager.hosts.to_a,
      'arbiters' => @client.local_manager.arbiters
    }
  end

  def test_initial_max_sizes
    assert @client.max_message_size
    assert @client.max_bson_size
  end

  def test_updated_max_sizes_after_node_config_change
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true}),
      @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 1024 * MESSAGE_SIZE_FACTOR}),
      @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 1024})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh
    assert_equal 1024, @client.max_bson_size
    assert_equal 1024 * MESSAGE_SIZE_FACTOR, @client.max_message_size
  end

  def test_neither_max_sizes_in_config
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true}),
      @ismaster.merge({'secondary' => true}),
      @ismaster.merge({'secondary' => true})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh
    assert_equal DEFAULT_MAX_BSON_SIZE, @client.max_bson_size
    assert_equal DEFAULT_MAX_BSON_SIZE * MESSAGE_SIZE_FACTOR, @client.max_message_size
  end

  def test_only_bson_size_in_config
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true}),
      @ismaster.merge({'secondary' => true}),
      @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 1024})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh
    assert_equal 1024, @client.max_bson_size
    assert_equal 1024 * MESSAGE_SIZE_FACTOR, @client.max_message_size
  end

  def test_both_sizes_in_config
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR, 'maxBsonObjectSize' => 1024}),
      @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR, 'maxBsonObjectSize' => 1024}),
      @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR, 'maxBsonObjectSize' => 1024})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh
    assert_equal 1024, @client.max_bson_size
    assert_equal 1024 * 2 * MESSAGE_SIZE_FACTOR, @client.max_message_size
  end
end
ruby-mongo-1.9.2/test/replica_set/pinning_test.rb
# Copyright (C) 2013 10gen Inc.
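The pinning test below asserts that a pinned pool is local to each thread. One way a driver can get that behavior is per-thread state keyed off `Thread.current`; a minimal sketch of the idea — the class and key names here are illustrative, not the driver's actual internals:

```ruby
# Each thread records its own pinned pool; pins in one thread
# never leak into another.
class PinnablePools
  def pin(pool)
    Thread.current[:pinned_pool] = pool
  end

  def pinned
    Thread.current[:pinned_pool]
  end
end

pools = PinnablePools.new
pools.pin(:primary)          # main thread pins the primary pool

child = Thread.new do
  pools.pin(:secondary)      # child thread pins a secondary pool
  pools.pinned
end
result_in_thread = child.value  # join and collect the child's view
```

After the child thread exits, the main thread still sees `:primary`, which is exactly the property `test_pinned_pool_is_local_to_thread` exercises with 30 concurrent threads.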
require 'test_helper'

class ReplicaSetPinningTest < Test::Unit::TestCase
  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    @db = @client.db(MONGO_TEST_DB)
    @coll = @db.collection("test-sets")
    @coll.insert({:a => 1})
  end

  def test_unpinning
    # pin primary
    @coll.find_one
    assert_equal @client.pinned_pool[:pool], @client.primary_pool

    # pin secondary
    @coll.find_one({}, :read => :secondary_preferred)
    assert @client.secondary_pools.include? @client.pinned_pool[:pool]

    # repin primary
    @coll.find_one({}, :read => :primary_preferred)
    assert_equal @client.pinned_pool[:pool], @client.primary_pool
  end

  def test_pinned_pool_is_local_to_thread
    threads = []
    30.times do |i|
      threads << Thread.new do
        if i % 2 == 0
          @coll.find_one({}, :read => :secondary_preferred)
          assert @client.secondary_pools.include? @client.pinned_pool[:pool]
        else
          @coll.find_one({}, :read => :primary_preferred)
          assert_equal @client.pinned_pool[:pool], @client.primary_pool
        end
      end
    end
    threads.each(&:join)
  end
end
ruby-mongo-1.9.2/test/replica_set/query_test.rb
# Copyright (C) 2013 10gen Inc.

require 'test_helper'

class ReplicaSetQueryTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    @db = @client.db(MONGO_TEST_DB)
    @db.drop_collection("test-sets")
    @coll = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_query
    @coll.save({:a => 20}, :w => 3)
    @coll.save({:a => 30}, :w => 3)
    @coll.save({:a => 40}, :w => 3)
    results = []
    @coll.find.each {|r| results << r}
    [20, 30, 40].each do |a|
      assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a}"
    end

    @rs.primary.stop

    results = []
    rescue_connection_failure do
      @coll.find.each {|r| results << r}
      [20, 30, 40].each do |a|
        assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a}"
      end
    end
  end

  # Create a large collection and do a secondary query that returns
  # enough records to require sending a GETMORE. In between opening
  # the cursor and sending the GETMORE, do a :primary query. Confirm
  # that the cursor reading from the secondary continues to talk to
  # the secondary, rather than trying to read the cursor from the
  # primary, where it does not exist.
  # def test_secondary_getmore
  #   200.times do |i|
  #     @coll.save({:a => i}, :w => 3)
  #   end
  #   as = []
  #   # Set an explicit batch size, in case the default ever changes.
  #   @coll.find({}, { :batch_size => 100, :read => :secondary }) do |c|
  #     c.each do |result|
  #       as << result['a']
  #       @coll.find({:a => result['a']}, :read => :primary).map
  #     end
  #   end
  #   assert_equal(as.sort, 0.upto(199).to_a)
  # end
end
ruby-mongo-1.9.2/test/replica_set/read_preference_test.rb
# Copyright (C) 2013 10gen Inc.
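The read-preference tests below check where each mode routes queries, including when the primary or a secondary is down. The selection rule the assertions imply can be sketched as a pure function over the available members — a simplification that ignores tag sets and latency windows, with illustrative names throughout:

```ruby
# Pick a read target for a given mode, mirroring the routing the
# tests assert: :primary never falls back, :secondary never falls
# back, and the *_preferred modes fall back to whatever is left.
def select_member(mode, primary_up:, secondaries:)
  case mode
  when :primary
    primary_up ? :primary : raise("no primary available")
  when :primary_preferred
    primary_up ? :primary : secondaries.first || raise("no members available")
  when :secondary
    secondaries.first || raise("no secondary available")
  when :secondary_preferred
    secondaries.first || (primary_up ? :primary : raise("no members available"))
  end
end

# With the only secondary killed, :secondary_preferred falls back to
# the primary, as test_read_routing_with_secondary_down asserts.
fallback = select_member(:secondary_preferred, primary_up: true, secondaries: [])
```

Here `fallback` is `:primary`, while a plain `:secondary` read in the same situation raises — the same split the tests express as `assert_query_route(@secondary_preferred, @primary_direct)` versus `assert_raise_error ConnectionFailure`.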
require 'test_helper'

class ReadPreferenceTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs, :replicas => 2, :arbiters => 0)
    # Insert data
    primary = @rs.primary
    conn = Connection.new(primary.host, primary.port)
    db = conn.db(MONGO_TEST_DB)
    coll = db.collection("test-sets")
    coll.save({:a => 20}, {:w => 2})
  end

  def test_read_primary
    conn = make_connection
    rescue_connection_failure do
      assert conn.read_primary?
      assert conn.primary?
    end

    conn = make_connection(:primary_preferred)
    rescue_connection_failure do
      assert conn.read_primary?
      assert conn.primary?
    end

    conn = make_connection(:secondary)
    rescue_connection_failure do
      assert !conn.read_primary?
      assert !conn.primary?
    end

    conn = make_connection(:secondary_preferred)
    rescue_connection_failure do
      assert !conn.read_primary?
      assert !conn.primary?
    end
  end

  def test_connection_pools
    conn = make_connection
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port == conn.read_pool.port,
      "Primary port and read port are not the same!"

    conn = make_connection(:primary_preferred)
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port == conn.read_pool.port,
      "Primary port and read port are not the same!"

    conn = make_connection(:secondary)
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port != conn.read_pool.port,
      "Primary port and read port are the same!"

    conn = make_connection(:secondary_preferred)
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port != conn.read_pool.port,
      "Primary port and read port are the same!"
  end

  def test_read_routing
    prepare_routing_test

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)
  end

  def test_read_routing_with_primary_down
    prepare_routing_test

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)

    # Kill the primary so only a single secondary exists
    @rs.primary.kill

    # Test that reads are going to the right members
    assert_raise_error ConnectionFailure do
      @primary[MONGO_TEST_DB]['test-sets'].find_one
    end
    assert_query_route(@primary_preferred, @secondary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)

    # Restore set
    @rs.restart
    sleep(1)
    @repl_cons.each { |con| con.refresh }
    sleep(1)
    @primary_direct = Connection.new(
      @rs.config['host'],
      @primary.read_pool.port
    )

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)
  end

  def test_read_routing_with_secondary_down
    prepare_routing_test

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)

    # Kill the secondary so that only primary exists
    @rs.secondaries.first.kill

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_raise_error ConnectionFailure do
      @secondary[MONGO_TEST_DB]['test-sets'].find_one
    end
    assert_query_route(@secondary_preferred, @primary_direct)

    # Restore set
    @rs.restart
    sleep(1)
    @repl_cons.each { |con| con.refresh }
    sleep(1)
    @secondary_direct = Connection.new(
      @rs.config['host'],
      @secondary.read_pool.port,
      :slave_ok => true
    )

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)
  end

  def test_write_lots_of_data
    @conn = make_connection(:secondary_preferred)
    @db = @conn[MONGO_TEST_DB]
    @coll = @db.collection("test-sets", {:w => 2})

    6000.times do |n|
      @coll.save({:a => n})
    end

    cursor = @coll.find()
    cursor.next
    cursor.close
  end

  private

  def prepare_routing_test
    # Setup replica set connections
    @primary = make_connection(:primary)
    @primary_preferred = make_connection(:primary_preferred)
    @secondary = make_connection(:secondary)
    @secondary_preferred = make_connection(:secondary_preferred)
    @repl_cons = [@primary, @primary_preferred, @secondary, @secondary_preferred]

    # Setup direct connections
    @primary_direct = Connection.new(@rs.config['host'], @primary.read_pool.port)
    @secondary_direct = Connection.new(@rs.config['host'], @secondary.read_pool.port, :slave_ok => true)
  end

  def make_connection(mode = :primary, opts = {})
    opts.merge!({:read => mode})
    MongoReplicaSetClient.new(@rs.repl_set_seeds, opts)
  end

  def query_count(connection)
    connection['admin'].command({:serverStatus => 1})['opcounters']['query']
  end

  def assert_query_route(test_connection, expected_target)
    #puts "#{test_connection.read_pool.port} #{expected_target.read_pool.port}"
    queries_before = query_count(expected_target)
    assert_nothing_raised do
      test_connection[MONGO_TEST_DB]['test-sets'].find_one
    end
    queries_after = query_count(expected_target)
    assert_equal 1, queries_after - queries_before
  end
end
ruby-mongo-1.9.2/test/replica_set/refresh_test.rb
# Copyright (C) 2013 10gen Inc.

require 'test_helper'

class ReplicaSetRefreshTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
  end

  def test_connect_and_manual_refresh_with_secondary_down
    num_secondaries = @rs.secondaries.size
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :refresh_mode => false)

    assert_equal num_secondaries, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    old_refresh_version = client.refresh_version

    @rs.stop_secondary
    client.refresh

    assert_equal num_secondaries - 1, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    assert client.refresh_version > old_refresh_version
    old_refresh_version = client.refresh_version

    # Test no changes after restart until manual refresh
    @rs.restart
    assert_equal num_secondaries - 1, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    assert_equal client.refresh_version, old_refresh_version

    # Refresh and ensure state
    client.refresh
    assert_equal num_secondaries, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    assert client.refresh_version > old_refresh_version
  end

  def test_automated_refresh_with_secondary_down
    num_secondaries = @rs.secondaries.size
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds,
      :refresh_interval => 1, :refresh_mode => :sync, :read => :secondary_preferred)

    # Ensure secondaries are all recognized by client and client is connected
    assert_equal num_secondaries, client.secondaries.size
    assert client.connected?
    assert client.secondary_pools.include?(client.read_pool)
    pool = client.read_pool

    @rs.member_by_name(pool.host_string).stop
    sleep(2)
    old_refresh_version = client.refresh_version

    # Trigger synchronous refresh
    client['foo']['bar'].find_one

    assert client.connected?
    assert client.refresh_version > old_refresh_version
    assert_equal num_secondaries - 1, client.secondaries.size
    assert client.secondary_pools.include?(client.read_pool)
    assert_not_equal pool, client.read_pool

    # Restart nodes and ensure refresh interval has passed
    @rs.restart
    sleep(2)
    old_refresh_version = client.refresh_version

    # Trigger synchronous refresh
    client['foo']['bar'].find_one

    assert client.connected?
    assert client.refresh_version > old_refresh_version, "Refresh version hasn't changed."
    assert_equal num_secondaries, client.secondaries.size, "No secondaries have been added."
    assert_equal num_secondaries, client.secondary_pools.size
  end

  def test_concurrent_refreshes
    factor = 5
    nthreads = factor * 10
    threads = []
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds,
      :refresh_mode => :sync, :refresh_interval => 1)

    nthreads.times do |i|
      threads << Thread.new do
        # force a connection failure every couple of threads that causes a refresh
        if i % factor == 0
          cursor = client['foo']['bar'].find
          cursor.stubs(:checkout_socket_from_connection).raises(ConnectionFailure)
          begin
            cursor.next
          rescue => ex
            raise ex unless ex.class == ConnectionFailure
            next
          end
        else
          # synchronous refreshes will happen every couple of find_ones
          cursor = client['foo']['bar'].find_one
        end
      end
    end

    threads.each do |t|
      t.join
    end
  end

=begin
  def test_automated_refresh_with_removed_node
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds,
      :refresh_interval => 1, :refresh_mode => :sync)
    num_secondaries = client.secondary_pools.length
    old_refresh_version = client.refresh_version

    n = @rs.repl_set_remove_node(2)
    sleep(2)

    rescue_connection_failure do
      client['foo']['bar'].find_one
    end

    assert client.refresh_version > old_refresh_version, "Refresh version hasn't changed."
    assert_equal num_secondaries - 1, client.secondaries.length
    assert_equal num_secondaries - 1, client.secondary_pools.length

    #@rs.add_node(n)
  end

  def test_adding_and_removing_nodes
    client = MongoReplicaSetClient.new(build_seeds(3),
      :refresh_interval => 2, :refresh_mode => :sync)

    @rs.add_node
    sleep(4)
    client['foo']['bar'].find_one

    @conn2 = MongoReplicaSetClient.new(build_seeds(3),
      :refresh_interval => 2, :refresh_mode => :sync)

    assert @conn2.secondaries.sort == client.secondaries.sort,
      "Second connection secondaries not equal to first."
    assert_equal 3, client.secondary_pools.length
    assert_equal 3, client.secondaries.length

    config = client['admin'].command({:ismaster => 1})

    @rs.remove_secondary_node
    sleep(4)
    config = client['admin'].command({:ismaster => 1})

    assert_equal 2, client.secondary_pools.length
    assert_equal 2, client.secondaries.length
  end
=end
end
ruby-mongo-1.9.2/test/replica_set/replication_ack_test.rb
# Copyright (C) 2013 10gen Inc.

require 'test_helper'

class ReplicaSetAckTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds)

    @slave1 = MongoClient.new(
      @client.secondary_pools.first.host,
      @client.secondary_pools.first.port,
      :slave_ok => true)

    assert !@slave1.read_primary?
    @db = @client.db(MONGO_TEST_DB)
    @db.drop_collection("test-sets")
    @col = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_safe_mode_with_w_failure
    assert_raise_error OperationFailure, "timeout" do
      @col.insert({:foo => 1}, :w => 4, :wtimeout => 1, :fsync => true)
    end
    assert_raise_error OperationFailure, "timeout" do
      @col.update({:foo => 1}, {:foo => 2}, :w => 4, :wtimeout => 1, :fsync => true)
    end
    assert_raise_error OperationFailure, "timeout" do
      @col.remove({:foo => 2}, :w => 4, :wtimeout => 1, :fsync => true)
    end
    assert_raise_error OperationFailure do
      @col.insert({:foo => 3}, :w => "test-tag")
    end
  end

  def test_safe_mode_replication_ack
    @col.insert({:baz => "bar"}, :w => 3, :wtimeout => 5000)

    assert @col.insert({:foo => "0" * 5000}, :w => 3, :wtimeout => 5000)
    assert_equal 2, @slave1[MONGO_TEST_DB]["test-sets"].count

    assert @col.update({:baz => "bar"}, {:baz => "foo"}, :w => 3, :wtimeout => 5000)
    assert @slave1[MONGO_TEST_DB]["test-sets"].find_one({:baz => "foo"})

    assert @col.insert({:foo => "bar"}, :w => "majority")
    assert @col.insert({:bar => "baz"}, :w => :majority)

    assert @col.remove({}, :w => 3, :wtimeout => 5000)
    assert_equal 0, @slave1[MONGO_TEST_DB]["test-sets"].count
  end

  def test_last_error_responses
    20.times { @col.insert({:baz => "bar"}) }
    response = @db.get_last_error(:w => 3, :wtimeout => 5000)
    assert response['ok'] == 1
    assert response['lastOp']

    @col.update({}, {:baz => "foo"})
    response = @db.get_last_error(:w => 3, :wtimeout => 5000)
    assert response['ok'] == 1
    assert response['lastOp']

    @col.remove({})
    response = @db.get_last_error(:w => 3, :wtimeout => 5000)
    assert response['ok'] == 1
    assert response['n'] == 20
    assert response['lastOp']
  end
end
ruby-mongo-1.9.2/test/replica_set/ssl_test.rb

require 'test_helper'

# Note: For testing with MongoReplicaSetClient you *MUST* use the
# hostname 'server' for all members of the replica set.
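The SSL test class below expects `MongoArgumentError` whenever SSL-related options are passed without `:ssl => true`, and whenever `:ssl_verify` is set without a CA certificate. That rule can be checked in isolation; the error class and option names follow the test, but the validation code itself is a sketch, not the driver's implementation:

```ruby
# Stand-in for Mongo::MongoArgumentError so this sketch runs alone.
class MongoArgumentError < StandardError; end

SSL_OPTIONS = [:ssl_cert, :ssl_key, :ssl_verify, :ssl_ca_cert]

# Raises unless the combination of SSL options is coherent:
# SSL sub-options require :ssl => true, and peer verification
# requires a CA certificate to verify against.
def validate_ssl_options(opts)
  ssl_opts_given = SSL_OPTIONS.any? { |k| opts.key?(k) }
  if ssl_opts_given && !opts[:ssl]
    raise MongoArgumentError, "SSL options given but :ssl is not true"
  end
  if opts[:ssl_verify] && !opts[:ssl_ca_cert]
    raise MongoArgumentError, ":ssl_verify requires :ssl_ca_cert"
  end
  true
end

validate_ssl_options(:ssl => true, :ssl_cert => "client.pem")
```

Each `assert_raise MongoArgumentError` block in `test_ssl_configuration` corresponds to one of these two checks failing before any connection is attempted (the test passes `:connect => false` throughout).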
class ReplicaSetSSLCertValidationTest < Test::Unit::TestCase
  include Mongo

  CERT_PATH   = "#{Dir.pwd}/test/fixtures/certificates/"
  CLIENT_CERT = "#{CERT_PATH}client.pem"
  CA_CERT     = "#{CERT_PATH}ca.pem"
  SEEDS       = ['server:3000','server:3001','server:3002']
  BAD_SEEDS   = ['localhost:3000','localhost:3001','localhost:3002']

  # This test doesn't connect, no server config required
  def test_ssl_configuration
    # raises when ssl=false and ssl opts specified
    assert_raise MongoArgumentError do
      MongoReplicaSetClient.new(SEEDS, :connect  => false,
                                       :ssl      => false,
                                       :ssl_cert => CLIENT_CERT)
    end

    # raises when ssl=nil and ssl opts specified
    assert_raise MongoArgumentError do
      MongoReplicaSetClient.new(SEEDS, :connect => false,
                                       :ssl_key => CLIENT_CERT)
    end

    # raises when verify=true and no ca_cert
    assert_raise MongoArgumentError do
      MongoReplicaSetClient.new(SEEDS, :connect    => false,
                                       :ssl        => true,
                                       :ssl_key    => CLIENT_CERT,
                                       :ssl_cert   => CLIENT_CERT,
                                       :ssl_verify => true)
    end
  end

  # Requires MongoDB built with SSL and the following options:
  #
  # mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #   --sslPEMKeyFile /path/to/server.pem \
  #   --sslCAFile /path/to/ca.pem \
  #   --sslCRLFile /path/to/crl.pem \
  #   --sslWeakCertificateValidation
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts
  #
  def test_ssl_basic
    client = MongoReplicaSetClient.new(SEEDS, :connect => false, :ssl => true)
    assert client.connect
  end

  # Requires MongoDB built with SSL and the following options:
  #
  # mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #   --sslPEMKeyFile /path/to/server.pem \
  #   --sslCAFile /path/to/ca.pem \
  #   --sslCRLFile /path/to/crl.pem
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts
  #
  def test_ssl_with_cert
    client = MongoReplicaSetClient.new(SEEDS, :connect  => false,
                                              :ssl      => true,
                                              :ssl_cert => CLIENT_CERT,
                                              :ssl_key  => CLIENT_CERT)
    assert client.connect
  end

  def test_ssl_with_peer_cert_validation
    client = MongoReplicaSetClient.new(SEEDS, :connect     => false,
                                              :ssl         => true,
                                              :ssl_key     => CLIENT_CERT,
                                              :ssl_cert    => CLIENT_CERT,
                                              :ssl_verify  => true,
                                              :ssl_ca_cert => CA_CERT)
    assert client.connect
  end

  def test_ssl_peer_cert_validation_hostname_fail
    client = MongoReplicaSetClient.new(BAD_SEEDS, :connect     => false,
                                                  :ssl         => true,
                                                  :ssl_key     => CLIENT_CERT,
                                                  :ssl_cert    => CLIENT_CERT,
                                                  :ssl_verify  => true,
                                                  :ssl_ca_cert => CA_CERT)
    assert_raise ConnectionFailure do
      client.connect
    end
  end

  # Requires mongod built with SSL and the following options:
  #
  # mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #   --sslPEMKeyFile /path/to/server.pem \
  #   --sslCAFile /path/to/ca.pem \
  #   --sslCRLFile /path/to/crl_client_revoked.pem
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts
  #
  def test_ssl_with_invalid_cert
    assert_raise ConnectionFailure do
      MongoReplicaSetClient.new(SEEDS, :ssl         => true,
                                       :ssl_key     => CLIENT_CERT,
                                       :ssl_cert    => CLIENT_CERT,
                                       :ssl_verify  => true,
                                       :ssl_ca_cert => CA_CERT)
    end
  end
end
ruby-mongo-1.9.2/test/sharded_cluster/basic_test.rb
# Copyright (C) 2013 10gen Inc.
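The sharded-cluster tests below assert that a cursor built against mongos attaches a `$readPreference` document of the form `{:mode => 'secondary', :tags => tags}`. Building that document can be sketched directly — a simplification of the driver's `construct_query_spec`, covering only the shape these assertions check (other modes such as `:secondary_preferred` are camel-cased on the wire, which is omitted here):

```ruby
# Convert a read-preference symbol plus tag sets into the
# $readPreference document mongos expects alongside the query.
def read_preference_document(mode, tag_sets = [])
  doc = { :mode => mode.to_s }
  doc[:tags] = tag_sets unless tag_sets.empty?  # tags only when given
  doc
end

tags = [{ :dc => "mongolia" }]
doc = read_preference_document(:secondary, tags)
```

The result matches the literal hash the tests compare against with `assert_equal cursor.construct_query_spec['$readPreference'], {:mode => 'secondary', :tags => tags}`; with no tag sets the `:tags` key is simply absent.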
require 'test_helper' include Mongo class Cursor public :construct_query_spec end class BasicTest < Test::Unit::TestCase def setup ensure_cluster(:sc) @document = { "name" => "test_user" } @seeds = @sc.mongos_seeds end # TODO member.primary? ==> true def test_connect @client = MongoShardedClient.new(@seeds) assert @client.connected? assert_equal(@seeds.size, @client.seeds.size) probe(@seeds.size) @client.close end def test_connect_from_standard_client mongos = @seeds.first @client = MongoClient.new(*mongos.split(':')) assert @client.connected? assert @client.mongos? @client.close end def test_read_from_client host, port = @seeds.first.split(':') tags = [{:dc => "mongolia"}] @client = MongoClient.new(host, port, {:read => :secondary, :tag_sets => tags}) assert @client.connected? cursor = Cursor.new(@client[MONGO_TEST_DB]['whatever'], {}) assert_equal cursor.construct_query_spec['$readPreference'], {:mode => 'secondary', :tags => tags} end def test_find_one_with_read_secondary @client = MongoShardedClient.new(@seeds, { :read => :secondary }) @client[MONGO_TEST_DB]["users"].insert([ @document ]) assert_equal @client[MONGO_TEST_DB]['users'].find_one["name"], "test_user" end def test_find_one_with_read_secondary_preferred @client = MongoShardedClient.new(@seeds, { :read => :secondary_preferred }) @client[MONGO_TEST_DB]["users"].insert([ @document ]) assert_equal @client[MONGO_TEST_DB]['users'].find_one["name"], "test_user" end def test_find_one_with_read_primary @client = MongoShardedClient.new(@seeds, { :read => :primary }) @client[MONGO_TEST_DB]["users"].insert([ @document ]) assert_equal @client[MONGO_TEST_DB]['users'].find_one["name"], "test_user" end def test_find_one_with_read_primary_preferred @client = MongoShardedClient.new(@seeds, { :read => :primary_preferred }) @client[MONGO_TEST_DB]["users"].insert([ @document ]) assert_equal @client[MONGO_TEST_DB]['users'].find_one["name"], "test_user" end def test_read_from_sharded_client tags = [{:dc => "mongolia"}] 
@client = MongoShardedClient.new(@seeds, {:read => :secondary, :tag_sets => tags}) assert @client.connected? cursor = Cursor.new(@client[MONGO_TEST_DB]['whatever'], {}) assert_equal cursor.construct_query_spec['$readPreference'], {:mode => 'secondary', :tags => tags} end def test_hard_refresh @client = MongoShardedClient.new(@seeds) assert @client.connected? @client.hard_refresh! assert @client.connected? @client.close end def test_reconnect @client = MongoShardedClient.new(@seeds) assert @client.connected? router = @sc.servers(:routers).first router.stop probe(@seeds.size) assert @client.connected? @client.close end def test_mongos_failover @client = MongoShardedClient.new(@seeds, :refresh_interval => 5, :refresh_mode => :sync) assert @client.connected? # do a find to pin a pool @client['MONGO_TEST_DB']['test'].find_one original_primary = @client.manager.primary # stop the pinned member @sc.member_by_name("#{original_primary[0]}:#{original_primary[1]}").stop # assert that the client fails over to the next available mongos assert_nothing_raised do @client['MONGO_TEST_DB']['test'].find_one end assert_not_equal original_primary, @client.manager.primary assert @client.connected? @client.close end def test_all_down @client = MongoShardedClient.new(@seeds) assert @client.connected? @sc.servers(:routers).each{|router| router.stop} assert_raises Mongo::ConnectionFailure do probe(@seeds.size) end assert_false @client.connected? @client.close end def test_cycle @client = MongoShardedClient.new(@seeds) assert @client.connected? routers = @sc.servers(:routers) while routers.size > 0 do rescue_connection_failure do probe(@seeds.size) end probe(@seeds.size) router = routers.detect{|r| r.port == @client.manager.primary.last} routers.delete(router) router.stop end assert_raises Mongo::ConnectionFailure do probe(@seeds.size) end assert_false @client.connected? routers = @sc.servers(:routers).reverse routers.each do |r| r.start @client.hard_refresh! 
rescue_connection_failure do probe(@seeds.size) end probe(@seeds.size) end @client.close end private def probe(size) assert_equal(size, @client['config']['mongos'].find.to_a.size) end end ruby-mongo-1.9.2/test/shared/000077500000000000000000000000001221200727400160435ustar00rootroot00000000000000ruby-mongo-1.9.2/test/shared/authentication.rb000066400000000000000000000065161221200727400214170ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module AuthenticationTests def init_auth # enable authentication by creating and logging in as admin user @admin = @client['admin'] @admin.add_user('admin', 'password') @admin.authenticate('admin', 'password') end def teardown @admin.logout @admin.authenticate('admin','password') @admin['system.users'].remove @db['system.users'].remove @db['test'].remove @admin.logout end def test_add_user @db.add_user('bob','user') assert @db['system.users'].find_one({:user => 'bob'}) end def test_remove_user @db.remove_user('bob') assert_nil @db['system.users'].find_one({:user => 'bob'}) end def test_remove_non_existent_user assert_equal @db.remove_user('joe'), false end def test_authenticate @db.add_user('peggy', 'user') assert @db.authenticate('peggy', 'user') @db.remove_user('peggy') @db.logout end def test_authenticate_non_existent_user assert_raise Mongo::AuthenticationError do @db.authenticate('frank', 'thetank') end end def test_delegated_authentication return if @client.server_version < '2.4' # 
TODO: remove this line when slaves have this code: # https://github.com/travis-ci/travis-cookbooks/pull/180 return if ENV['TRAVIS'] doc = {'_id' => 'test'} # create accounts database to hold user credentials accounts = @client['accounts'] accounts['system.users'].remove accounts.add_user('tyler', 'brock', nil, :roles => []) # insert test data and give user permissions on test db @db['test'].remove @db['test'].insert(doc) @db.add_user('tyler', nil, nil, :roles => ['read'], :userSource => 'accounts') @admin.logout # auth must occur on the db where the user is defined assert_raise Mongo::AuthenticationError do @db.authenticate('tyler', 'brock') end # auth directly assert accounts.authenticate('tyler', 'brock') assert_equal doc, @db['test'].find_one accounts.logout assert_raise Mongo::OperationFailure do @db['test'].find_one end # auth using source @db.authenticate('tyler', 'brock', true, 'accounts') assert_equal doc, @db['test'].find_one @db.logout assert_raise Mongo::OperationFailure do @db['test'].find_one end end def test_logout @db.add_user('peggy', 'user') assert @db.authenticate('peggy', 'user') assert @db.logout @db.remove_user('peggy') end def test_authenticate_with_special_characters assert @db.add_user('foo:bar','@foo') assert @db.authenticate('foo:bar','@foo') @db.remove_user('foo:bar') @db.logout end def test_authenticate_read_only @db.add_user('randy', 'readonly', true) assert @db.authenticate('randy', 'readonly') @db.remove_user('randy') @db.logout end end ruby-mongo-1.9.2/test/test_helper.rb000077500000000000000000000123351221200727400174470ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. begin require 'pry-rescue' require 'pry-nav' rescue LoadError # failed to load, skipping pry end # SimpleCov must load before our code - A coverage report summary line will print after each test suite if RUBY_VERSION >= '1.9.0' && RUBY_ENGINE == 'ruby' if ENV.key?('COVERAGE') require 'simplecov' SimpleCov.start do add_group "Mongo", 'lib/mongo' add_group "BSON", 'lib/bson' add_filter "/test/" merge_timeout 3600 command_name ENV['SIMPLECOV_COMMAND_NAME'] if ENV.has_key?('SIMPLECOV_COMMAND_NAME') end end end gem 'test-unit' # Do NOT remove this line - gem version is needed for Test::Unit::TestCase.shutdown require 'test/unit' require 'tools/mongo_config' class Test::Unit::TestCase TEST_DATA = File.join(File.dirname(__FILE__), 'fixtures/data') def ensure_cluster(kind=nil, opts={}) @@cluster ||= nil unless @@cluster if kind == :rs cluster_opts = Mongo::Config::DEFAULT_REPLICA_SET.dup else cluster_opts = Mongo::Config::DEFAULT_SHARDED_SIMPLE.dup end cluster_opts.merge!(opts) dbpath = ENV['DBPATH'] || 'data' cluster_opts.merge!(:dbpath => dbpath) #debug 1, opts config = Mongo::Config.cluster(cluster_opts) #debug 1, config @@cluster = Mongo::Config::ClusterManager.new(config) Test::Unit::TestCase.class_eval do @@force_shutdown = false def self.shutdown if @@force_shutdown || /rake_test_loader/ !~ $0 @@cluster.stop @@cluster.clobber end end end end @@cluster.start instance_variable_set("@#{kind}", @@cluster) end # Generic code for rescuing connection failures and retrying operations. # This could be combined with some timeout functionality. 
  def rescue_connection_failure(max_retries=30)
    retries = 0
    begin
      yield
    rescue Mongo::ConnectionFailure => ex
      #puts "Rescue attempt #{retries}: from #{ex}"
      retries += 1
      raise ex if retries > max_retries
      sleep(2)
      retry
    end
  end
end

def silently
  warn_level = $VERBOSE
  $VERBOSE = nil
  begin
    result = yield
  ensure
    $VERBOSE = warn_level
  end
  result
end

begin
  silently { require 'shoulda' }
  silently { require 'mocha/setup' }
rescue LoadError
  puts <<MSG

This test suite requires shoulda and mocha:
  gem install shoulda mocha

MSG
  exit
end

def assert_raise_error(klass, message = nil)
  begin
    yield
  rescue => e
    if klass.to_s != e.class.to_s
      flunk "Expected exception class #{klass} but got #{e.class}.\n #{e.backtrace}"
    end

    if message && !e.message.include?(message)
      p e.backtrace
      flunk "#{e.message} does not include #{message}.\n#{e.backtrace}"
    end
  else
    flunk "Expected assertion #{klass} but none was raised."
  end
end

ruby-mongo-1.9.2/test/threading/basic_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
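The `rescue_connection_failure` helper in test_helper.rb above is a bounded retry loop: yield, rescue the retriable error, sleep, and `retry` until a cap is hit. The same pattern can be sketched generically and independently of the driver (the names `with_retries` and the sleep-free demo are illustrative, not part of the test suite):

```ruby
# Generic sketch of the bounded-retry pattern used by
# rescue_connection_failure. The sleep between attempts is omitted
# so the example runs instantly; real code would back off.
def with_retries(max_retries = 3, retriable_error = StandardError)
  retries = 0
  begin
    yield
  rescue retriable_error => ex
    retries += 1
    raise ex if retries > max_retries  # give up after the cap
    retry
  end
end

# A "flaky" block that fails twice before succeeding:
attempts = 0
result = with_retries(5, RuntimeError) do
  attempts += 1
  raise RuntimeError, "flaky" if attempts < 3
  :ok
end
# result is :ok after exactly 3 attempts
```

Note that `retry` restarts the whole `begin` body, so any state the block mutates (like `attempts` here) persists across attempts — the helper relies on this to make a previously failing operation eventually observable as succeeded.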
require 'test_helper'

class TestThreading < Test::Unit::TestCase
  include Mongo

  def setup
    @client = standard_connection(:pool_size => 10, :pool_timeout => 30)
    @db     = @client.db(MONGO_TEST_DB)
    @coll   = @db.collection('thread-test-collection')
    @coll.drop

    collections = ['duplicate', 'unique']

    collections.each do |coll_name|
      coll = @db.collection(coll_name)
      coll.drop
      coll.insert("test" => "insert")
      coll.insert("test" => "update")
      instance_variable_set("@#{coll_name}", coll)
    end

    @unique.create_index("test", :unique => true)
  end

  def test_safe_update
    threads = []
    300.times do |i|
      threads << Thread.new do
        if i % 2 == 0
          assert_raise Mongo::OperationFailure do
            @unique.update({"test" => "insert"}, {"$set" => {"test" => "update"}})
          end
        else
          @duplicate.update({"test" => "insert"}, {"$set" => {"test" => "update"}})
          @duplicate.update({"test" => "update"}, {"$set" => {"test" => "insert"}})
        end
      end
    end
    threads.each {|thread| thread.join}
  end

  def test_safe_insert
    threads = []
    300.times do |i|
      threads << Thread.new do
        if i % 2 == 0
          assert_raise Mongo::OperationFailure do
            @unique.insert({"test" => "insert"})
          end
        else
          @duplicate.insert({"test" => "insert"})
        end
      end
    end
    threads.each {|thread| thread.join}
  end

  def test_concurrent_find
    n_threads = 50

    1000.times do |i|
      @coll.insert({ "x" => "a" })
    end

    threads = []
    n_threads.times do |i|
      threads << Thread.new do
        @coll.find.to_a.size
      end
    end

    thread_values = threads.map(&:value)
    assert thread_values.all?{|v| v == 1000}
    assert_equal thread_values.size, n_threads
  end
end

ruby-mongo-1.9.2/test/tools/mongo_config.rb

#!/usr/bin/env ruby

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'socket' require 'fileutils' require 'mongo' require 'sfl' $debug_level = 2 STDOUT.sync = true def debug(level, arg) if level <= $debug_level file_line = caller[0][/(.*:\d+):/, 1] calling_method = caller[0][/`([^']*)'/, 1] puts "#{file_line}:#{calling_method}:#{arg.class == String ? arg : arg.inspect}" end end # # Design Notes # Configuration and Cluster Management are modularized with the concept that the Cluster Manager # can be supplied with any configuration to run. # A configuration can be edited, modified, copied into a test file, and supplied to a cluster manager # as a parameter. 
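The design notes above describe the key idea: a plain options hash fully specifies a cluster, and the ClusterManager simply runs whatever configuration it is handed. A simplified, self-contained sketch of how cluster-level counts might expand into per-process entries is shown below (the `sketch_cluster` helper and its fixed port numbering are hypothetical; the real generator is `Mongo::Config.cluster` later in this file, which allocates free ports dynamically):

```ruby
# Hypothetical miniature of the config-expansion idea: turn counts like
# :shards => 2, :routers => 4 into one hash per process to launch.
def sketch_cluster(opts)
  base_port = 3000
  shards    = opts.fetch(:shards, 0)
  routers   = opts.fetch(:routers, 0)
  nodes     = []

  shards.times do |i|
    nodes << { :kind => :shard, :host => opts[:host], :port => base_port + i }
  end
  routers.times do |i|
    nodes << { :kind => :router, :host => opts[:host], :port => base_port + shards + i }
  end
  nodes
end

config = sketch_cluster(:host => 'localhost', :shards => 2, :routers => 4)
# config holds 6 node hashes, each with :kind, :host and a distinct :port
```

Because the result is just data, it can be edited or copied into a test before being handed to a manager — which is exactly the decoupling the design notes call out.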
# module Mongo class Config DEFAULT_BASE_OPTS = { :host => 'localhost', :dbpath => 'data', :logpath => 'data/log' } DEFAULT_REPLICA_SET = DEFAULT_BASE_OPTS.merge( :replicas => 3, :arbiters => 0 ) DEFAULT_SHARDED_SIMPLE = DEFAULT_BASE_OPTS.merge( :shards => 2, :configs => 1, :routers => 4 ) DEFAULT_SHARDED_REPLICA = DEFAULT_SHARDED_SIMPLE.merge( :replicas => 3, :arbiters => 0) IGNORE_KEYS = [:host, :command, :_id] SHARDING_OPT_KEYS = [:shards, :configs, :routers] REPLICA_OPT_KEYS = [:replicas, :arbiters] MONGODS_OPT_KEYS = [:mongods] CLUSTER_OPT_KEYS = SHARDING_OPT_KEYS + REPLICA_OPT_KEYS + MONGODS_OPT_KEYS FLAGS = [:noprealloc, :smallfiles, :logappend, :configsvr, :shardsvr, :quiet, :fastsync, :auth] DEFAULT_VERIFIES = 60 BASE_PORT = 3000 @@port = BASE_PORT def self.configdb(config) config[:configs].collect{|c|"#{c[:host]}:#{c[:port]}"}.join(' ') end def self.cluster(opts = DEFAULT_SHARDED_SIMPLE) raise "missing required option" if [:host, :dbpath].any?{|k| !opts[k]} config = opts.reject {|k,v| CLUSTER_OPT_KEYS.include?(k)} kinds = CLUSTER_OPT_KEYS.select{|key| opts.has_key?(key)} # order is significant replica_count = 0 kinds.each do |kind| config[kind] = opts.fetch(kind,1).times.collect do |i| #default to 1 of whatever if kind == :shards && opts[:replicas] self.cluster(opts.reject{|k,v| SHARDING_OPT_KEYS.include?(k)}.merge(:dbpath => path)) else node = case kind when :replicas make_replica(opts, replica_count) when :arbiters make_replica(opts, replica_count) when :configs make_config(opts) when :routers make_router(config, opts) else make_mongod(kind, opts) end replica_count += 1 if [:replicas, :arbiters].member?(kind) node end end end config end def self.make_mongo(kind, opts) dbpath = opts[:dbpath] port = self.get_available_port path = "#{dbpath}/#{kind}-#{port}" logpath = "#{path}/#{kind}.log" { :host => opts[:host], :port => port, :logpath => logpath, :logappend => true } end def self.make_mongod(kind, opts) params = make_mongo('mongods', opts) mongod = 
ENV['MONGOD'] || 'mongod' path = File.dirname(params[:logpath]) noprealloc = opts[:noprealloc] || true smallfiles = opts[:smallfiles] || true quiet = opts[:quiet] || true fast_sync = opts[:fastsync] || false auth = opts[:auth] || true params.merge(:command => mongod, :dbpath => path, :smallfiles => smallfiles, :noprealloc => noprealloc, :quiet => quiet, :fastsync => fast_sync, :auth => auth) end def self.make_replica(opts, id) params = make_mongod('replicas', opts) replSet = opts[:replSet] || 'ruby-driver-test' oplogSize = opts[:oplog_size] || 5 keyFile = opts[:key_file] || '/test/fixtures/auth/keyfile' keyFile = Dir.pwd << keyFile system "chmod 600 #{keyFile}" params.merge(:_id => id, :replSet => replSet, :oplogSize => oplogSize, :keyFile => keyFile) end def self.make_config(opts) params = make_mongod('configs', opts) params.merge(:configsvr => nil) end def self.make_router(config, opts) params = make_mongo('routers', opts) mongos = ENV['MONGOS'] || 'mongos' params.merge( :command => mongos, :configdb => self.configdb(config) ) end def self.port_available?(port) ret = false socket = Socket.new(Socket::Constants::AF_INET, Socket::Constants::SOCK_STREAM, 0) socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1) sockaddr = Socket.sockaddr_in(port, '0.0.0.0') begin socket.bind(sockaddr) ret = true rescue Exception end socket.close ret end def self.get_available_port while true port = @@port @@port += 1 break if port_available?(port) end port end class SysProc attr_reader :pid, :cmd def initialize(cmd = nil) @pid = nil @cmd = cmd end def clear_zombie if @pid begin pid = Process.waitpid(@pid, Process::WNOHANG) rescue Errno::ECHILD # JVM might have already reaped the exit status end @pid = nil if pid && pid > 0 end end def start(verifies = 0) clear_zombie return @pid if running? 
begin # redirection not supported in jruby if defined?(RUBY_ENGINE) && RUBY_ENGINE == 'jruby' @pid = Process.spawn(*@cmd) else cmd_and_opts = [@cmd, {:out => '/dev/null'}].flatten @pid = Process.spawn(*cmd_and_opts) end verify(verifies) if verifies > 0 @pid end end def stop kill wait end def kill(signal_no = 2) begin @pid && Process.kill(signal_no, @pid) && true rescue Errno::ESRCH false end # cleanup lock if unclean shutdown begin File.delete(File.join(@config[:dbpath], 'mongod.lock')) if @config[:dbpath] rescue Errno::ENOENT end end def wait begin Process.waitpid(@pid) if @pid rescue Errno::ECHILD # JVM might have already reaped the exit status end @pid = nil end def running? begin @pid && Process.kill(0, @pid) && true rescue Errno::ESRCH false end end def verify(verifies = DEFAULT_VERIFIES) verifies.times do |i| return @pid if running? sleep 1 end nil end end class Server < SysProc attr_reader :host, :port def initialize(cmd = nil, host = nil, port = nil) super(cmd) @host = host @port = port end def host_port [@host, @port].join(':') end def host_port_a # for old format [@host, @port] end end class DbServer < Server attr_accessor :config def initialize(config) @config = config dbpath = @config[:dbpath] [dbpath, File.dirname(@config[:logpath])].compact.each{|dir| FileUtils.mkdir_p(dir) unless File.directory?(dir) } command = @config[:command] || 'mongod' params = @config.reject{|k,v| IGNORE_KEYS.include?(k)} arguments = params.sort{|a, b| a[0].to_s <=> b[0].to_s}.collect do |arg, value| # sort block is needed for 1.8.7 which lacks Symbol#<=> argument = '--' + arg.to_s if FLAGS.member?(arg) && value == true [argument] elsif !FLAGS.member?(arg) [argument, value.to_s] end end cmd = [command, arguments].flatten.compact super(cmd, @config[:host], @config[:port]) end def start(verifies = DEFAULT_VERIFIES) super(verifies) verify(verifies) end def verify(verifies = 600) verifies.times do |i| #puts "DbServer.verify via connection probe - port:#{@port.inspect} 
iteration:#{i} @pid:#{@pid.inspect} kill:#{Process.kill(0, @pid).inspect} running?:#{running?.inspect} cmd:#{cmd.inspect}" begin raise Mongo::ConnectionFailure unless running? Mongo::MongoClient.new(@host, @port).close #puts "DbServer.verified via connection - port: #{@port} iteration: #{i}" return @pid rescue Mongo::ConnectionFailure sleep 1 end end system "ps -fp #{@pid}; cat #{@config[:logpath]}" raise Mongo::ConnectionFailure, "DbServer.start verify via connection probe failed - port:#{@port.inspect} @pid:#{@pid.inspect} kill:#{Process.kill(0, @pid).inspect} running?:#{running?.inspect} cmd:#{cmd.inspect}" end end class ClusterManager attr_reader :config def initialize(config) @config = config @servers = {} Mongo::Config::CLUSTER_OPT_KEYS.each do |key| @servers[key] = @config[key].collect{|conf| DbServer.new(conf)} if @config[key] end end def servers(key = nil) @servers.collect{|k,v| (!key || key == k) ? v : nil}.flatten.compact end def command( cmd_servers, db_name, cmd, opts = {} ) ret = [] cmd = cmd.class == Array ? cmd : [ cmd ] debug 3, "ClusterManager.command cmd:#{cmd.inspect}" cmd_servers = cmd_servers.class == Array ? cmd_servers : [cmd_servers] cmd_servers.each do |cmd_server| debug 3, cmd_server.inspect cmd_server = cmd_server.config if cmd_server.is_a?(DbServer) client = Mongo::MongoClient.new(cmd_server[:host], cmd_server[:port]) cmd.each do |c| debug 3, "ClusterManager.command c:#{c.inspect}" response = client[db_name].command( c, opts ) debug 3, "ClusterManager.command response:#{response.inspect}" raise Mongo::OperationFailure, "c:#{c.inspect} opts:#{opts.inspect} failed" unless response["ok"] == 1.0 || opts.fetch(:check_response, true) == false ret << response end client.close end debug 3, "command ret:#{ret.inspect}" ret.size == 1 ? 
ret.first : ret end def repl_set_get_status command( @config[:replicas], 'admin', { :replSetGetStatus => 1 }, {:check_response => false } ) end def repl_set_get_config host, port = primary_name.split(":") client = Mongo::MongoClient.new(host, port) client['local']['system.replset'].find_one end def repl_set_config members = [] @config[:replicas].each{|s| members << { :_id => s[:_id], :host => "#{s[:host]}:#{s[:port]}", :tags => { :node => s[:_id].to_s } } } @config[:arbiters].each{|s| members << { :_id => s[:_id], :host => "#{s[:host]}:#{s[:port]}", :arbiterOnly => true } } { :_id => @config[:replicas].first[:replSet], :members => members } end def repl_set_initiate( cfg = nil ) command( @config[:replicas].first, 'admin', { :replSetInitiate => cfg || repl_set_config } ) end def repl_set_startup states = nil 60.times do states = repl_set_get_status.zip(repl_set_is_master) healthy = states.all? do |status, is_master| members = status['members'] if status['ok'] == 1.0 && members.collect{|m| m['state']}.all?{|state| [1,2,7].index(state)} members.any?{|m| m['state'] == 1} && case status['myState'] when 1 is_master['ismaster'] == true && is_master['secondary'] == false when 2 is_master['ismaster'] == false && is_master['secondary'] == true when 7 is_master['ismaster'] == false && is_master['secondary'] == false end end end return true if healthy sleep(1) end raise Mongo::OperationFailure, "replSet startup failed - status: #{states.inspect}" end def repl_set_seeds @config[:replicas].collect{|node| "#{node[:host]}:#{node[:port]}"} end def repl_set_seeds_old @config[:replicas].collect{|node| [node[:host], node[:port]]} end def repl_set_seeds_uri repl_set_seeds.join(',') end def repl_set_name @config[:replicas].first[:replSet] end def member_names_by_state(state) states = Array(state) status = repl_set_get_status.first status['members'].find_all{|member| states.index(member['state']) }.collect{|member| member['name']} end def primary_name member_names_by_state(1).first end 
def secondary_names member_names_by_state(2) end def replica_names member_names_by_state([1,2]) end def arbiter_names member_names_by_state(7) end def members_by_name(names) names.collect do |name| member_by_name(name) end.compact end def member_by_name(name) servers.find{|server| server.host_port == name} end def primary members_by_name([primary_name]).first end def secondaries members_by_name(secondary_names) end def stop_primary primary.stop end def stop_secondary secondaries[rand(secondaries.length)].stop end def replicas members_by_name(replica_names) end def arbiters members_by_name(arbiter_names) end def config_names_by_kind(kind) @config[kind].collect{|conf| "#{conf[:host]}:#{conf[:port]}"} end def shards members_by_name(config_names_by_kind(:shards)) end def repl_set_reconfig(new_config) new_config['version'] = repl_set_get_config['version'] + 1 command( primary, 'admin', { :replSetReconfig => new_config } ) repl_set_startup end def repl_set_remove_node(state = [1,2]) names = member_names_by_state(state) name = names[rand(names.length)] @config[:replicas].delete_if{|node| "#{node[:host]}:#{node[:port]}" == name} repl_set_reconfig(repl_set_config) end def repl_set_add_node end def configs members_by_name(config_names_by_kind(:configs)) end def routers members_by_name(config_names_by_kind(:routers)) end def mongos_seeds config_names_by_kind(:routers) end def ismaster(servers) command( servers, 'admin', { :ismaster => 1 } ) end def sharded_cluster_is_master ismaster(@config[:routers]) end def repl_set_is_master ismaster(@config[:replicas]) end def addshards(shards = @config[:shards]) command( @config[:routers].first, 'admin', Array(shards).collect{|s| { :addshard => "#{s[:host]}:#{s[:port]}" } } ) end def listshards command( @config[:routers].first, 'admin', { :listshards => 1 } ) end def enablesharding( dbname ) command( @config[:routers].first, 'admin', { :enablesharding => dbname } ) end def shardcollection( namespace, key, unique = false ) command( 
@config[:routers].first, 'admin', { :shardcollection => namespace, :key => key, :unique => unique } ) end def mongos_discover # can also do @config[:routers] find but only want mongos for connections (@config[:configs]).collect do |cmd_server| client = Mongo::MongoClient.new(cmd_server[:host], cmd_server[:port]) result = client['config']['mongos'].find.to_a client.close result end end def start # Must start configs before mongos -- hash order not guaranteed on 1.8.X servers(:configs).each{|server| server.start} servers.each{|server| server.start} # TODO - sharded replica sets - pending if @config[:replicas] repl_set_initiate if repl_set_get_status.first['startupStatus'] == 3 repl_set_startup end if @config[:routers] addshards if listshards['shards'].size == 0 end self end alias :restart :start def stop servers.each{|server| server.stop} self end def clobber FileUtils.rm_rf @config[:dbpath] self end end end end ruby-mongo-1.9.2/test/tools/mongo_config_test.rb000066400000000000000000000116021221200727400217650ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class MongoConfig < Test::Unit::TestCase def startup @sys_proc = nil end def shutdown @sys_proc.stop if @sys_proc && @sys_proc.running? 
end test "config defaults" do [ Mongo::Config::DEFAULT_BASE_OPTS, Mongo::Config::DEFAULT_REPLICA_SET, Mongo::Config::DEFAULT_SHARDED_SIMPLE, Mongo::Config::DEFAULT_SHARDED_REPLICA ].each do |params| config = Mongo::Config.cluster(params) assert(config.size > 0) end end test "get available port" do assert_not_nil(Mongo::Config.get_available_port) end test "SysProc start" do cmd = "true" @sys_proc = Mongo::Config::SysProc.new(cmd) assert_equal(cmd, @sys_proc.cmd) assert_nil(@sys_proc.pid) start_and_assert_running?(@sys_proc) end test "SysProc wait" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) assert(@sys_proc.running?) @sys_proc.wait assert(!@sys_proc.running?) end test "SysProc kill" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) @sys_proc.kill @sys_proc.wait assert(!@sys_proc.running?) end test "SysProc stop" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) @sys_proc.stop assert(!@sys_proc.running?) end test "SysProc zombie respawn" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) prev_pid = @sys_proc.pid @sys_proc.kill # don't wait, leaving a zombie assert(@sys_proc.running?) start_and_assert_running?(@sys_proc) assert(prev_pid && @sys_proc.pid && prev_pid != @sys_proc.pid, 'SysProc#start should spawn a new process after a zombie') @sys_proc.stop assert(!@sys_proc.running?) 
end test "Server" do server = Mongo::Config::Server.new('a cmd', 'host', 1234) assert_equal('a cmd', server.cmd) assert_equal('host', server.host) assert_equal(1234, server.port) end test "DbServer" do config = Mongo::Config::DEFAULT_BASE_OPTS server = Mongo::Config::DbServer.new(config) assert_equal(config, server.config) assert_equal("mongod --dbpath data --logpath data/log", server.cmd) assert_equal(config[:host], server.host) assert_equal(config[:port], server.port) end def cluster_test(opts) #debug 1, opts.inspect config = Mongo::Config.cluster(opts) #debug 1, config.inspect manager = Mongo::Config::ClusterManager.new(config) assert_equal(config, manager.config) manager.start yield manager manager.stop manager.servers.each{|s| assert(!s.running?)} manager.clobber end test "cluster manager base" do cluster_test(Mongo::Config::DEFAULT_BASE_OPTS) do |manager| end end test "cluster manager replica set" do cluster_test(Mongo::Config::DEFAULT_REPLICA_SET) do |manager| servers = manager.servers servers.each do |server| assert_not_nil(Mongo::MongoClient.new(server.host, server.port)) assert_match(/oplogSize/, server.cmd, '--oplogSize option should be specified') assert_match(/smallfiles/, server.cmd, '--smallfiles option should be specified') assert_no_match(/nojournal/, server.cmd, '--nojournal option should not be specified') assert_match(/noprealloc/, server.cmd, '--noprealloc option should be specified') end end end test "cluster manager sharded simple" do cluster_test(Mongo::Config::DEFAULT_SHARDED_SIMPLE) do |manager| servers = manager.shards + manager.configs servers.each do |server| assert_not_nil(Mongo::MongoClient.new(server.host, server.port)) assert_match(/oplogSize/, server.cmd, '--oplogSize option should be specified') assert_match(/smallfiles/, server.cmd, '--smallfiles option should be specified') assert_no_match(/nojournal/, server.cmd, '--nojournal option should not be specified') assert_match(/noprealloc/, server.cmd, '--noprealloc option should be 
specified') end end end test "cluster manager sharded replica" do #cluster_test(Mongo::Config::DEFAULT_SHARDED_REPLICA) # not yet supported by ClusterManager end private def start_and_assert_running?(sys_proc) assert_not_nil(sys_proc.start(0)) assert_not_nil(sys_proc.pid) assert(sys_proc.running?) end end ruby-mongo-1.9.2/test/unit/000077500000000000000000000000001221200727400155545ustar00rootroot00000000000000ruby-mongo-1.9.2/test/unit/client_test.rb000066400000000000000000000227071221200727400204260ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' include Mongo class ClientTest < Test::Unit::TestCase context "Mongo::MongoClient initialization " do context "given a single node" do setup do @client = MongoClient.new('localhost', 27017, :connect => false) TCPSocket.stubs(:new).returns(new_mock_socket) admin_db = new_mock_db admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) @client.expects(:[]).with('admin').returns(admin_db) @client.connect end should "gle writes by default" do assert_equal 1, @client.write_concern[:w] end should "set localhost and port to master" do assert_equal 'localhost', @client.primary_pool.host assert_equal 27017, @client.primary_pool.port end should "set connection pool to 1" do assert_equal 1, @client.primary_pool.size end should "default slave_ok to false" do assert !@client.slave_ok? 
      end

      should "not raise error if no host or port is supplied" do
        assert_nothing_raised do
          MongoClient.new(:w => 1, :connect => false)
        end
        assert_nothing_raised do
          MongoClient.new('localhost', :w => 1, :connect => false)
        end
      end

      should "warn if invalid options are specified" do
        client = MongoClient.allocate
        opts = {:connect => false}

        MongoReplicaSetClient::REPL_SET_OPTS.each do |opt|
          client.expects(:warn).with("#{opt} is not a valid option for #{client.class}")
          opts[opt] = true
        end

        args = ['localhost', 27017, opts]
        client.send(:initialize, *args)
      end

      context "given a replica set" do
        should "warn if invalid options are specified" do
          client = MongoReplicaSetClient.allocate
          opts = {:connect => false}

          MongoClient::CLIENT_ONLY_OPTS.each do |opt|
            client.expects(:warn).with("#{opt} is not a valid option for #{client.class}")
            opts[opt] = true
          end

          args = [['localhost:27017'], opts]
          client.send(:initialize, *args)
        end

        should "throw error if superfluous arguments are specified" do
          assert_raise MongoArgumentError do
            MongoReplicaSetClient.new(['localhost:27017'], ['localhost:27018'], {:connect => false})
          end
        end
      end
    end

    context "initializing with a unix socket" do
      setup do
        @connection = Mongo::Connection.new('/tmp/mongod.sock', :safe => true, :connect => false)
        UNIXSocket.stubs(:new).returns(new_mock_unix_socket)
      end

      should "parse a unix socket" do
        assert_equal "/tmp/mongod.sock", @connection.host_port.first
      end
    end

    context "initializing with a mongodb uri" do
      should "parse a simple uri" do
        @client = MongoClient.from_uri("mongodb://localhost", :connect => false)
        assert_equal ['localhost', 27017], @client.host_port
      end

      #should "parse a unix socket" do
      #  socket_address = "/tmp/mongodb-27017.sock"
      #  @client = MongoClient.from_uri("mongodb://#{socket_address}")
      #  assert_equal socket_address, @client.host_port.first
      #end

      should "allow complex host names" do
        host_name = "foo.bar-12345.org"
        @client = MongoClient.from_uri("mongodb://#{host_name}", :connect => false)
        assert_equal [host_name, 27017], @client.host_port
      end

      should "allow db without username and password" do
        host_name = "foo.bar-12345.org"
        @client = MongoClient.from_uri("mongodb://#{host_name}/foo", :connect => false)
        assert_equal [host_name, 27017], @client.host_port
      end

      should "set write concern options on connection" do
        host_name = "localhost"
        opts = "w=2&wtimeoutMS=1000&fsync=true&journal=true"
        @client = MongoClient.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false)
        assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @client.write_concern)
      end

      should "set timeout options on connection" do
        host_name = "localhost"
        opts = "connectTimeoutMS=1000&socketTimeoutMS=5000"
        @client = MongoClient.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false)
        assert_equal 1, @client.connect_timeout
        assert_equal 5, @client.op_timeout
      end

      should "parse a uri with a hyphen & underscore in the username or password" do
        @client = MongoClient.from_uri("mongodb://hyphen-user_name:p-s_s@localhost:27017/db", :connect => false)
        assert_equal ['localhost', 27017], @client.host_port
        auth_hash = {
          :db_name  => 'db',
          :username => 'hyphen-user_name',
          :password => 'p-s_s'
        }
        assert_equal auth_hash, @client.auths[0]
      end

      should "attempt to connect" do
        TCPSocket.stubs(:new).returns(new_mock_socket)
        @client = MongoClient.from_uri("mongodb://localhost", :connect => false)
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @client.expects(:[]).with('admin').returns(admin_db)
        @client.connect
      end

      should "raise an error on invalid uris" do
        assert_raise MongoArgumentError do
          MongoClient.from_uri("mongo://localhost", :connect => false)
        end
        assert_raise MongoArgumentError do
          MongoClient.from_uri("mongodb://localhost:abc", :connect => false)
        end
      end

      should "require all of username, if password and db are specified" do
        assert MongoClient.from_uri("mongodb://kyle:jones@localhost/db", :connect => false)
        assert_raise MongoArgumentError do
          MongoClient.from_uri("mongodb://kyle:password@localhost", :connect => false)
        end
      end
    end

    context "initializing with ENV['MONGODB_URI']" do
      setup do
        @old_mongodb_uri = ENV['MONGODB_URI']
      end

      teardown do
        ENV['MONGODB_URI'] = @old_mongodb_uri
      end

      should "parse a simple uri" do
        ENV['MONGODB_URI'] = "mongodb://localhost?connect=false"
        @client = MongoClient.new
        assert_equal ['localhost', 27017], @client.host_port
      end

      should "allow complex host names" do
        host_name = "foo.bar-12345.org"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}?connect=false"
        @client = MongoClient.new
        assert_equal [host_name, 27017], @client.host_port
      end

      should "allow db without username and password" do
        host_name = "foo.bar-12345.org"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}/foo?connect=false"
        @client = MongoClient.new
        assert_equal [host_name, 27017], @client.host_port
      end

      should "set write concern options on connection" do
        host_name = "localhost"
        opts = "w=2&wtimeoutMS=1000&fsync=true&journal=true&connect=false"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}/foo?#{opts}"
        @client = MongoClient.new
        assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @client.write_concern)
      end

      should "set timeout options on connection" do
        host_name = "localhost"
        opts = "connectTimeoutMS=1000&socketTimeoutMS=5000&connect=false"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}/foo?#{opts}"
        @client = MongoClient.new
        assert_equal 1, @client.connect_timeout
        assert_equal 5, @client.op_timeout
      end

      should "parse a uri with a hyphen & underscore in the username or password" do
        ENV['MONGODB_URI'] = "mongodb://hyphen-user_name:p-s_s@localhost:27017/db?connect=false"
        @client = MongoClient.new
        assert_equal ['localhost', 27017], @client.host_port
        auth_hash = {
          :db_name  => 'db',
          :username => 'hyphen-user_name',
          :password => 'p-s_s'
        }
        assert_equal auth_hash, @client.auths[0]
      end

      should "attempt to connect" do
        TCPSocket.stubs(:new).returns(new_mock_socket)
        ENV['MONGODB_URI'] = "mongodb://localhost?connect=false" # connect=false ??
        @client = MongoClient.new
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @client.expects(:[]).with('admin').returns(admin_db)
        @client.connect
      end

      should "raise an error on invalid uris" do
        ENV['MONGODB_URI'] = "mongo://localhost"
        assert_raise MongoArgumentError do
          MongoClient.new
        end
        ENV['MONGODB_URI'] = "mongodb://localhost:abc"
        assert_raise MongoArgumentError do
          MongoClient.new
        end
      end

      should "require all of username, if password and db are specified" do
        ENV['MONGODB_URI'] = "mongodb://kyle:jones@localhost/db?connect=false"
        assert MongoClient.new
        ENV['MONGODB_URI'] = "mongodb://kyle:password@localhost"
        assert_raise MongoArgumentError do
          MongoClient.new
        end
      end
    end
  end
end

ruby-mongo-1.9.2/test/unit/collection_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

class CollectionTest < Test::Unit::TestCase
  context "Basic operations: " do
    setup do
      @logger = mock()
      @logger.stubs(:level => 0)
      @logger.expects(:debug)
    end

    should "send update message" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @client.expects(:send_message_with_gle).with do |op, msg, log|
        op == 2001
      end
      @coll.stubs(:log_operation)
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "send insert message" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @client.expects(:send_message_with_gle).with do |op, msg, log|
        op == 2002
      end
      @coll.expects(:log_operation).with do |name, payload|
        (name == :insert) && payload[:documents][0][:title].include?('Moby')
      end
      @coll.insert({:title => 'Moby Dick'})
    end

    should "send sort data" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @client.expects(:checkout_reader).returns(new_mock_socket)
      @client.expects(:receive_message).with do |op, msg, log, sock|
        op == 2004
      end.returns([[], 0, 0])
      @logger.expects(:debug)
      @coll.find({:title => 'Moby Dick'}).sort([['title', 1], ['author', 1]]).next_document
    end

    should "not log binary data" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      data = BSON::Binary.new(("BINARY " * 1000).unpack("c*"))
      @client.expects(:send_message_with_gle).with do |op, msg, log|
        op == 2002
      end
      @coll.expects(:log_operation).with do |name, payload|
        (name == :insert) && payload[:documents][0][:data].inspect.include?('Binary')
      end
      @coll.insert({:data => data})
    end

    should "send safe update message" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @client.expects(:send_message_with_gle).with do |op, msg, db_name, log|
        op == 2001
      end
      @coll.expects(:log_operation).with do |name, payload|
        (name == :update) && payload[:document][:title].include?('Moby')
      end
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "send safe update message with legacy" do
      @connection = Connection.new('localhost', 27017, :logger => @logger, :safe => true, :connect => false)
      @db = @connection['testing']
      @coll = @db.collection('books')
      @connection.expects(:send_message_with_gle).with do |op, msg, db_name, log|
        op == 2001
      end
      @coll.expects(:log_operation).with do |name, payload|
        (name == :update) && payload[:document][:title].include?('Moby')
      end
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "send safe insert message" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @client.expects(:send_message_with_gle).with do |op, msg, db_name, log|
        op == 2001
      end
      @coll.stubs(:log_operation)
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "not call insert for each ensure_index call" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @coll.expects(:generate_indexes).once
      @coll.ensure_index [["x", Mongo::DESCENDING]]
      @coll.ensure_index [["x", Mongo::DESCENDING]]
    end

    should "call generate_indexes for a new type on the same field for ensure_index" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @coll.expects(:generate_indexes).twice
      @coll.ensure_index [["x", Mongo::DESCENDING]]
      @coll.ensure_index [["x", Mongo::ASCENDING]]
    end

    should "call generate_indexes twice because the cache time is 0 seconds" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @db.cache_time = 0
      @coll = @db.collection('books')
      @coll.expects(:generate_indexes).twice
      @coll.ensure_index [["x", Mongo::DESCENDING]]
      @coll.ensure_index [["x", Mongo::DESCENDING]]
    end

    should "call generate_indexes for each key when calling ensure_indexes" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @db.cache_time = 300
      @coll = @db.collection('books')
      @coll.expects(:generate_indexes).once.with do |a, b, c|
        a == {"x"=>-1, "y"=>-1}
      end
      @coll.ensure_index [["x", Mongo::DESCENDING], ["y", Mongo::DESCENDING]]
    end

    should "call generate_indexes for each key when calling ensure_indexes with a hash" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @db.cache_time = 300
      @coll = @db.collection('books')
      oh = BSON::OrderedHash.new
      oh['x'] = Mongo::DESCENDING
      oh['y'] = Mongo::DESCENDING
      @coll.expects(:generate_indexes).once.with do |a, b, c|
        a == oh
      end
      if RUBY_VERSION > '1.9'
        @coll.ensure_index({"x" => Mongo::DESCENDING, "y" => Mongo::DESCENDING})
      else
        ordered_hash = BSON::OrderedHash.new
        ordered_hash['x'] = Mongo::DESCENDING
        ordered_hash['y'] = Mongo::DESCENDING
        @coll.ensure_index(ordered_hash)
      end
    end

    should "use the connection's logger" do
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client['testing']
      @coll = @db.collection('books')
      @logger.expects(:warn).with do |msg|
        msg == "MONGODB [WARNING] test warning"
      end
      @coll.log(:warn, "test warning")
    end
  end
end

ruby-mongo-1.9.2/test/unit/connection_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ConnectionTest < Test::Unit::TestCase
  context "Mongo::MongoClient initialization " do
    context "given a single node" do
      setup do
        @connection = Mongo::Connection.new('localhost', 27017, :safe => true, :connect => false)
        TCPSocket.stubs(:new).returns(new_mock_socket)
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @connection.expects(:[]).with('admin').returns(admin_db)
        @connection.connect
      end

      should "set safe mode true" do
        assert_equal true, @connection.safe
      end

      should "set localhost and port to master" do
        assert_equal 'localhost', @connection.primary_pool.host
        assert_equal 27017, @connection.primary_pool.port
      end

      should "set connection pool to 1" do
        assert_equal 1, @connection.primary_pool.size
      end

      should "default slave_ok to false" do
        assert !@connection.slave_ok?
      end

      should "not raise error if no host or port is supplied" do
        assert_nothing_raised do
          Mongo::Connection.new(:safe => true, :connect => false)
        end
        assert_nothing_raised do
          Mongo::Connection.new('localhost', :safe => true, :connect => false)
        end
      end

      should "warn if invalid options are specified" do
        connection = Mongo::Connection.allocate
        opts = {:connect => false}

        Mongo::ReplSetConnection::REPL_SET_OPTS.each do |opt|
          connection.expects(:warn).with("#{opt} is not a valid option for #{connection.class}")
          opts[opt] = true
        end

        args = ['localhost', 27017, opts]
        connection.send(:initialize, *args)
      end

      context "given a replica set" do
        # should "warn if invalid options are specified" do
        #   connection = Mongo::ReplSetConnection.allocate
        #   opts = {:connect => false}
        #
        #   Mongo::Connection::CLIENT_ONLY_OPTS.each do |opt|
        #     connection.expects(:warn).with("#{:slave_ok} is not a valid option for #{connection.class}")
        #     opts[:slave_ok] = true
        #   end
        #
        #   args = [['localhost:27017'], opts]
        #   connection.send(:initialize, *args)
        # end
      end
    end

    context "initializing with a unix socket" do
      setup do
        @connection = Mongo::Connection.new('/tmp/mongod.sock', :safe => true, :connect => false)
        UNIXSocket.stubs(:new).returns(new_mock_unix_socket)
      end

      should "parse a unix socket" do
        assert_equal "/tmp/mongod.sock", @connection.host_port.first
      end
    end

    context "initializing with a mongodb uri" do
      should "parse a simple uri" do
        @connection = Mongo::Connection.from_uri("mongodb://localhost", :connect => false)
        assert_equal ['localhost', 27017], @connection.host_port
      end

      #should "parse a unix socket" do
      #  socket_address = "/tmp/mongodb-27017.sock"
      #  @client = MongoClient.from_uri("mongodb://#{socket_address}")
      #  assert_equal socket_address, @client.host_port.first
      #end

      should "allow complex host names" do
        host_name = "foo.bar-12345.org"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}", :connect => false)
        assert_equal [host_name, 27017], @connection.host_port
      end

      should "allow db without username and password" do
        host_name = "foo.bar-12345.org"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}/foo", :connect => false)
        assert_equal [host_name, 27017], @connection.host_port
      end

      should "set safe options on connection" do
        host_name = "localhost"
        opts = "safe=true&w=2&wtimeoutMS=1000&fsync=true&journal=true"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false)
        assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @connection.write_concern)
      end

      should "set timeout options on connection" do
        host_name = "localhost"
        opts = "connectTimeoutMS=1000&socketTimeoutMS=5000"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false)
        assert_equal 1, @connection.connect_timeout
        assert_equal 5, @connection.op_timeout
      end

      should "parse a uri with a hyphen & underscore in the username or password" do
        @connection = Mongo::Connection.from_uri("mongodb://hyphen-user_name:p-s_s@localhost:27017/db", :connect => false)
        assert_equal ['localhost', 27017], @connection.host_port
        auth_hash = {
          :db_name  => 'db',
          :username => 'hyphen-user_name',
          :password => 'p-s_s'
        }
        assert_equal auth_hash, @connection.auths[0]
      end

      should "attempt to connect" do
        TCPSocket.stubs(:new).returns(new_mock_socket)
        @connection = Mongo::Connection.from_uri("mongodb://localhost", :connect => false)
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @connection.expects(:[]).with('admin').returns(admin_db)
        @connection.connect
      end

      should "raise an error on invalid uris" do
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongo://localhost", :connect => false)
        end
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongodb://localhost:abc", :connect => false)
        end
      end

      should "require all of username, if password and db are specified" do
        assert Mongo::Connection.from_uri("mongodb://kyle:jones@localhost/db", :connect => false)
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongodb://kyle:password@localhost", :connect => false)
        end
      end
    end

    context "initializing with ENV['MONGODB_URI']" do
      setup do
        @old_mongodb_uri = ENV['MONGODB_URI']
      end

      teardown do
        ENV['MONGODB_URI'] = @old_mongodb_uri
      end

      should "parse a simple uri" do
        ENV['MONGODB_URI'] = "mongodb://localhost?connect=false"
        @connection = Mongo::Connection.new
        assert_equal ['localhost', 27017], @connection.host_port
      end

      should "allow complex host names" do
        host_name = "foo.bar-12345.org"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}?connect=false"
        @connection = Mongo::Connection.new
        assert_equal [host_name, 27017], @connection.host_port
      end

      should "allow db without username and password" do
        host_name = "foo.bar-12345.org"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}/foo?connect=false"
        @connection = Mongo::Connection.new
        assert_equal [host_name, 27017], @connection.host_port
      end

      should "set safe options on connection" do
        host_name = "localhost"
        opts = "safe=true&w=2&wtimeoutMS=1000&fsync=true&journal=true&connect=false"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}/foo?#{opts}"
        @connection = Mongo::Connection.new
        assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @connection.safe)
      end

      should "set timeout options on connection" do
        host_name = "localhost"
        opts = "connectTimeoutMS=1000&socketTimeoutMS=5000&connect=false"
        ENV['MONGODB_URI'] = "mongodb://#{host_name}/foo?#{opts}"
        @connection = Mongo::Connection.new
        assert_equal 1, @connection.connect_timeout
        assert_equal 5, @connection.op_timeout
      end

      should "parse a uri with a hyphen & underscore in the username or password" do
        ENV['MONGODB_URI'] = "mongodb://hyphen-user_name:p-s_s@localhost:27017/db?connect=false"
        @connection = Mongo::Connection.new
        assert_equal ['localhost', 27017], @connection.host_port
        auth_hash = {
          :db_name  => 'db',
          :username => 'hyphen-user_name',
          :password => 'p-s_s'
        }
        assert_equal auth_hash, @connection.auths[0]
      end

      should "attempt to connect" do
        TCPSocket.stubs(:new).returns(new_mock_socket)
        ENV['MONGODB_URI'] = "mongodb://localhost?connect=false" # connect=false ??
        @connection = Mongo::Connection.new
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @connection.expects(:[]).with('admin').returns(admin_db)
        @connection.connect
      end

      should "raise an error on invalid uris" do
        ENV['MONGODB_URI'] = "mongo://localhost"
        assert_raise MongoArgumentError do
          Mongo::Connection.new
        end
        ENV['MONGODB_URI'] = "mongodb://localhost:abc"
        assert_raise MongoArgumentError do
          Mongo::Connection.new
        end
      end

      should "require all of username, if password and db are specified" do
        ENV['MONGODB_URI'] = "mongodb://kyle:jones@localhost/db?connect=false"
        assert Mongo::Connection.new
        ENV['MONGODB_URI'] = "mongodb://kyle:password@localhost"
        assert_raise MongoArgumentError do
          Mongo::Connection.new
        end
      end
    end
  end
end

ruby-mongo-1.9.2/test/unit/cursor_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class CursorTest < Test::Unit::TestCase
  class Mongo::Cursor
    public :construct_query_spec
  end

  context "Cursor options" do
    setup do
      @logger = mock()
      @logger.stubs(:debug)
      @connection = stub(:class => MongoClient, :logger => @logger,
                         :slave_ok? => false, :read => :primary, :log_duration => false,
                         :tag_sets => [], :acceptable_latency => 10)
      @db = stub(:name => "testing", :slave_ok? => false, :connection => @connection,
                 :read => :primary, :tag_sets => [], :acceptable_latency => 10)
      @collection = stub(:db => @db, :name => "items", :read => :primary,
                         :tag_sets => [], :acceptable_latency => 10)
      @cursor = Cursor.new(@collection)
    end

    should "set timeout" do
      assert @cursor.timeout
      assert @cursor.query_options_hash[:timeout]
    end

    should "set selector" do
      assert_equal({}, @cursor.selector)
      @cursor = Cursor.new(@collection, :selector => {:name => "Jones"})
      assert_equal({:name => "Jones"}, @cursor.selector)
      assert_equal({:name => "Jones"}, @cursor.query_options_hash[:selector])
    end

    should "set fields" do
      assert_nil @cursor.fields
      @cursor = Cursor.new(@collection, :fields => [:name, :date])
      assert_equal({:name => 1, :date => 1}, @cursor.fields)
      assert_equal({:name => 1, :date => 1}, @cursor.query_options_hash[:fields])
    end

    should "set mix fields 0 and 1" do
      assert_nil @cursor.fields
      @cursor = Cursor.new(@collection, :fields => {:name => 1, :date => 0})
      assert_equal({:name => 1, :date => 0}, @cursor.fields)
      assert_equal({:name => 1, :date => 0}, @cursor.query_options_hash[:fields])
    end

    should "set limit" do
      assert_equal 0, @cursor.limit
      @cursor = Cursor.new(@collection, :limit => 10)
      assert_equal 10, @cursor.limit
      assert_equal 10, @cursor.query_options_hash[:limit]
    end

    should "set skip" do
      assert_equal 0, @cursor.skip
      @cursor = Cursor.new(@collection, :skip => 5)
      assert_equal 5, @cursor.skip
      assert_equal 5, @cursor.query_options_hash[:skip]
    end

    should "set sort order" do
      assert_nil @cursor.order
      @cursor = Cursor.new(@collection, :order => "last_name")
      assert_equal "last_name", @cursor.order
      assert_equal "last_name", @cursor.query_options_hash[:order]
    end

    should "set hint" do
      assert_nil @cursor.hint
      @cursor = Cursor.new(@collection, :hint => "name")
      assert_equal "name", @cursor.hint
      assert_equal "name", @cursor.query_options_hash[:hint]
    end

    should "set comment" do
      assert_nil @cursor.comment
      @cursor = Cursor.new(@collection, :comment => "comment")
      assert_equal "comment", @cursor.comment
      assert_equal "comment", @cursor.query_options_hash[:comment]
    end

    should "cache full collection name" do
      assert_equal "testing.items", @cursor.full_collection_name
    end

    should "raise error when batch_size is 1" do
      e = assert_raise ArgumentError do
        @cursor.batch_size(1)
      end
      assert_equal "Invalid value for batch_size 1; must be 0 or > 1.", e.message
    end

    should "use the limit for batch size when it's smaller than the specified batch_size" do
      @cursor.limit(99)
      @cursor.batch_size(100)
      assert_equal 99, @cursor.batch_size
    end

    should "use the specified batch_size" do
      @cursor.batch_size(100)
      assert_equal 100, @cursor.batch_size
    end

    context "connected to mongos" do
      setup do
        @connection.stubs(:mongos?).returns(true)
        @tag_sets = [{:dc => "ny"}]
      end

      should "set $readPreference" do
        # secondary
        cursor = Cursor.new(@collection, { :read => :secondary })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'secondary', spec['$readPreference'][:mode]
        assert !spec['$readPreference'].has_key?(:tags)

        # secondary preferred with tags
        cursor = Cursor.new(@collection, { :read => :secondary_preferred, :tag_sets => @tag_sets })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'secondaryPreferred', spec['$readPreference'][:mode]
        assert_equal @tag_sets, spec['$readPreference'][:tags]

        # primary preferred
        cursor = Cursor.new(@collection, { :read => :primary_preferred })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'primaryPreferred', spec['$readPreference'][:mode]
        assert !spec['$readPreference'].has_key?(:tags)

        # primary preferred with tags
        cursor = Cursor.new(@collection, { :read => :primary_preferred, :tag_sets => @tag_sets })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'primaryPreferred', spec['$readPreference'][:mode]
        assert_equal @tag_sets, spec['$readPreference'][:tags]

        # nearest
        cursor = Cursor.new(@collection, { :read => :nearest })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'nearest', spec['$readPreference'][:mode]
        assert !spec['$readPreference'].has_key?(:tags)

        # nearest with tags
        cursor = Cursor.new(@collection, { :read => :nearest, :tag_sets => @tag_sets })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'nearest', spec['$readPreference'][:mode]
        assert_equal @tag_sets, spec['$readPreference'][:tags]
      end

      should "not set $readPreference" do
        # for primary
        cursor = Cursor.new(@collection, { :read => :primary, :tag_sets => @tag_sets })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        # for secondary_preferred with no tags
        cursor = Cursor.new(@collection, { :read => :secondary_preferred })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :secondary_preferred, :tag_sets => [] })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :secondary_preferred, :tag_sets => nil })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
      end
    end

    context "not connected to mongos" do
      setup do
        @connection.stubs(:mongos?).returns(false)
      end

      should "not set $readPreference" do
        cursor = Cursor.new(@collection, { :read => :primary })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :primary_preferred })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :secondary })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :secondary_preferred })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :nearest })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        cursor = Cursor.new(@collection, { :read => :secondary, :tag_sets => @tag_sets })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
      end
    end
  end

  context "Query fields" do
    setup do
      @logger = mock()
      @logger.stubs(:debug)
      @connection = stub(:class => MongoClient, :logger => @logger, :slave_ok? => false,
                         :log_duration => false, :tag_sets => {}, :acceptable_latency => 10)
      @db = stub(:slave_ok? => true, :name => "testing", :connection => @connection,
                 :tag_sets => {}, :acceptable_latency => 10)
      @collection = stub(:db => @db, :name => "items", :read => :primary,
                         :tag_sets => {}, :acceptable_latency => 10)
    end

    should "when an array should return a hash with each key" do
      @cursor = Cursor.new(@collection, :fields => [:name, :age])
      result = @cursor.fields
      assert_equal result.keys.sort{|a,b| a.to_s <=> b.to_s}, [:age, :name].sort{|a,b| a.to_s <=> b.to_s}
      assert result.values.all? {|v| v == 1}
    end

    should "when a string, return a hash with just the key" do
      @cursor = Cursor.new(@collection, :fields => "name")
      result = @cursor.fields
      assert_equal result.keys.sort, ["name"]
      assert result.values.all? {|v| v == 1}
    end

    should "return nil when neither hash nor string nor symbol" do
      @cursor = Cursor.new(@collection, :fields => 1234567)
      assert_nil @cursor.fields
    end
  end

  context "counts" do
    setup do
      @logger = mock()
      @logger.stubs(:debug)
      @connection = stub(:class => Connection, :logger => @logger, :slave_ok? => false,
                         :read => :primary, :log_duration => false,
                         :tag_sets => {}, :acceptable_latency => 10)
      @db = stub(:name => "testing", :slave_ok? => false, :connection => @connection,
                 :read => :primary, :tag_sets => {}, :acceptable_latency => 10)
      @collection = stub(:db => @db, :name => "items", :read => :primary,
                         :tag_sets => {}, :acceptable_latency => 10)
      @cursor = Cursor.new(@collection)
    end

    should "pass the comment parameter" do
      query = {:field => 7}
      @db.expects(:command).with({ 'count' => "items", 'query' => query, 'fields' => nil},
                                 { :read => :primary, :comment => "my comment"}).
         returns({'ok' => 1, 'n' => 1})
      assert_equal(1, Cursor.new(@collection, :selector => query, :comment => 'my comment').count())
    end
  end
end

ruby-mongo-1.9.2/test/unit/db_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

def insert_message(db, documents)
  documents = [documents] unless documents.is_a?(Array)
  message = ByteBuffer.new
  message.put_int(0)
  Mongo::BSON_CODER.serialize_cstr(message, "#{db.name}.test")
  documents.each { |doc| message.put_array(Mongo::BSON_CODER.new.serialize(doc, true).to_a) }
  message = db.add_message_headers(Mongo::Constants::OP_INSERT, message)
end

class DBTest < Test::Unit::TestCase
  context "DBTest: " do
    context "DB commands" do
      setup do
        @client = stub()
        @client.stubs(:write_concern).returns({})
        @client.stubs(:read).returns(:primary)
        @client.stubs(:tag_sets)
        @client.stubs(:acceptable_latency)
        @db = DB.new("testing", @client)
        @db.stubs(:safe)
        @db.stubs(:read)
        @db.stubs(:tag_sets)
        @db.stubs(:acceptable_latency)
        @collection = mock()
        @db.stubs(:system_command_collection).returns(@collection)
      end

      should "raise an error if given a hash with more than one key" do
        if RUBY_VERSION < '1.9'
          assert_raise MongoArgumentError do
            @db.command(:buildinfo => 1, :somekey => 1)
          end
        end
      end

      should "raise an error if the selector is omitted" do
        assert_raise MongoArgumentError do
          @db.command({}, :check_response => true)
        end
      end

      should "create the proper cursor" do
        @cursor = mock(:next_document => {"ok" => 1})
        Cursor.expects(:new).with(@collection,
          :limit => -1, :selector => {:buildinfo => 1},
          :socket => nil, :read => nil, :comment => nil).returns(@cursor)
        command = {:buildinfo => 1}
        @db.command(command, :check_response => true)
      end

      should "raise an error when the command fails" do
        @cursor = mock(:next_document => {"ok" => 0})
        Cursor.expects(:new).with(@collection,
          :limit => -1, :selector => {:buildinfo => 1},
          :socket => nil, :read => nil, :comment => nil).returns(@cursor)
        assert_raise OperationFailure do
          command = {:buildinfo => 1}
          @db.command(command, :check_response => true)
        end
      end

      should "pass on the comment" do
        @cursor = mock(:next_document => {"ok" => 0})
        Cursor.expects(:new).with(@collection,
          :limit => -1, :selector => {:buildinfo => 1},
          :socket => nil, :read => nil, :comment => "my comment").returns(@cursor)
        assert_raise OperationFailure do
          command = {:buildinfo => 1}
          @db.command(command, :check_response => true, :comment => 'my comment')
        end
      end

      should "raise an error if logging out fails" do
        @db.expects(:command).returns({})
        @client.expects(:auths).returns([])
        assert_raise Mongo::MongoDBError do
          @db.logout
        end
      end

      should "raise an error if collection creation fails" do
        @db.expects(:command).returns({'ok' => 0})
        assert_raise Mongo::MongoDBError do
          @db.create_collection("foo")
        end
      end

      should "raise an error if getlasterror fails" do
        @db.expects(:command).returns({})
        assert_raise Mongo::MongoDBError do
          @db.get_last_error
        end
      end

      should "raise an error if drop_index fails" do
        @db.expects(:command).returns({})
        assert_raise Mongo::MongoDBError do
          @db.drop_index("foo", "bar")
        end
      end

      should "raise an error if set_profiling_level fails" do
        @db.expects(:command).returns({})
        assert_raise Mongo::MongoDBError do
          @db.profiling_level = :slow_only
        end
      end
    end
  end
end

ruby-mongo-1.9.2/test/unit/grid_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class GridTest < Test::Unit::TestCase
  context "GridFS: " do
    setup do
      @client = stub()
      @client.stubs(:write_concern).returns({})
      @client.stubs(:read).returns(:primary)
      @client.stubs(:tag_sets)
      @client.stubs(:acceptable_latency)
      @db = DB.new("testing", @client)
      @files = mock()
      @chunks = mock()
      @db.stubs(:[]).with('fs.files').returns(@files)
      @db.stubs(:[]).with('fs.chunks').returns(@chunks)
      @db.stubs(:safe)
      @db.stubs(:read).returns(:primary)
    end

    context "Grid classes with standard connections" do
      setup do
        @chunks.expects(:ensure_index)
      end

      should "create indexes for Grid" do
        Grid.new(@db)
      end

      should "create indexes for GridFileSystem" do
        @files.expects(:ensure_index)
        GridFileSystem.new(@db)
      end
    end

    context "Grid classes with slave connection" do
      setup do
        @chunks.stubs(:ensure_index).raises(Mongo::ConnectionFailure)
        @files.stubs(:ensure_index).raises(Mongo::ConnectionFailure)
      end

      should "not create indexes for Grid" do
        grid = Grid.new(@db)
        data = "hello world!"
        assert_raise Mongo::ConnectionFailure do
          grid.put(data)
        end
      end

      should "not create indexes for GridFileSystem" do
        gridfs = GridFileSystem.new(@db)
        data = "hello world!"
        assert_raise Mongo::ConnectionFailure do
          gridfs.open('image.jpg', 'w') do |f|
            f.write data
          end
        end
      end
    end
  end
end

ruby-mongo-1.9.2/test/unit/mongo_sharded_client_test.rb

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require "test_helper"

class MongoShardedClientTest < Test::Unit::TestCase
  include Mongo

  def setup
    ENV["MONGODB_URI"] = nil
  end

  def test_initialize_with_single_mongos_uri
    ENV["MONGODB_URI"] = "mongodb://localhost:27017"
    client = MongoShardedClient.new(:connect => false)
    assert_equal [["localhost", 27017]], client.seeds
  end

  def test_initialize_with_multiple_mongos_uris
    ENV["MONGODB_URI"] = "mongodb://localhost:27017,localhost:27018"
    client = MongoShardedClient.new(:connect => false)
    assert_equal [["localhost", 27017], ["localhost", 27018]], client.seeds
  end

  def test_from_uri_with_string
    client = MongoShardedClient.from_uri("mongodb://localhost:27017,localhost:27018",
                                         :connect => false)
    assert_equal [["localhost", 27017], ["localhost", 27018]], client.seeds
  end

  def test_from_uri_with_env_variable
    ENV["MONGODB_URI"] = "mongodb://localhost:27017,localhost:27018"
    client = MongoShardedClient.from_uri(nil, :connect => false)
    assert_equal [["localhost", 27017], ["localhost", 27018]], client.seeds
  end
end

# === ruby-mongo-1.9.2/test/unit/node_test.rb ===

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
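The seed-list parsing exercised by the sharded-client tests above can be sketched in a few lines. This is a simplified stand-in, not the driver's actual URI parser (which handles credentials, options, and validation); `parse_seeds` is a hypothetical name.

```ruby
# Simplified sketch of turning a mongodb:// URI into a seed list of
# [host, port] pairs, defaulting the port to 27017 as the tests expect.
def parse_seeds(uri, default_port = 27017)
  uri.sub(%r{\Amongodb://}, '').split(',').map do |host_spec|
    host, port = host_spec.split(':')
    [host, (port || default_port).to_i]
  end
end

parse_seeds('mongodb://localhost:27017,localhost:27018')
# => [["localhost", 27017], ["localhost", 27018]]
```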
require 'test_helper'

class NodeTest < Test::Unit::TestCase
  def setup
    @client = stub()
    manager = mock('pool_manager')
    manager.stubs(:update_max_sizes)
    @client.stubs(:local_manager).returns(manager)
  end

  should "refuse to connect to node without 'hosts' key" do
    tcp = mock()
    node = Node.new(@client, ['localhost', 27017])
    tcp.stubs(:new).returns(new_mock_socket)
    @client.stubs(:socket_class).returns(tcp)

    admin_db = new_mock_db
    admin_db.stubs(:command).returns({'ok' => 1, 'ismaster' => 1})
    @client.stubs(:[]).with('admin').returns(admin_db)
    @client.stubs(:op_timeout).returns(nil)
    @client.stubs(:connect_timeout).returns(nil)
    @client.expects(:log)
    @client.expects(:mongos?).returns(false)
    @client.stubs(:socket_opts)

    assert node.connect
    node.config
  end

  should "load a node from an array" do
    node = Node.new(@client, ['power.level.com', 9001])
    assert_equal 'power.level.com', node.host
    assert_equal 9001, node.port
    assert_equal 'power.level.com:9001', node.address
  end

  should "default the port for an array" do
    node = Node.new(@client, ['power.level.com'])
    assert_equal 'power.level.com', node.host
    assert_equal MongoClient::DEFAULT_PORT, node.port
    assert_equal "power.level.com:#{MongoClient::DEFAULT_PORT}", node.address
  end

  should "load a node from a string" do
    node = Node.new(@client, 'localhost:1234')
    assert_equal 'localhost', node.host
    assert_equal 1234, node.port
    assert_equal 'localhost:1234', node.address
  end

  should "default the port for a string" do
    node = Node.new(@client, '192.168.0.1')
    assert_equal '192.168.0.1', node.host
    assert_equal MongoClient::DEFAULT_PORT, node.port
    assert_equal "192.168.0.1:#{MongoClient::DEFAULT_PORT}", node.address
  end

  should "consider two nodes with the same address equal" do
    assert_equal Node.new(@client, '192.168.0.1'),
                 Node.new(@client, ['192.168.0.1', MongoClient::DEFAULT_PORT])
  end

  should "give two nodes with the same address the same hash" do
    assert_equal Node.new(@client, '192.168.0.1').hash,
                 Node.new(@client, ['192.168.0.1', MongoClient::DEFAULT_PORT]).hash
  end

  should "not consider two nodes with different addresses equal" do
    assert_not_equal Node.new(@client, '192.168.0.2'),
                     Node.new(@client, ['192.168.0.1', MongoClient::DEFAULT_PORT])
  end

  should "give two nodes with different addresses different hashes" do
    assert_not_equal Node.new(@client, '192.168.0.1').hash,
                     Node.new(@client, '1239.33.4.2393:29949').hash
  end
end

# === ruby-mongo-1.9.2/test/unit/pool_manager_test.rb ===

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
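The address normalization exercised by the node tests above (string vs. array input, default port) can be sketched as a tiny pure function. This is an illustration under the behavior the assertions describe, not the actual `Node` implementation; `normalize_address` is a hypothetical name.

```ruby
# Accept 'host:port', 'host', [host, port], or [host] and return a
# normalized [host, integer_port] pair, defaulting the port to 27017.
def normalize_address(node, default_port = 27017)
  host, port = node.is_a?(String) ? node.split(':') : node
  [host, (port || default_port).to_i]
end

normalize_address('localhost:1234')      # => ["localhost", 1234]
normalize_address(['power.level.com'])   # => ["power.level.com", 27017]
```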
require 'test_helper' include Mongo class PoolManagerTest < Test::Unit::TestCase context "Initialization: " do setup do TCPSocket.stubs(:new).returns(new_mock_socket) @db = new_mock_db @client = stub("MongoClient") @client.stubs(:connect_timeout).returns(5) @client.stubs(:op_timeout).returns(5) @client.stubs(:pool_size).returns(2) @client.stubs(:pool_timeout).returns(100) @client.stubs(:seeds).returns(['localhost:30000']) @client.stubs(:socket_class).returns(TCPSocket) @client.stubs(:mongos?).returns(false) @client.stubs(:[]).returns(@db) @client.stubs(:socket_opts) @client.stubs(:replica_set_name).returns(nil) @client.stubs(:log) @arbiters = ['localhost:27020'] @hosts = [ 'localhost:27017', 'localhost:27018', 'localhost:27019', 'localhost:27020' ] @ismaster = { 'hosts' => @hosts, 'arbiters' => @arbiters, 'maxMessageSizeBytes' => 1024 * 2.5, 'maxBsonObjectSize' => 1024 } end should "populate pools correctly" do @db.stubs(:command).returns( # First call to get a socket. @ismaster.merge({'ismaster' => true}), # Subsequent calls to configure pools. @ismaster.merge({'ismaster' => true}), @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 700}), @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 500}), @ismaster.merge({'arbiterOnly' => true}) ) seeds = [['localhost', 27017]] manager = Mongo::PoolManager.new(@client, seeds) @client.stubs(:local_manager).returns(manager) manager.connect assert_equal ['localhost', 27017], manager.primary assert_equal 27017, manager.primary_pool.port assert_equal 2, manager.secondaries.length assert_equal [27018, 27019], manager.secondary_pools.map(&:port).sort assert_equal [['localhost', 27020]], manager.arbiters assert_equal 500, manager.max_bson_size assert_equal 700 , manager.max_message_size end should "populate pools with single unqueryable seed" do @db.stubs(:command).returns( # First call to recovering node @ismaster.merge({'ismaster' => false, 'secondary' => false}), # Subsequent calls to configure pools. 
@ismaster.merge({'ismaster' => false, 'secondary' => false}), @ismaster.merge({'ismaster' => true}), @ismaster.merge({'secondary' => true}), @ismaster.merge({'arbiterOnly' => true}) ) seeds = [['localhost', 27017]] manager = PoolManager.new(@client, seeds) @client.stubs(:local_manager).returns(manager) manager.connect assert_equal ['localhost', 27018], manager.primary assert_equal 27018, manager.primary_pool.port assert_equal 1, manager.secondaries.length assert_equal 27019, manager.secondary_pools[0].port assert_equal [['localhost', 27020]], manager.arbiters end end end ruby-mongo-1.9.2/test/unit/pool_test.rb000066400000000000000000000013211221200727400201060ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' include Mongo class PoolTest < Test::Unit::TestCase context "Initialization: " do should "do" do end end end ruby-mongo-1.9.2/test/unit/read_pref_test.rb000066400000000000000000000020501221200727400210640ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReadPrefTest < Test::Unit::TestCase
  include ReadPreference

  def setup
    mock_pool = mock()
    mock_pool.stubs(:ping_time).returns(Pool::MAX_PING_TIME)
    stubs(:primary_pool).returns(mock_pool)
    stubs(:secondary_pools).returns([mock_pool])
    stubs(:pools).returns([mock_pool])
  end

  def test_select_pool
    ReadPreference::READ_PREFERENCES.each do |rp|
      assert select_pool({:mode => rp, :tags => [], :latency => 15})
    end
  end
end

# === ruby-mongo-1.9.2/test/unit/read_test.rb ===

# Copyright (C) 2013 10gen Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
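The latency-window idea behind `select_pool` above (a `:latency` option alongside the mode and tags) can be illustrated with a stand-in: candidates whose ping time falls within the best ping time plus the allowed latency are eligible, and one of them is then chosen. This is a simplified sketch, not the driver's `ReadPreference` code; the `Struct` and method name here are hypothetical.

```ruby
# Stand-in pool with only the attribute the selection cares about.
LatencyPool = Struct.new(:name, :ping_time)

# Return the pools eligible under a latency window: everything within
# (fastest ping + allowed latency) milliseconds.
def pools_in_window(pools, latency_ms)
  best = pools.map(&:ping_time).min
  pools.select { |p| p.ping_time <= best + latency_ms }
end

pools = [LatencyPool.new('a', 10), LatencyPool.new('b', 20), LatencyPool.new('c', 40)]
pools_in_window(pools, 15).map(&:name)
# => ["a", "b"]
```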
require 'test_helper' class ReadTest < Test::Unit::TestCase context "Read mode on standard connection: " do setup do @read = :secondary @client = MongoClient.new('localhost', 27017, :read => @read, :connect => false) end end context "Read preferences on replica set connection: " do setup do @read = :secondary_preferred @acceptable_latency = 100 @tags = {"dc" => "Tyler", "rack" => "Brock"} @bad_tags = {"wow" => "cool"} @client = MongoReplicaSetClient.new( ['localhost:27017'], :read => @read, :tag_sets => @tags, :secondary_acceptable_latency_ms => @acceptable_latency, :connect => false ) end should "store read preference on MongoClient" do assert_equal @read, @client.read assert_equal @tags, @client.tag_sets assert_equal @acceptable_latency, @client.acceptable_latency end should "propogate to DB" do db = @client['foo'] assert_equal @read, db.read assert_equal @tags, db.tag_sets assert_equal @acceptable_latency, db.acceptable_latency db = @client.db('foo') assert_equal @read, db.read assert_equal @tags, db.tag_sets assert_equal @acceptable_latency, db.acceptable_latency db = DB.new('foo', @client) assert_equal @read, db.read assert_equal @tags, db.tag_sets assert_equal @acceptable_latency, db.acceptable_latency end should "allow db override" do db = DB.new('foo', @client, :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25) assert_equal :primary, db.read assert_equal @bad_tags, db.tag_sets assert_equal 25, db.acceptable_latency db = @client.db('foo', :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25) assert_equal :primary, db.read assert_equal @bad_tags, db.tag_sets assert_equal 25, db.acceptable_latency end context "on DB: " do setup do @db = @client['foo'] end should "propogate to collection" do col = @db.collection('bar') assert_equal @read, col.read assert_equal @tags, col.tag_sets assert_equal @acceptable_latency, col.acceptable_latency col = @db['bar'] assert_equal @read, col.read assert_equal @tags, col.tag_sets assert_equal 
@acceptable_latency, col.acceptable_latency col = Collection.new('bar', @db) assert_equal @read, col.read assert_equal @tags, col.tag_sets assert_equal @acceptable_latency, col.acceptable_latency end should "allow override on collection" do col = @db.collection('bar', :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25) assert_equal :primary, col.read assert_equal @bad_tags, col.tag_sets assert_equal 25, col.acceptable_latency col = Collection.new('bar', @db, :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25) assert_equal :primary, col.read assert_equal @bad_tags, col.tag_sets assert_equal 25, col.acceptable_latency end end context "on read mode ops" do setup do @col = @client['foo']['bar'] @mock_socket = new_mock_socket end should "use default value on query" do @cursor = @col.find({:a => 1}) sock = new_mock_socket read_pool = stub(:checkin => true) @client.stubs(:read_pool).returns(read_pool) local_manager = PoolManager.new(@client, @client.seeds) @client.stubs(:local_manager).returns(local_manager) primary_pool = stub(:checkin => true) sock.stubs(:pool).returns(primary_pool) @client.stubs(:primary_pool).returns(primary_pool) @client.expects(:checkout_reader).returns(sock) @client.expects(:receive_message).with do |o, m, l, s, c, r| r == nil end.returns([[], 0, 0]) @cursor.next end should "allow override default value on query" do @cursor = @col.find({:a => 1}, :read => :primary) sock = new_mock_socket local_manager = PoolManager.new(@client, @client.seeds) @client.stubs(:local_manager).returns(local_manager) primary_pool = stub(:checkin => true) sock.stubs(:pool).returns(primary_pool) @client.stubs(:primary_pool).returns(primary_pool) @client.expects(:checkout_reader).returns(sock) @client.expects(:receive_message).with do |o, m, l, s, c, r| r == nil end.returns([[], 0, 0]) @cursor.next end should "allow override alternate value on query" do assert_raise MongoArgumentError do @col.find_one({:a => 1}, :read => {:dc => "ny"}) 
end end end end end ruby-mongo-1.9.2/test/unit/safe_test.rb000066400000000000000000000114131221200727400200560ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class SafeTest < Test::Unit::TestCase context "Write-Concern modes on Mongo::Connection " do setup do @safe_value = {:w => 7, :j => false, :fsync => false, :wtimeout => nil} @connection = Mongo::Connection.new('localhost', 27017, :safe => @safe_value, :connect => false) end should "propogate to DB" do db = @connection['foo'] assert_equal @safe_value[:w], db.write_concern[:w] db = @connection.db('foo') assert_equal @safe_value[:w], db.write_concern[:w] db = DB.new('foo', @connection) assert_equal @safe_value[:w], db.write_concern[:w] end should "allow db override" do db = DB.new('foo', @connection, :safe => false) assert_equal 0, db.write_concern[:w] db = @connection.db('foo', :safe => false) assert_equal 0, db.write_concern[:w] end context "on DB: " do setup do @db = @connection['foo'] end should "propogate to collection" do col = @db.collection('bar') assert_equal @safe_value, col.write_concern col = @db['bar'] assert_equal @safe_value, col.write_concern col = Collection.new('bar', @db) assert_equal @safe_value, col.write_concern end should "allow override on collection" do col = @db.collection('bar', :safe => false) assert_equal 0, col.write_concern[:w] col = Collection.new('bar', @db, :safe => false) assert_equal 0, col.write_concern[:w] end 
end context "on operations supporting safe mode" do setup do @col = @connection['foo']['bar'] end should "use default value on insert" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == @safe_value end @col.insert({:a => 1}) end should "allow override alternate value on insert" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == {:w => 100, :j => false, :fsync => false, :wtimeout => nil} end @col.insert({:a => 1}, :safe => {:w => 100}) end should "allow override to disable on insert" do @connection.expects(:send_message) @col.insert({:a => 1}, :safe => false) end should "use default value on update" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == @safe_value end @col.update({:a => 1}, {:a => 2}) end should "allow override alternate value on update" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == {:w => 100, :j => false, :fsync => false, :wtimeout => nil} end @col.update({:a => 1}, {:a => 2}, :safe => {:w => 100}) end should "allow override to disable on update" do @connection.expects(:send_message) @col.update({:a => 1}, {:a => 2}, :safe => false) end should "use default value on save" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == @safe_value end @col.save({:a => 1}) end should "allow override alternate value on save" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == @safe_value.merge(:w => 1) end @col.save({:a => 1}, :safe => true) end should "allow override to disable on save" do @connection.expects(:send_message) @col.save({:a => 1}, :safe => false) end should "use default value on remove" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe == @safe_value end @col.remove end should "allow override alternate value on remove" do @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe| safe 
== {:w => 100, :j => false, :fsync => false, :wtimeout => nil} end @col.remove({}, :safe => {:w => 100}) end should "allow override to disable on remove" do @connection.expects(:send_message) @col.remove({}, :safe => false) end end end end ruby-mongo-1.9.2/test/unit/sharding_pool_manager_test.rb000066400000000000000000000046101221200727400234630ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' include Mongo class ShardingPoolManagerTest < Test::Unit::TestCase context "Initialization: " do setup do TCPSocket.stubs(:new).returns(new_mock_socket) @db = new_mock_db @client = stub("MongoShardedClient") @client.stubs(:connect_timeout).returns(5) @client.stubs(:op_timeout).returns(5) @client.stubs(:pool_size).returns(2) @client.stubs(:pool_timeout).returns(100) @client.stubs(:socket_class).returns(TCPSocket) @client.stubs(:mongos?).returns(true) @client.stubs(:[]).returns(@db) @client.stubs(:socket_opts) @client.stubs(:replica_set_name).returns(nil) @client.stubs(:log) @arbiters = ['localhost:27020'] @hosts = [ 'localhost:27017', 'localhost:27018', 'localhost:27019' ] @ismaster = { 'hosts' => @hosts, 'arbiters' => @arbiters, 'maxMessageSizeBytes' => 1024 * 2.5, 'maxBsonObjectSize' => 1024 } end should "populate pools correctly" do @db.stubs(:command).returns( # First call to get a socket. @ismaster.merge({'ismaster' => true}), # Subsequent calls to configure pools. 
@ismaster.merge({'ismaster' => true}), @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 700}), @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 500}), @ismaster.merge({'arbiterOnly' => true}) ) seed = ['localhost:27017'] manager = Mongo::ShardingPoolManager.new(@client, seed) @client.stubs(:local_manager).returns(manager) manager.connect formatted_seed = ['localhost', 27017] assert manager.seeds.include? formatted_seed assert_equal 500, manager.max_bson_size assert_equal 700 , manager.max_message_size end end end ruby-mongo-1.9.2/test/unit/util_test.rb000066400000000000000000000047021221200727400201200ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require File.expand_path("../../test_helper", __FILE__) class UtilTest < Test::Unit::TestCase context "Support" do context ".secondary_ok?" 
do should "return false for mapreduces with a string for out" do assert_equal false, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'mapreduce', 'test-collection', 'out', 'new-test-collection' ]) end should "return false for mapreduces replacing a collection" do assert_equal false, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'mapreduce', 'test-collection', 'out', BSON::OrderedHash['replace', 'new-test-collection'] ]) end should "return false for mapreduces replacing the inline collection" do assert_equal false, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'mapreduce', 'test-collection', 'out', 'inline' ]) end should "return true for inline output mapreduces when inline is a symbol" do assert_equal true, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'mapreduce', 'test-collection', 'out', BSON::OrderedHash[:inline, 'true'] ]) end should "return true for inline output mapreduces when inline is a string" do assert_equal true, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'mapreduce', 'test-collection', 'out', BSON::OrderedHash['inline', 'true'] ]) end should 'return true for count' do assert_equal true, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'count', 'test-collection', 'query', BSON::OrderedHash['a', 'b'] ]) end should 'return false for serverStatus' do assert_equal false, Mongo::Support.secondary_ok?(BSON::OrderedHash[ 'serverStatus', 1 ]) end end end end ruby-mongo-1.9.2/test/unit/write_concern_test.rb000066400000000000000000000122061221200727400220020ustar00rootroot00000000000000# Copyright (C) 2013 10gen Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class WriteConcernTest < Test::Unit::TestCase context "Write-Concern modes on Mongo::MongoClient " do setup do @write_concern = { :w => 7, :j => false, :fsync => false, :wtimeout => nil } class Mongo::MongoClient public :build_get_last_error_message, :build_command_message end @client = MongoClient.new('localhost', 27017, @write_concern.merge({:connect => false})) end should "propogate to DB" do db = @client['foo'] assert_equal @write_concern, db.write_concern db = @client.db('foo') assert_equal @write_concern, db.write_concern db = DB.new('foo', @client) assert_equal @write_concern, db.write_concern end should "allow db override" do db = DB.new('foo', @client, :w => 0) assert_equal 0, db.write_concern[:w] db = @client.db('foo', :w => 0) assert_equal 0, db.write_concern[:w] end context "on DB: " do setup do @db = @client['foo'] end should "propogate to collection" do collection = @db.collection('bar') assert_equal @write_concern, collection.write_concern collection = @db['bar'] assert_equal @write_concern, collection.write_concern collection = Collection.new('bar', @db) assert_equal @write_concern, collection.write_concern end should "allow override on collection" do collection = @db.collection('bar', :w => 0) assert_equal 0, collection.write_concern[:w] collection = Collection.new('bar', @db, :w => 0) assert_equal 0, collection.write_concern[:w] end end context "on operations supporting 'gle' mode" do setup do @collection = @client['foo']['bar'] end should "not send w = 1 to the server" do gle = @client.build_get_last_error_message("fake", {:w => 1}) assert_equal gle, @client.build_command_message("fake", {:getlasterror => 1}) end should "use default value on insert" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == @write_concern end @collection.insert({:a => 1}) end should "allow override alternate value on 
insert" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == {:w => 100, :j => false, :fsync => false, :wtimeout => nil} end @collection.insert({:a => 1}, {:w => 100}) end should "allow override to disable on insert" do @client.expects(:send_message) @collection.insert({:a => 1}, :w => 0) end should "use default value on update" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == @write_concern end @collection.update({:a => 1}, {:a => 2}) end should "allow override alternate value on update" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == {:w => 100, :j => false, :fsync => false, :wtimeout => nil} end @collection.update({:a => 1}, {:a => 2}, {:w => 100}) end should "allow override to disable on update" do @client.expects(:send_message) @collection.update({:a => 1}, {:a => 2}, :w => 0) end should "use default value on save" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == @write_concern end @collection.save({:a => 1}) end should "allow override alternate value on save" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == @write_concern.merge(:w => 1) end @collection.save({:a => 1}, :w => 1) end should "allow override to disable on save" do @client.expects(:send_message) @collection.save({:a => 1}, :w => 0) end should "use default value on remove" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == @write_concern end @collection.remove end should "allow override alternate value on remove" do @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc| wc == {:w => 100, :j => false, :fsync => false, :wtimeout => nil} end @collection.remove({}, {:w => 100}) end should "allow override to disable on remove" do @client.expects(:send_message) @collection.remove({}, :w => 0) end end end end