ruby-mongo-1.10.0/LICENSE

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

   "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

   "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

   "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

   "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

   "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

   "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

   "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
   "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

   "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

   "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License.
   Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

   (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

   (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

   (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

   (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the
   Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

   You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty.
   Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   Copyright (C) 2008-2013 MongoDB, Inc.
   Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

ruby-mongo-1.10.0/README.md

MongoDB Ruby Driver [![Build Status][travis-img]][travis-url] [![Code Climate][codeclimate-img]][codeclimate-url] [![Coverage Status][coveralls-img]][coveralls-url] [![Gem Version][rubygems-img]][rubygems-url]
-----

The officially supported Ruby driver for [MongoDB](http://www.mongodb.org).

Installation
-----

**Gem Installation**
The Ruby driver is released and distributed through RubyGems and can be installed with the following command:

```bash
gem install mongo
```

For a significant performance boost, you'll want to install the C extension:

```bash
gem install bson_ext
```

**GitHub Installation**
For development and test environments (not recommended for production) you can also install the Ruby driver directly from source:

```bash
# clone the repository
git clone https://github.com/mongodb/mongo-ruby-driver.git
cd mongo-ruby-driver

# checkout a specific version by tag (optional)
git checkout 1.x.x

# install all development dependencies
gem install bundler
bundle install

# install the ruby driver
rake install
```

Usage
-----

Here is a quick example of basic usage for the Ruby driver:

```ruby
require 'mongo'
include Mongo

# connecting to the database
client = MongoClient.new # defaults to localhost:27017
db     = client['example-db']
coll   = db['example-collection']

# inserting documents
10.times { |i| coll.insert({ :count => i + 1 }) }

# finding documents
puts "There are #{coll.count} total documents. Here they are:"
coll.find.each { |doc| puts doc.inspect }

# updating documents
coll.update({ :count => 5 }, { :count => 'foobar' })

# removing documents
coll.remove({ :count => 8 })
coll.remove
```

Wiki - Tutorials & Examples
-----

For many more usage examples and a full tutorial, please visit our [wiki](https://github.com/mongodb/mongo-ruby-driver/wiki).
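The example above needs a running `mongod` to try out. The document-building logic itself can be sketched without a server by using a plain Array as a stand-in for the collection (the Array and the `matching` variable here are illustrative only, not driver API):

```ruby
# A plain Array standing in for 'example-collection' (no server needed).
coll = []

# same shape as the insert loop above
10.times { |i| coll << { :count => i + 1 } }

# roughly what coll.find(:count => 5) would match
matching = coll.select { |doc| doc[:count] == 5 }

puts coll.size       # => 10
puts matching.first  # => {:count=>5}
```

Each driver call in the real example maps onto a similar enumeration over documents, with the query selector acting like the block passed to `select`.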
API Reference Documentation
-----

For API reference documentation, please visit [here](http://api.mongodb.org/ruby).

Compatibility
-----

The MongoDB Ruby driver requires Ruby 1.8.7 or greater and is regularly tested against the platforms and environments listed below.

Ruby Platforms | Operating Systems | Architectures
-------------- | ----------------- | -------------
MRI 1.8.7, 1.9.3, 2.0.0<br>JRuby 1.7.x | Windows<br>Linux<br>OS X | x86<br>x64<br>ARM

Support & Feedback
-----

For issues, questions or feedback related to the Ruby driver, please look into our [support channels](http://www.mongodb.org/about/support). Please do not email any of the Ruby developers directly with issues or questions - you're more likely to get an answer quickly on the [mongodb-user list](http://groups.google.com/group/mongodb-user) on Google Groups.

Bugs & Feature Requests
-----

Do you have a bug to report or a feature request to make?

1. Visit [our issue tracker](https://jira.mongodb.org) and login (or create an account if necessary).
2. Navigate to the [RUBY](https://jira.mongodb.org/browse/RUBY) project.
3. Click 'Create Issue' and fill out all the applicable form fields.

When reporting an issue, please keep in mind that all information in JIRA for all driver projects (ex. RUBY, CSHARP, JAVA) and the Core Server (ex. SERVER) project is **PUBLICLY** visible.

**PLEASE DO**

* Provide as much information as possible about the issue.
* Provide detailed steps for reproducing the issue.
* Provide any applicable code snippets, stack traces and log data.
* Specify version information for the driver and MongoDB.

**PLEASE DO NOT**

* Provide any sensitive data or server logs.
* Report potential security issues publicly (see 'Security Issues').

Security Issues
-----

If you've identified a potential security related issue in a driver or any other MongoDB project, please report it by following the [instructions here](http://docs.mongodb.org/manual/tutorial/create-a-vulnerability-report).

Release History
-----

Full release notes and release history are available [here](https://github.com/mongodb/mongo-ruby-driver/releases).

License
-----

Copyright (C) 2009-2013 MongoDB, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

[rubygems-img]: https://badge.fury.io/rb/mongo.png
[rubygems-url]: http://badge.fury.io/rb/mongo
[travis-img]: https://secure.travis-ci.org/mongodb/mongo-ruby-driver.png?branch=1.x-stable
[travis-url]: http://travis-ci.org/mongodb/mongo-ruby-driver?branch=1.x-stable
[codeclimate-img]: https://codeclimate.com/github/mongodb/mongo-ruby-driver.png?branch=1.x-stable
[codeclimate-url]: https://codeclimate.com/github/mongodb/mongo-ruby-driver?branch=1.x-stable
[coveralls-img]: https://coveralls.io/repos/mongodb/mongo-ruby-driver/badge.png?branch=1.x-stable
[coveralls-url]: https://coveralls.io/r/mongodb/mongo-ruby-driver?branch=1.x-stable

ruby-mongo-1.10.0/Rakefile

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'rubygems'

begin
  require 'bundler'
rescue LoadError
  raise '[FAIL] Bundler not found! Install it with `gem install bundler && bundle`.'
end

rake_tasks = Dir.glob(File.join('tasks', '**', '*.rake')).sort

if ENV.keys.any? { |k| k.end_with?('_CI') }
  Bundler.require(:default, :testing)
  rake_tasks.reject! { |r| r =~ /deploy/ }
else
  Bundler.require(:default, :testing, :deploy, :development)
end

rake_tasks.each { |rake| load File.expand_path(rake) }

ruby-mongo-1.10.0/VERSION

1.10.0

ruby-mongo-1.10.0/bin/mongo_console

#!/usr/bin/env ruby

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

org_argv = ARGV.dup
ARGV.clear

$LOAD_PATH[0,0] = File.join(File.dirname(__FILE__), '..', 'lib')
require 'mongo'

include Mongo

host = org_argv[0] || ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost'
port = org_argv[1] || ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT
dbnm = org_argv[2] || ENV['MONGO_RUBY_DRIVER_DB']   || 'ruby-mongo-console'

puts "Connecting to #{host}:#{port} (CLIENT) with database #{dbnm} (DB)"
CLIENT = MongoClient.new(host, port)
DB = CLIENT.db(dbnm)

# try pry if available, fall back to irb
begin
  require 'pry'
  CONSOLE_CLASS = Pry
rescue LoadError
  require 'irb'
  CONSOLE_CLASS = IRB
end

puts "Starting #{CONSOLE_CLASS.name} session..."
CONSOLE_CLASS.start(__FILE__)

ruby-mongo-1.10.0/lib/mongo.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
  ASCENDING   =  1
  DESCENDING  = -1
  GEO2D       = '2d'
  GEO2DSPHERE = '2dsphere'
  GEOHAYSTACK = 'geoHaystack'
  TEXT        = 'text'
  HASHED      = 'hashed'

  INDEX_TYPES = {
    'ASCENDING'   => ASCENDING,
    'DESCENDING'  => DESCENDING,
    'GEO2D'       => GEO2D,
    'GEO2DSPHERE' => GEO2DSPHERE,
    'GEOHAYSTACK' => GEOHAYSTACK,
    'TEXT'        => TEXT,
    'HASHED'      => HASHED
  }

  DEFAULT_MAX_BSON_SIZE = 4 * 1024 * 1024
  MESSAGE_SIZE_FACTOR   = 2

  module Constants
    OP_REPLY        = 1
    OP_MSG          = 1000
    OP_UPDATE       = 2001
    OP_INSERT       = 2002
    OP_QUERY        = 2004
    OP_GET_MORE     = 2005
    OP_DELETE       = 2006
    OP_KILL_CURSORS = 2007

    OP_QUERY_TAILABLE          = 2 ** 1
    OP_QUERY_SLAVE_OK          = 2 ** 2
    OP_QUERY_OPLOG_REPLAY      = 2 ** 3
    OP_QUERY_NO_CURSOR_TIMEOUT = 2 ** 4
    OP_QUERY_AWAIT_DATA        = 2 ** 5
    OP_QUERY_EXHAUST           = 2 ** 6
    OP_QUERY_PARTIAL           = 2 ** 7

    REPLY_CURSOR_NOT_FOUND   = 2 ** 0
    REPLY_QUERY_FAILURE      = 2 ** 1
    REPLY_SHARD_CONFIG_STALE = 2 ** 2
    REPLY_AWAIT_CAPABLE      = 2 ** 3
  end

  module ErrorCode # MongoDB Core Server src/mongo/base/error_codes.err
    BAD_VALUE                = 2
    UNKNOWN_ERROR            = 8
    INVALID_BSON             = 22
    COMMAND_NOT_FOUND        = 59
    WRITE_CONCERN_FAILED     = 64
    MULTIPLE_ERRORS_OCCURRED = 65
  end
end

require 'bson'

require 'set'
require 'thread'

require 'mongo/utils'
require 'mongo/exception'
require 'mongo/functional'
require 'mongo/connection'
require 'mongo/collection_writer'
require 'mongo/collection'
require 'mongo/bulk_write_collection_view'
require 'mongo/cursor'
require 'mongo/db'
require 'mongo/gridfs'
require 'mongo/networking'
require 'mongo/mongo_client'
require 'mongo/mongo_replica_set_client'
require 'mongo/mongo_sharded_client'
require 'mongo/legacy'

ruby-mongo-1.10.0/lib/mongo/bulk_write_collection_view.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo

  # A bulk write view to a collection of documents in a database.
  class BulkWriteCollectionView
    include Mongo::WriteConcern

    DEFAULT_OP_ARGS = {:q => nil}
    MULTIPLE_ERRORS_MSG = "batch item errors occurred"
    EMPTY_BATCH_MSG = "batch is empty"

    attr_reader :collection, :options, :ops, :op_args

    # Initialize a bulk-write-view object to a collection with default query selector {}.
    #
    # A bulk write operation is initialized from a collection object.
    # For example, for an ordered bulk write view:
    #
    #   bulk = collection.initialize_ordered_bulk_op
    #
    # or for an unordered bulk write view:
    #
    #   bulk = collection.initialize_unordered_bulk_op
    #
    # The bulk write view collects individual write operations together so that they can be
    # executed as a batch for significant performance gains.
    # The ordered bulk operation will execute each operation serially in order.
    # Execution will stop at the first occurrence of an error for an ordered bulk operation.
    # The unordered bulk operation will be executed and may take advantage of parallelism.
    # There are no guarantees for the order of execution of the operations on the server.
    # Execution will continue even if there are errors for an unordered bulk operation.
    #
    # A bulk operation is programmed as a sequence of individual operations.
    # An individual operation is composed of a method chain of modifiers or setters terminated by a write method.
    # A modify method sets a value on the current object.
    # A set method returns a duplicate of the current object with a value set.
    # A terminator write method appends a write operation to the bulk batch collected in the view.
    #
    # The API supports mixing of write operation types in a bulk operation.
    # However, server support affects the implementation and performance of bulk operations.
    #
    # MongoDB version 2.6 servers currently support only bulk commands of the same type.
    # With an ordered bulk operation,
    # contiguous individual ops of the same type can be batched into the same db request,
    # and the next op of a different type must be sent separately in the next request.
    # Performance will improve if you can arrange your ops to reduce the number of db requests.
    # With an unordered bulk operation,
    # individual ops can be grouped by type and sent in at most three requests,
    # one each per insert, update, or delete.
    #
    # MongoDB pre-version 2.6 servers do not support bulk write commands.
    # The bulk operation must be sent one request per individual op.
    # This also applies to inserts in order to have accurate counts and error reporting.
    #
    # Important note on pre-2.6 performance:
    # Performance is very poor compared to version 2.6.
    # We recommend bulk operation with pre-2.6 only for compatibility or
    # for development in preparation for version 2.6.
    # For better performance with pre-version 2.6, use bulk insertion with Collection#insert.
    #
    # @param [Collection] collection the parent collection object
    #
    # @option opts [Boolean] :ordered (true) Set bulk execution for ordered or unordered
    #
    # @return [BulkWriteCollectionView]
    def initialize(collection, options = {})
      @collection = collection
      @options = options
      @ops = []
      @op_args = DEFAULT_OP_ARGS.dup
    end

    def inspect
      vars = [:@options, :@ops, :@op_args]
      vars_inspect = vars.collect{|var| "#{var}=#{instance_variable_get(var).inspect}"}
      "#<#{self.class}:0x#{object_id} #{vars_inspect.join(', ')}>"
    end

    # Modify the query selector for subsequent bulk write operations.
    # The default query selector on creation of the bulk write view is {}.
    # For operations that require a query selector, find() must be set
    # per operation, or set once for all operations on the bulk object.
    # For example, these operations:
    #
    #   bulk.find({"a" => 2}).update({"$inc" => {"x" => 2}})
    #   bulk.find({"a" => 2}).update({"$set" => {"b" => 3}})
    #
    # may be rewritten as:
    #
    #   bulk = find({"a" => 2})
    #   bulk.update({"$inc" => {"x" => 2}})
    #   bulk.update({"$set" => {"b" => 3}})
    #
    # Note that modifying the query selector in this way will not affect
    # operations that do not use a query selector, like insert().
    #
    # @param [Hash] q the query selector
    #
    # @return [BulkWriteCollectionView]
    def find(q)
      op_args_set(:q, q)
    end

    # Modify the upsert option argument for subsequent bulk write operations.
    #
    # @param [Boolean] value (true) the upsert option value
    #
    # @return [BulkWriteCollectionView]
    def upsert!(value = true)
      op_args_set(:upsert, value)
    end

    # Set the upsert option argument for subsequent bulk write operations.
    #
    # @param [Boolean] value (true) the upsert option value
    #
    # @return [BulkWriteCollectionView] a duplicated object
    def upsert(value = true)
      dup.upsert!(value)
    end

    # Update one document matching the selector.
    #
    #   bulk.find({"a" => 1}).update_one({"$inc" => {"x" => 1}})
    #
    # Use the upsert! or upsert method to specify an upsert. For example:
    #
    #   bulk.find({"a" => 1}).upsert.update_one({"$inc" => {"x" => 1}})
    #
    # @param [Hash] u the update document
    #
    # @return [BulkWriteCollectionView]
    def update_one(u)
      raise MongoArgumentError, "document must start with an operator" unless update_doc?(u)
      op_push([:update, @op_args.merge(:u => u, :multi => false)])
    end

    # Update all documents matching the selector. For example:
    #
    #   bulk.find({"a" => 2}).update({"$inc" => {"x" => 2}})
    #
    # Use the upsert! or upsert method to specify an upsert.
    # For example:
    #
    #   bulk.find({"a" => 2}).upsert.update({"$inc" => {"x" => 2}})
    #
    # @param [Hash] u the update document
    #
    # @return [BulkWriteCollectionView]
    def update(u)
      raise MongoArgumentError, "document must start with an operator" unless update_doc?(u)
      op_push([:update, @op_args.merge(:u => u, :multi => true)])
    end

    # Replace entire document (update with whole doc replace). For example:
    #
    #   bulk.find({"a" => 3}).replace_one({"x" => 3})
    #
    # @param [Hash] u the replacement document
    #
    # @return [BulkWriteCollectionView]
    def replace_one(u)
      raise MongoArgumentError, "document must not contain any operators" unless replace_doc?(u)
      op_push([:update, @op_args.merge(:u => u, :multi => false)])
    end

    # Remove a single document matching the selector. For example:
    #
    #   bulk.find({"a" => 4}).remove_one
    #
    # @return [BulkWriteCollectionView]
    def remove_one
      op_push([:delete, @op_args.merge(:limit => 1)])
    end

    # Remove all documents matching the selector. For example:
    #
    #   bulk.find({"a" => 5}).remove
    #
    # @return [BulkWriteCollectionView]
    def remove
      op_push([:delete, @op_args.merge(:limit => 0)])
    end

    # Insert a document. For example:
    #
    #   bulk.insert({"x" => 4})
    #
    # @return [BulkWriteCollectionView]
    def insert(document)
      # TODO - check keys
      op_push([:insert, {:d => document}])
    end

    # Execute the bulk operation, with an optional write concern overwriting the default w:1.
    # For example:
    #
    #   write_concern = {:w => 1, :j => 1}
    #   bulk.execute(write_concern)
    #
    # On return from execute, the bulk operation is cleared,
    # but the selector and upsert settings are preserved.
    #
    # @return [BulkWriteCollectionView]
    def execute(opts = {})
      raise MongoArgumentError, EMPTY_BATCH_MSG if @ops.empty?
write_concern = get_write_concern(opts, @collection) @ops.each_with_index{|op, index| op.last.merge!(:ord => index)} # infuse ordinal here to avoid issues with upsert if @collection.db.connection.use_write_command?(write_concern) errors, write_concern_errors, exchanges = @collection.command_writer.bulk_execute(@ops, @options, opts) else errors, write_concern_errors, exchanges = @collection.operation_writer.bulk_execute(@ops, @options, opts) end @ops = [] return true if errors.empty? && (exchanges.empty? || exchanges.first[:response] == true) # w 0 without GLE result = merge_result(errors + write_concern_errors, exchanges) raise BulkWriteError.new(MULTIPLE_ERRORS_MSG, Mongo::ErrorCode::MULTIPLE_ERRORS_OCCURRED, result) if !errors.empty? || !write_concern_errors.empty? result end private def hash_except(h, *keys) keys.each { |key| h.delete(key) } h end def hash_select(h, *keys) Hash[*keys.zip(h.values_at(*keys)).flatten] end def tally(h, key, n) h[key] = h.fetch(key, 0) + n end def nil_tally(h, key, n) if !h.has_key?(key) h[key] = n elsif h[key] h[key] = n ? h[key] + n : n end end def append(h, key, obj) h[key] = h.fetch(key, []) << obj end def concat(h, key, a) h[key] = h.fetch(key, []) + a end def merge_index(h, exchange) h.merge("index" => exchange[:batch][h.fetch("index", 0)][:ord]) end def merge_indexes(a, exchange) a.collect{|h| merge_index(h, exchange)} end def merge_result(errors, exchanges) ok = 0 result = {"ok" => 0, "n" => 0} unless errors.empty? unless (writeErrors = errors.select { |error| error.class != Mongo::OperationFailure && error.class != WriteConcernError }).empty? 
          # assignment
          concat(result, "writeErrors",
                 writeErrors.collect do |error|
                   {"index"  => error.result[:ord],
                    "code"   => error.error_code,
                    "errmsg" => error.result[:error].message}
                 end)
        end
        result.merge!("code" => Mongo::ErrorCode::MULTIPLE_ERRORS_OCCURRED, "errmsg" => MULTIPLE_ERRORS_MSG)
      end

      exchanges.each do |exchange|
        response = exchange[:response]
        next unless response
        ok += response["ok"].to_i
        n = response["n"] || 0
        op_type = exchange[:op_type]
        if op_type == :insert
          # OP_INSERT override: n = 0 bug, n = exchange[:batch].size, always 1
          n = 1 if response.key?("err") && (response["err"].nil? || response["err"] == "norepl" || response["err"] == "timeout")
          tally(result, "nInserted", n)
        elsif op_type == :update
          n_upserted = 0
          if (upserted = response.fetch("upserted", nil)) # assignment
            upserted = [{"_id" => upserted}] if upserted.class == BSON::ObjectId # OP_UPDATE non-array
            n_upserted = upserted.size
            concat(result, "upserted", merge_indexes(upserted, exchange))
          end
          tally(result, "nUpserted", n_upserted) if n_upserted > 0
          tally(result, "nMatched", n - n_upserted)
          nil_tally(result, "nModified", response["nModified"])
        elsif op_type == :delete
          tally(result, "nRemoved", n)
        end
        result["n"] += n

        write_concern_error = nil
        errmsg = response["errmsg"] || response["err"] # top level
        if (writeErrors = response["writeErrors"] || response["errDetails"]) # assignment
          concat(result, "writeErrors", merge_indexes(writeErrors, exchange))
        elsif response["err"] == "timeout" # errmsg == "timed out waiting for slaves" - OP_*
          write_concern_error = {"errmsg"  => errmsg,
                                 "code"    => Mongo::ErrorCode::WRITE_CONCERN_FAILED,
                                 "errInfo" => {"wtimeout" => response["wtimeout"]}} # OP_* does not have "code"
        elsif errmsg == "norepl" # OP_*
          write_concern_error = {"errmsg" => errmsg, "code" => Mongo::ErrorCode::WRITE_CONCERN_FAILED} # OP_* does not have "code"
        elsif errmsg # OP_INSERT, OP_UPDATE have "err"
          append(result, "writeErrors", merge_index({"errmsg" => errmsg, "code" => response["code"]}, exchange))
        end

        if response["writeConcernError"]
          write_concern_error = response["writeConcernError"]
        elsif (wnote = response["wnote"]) # assignment - OP_*
          write_concern_error = {"errmsg" => wnote, "code" => Mongo::ErrorCode::WRITE_CONCERN_FAILED} # OP_* does not have "code"
        elsif (jnote = response["jnote"]) # assignment - OP_*
          write_concern_error = {"errmsg" => jnote, "code" => Mongo::ErrorCode::BAD_VALUE} # OP_* does not have "code"
        end
        append(result, "writeConcernError", merge_index(write_concern_error, exchange)) if write_concern_error
      end
      result.delete("nModified") if result.has_key?("nModified") && !result["nModified"]
      result.merge!("ok" => [ok + result["n"], 1].min)
    end

    def initialize_copy(other)
      other.instance_variable_set(:@options, other.options.dup)
    end

    def op_args_set(op, value)
      @op_args[op] = value
      self
    end

    def op_push(op)
      raise MongoArgumentError, "non-nil query must be set via find" if op.first != :insert && !op.last[:q]
      @ops << op
      self
    end

    def update_doc?(doc)
      !doc.empty? && doc.keys.first.to_s =~ /^\$/
    end

    def replace_doc?(doc)
      doc.keys.all? { |key| key !~ /^\$/ }
    end
  end

  class Collection

    # Initialize an ordered bulk write view for this collection.
    # Execution will stop at the first occurrence of an error for an ordered bulk operation.
    #
    # @return [BulkWriteCollectionView]
    def initialize_ordered_bulk_op
      BulkWriteCollectionView.new(self, :ordered => true)
    end

    # Initialize an unordered bulk write view for this collection.
    # The unordered bulk operation will be executed and may take advantage of parallelism.
    # There are no guarantees for the order of execution of the operations on the server.
    # Execution will continue even if there are errors for an unordered bulk operation.
    #
    # @return [BulkWriteCollectionView]
    def initialize_unordered_bulk_op
      BulkWriteCollectionView.new(self, :ordered => false)
    end
  end
end

# Copyright (C) 2009-2013 MongoDB, Inc.
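The update-versus-replace decision encoded in `update_doc?` and `replace_doc?` above hinges on whether a document's keys are `$`-prefixed operator names. A minimal standalone sketch of that check, outside the driver (the helper names here are illustrative, not driver API):

```ruby
# A non-empty document whose first key starts with '$' is treated as an
# operator update; a document with no '$'-prefixed keys replaces the
# matched document wholesale.
def operator_update?(doc)
  !doc.empty? && !!(doc.keys.first.to_s =~ /^\$/)
end

def replacement_doc?(doc)
  doc.keys.all? { |key| key.to_s !~ /^\$/ }
end

puts operator_update?('$set' => {'name' => 'Jones'})  # => true
puts replacement_doc?('name' => 'Jones', 'age' => 30) # => true
puts operator_update?('name' => 'Jones')              # => false
```

This is why mixing operator and non-operator keys in one update document is rejected by the server: the two forms are dispatched differently.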
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo

  # A named collection of documents in a database.
  class Collection
    include Mongo::Logging
    include Mongo::WriteConcern

    attr_reader :db, :name, :pk_factory, :hint, :write_concern,
                :capped, :operation_writer, :command_writer

    # Read Preference
    attr_accessor :read, :tag_sets, :acceptable_latency

    # Initialize a collection object.
    #
    # @param [String, Symbol] name the name of the collection.
    # @param [DB] db a MongoDB database instance.
    #
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged.
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
    # @option opts [Boolean] :j (false) If true, block until write operations have been committed
    #   to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
    #   ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
    #   fail with an exception if this option is used when the server is running without journaling.
    # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
    #   the server has synced all data files to disk. If the server is running with journaling, this acts the same as
    #   the 'j' option, blocking until write operations have been committed to the journal.
    #   Cannot be used in combination with 'j'.
    #
    # Notes about write concern:
    #   These write concern options will be used for insert, update, and remove methods called on this
    #   Collection instance. If no value is provided, the default values set on this instance's DB will be used.
    #   These option values can be overridden for any invocation of insert, update, or remove.
    #
    # @option opts [:create_pk] :pk (BSON::ObjectId) A primary key factory to use
    #   other than the default BSON::ObjectId.
    # @option opts [:primary, :secondary] :read The default read preference for queries
    #   initiated from this connection object. If +:secondary+ is chosen, reads will be sent
    #   to one of the closest available secondary nodes. If a secondary node cannot be located, the
    #   read will be sent to the primary. If this option is left unspecified, the value of the read
    #   preference for this collection's associated Mongo::DB object will be used.
    #
    # @raise [InvalidNSName]
    #   if collection name is empty, contains '$', or starts or ends with '.'
    #
    # @raise [TypeError]
    #   if collection name is not a string or symbol
    #
    # @return [Collection]
    def initialize(name, db, opts={})
      if db.is_a?(String) && name.is_a?(Mongo::DB)
        warn "Warning: the order of parameters to initialize a collection has changed. " +
             "Please specify the collection name first, followed by the db. This will be made permanent " +
             "in v2.0."
        db, name = name, db
      end

      raise TypeError, "Collection name must be a String or Symbol." unless [String, Symbol].include?(name.class)
      name = name.to_s

      raise Mongo::InvalidNSName, "Collection names cannot be empty." if name.empty? || name.include?("..")
      if name.include?("$")
        raise Mongo::InvalidNSName, "Collection names must not contain '$'" unless name =~ /((^\$cmd)|(oplog\.\$main))/
      end
      raise Mongo::InvalidNSName, "Collection names must not start or end with '.'" if name.match(/^\./) || name.match(/\.$/)

      pk_factory = nil
      if opts.respond_to?(:create_pk) || !opts.is_a?(Hash)
        warn "The method for specifying a primary key factory on a Collection has changed.\n" +
             "Please specify it as an option (e.g., :pk => PkFactory)."
        pk_factory = opts
      end

      @db, @name  = db, name
      @connection = @db.connection
      @logger     = @connection.logger
      @cache_time = @db.cache_time
      @cache      = Hash.new(0)
      unless pk_factory
        @write_concern = get_write_concern(opts, db)
        @read = opts[:read] || @db.read
        Mongo::ReadPreference::validate(@read)
        @capped             = opts[:capped]
        @tag_sets           = opts.fetch(:tag_sets, @db.tag_sets)
        @acceptable_latency = opts.fetch(:acceptable_latency, @db.acceptable_latency)
      end
      @pk_factory = pk_factory || opts[:pk] || BSON::ObjectId
      @hint = nil
      @operation_writer = CollectionOperationWriter.new(self)
      @command_writer   = CollectionCommandWriter.new(self)
    end

    # Indicate whether this is a capped collection.
    #
    # @raise [Mongo::OperationFailure]
    #   if the collection doesn't exist.
    #
    # @return [Boolean]
    def capped?
      @capped ||= [1, true].include?(@db.command({:collstats => @name})['capped'])
    end

    # Return a sub-collection of this collection by name. If 'users' is a collection, then
    # 'users.comments' is a sub-collection of users.
    #
    # @param [String, Symbol] name
    #   the collection to return
    #
    # @raise [Mongo::InvalidNSName]
    #   if passed an invalid collection name
    #
    # @return [Collection]
    #   the specified sub-collection
    def [](name)
      name = "#{self.name}.#{name}"
      return Collection.new(name, db) if !db.strict? || db.collection_names.include?(name.to_s)
      raise "Collection #{name} doesn't exist. Currently in strict mode."
    end

    # Set a hint field for query optimizer. Hint may be a single field
    # name, array of field names, or a hash (preferably an [OrderedHash]).
    # If using MongoDB > 1.1, you probably don't ever need to set a hint.
    #
    # @param [String, Array, OrderedHash] hint a single field, an array of
    #   fields, or a hash specifying fields
    def hint=(hint=nil)
      @hint = normalize_hint_fields(hint)
      self
    end

    # Set a hint field using a named index.
    #
    # @param [String] hint index name
    def named_hint=(hint=nil)
      @hint = hint
      self
    end

    # Query the database.
    #
    # The +selector+ argument is a prototype document that all results must
    # match. For example:
    #
    #   collection.find({"hello" => "world"})
    #
    # only matches documents that have a key "hello" with value "world".
    # Matches can have other keys *in addition* to "hello".
    #
    # If given an optional block, +find+ will yield a Cursor to that block,
    # close the cursor, and then return nil. This guarantees that partially
    # evaluated cursors will be closed. If given no block, +find+ returns a
    # cursor.
    #
    # @param [Hash] selector
    #   a document specifying elements which must be present for a
    #   document to be included in the result set. Note that in rare cases
    #   (e.g., with $near queries) the order of keys will matter. To preserve
    #   key order on a selector, use an instance of BSON::OrderedHash (only applies
    #   to Ruby 1.8).
    #
    # @option opts [Array, Hash] :fields field names that should be returned in the result
    #   set ("_id" will be included unless explicitly excluded). By limiting results to a certain subset of fields,
    #   you can cut down on network traffic and decoding time. If using a Hash, keys should be field
    #   names and values should be either 1 or 0, depending on whether you want to include or exclude
    #   the given field.
    # @option opts [:primary, :secondary] :read The default read preference for queries
    #   initiated from this connection object. If +:secondary+ is chosen, reads will be sent
    #   to one of the closest available secondary nodes. If a secondary node cannot be located, the
    #   read will be sent to the primary. If this option is left unspecified, the value of the read
    #   preference for this Collection object will be used.
    # @option opts [Integer] :skip number of documents to skip from the beginning of the result set
    # @option opts [Integer] :limit maximum number of documents to return
    # @option opts [Array] :sort an array of [key, direction] pairs to sort by. Direction should
    #   be specified as Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING (or :descending / :desc)
    # @option opts [String, Array, OrderedHash] :hint hint for query optimizer, usually not necessary if
    #   using MongoDB > 1.1
    # @option opts [String] :named_hint for specifying a named index as a hint; will be overridden by :hint
    #   if :hint is also provided.
    # @option opts [Boolean] :snapshot (false) if true, snapshot mode will be used for this query.
    #   Snapshot mode assures no duplicates are returned, and no objects missed, which were present at both the start
    #   and end of the query's execution.
    #   For details see http://www.mongodb.org/display/DOCS/How+to+do+Snapshotting+in+the+Mongo+Database
    # @option opts [Integer] :batch_size (100) the number of documents to be returned by the database per
    #   GETMORE operation. A value of 0 will let the database server decide how many results to return.
    #   This option can be ignored for most use cases.
    # @option opts [Boolean] :timeout (true) when +true+, the returned cursor will be subject to
    #   the normal cursor timeout behavior of the mongod process. When +false+, the returned cursor will
    #   never time out. Note that disabling timeout will only work when #find is invoked with a block.
    #   This is to prevent any inadvertent failure to close the cursor, as the cursor is explicitly
    #   closed when block code finishes.
    # @option opts [Integer] :max_scan (nil) Limit the number of items to scan on both collection scans and indexed queries.
    # @option opts [Boolean] :show_disk_loc (false) Return the disk location of each query result (for debugging).
    # @option opts [Boolean] :return_key (false) Return the index key used to obtain the result (for debugging).
    # @option opts [Block] :transformer (nil) a block for transforming returned documents.
    #   This is normally used by object mappers to convert each returned document to an instance of a class.
    # @option opts [String] :comment (nil) a comment to include in profiling logs
    # @option opts [Boolean] :compile_regex (true) whether BSON regex objects should be compiled into Ruby regexes.
    #   If false, a BSON::Regex object will be returned instead.
    #
    # @raise [ArgumentError]
    #   if timeout is set to false and find is not invoked in a block
    #
    # @raise [RuntimeError]
    #   if given unknown options
    def find(selector={}, opts={})
      opts               = opts.dup
      fields             = opts.delete(:fields)
      fields             = ["_id"] if fields && fields.empty?
      skip               = opts.delete(:skip) || skip || 0
      limit              = opts.delete(:limit) || 0
      sort               = opts.delete(:sort)
      hint               = opts.delete(:hint)
      named_hint         = opts.delete(:named_hint)
      snapshot           = opts.delete(:snapshot)
      batch_size         = opts.delete(:batch_size)
      timeout            = (opts.delete(:timeout) == false) ? false : true
      max_scan           = opts.delete(:max_scan)
      return_key         = opts.delete(:return_key)
      transformer        = opts.delete(:transformer)
      show_disk_loc      = opts.delete(:show_disk_loc)
      comment            = opts.delete(:comment)
      read               = opts.delete(:read) || @read
      tag_sets           = opts.delete(:tag_sets) || @tag_sets
      acceptable_latency = opts.delete(:acceptable_latency) || @acceptable_latency
      compile_regex      = opts.key?(:compile_regex) ? opts.delete(:compile_regex) : true

      if timeout == false && !block_given?
        raise ArgumentError, "Collection#find must be invoked with a block when timeout is disabled."
      end

      if hint
        hint = normalize_hint_fields(hint)
      else
        hint = @hint # assumed to be normalized already
      end

      raise RuntimeError, "Unknown options [#{opts.inspect}]" unless opts.empty?
      cursor = Cursor.new(self, {
        :selector           => selector,
        :fields             => fields,
        :skip               => skip,
        :limit              => limit,
        :order              => sort,
        :hint               => hint || named_hint,
        :snapshot           => snapshot,
        :timeout            => timeout,
        :batch_size         => batch_size,
        :transformer        => transformer,
        :max_scan           => max_scan,
        :show_disk_loc      => show_disk_loc,
        :return_key         => return_key,
        :read               => read,
        :tag_sets           => tag_sets,
        :comment            => comment,
        :acceptable_latency => acceptable_latency,
        :compile_regex      => compile_regex
      })

      if block_given?
        begin
          yield cursor
        ensure
          cursor.close
        end
        nil
      else
        cursor
      end
    end

    # Return a single object from the database.
    #
    # @return [OrderedHash, Nil]
    #   a single document or nil if no result is found.
    #
    # @param [Hash, ObjectId, Nil] spec_or_object_id a hash specifying elements
    #   which must be present for a document to be included in the result set or an
    #   instance of ObjectId to be used as the value for an _id query.
    #   If nil, an empty selector, {}, will be used.
    #
    # @option opts [Hash]
    #   any valid options that can be sent to Collection#find
    #
    # @raise [TypeError]
    #   if the argument is of an improper type.
    def find_one(spec_or_object_id=nil, opts={})
      spec = case spec_or_object_id
             when nil
               {}
             when BSON::ObjectId
               {:_id => spec_or_object_id}
             when Hash
               spec_or_object_id
             else
               raise TypeError, "spec_or_object_id must be an instance of ObjectId or Hash, or nil"
             end
      timeout = opts.delete(:max_time_ms)
      cursor  = find(spec, opts.merge(:limit => -1))
      timeout ? cursor.max_time_ms(timeout).next_document : cursor.next_document
    end

    # Save a document to this collection.
    #
    # @param [Hash] doc
    #   the document to be saved. If the document already has an '_id' key,
    #   then an update (upsert) operation will be performed, and any existing
    #   document with that _id is overwritten. Otherwise an insert operation is performed.
    #
    # @return [ObjectId] the _id of the saved document.
    #
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged.
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
    # @option opts [Boolean] :j (false) If true, block until write operations have been committed
    #   to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
    #   ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
    #   fail with an exception if this option is used when the server is running without journaling.
    # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
    #   the server has synced all data files to disk. If the server is running with journaling, this acts the same as
    #   the 'j' option, blocking until write operations have been committed to the journal.
    #   Cannot be used in combination with 'j'.
    #
    # Options provided here will override any write concern options set on this collection,
    # its database object, or the current connection. See the options
    # for DB#get_last_error.
    #
    # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
    def save(doc, opts={})
      if doc.has_key?(:_id) || doc.has_key?('_id')
        id = doc[:_id] || doc['_id']
        update({:_id => id}, doc, opts.merge!({:upsert => true}))
        id
      else
        insert(doc, opts)
      end
    end

    # Insert one or more documents into the collection.
    #
    # @param [Hash, Array] doc_or_docs
    #   a document (as a hash) or array of documents to be inserted.
    #
    # @return [ObjectId, Array]
    #   The _id of the inserted document or a list of _ids of all inserted documents.
    # @return [[ObjectId, Array], [Hash, Array]]
    #   1st, the _id of the inserted document or a list of _ids of all inserted documents.
    #   2nd, a list of invalid documents.
    #   Return this result format only when :collect_on_error is true.
    #
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged.
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
    # @option opts [Boolean] :j (false) If true, block until write operations have been committed
    #   to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
    #   ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
    #   fail with an exception if this option is used when the server is running without journaling.
    # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
    #   the server has synced all data files to disk. If the server is running with journaling, this acts the same as
    #   the 'j' option, blocking until write operations have been committed to the journal.
    #   Cannot be used in combination with 'j'.
    #
    # Notes on write concern:
    #   Options provided here will override any write concern options set on this collection,
    #   its database object, or the current connection. See the options for +DB#get_last_error+.
    #
    # @option opts [Boolean] :continue_on_error (+false+) If true, then
    #   continue a bulk insert even if one of the documents inserted
    #   triggers a database assertion (as in a duplicate insert, for instance).
    #   If not acknowledging writes, the list of ids returned will
    #   include the object ids of all documents attempted on insert, even
    #   if some are rejected on error. When acknowledging writes, any error will raise an
    #   OperationFailure exception.
    #   MongoDB v2.0+.
    # @option opts [Boolean] :collect_on_error (+false+) if true, then
    #   collects invalid documents as an array. Note that this option changes the result format.
    #
    # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
    def insert(doc_or_docs, opts={})
      if doc_or_docs.respond_to?(:collect!)
        doc_or_docs.collect! { |doc| @pk_factory.create_pk(doc) }
        error_docs, errors, write_concern_errors, rest_ignored = batch_write(:insert, doc_or_docs, true, opts)
        errors = write_concern_errors + errors
        raise errors.last if !opts[:collect_on_error] && !errors.empty?
        inserted_docs = doc_or_docs - error_docs
        inserted_ids  = inserted_docs.collect { |o| o[:_id] || o['_id'] }
        opts[:collect_on_error] ? [inserted_ids, error_docs] : inserted_ids
      else
        @pk_factory.create_pk(doc_or_docs)
        send_write(:insert, nil, doc_or_docs, true, opts)
        return doc_or_docs[:_id] || doc_or_docs['_id']
      end
    end
    alias_method :<<, :insert

    # Remove all documents from this collection.
    #
    # @param [Hash] selector
    #   If specified, only matching documents will be removed.
    #
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged.
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
    # @option opts [Boolean] :j (false) If true, block until write operations have been committed
    #   to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
    #   ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
    #   fail with an exception if this option is used when the server is running without journaling.
    # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
    #   the server has synced all data files to disk. If the server is running with journaling, this acts the same as
    #   the 'j' option, blocking until write operations have been committed to the journal.
    #   Cannot be used in combination with 'j'.
    # @option opts [Integer] :limit (0) Set limit option, currently only 0 for all or 1 for just one.
    #
    # Notes on write concern:
    #   Options provided here will override any write concern options set on this collection,
    #   its database object, or the current connection. See the options for +DB#get_last_error+.
    #
    # @example remove all documents from the 'users' collection:
    #   users.remove
    #   users.remove({})
    #
    # @example remove only documents that have expired:
    #   users.remove({:expire => {"$lte" => Time.now}})
    #
    # @return [Hash, true] Returns a Hash containing the last error object if acknowledging writes.
    #   Otherwise, returns true.
    #
    # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
    def remove(selector={}, opts={})
      send_write(:delete, selector, nil, nil, opts)
    end

    # Update one or more documents in this collection.
    #
    # @param [Hash] selector
    #   a hash specifying elements which must be present for a document to be updated. Note:
    #   the update command currently updates only the first document matching the
    #   given selector. If you want all matching documents to be updated, be sure
    #   to specify :multi => true.
    # @param [Hash] document
    #   a hash specifying the fields to be changed in the selected document,
    #   or (in the case of an upsert) the document to be inserted
    #
    # @option opts [Boolean] :upsert (+false+) if true, performs an upsert (update or insert)
    # @option opts [Boolean] :multi (+false+) update all documents matching the selector, as opposed to
    #   just the first matching document. Note: only works in MongoDB 1.1.3 or later.
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged.
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
    # @option opts [Boolean] :j (false) If true, block until write operations have been committed
    #   to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
    #   ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
    #   fail with an exception if this option is used when the server is running without journaling.
    # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
    #   the server has synced all data files to disk. If the server is running with journaling, this acts the same as
    #   the 'j' option, blocking until write operations have been committed to the journal.
    #   Cannot be used in combination with 'j'.
    #
    # Notes on write concern:
    #   Options provided here will override any write concern options set on this collection,
    #   its database object, or the current connection. See the options for DB#get_last_error.
    #
    # @return [Hash, true] Returns a Hash containing the last error object if acknowledging writes.
    #   Otherwise, returns true.
    #
    # @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
    def update(selector, document, opts={})
      send_write(:update, selector, document, !document.keys.first.to_s.start_with?("$"), opts)
    end

    # Create a new index.
    #
    # @param [String, Array] spec
    #   should be either a single field name or an array of
    #   [field name, type] pairs. Index types should be specified
    #   as Mongo::ASCENDING, Mongo::DESCENDING, Mongo::GEO2D, Mongo::GEO2DSPHERE, Mongo::GEOHAYSTACK,
    #   Mongo::TEXT or Mongo::HASHED.
    #
    #   Note that geospatial indexing only works with versions of MongoDB >= 1.3.3. Keep in mind, too,
    #   that in order to geo-index a given field, that field must reference either an array or a sub-object
    #   where the first two values represent x- and y-coordinates. Examples can be seen below.
    #
    #   Also note that it is permissible to create compound indexes that include a geospatial index as
    #   long as the geospatial index comes first.
    #
    #   If your code calls create_index frequently, you can use Collection#ensure_index to cache these calls
    #   and thereby prevent excessive round trips to the database.
    #
    # @option opts [Boolean] :unique (false) if true, this index will enforce a uniqueness constraint.
    # @option opts [Boolean] :background (false) indicate that the index should be built in the background. This
    #   feature is only available in MongoDB >= 1.3.2.
    # @option opts [Boolean] :drop_dups (nil) If creating a unique index on a collection with pre-existing records,
    #   this option will keep the first document the database indexes and drop all subsequent documents with
    #   duplicate values.
    # @option opts [Integer] :bucket_size (nil) For use with geoHaystack indexes. Number of documents to group
    #   together within a certain proximity to a given longitude and latitude.
    # @option opts [Integer] :min (nil) specify the minimum longitude and latitude for a geo index.
    # @option opts [Integer] :max (nil) specify the maximum longitude and latitude for a geo index.
    #
    # @example Creating a compound index using a hash: (Ruby 1.9+ syntax)
    #   @posts.create_index({'subject' => Mongo::ASCENDING, 'created_at' => Mongo::DESCENDING})
    #
    # @example Creating a compound index:
    #   @posts.create_index([['subject', Mongo::ASCENDING], ['created_at', Mongo::DESCENDING]])
    #
    # @example Creating a geospatial index using a hash: (Ruby 1.9+ syntax)
    #   @restaurants.create_index(:location => Mongo::GEO2D)
    #
    # @example Creating a geospatial index:
    #   @restaurants.create_index([['location', Mongo::GEO2D]])
    #
    #   # Note that this will work only if 'location' represents x,y coordinates:
    #   {'location': [0, 50]}
    #   {'location': {'x' => 0, 'y' => 50}}
    #   {'location': {'latitude' => 0, 'longitude' => 50}}
    #
    # @example A geospatial index with alternate longitude and latitude:
    #   @restaurants.create_index([['location', Mongo::GEO2D]], :min => 500, :max => 500)
    #
    # @return [String] the name of the index created.
    def create_index(spec, opts={})
      opts[:dropDups]   = opts[:drop_dups] if opts[:drop_dups]
      opts[:bucketSize] = opts[:bucket_size] if opts[:bucket_size]
      field_spec = parse_index_spec(spec)
      opts = opts.dup
      name = opts.delete(:name) || generate_index_name(field_spec)
      name = name.to_s if name
      generate_indexes(field_spec, name, opts)
      name
    end

    # Calls create_index and sets a flag to not do so again for another X minutes.
    # This time can be specified as an option when initializing a Mongo::DB object as options[:cache_time].
    # Any changes to an index will be propagated through regardless of cache time (e.g., a change of index direction).
    #
    # The parameters and options for this method are the same as those for Collection#create_index.
    #
    # @example Call sequence (Ruby 1.9+ syntax):
    #   Time t:       @posts.ensure_index(:subject => Mongo::ASCENDING) -- calls create_index and
    #                 sets the 5 minute cache
    #   Time t+2min:  @posts.ensure_index(:subject => Mongo::ASCENDING) -- doesn't do anything
    #   Time t+3min:  @posts.ensure_index(:something_else => Mongo::ASCENDING) -- calls create_index
    #                 and sets 5 minute cache
    #   Time t+10min: @posts.ensure_index(:subject => Mongo::ASCENDING) -- calls create_index and
    #                 resets the 5 minute counter
    #
    # @return [String] the name of the index.
    def ensure_index(spec, opts={})
      now = Time.now.utc.to_i
      opts[:dropDups]   = opts[:drop_dups] if opts[:drop_dups]
      opts[:bucketSize] = opts[:bucket_size] if opts[:bucket_size]
      field_spec = parse_index_spec(spec)
      name = opts[:name] || generate_index_name(field_spec)
      name = name.to_s if name

      if !@cache[name] || @cache[name] <= now
        generate_indexes(field_spec, name, opts)
      end

      # Reset the cache here in case there are any errors inserting. Best to be safe.
      @cache[name] = now + @cache_time
      name
    end

    # Drop a specified index.
    #
    # @param [String] name
    def drop_index(name)
      if name.is_a?(Array)
        return drop_index(index_name(name))
      end
      @cache[name.to_s] = nil
      @db.drop_index(@name, name)
    end

    # Drop all indexes.
    def drop_indexes
      @cache = {}
      # Note: calling drop_indexes with no args will drop them all.
      @db.drop_index(@name, '*')
    end

    # Drop the entire collection. USE WITH CAUTION.
    def drop
      @db.drop_collection(@name)
    end

    # Atomically update and return a document using MongoDB's findAndModify command. (MongoDB > 1.3.0)
    #
    # @option opts [Hash] :query ({}) a query selector document for matching
    #   the desired document.
    # @option opts [Hash] :update (nil) the update operation to perform on the
    #   matched document.
    # @option opts [Array, String, OrderedHash] :sort ({}) specify a sort
    #   option for the query using any
    #   of the sort options available for Cursor#sort. Sort order is important
    #   if the query will be matching multiple documents since only the first
    #   matching document will be updated and returned.
    # @option opts [Boolean] :remove (false) If true, removes the returned
    #   document from the collection.
    # @option opts [Boolean] :new (false) If true, returns the updated
    #   document; otherwise, returns the document prior to update.
    # @option opts [Boolean] :upsert (false) If true, creates a new document
    #   if the query returns no document.
    # @option opts [Hash] :fields (nil) A subset of fields to return.
    #   Specify an inclusion of a field with 1. _id is included by default and must
    #   be explicitly excluded.
    # @option opts [Boolean] :full_response (false) If true, returns the entire
    #   response object from the server including 'ok' and 'lastErrorObject'.
    #
    # @return [Hash] the matched document.
    def find_and_modify(opts={})
      full_response = opts.delete(:full_response)
      cmd = BSON::OrderedHash.new
      cmd[:findandmodify] = @name
      cmd.merge!(opts)
      cmd[:sort] = Mongo::Support.format_order_clause(opts[:sort]) if opts[:sort]
      full_response ? @db.command(cmd) : @db.command(cmd)['value']
    end

    # Perform an aggregation using the aggregation framework on the current collection.
# @note Aggregate requires server version >= 2.1.1 # @note Field References: Within an expression, field names must be quoted and prefixed by a dollar sign ($). # # @example Define the pipeline as an array of operator hashes: # coll.aggregate([ {"$project" => {"last_name" => 1, "first_name" => 1 }}, {"$match" => {"last_name" => "Jones"}} ]) # # @example With server version 2.5.1 or newer, pass a cursor option to retrieve unlimited aggregation results: # coll.aggregate([ {"$group" => { :_id => "$_id", :count => { "$sum" => "$members" }}} ], :cursor => {} ) # # @param [Array] pipeline Should be a single array of pipeline operator hashes. # # '$project' Reshapes a document stream by including fields, excluding fields, inserting computed fields, # renaming fields,or creating/populating fields that hold sub-documents. # # '$match' Query-like interface for filtering documents out of the aggregation pipeline. # # '$limit' Restricts the number of documents that pass through the pipeline. # # '$skip' Skips over the specified number of documents and passes the rest along the pipeline. # # '$unwind' Peels off elements of an array individually, returning one document for each member. # # '$group' Groups documents for calculating aggregate values. # # '$sort' Sorts all input documents and returns them to the pipeline in sorted order. # # '$out' The name of a collection to which the result set will be saved. # # @option opts [:primary, :secondary] :read Read preference indicating which server to perform this operation # on. If $out is specified and :read is not :primary, the aggregation will be rerouted to the primary with # a warning. See Collection#find for more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # @option opts [Hash] :cursor return a cursor object instead of an Array. Takes an optional batchSize parameter # to specify the maximum size, in documents, of the first batch returned. 
# # @return [Array] An Array with the aggregate command's results. # # @raise MongoArgumentError if operators either aren't supplied or aren't in the correct format. # @raise MongoOperationFailure if the aggregate command fails. # def aggregate(pipeline=nil, opts={}) raise MongoArgumentError, "pipeline must be an array of operators" unless pipeline.class == Array raise MongoArgumentError, "pipeline operators must be hashes" unless pipeline.all? { |op| op.class == Hash } selector = BSON::OrderedHash.new selector['aggregate'] = self.name selector['pipeline'] = pipeline result = @db.command(selector, command_options(opts)) unless Mongo::Support.ok?(result) raise Mongo::OperationFailure, "aggregate failed: #{result['errmsg']}" end if result.key?('cursor') cursor_info = result['cursor'] seed = { :cursor_id => cursor_info['id'], :first_batch => cursor_info['firstBatch'], :pool => @connection.pinned_pool } return Cursor.new(self, seed.merge!(opts)) elsif selector['pipeline'].any? { |op| op.key?('$out') || op.key?(:$out) } return result end result['result'] || result end # Perform a map-reduce operation on the current collection. # # @param [String, BSON::Code] map a map function, written in JavaScript. # @param [String, BSON::Code] reduce a reduce function, written in JavaScript. # # @option opts [Hash] :query ({}) a query selector document, like what's passed to #find, to limit # the operation to a subset of the collection. # @option opts [Array] :sort ([]) an array of [key, direction] pairs to sort by. Direction should # be specified as Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING (or :descending / :desc) # @option opts [Integer] :limit (nil) if passing a query, number of objects to return from the collection. # @option opts [String, BSON::Code] :finalize (nil) a javascript function to apply to the result set after the # map/reduce operation has finished. # @option opts [String, Hash] :out Location of the result of the map-reduce operation. 
You can output to a # collection, output to a collection with an action, or output inline. You may output to a collection # when performing map-reduce operations on the primary members of the set; on secondary members you # may only use the inline output. See the server mapReduce documentation for available options. # @option opts [Boolean] :keeptemp (false) if true, the generated collection will be persisted. The default # is false. Note that this option has no effect in versions of MongoDB > v1.7.6. # @option opts [Boolean] :verbose (false) if true, provides statistics on job execution time. # @option opts [Boolean] :raw (false) if true, return the raw result object from the map_reduce command, and not # the instantiated collection that's returned by default. Note if a collection name isn't returned in the # map-reduce output (as, for example, when using :out => { :inline => 1 }), then you must specify this option # or an ArgumentError will be raised. # @option opts [:primary, :secondary] :read Read preference indicating which server to run this map-reduce # on. See Collection#find for more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @return [Collection, Hash] a Mongo::Collection object or a Hash with the map-reduce command's results. # # @raise ArgumentError if you specify { :out => { :inline => true }} but don't specify :raw => true. # # @see http://www.mongodb.org/display/DOCS/MapReduce Official MongoDB map/reduce documentation.
def map_reduce(map, reduce, opts={}) opts = opts.dup map = BSON::Code.new(map) unless map.is_a?(BSON::Code) reduce = BSON::Code.new(reduce) unless reduce.is_a?(BSON::Code) raw = opts.delete(:raw) hash = BSON::OrderedHash.new hash['mapreduce'] = self.name hash['map'] = map hash['reduce'] = reduce hash['out'] = opts.delete(:out) hash['sort'] = Mongo::Support.format_order_clause(opts.delete(:sort)) if opts.key?(:sort) result = @db.command(hash, command_options(opts)) unless Mongo::Support.ok?(result) raise Mongo::OperationFailure, "map-reduce failed: #{result['errmsg']}" end if raw result elsif result['result'] if result['result'].is_a?(BSON::OrderedHash) && result['result'].key?('db') && result['result'].key?('collection') otherdb = @db.connection[result['result']['db']] otherdb[result['result']['collection']] else @db[result["result"]] end else raise ArgumentError, "Could not instantiate collection from result. If you specified " + "{:out => {:inline => true}}, then you must also specify :raw => true to get the results." end end alias :mapreduce :map_reduce # Perform a group aggregation. # # @param [Hash] opts the options for this group operation. The minimum required are :initial # and :reduce. # # @option opts [Array, String, Symbol] :key (nil) Either the name of a field or a list of fields to group by (optional). # @option opts [String, BSON::Code] :keyf (nil) A JavaScript function to be used to generate the grouping keys (optional). # @option opts [String, BSON::Code] :cond ({}) A document specifying a query for filtering the documents over # which the aggregation is run (optional). # @option opts [Hash] :initial the initial value of the aggregation counter object (required). # @option opts [String, BSON::Code] :reduce (nil) a JavaScript aggregation function (required). # @option opts [String, BSON::Code] :finalize (nil) a JavaScript function that receives and modifies # each of the resultant grouped objects. 
Available only when group is run with command # set to true. # @option opts [:primary, :secondary] :read Read preference indicating which server to perform this group # on. See Collection#find for more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @return [Array] the command response consisting of grouped items. def group(opts, condition={}, initial={}, reduce=nil, finalize=nil) opts = opts.dup if opts.is_a?(Hash) return new_group(opts) elsif opts.is_a?(Symbol) raise MongoArgumentError, "Group takes either an array of fields to group by or a JavaScript function " + "in the form of a String or BSON::Code." end warn "Collection#group no longer takes a list of parameters. This usage is deprecated and will be removed in v2.0. " + "Check out the new API at http://api.mongodb.org/ruby/current/Mongo/Collection.html#group-instance_method" reduce = BSON::Code.new(reduce) unless reduce.is_a?(BSON::Code) group_command = { "group" => { "ns" => @name, "$reduce" => reduce, "cond" => condition, "initial" => initial } } unless opts.nil? if opts.is_a? Array key_type = "key" key_value = {} opts.each { |k| key_value[k] = 1 } else key_type = "$keyf" key_value = opts.is_a?(BSON::Code) ? opts : BSON::Code.new(opts) end group_command["group"][key_type] = key_value end finalize = BSON::Code.new(finalize) if finalize.is_a?(String) if finalize.is_a?(BSON::Code) group_command['group']['finalize'] = finalize end result = @db.command(group_command) if Mongo::Support.ok?(result) result["retval"] else raise OperationFailure, "group command failed: #{result['errmsg']}" end end # Scan this entire collection in parallel. # Returns a list of up to num_cursors cursors that can be iterated concurrently. As long as the collection # is not modified during scanning, each document appears once in one of the cursors' result sets. # # @note Requires server version >= 2.5.5 # # @param [Integer] num_cursors the number of cursors to return.
# @param [Hash] opts # # @return [Array] An array of up to num_cursors cursors for iterating over the collection. def parallel_scan(num_cursors, opts={}) cmd = BSON::OrderedHash.new cmd[:parallelCollectionScan] = self.name cmd[:numCursors] = num_cursors result = @db.command(cmd, command_options(opts)) result['cursors'].collect do |cursor_info| seed = { :cursor_id => cursor_info['cursor']['id'], :first_batch => cursor_info['cursor']['firstBatch'], :pool => @connection.pinned_pool } Cursor.new(self, seed.merge!(opts)) end end private def new_group(opts={}) reduce = opts.delete(:reduce) finalize = opts.delete(:finalize) cond = opts.delete(:cond) || {} initial = opts.delete(:initial) if !(reduce && initial) raise MongoArgumentError, "Group requires at minimum values for initial and reduce." end cmd = { "group" => { "ns" => @name, "$reduce" => reduce.to_bson_code, "cond" => cond, "initial" => initial } } if finalize cmd['group']['finalize'] = finalize.to_bson_code end if key = opts.delete(:key) if key.is_a?(String) || key.is_a?(Symbol) key = [key] end key_value = {} key.each { |k| key_value[k] = 1 } cmd["group"]["key"] = key_value elsif keyf = opts.delete(:keyf) cmd["group"]["$keyf"] = keyf.to_bson_code end result = @db.command(cmd, command_options(opts)) result["retval"] end public # Return a list of distinct values for +key+ across all # documents in the collection. The key may use dot notation # to reach into an embedded object. # # @param [String, Symbol, OrderedHash] key or hash to group by. # @param [Hash] query a selector for limiting the result set over which to group. # @param [Hash] opts the options for this distinct operation. # # @option opts [:primary, :secondary] :read Read preference indicating which server to perform this query # on. See Collection#find for more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @example Saving zip codes and ages and returning distinct results. 
# @collection.save({:zip => 10010, :name => {:age => 27}}) # @collection.save({:zip => 94108, :name => {:age => 24}}) # @collection.save({:zip => 10010, :name => {:age => 27}}) # @collection.save({:zip => 99701, :name => {:age => 24}}) # @collection.save({:zip => 94108, :name => {:age => 27}}) # # @collection.distinct(:zip) # [10010, 94108, 99701] # @collection.distinct("name.age") # [27, 24] # # # You may also pass a document selector as the second parameter # # to limit the documents over which distinct is run: # @collection.distinct("name.age", {"name.age" => {"$gt" => 24}}) # [27] # # @return [Array] an array of distinct values. def distinct(key, query=nil, opts={}) raise MongoArgumentError unless [String, Symbol].include?(key.class) command = BSON::OrderedHash.new command[:distinct] = @name command[:key] = key.to_s command[:query] = query @db.command(command, command_options(opts))["values"] end # Rename this collection. # # Note: If operating in auth mode, the client must be authorized as an admin to # perform this operation. # # @param [String] new_name the new name for this collection # # @return [String] the name of the new collection. # # @raise [Mongo::InvalidNSName] if +new_name+ is an invalid collection name. def rename(new_name) case new_name when Symbol, String else raise TypeError, "new_name must be a string or symbol" end new_name = new_name.to_s if new_name.empty? or new_name.include? ".." raise Mongo::InvalidNSName, "collection names cannot be empty" end if new_name.include? "$" raise Mongo::InvalidNSName, "collection names must not contain '$'" end if new_name.match(/^\./) or new_name.match(/\.$/) raise Mongo::InvalidNSName, "collection names must not start or end with '.'" end @db.rename_collection(@name, new_name) @name = new_name end # Get information on the indexes for this collection. # # @return [Hash] a hash where the keys are index names. 
def index_information @db.index_information(@name) end # Return a hash containing options that apply to this collection. # For all possible keys and values, see DB#create_collection. # # @return [Hash] options that apply to this collection. def options @db.collections_info(@name).next_document['options'] end # Return stats on the collection. Uses MongoDB's collstats command. # # @return [Hash] def stats @db.command({:collstats => @name}) end # Get the number of documents in this collection. # # @option opts [Hash] :query ({}) A query selector for filtering the documents counted. # @option opts [Integer] :skip (nil) The number of documents to skip. # @option opts [Integer] :limit (nil) The number of documents to limit. # @option opts [:primary, :secondary] :read Read preference for this command. See Collection#find for # more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # # @return [Integer] def count(opts={}) find(opts[:query], :skip => opts[:skip], :limit => opts[:limit], :read => opts[:read], :comment => opts[:comment]).count(true) end alias :size :count protected # Provide required command options if they are missing in the command options hash. # # @return [Hash] The command options hash def command_options(opts) opts[:read] ? 
opts : opts.merge(:read => @read) end def normalize_hint_fields(hint) case hint when String {hint => 1} when Hash hint when nil nil else h = BSON::OrderedHash.new hint.to_a.each { |k| h[k] = 1 } h end end private def send_write(op_type, selector, doc_or_docs, check_keys, opts, collection_name=@name) write_concern = get_write_concern(opts, self) if @db.connection.use_write_command?(write_concern) @command_writer.send_write_command(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name) else @operation_writer.send_write_operation(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name) end end def index_name(spec) field_spec = parse_index_spec(spec) index_information.each do |index| return index[0] if index[1]['key'] == field_spec end nil end def parse_index_spec(spec) field_spec = BSON::OrderedHash.new if spec.is_a?(String) || spec.is_a?(Symbol) field_spec[spec.to_s] = 1 elsif spec.is_a?(Hash) if RUBY_VERSION < '1.9' && !spec.is_a?(BSON::OrderedHash) raise MongoArgumentError, "Must use OrderedHash in Ruby < 1.9.0" end validate_index_types(spec.values) field_spec = spec.is_a?(BSON::OrderedHash) ? spec : BSON::OrderedHash.try_convert(spec) elsif spec.is_a?(Array) && spec.all? {|field| field.is_a?(Array) } spec.each do |f| validate_index_types(f[1]) field_spec[f[0].to_s] = f[1] end else raise MongoArgumentError, "Invalid index specification #{spec.inspect}; " + "should be either a hash (OrderedHash), string, symbol, or an array of arrays." end field_spec end def validate_index_types(*types) types.flatten! 
types.each do |t| unless Mongo::INDEX_TYPES.values.include?(t) raise MongoArgumentError, "Invalid index field #{t.inspect}; " + "should be one of " + Mongo::INDEX_TYPES.map {|k,v| "Mongo::#{k} (#{v})"}.join(', ') end end end def generate_indexes(field_spec, name, opts) selector = { :name => name, :key => field_spec } selector.merge!(opts) begin cmd = BSON::OrderedHash[:createIndexes, @name, :indexes, [selector]] @db.command(cmd) rescue Mongo::OperationFailure => ex if ex.error_code == Mongo::ErrorCode::COMMAND_NOT_FOUND || ex.error_code.nil? selector[:ns] = "#{@db.name}.#{@name}" send_write(:insert, nil, selector, false, {:w => 1}, Mongo::DB::SYSTEM_INDEX_COLLECTION) else raise Mongo::OperationFailure, "Failed to create index #{selector.inspect} with the following error: " + "#{ex.message}" end end nil end def generate_index_name(spec) indexes = [] spec.each_pair do |field, type| indexes.push("#{field}_#{type}") end indexes.join("_") end def batch_write(op_type, documents, check_keys=true, opts={}) write_concern = get_write_concern(opts, self) if @db.connection.use_write_command?(write_concern) return @command_writer.batch_write(op_type, documents, check_keys, opts) else return @operation_writer.batch_write(op_type, documents, check_keys, opts) end end end end # File: lib/mongo/collection_writer.rb # Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
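The `generate_index_name` helper above encodes a simple naming convention that is worth seeing in isolation. Below is a minimal standalone sketch; the method name `index_name_for` is ours, not the driver's, and it mirrors what `Collection#generate_index_name` does over an ordered field spec:

```ruby
# Minimal standalone sketch of the driver's index-naming convention:
# each field/direction pair becomes "field_direction", joined with "_".
def index_name_for(spec)
  spec.map { |field, type| "#{field}_#{type}" }.join('_')
end

index_name_for(:title => 1, :created_at => -1) # => "title_1_created_at_-1"
index_name_for('loc' => '2d')                  # => "loc_2d"
```

This is why compound and geo indexes show up in `index_information` under names like `title_1_created_at_-1` unless an explicit `:name` option is given.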
module Mongo class CollectionWriter include Mongo::Logging include Mongo::WriteConcern OPCODE = { :insert => Mongo::Constants::OP_INSERT, :update => Mongo::Constants::OP_UPDATE, :delete => Mongo::Constants::OP_DELETE } WRITE_COMMAND_ARG_KEY = { :insert => :documents, :update => :updates, :delete => :deletes } def initialize(collection) @collection = collection @name = @collection.name @db = @collection.db @connection = @db.connection @logger = @connection.logger @max_write_batch_size = Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE end # common implementation only for new batch write commands (insert, update, delete) and old batch insert def batch_write_incremental(op_type, documents, check_keys=true, opts={}) raise Mongo::OperationFailure, "Request contains no documents" if documents.empty? write_concern = get_write_concern(opts, @collection) max_message_size, max_append_size, max_serialize_size = batch_write_max_sizes(write_concern) ordered = opts[:ordered] continue_on_error = !!opts[:continue_on_error] || ordered == false collect_on_error = !!opts[:collect_on_error] || ordered == false error_docs = [] # docs with serialization errors errors = [] write_concern_errors = [] exchanges = [] serialized_doc = nil message = BSON::ByteBuffer.new("", max_message_size) @max_write_batch_size = @collection.db.connection.max_write_batch_size docs = documents.dup catch(:error) do until docs.empty? || (!errors.empty? && !collect_on_error) # process documents a batch at a time batch_docs = [] batch_message_initialize(message, op_type, continue_on_error, write_concern) while !docs.empty? && batch_docs.size < @max_write_batch_size begin doc = docs.first doc = doc[:d] if op_type == :insert && !ordered.nil? 
#check_keys for :update outside of serialize serialized_doc ||= BSON::BSON_CODER.serialize(doc, check_keys, true, max_serialize_size) rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex bulk_message = "Bulk write error - #{ex.message} - examine result for complete information" ex = BulkWriteError.new(bulk_message, Mongo::ErrorCode::INVALID_BSON, {:op_type => op_type, :serialize => doc, :ord => docs.first[:ord], :error => ex}) unless ordered.nil? error_docs << docs.shift errors << ex next if collect_on_error throw(:error) if batch_docs.empty? break # defer exit and send batch end break if message.size + serialized_doc.size > max_append_size batch_docs << docs.shift batch_message_append(message, serialized_doc, write_concern) serialized_doc = nil end begin response = batch_message_send(message, op_type, batch_docs, write_concern, continue_on_error) if batch_docs.size > 0 exchanges << {:op_type => op_type, :batch => batch_docs, :opts => opts, :response => response} rescue Mongo::WriteConcernError => ex write_concern_errors << ex exchanges << {:op_type => op_type, :batch => batch_docs, :opts => opts, :response => ex.result} rescue Mongo::OperationFailure => ex errors << ex exchanges << {:op_type => op_type, :batch => batch_docs, :opts => opts, :response => ex.result} throw(:error) unless continue_on_error end end end [error_docs, errors, write_concern_errors, exchanges] end def batch_write_partition(op_type, documents, check_keys, opts) raise Mongo::OperationFailure, "Request contains no documents" if documents.empty? 
write_concern = get_write_concern(opts, @collection) ordered = opts[:ordered] continue_on_error = !!opts[:continue_on_error] || ordered == false # continue_on_error default false collect_on_error = !!opts[:collect_on_error] # collect_on_error default false error_docs = [] # docs with serialization errors errors = [] write_concern_errors = [] exchanges = [] @max_write_batch_size = @collection.db.connection.max_write_batch_size @write_batch_size = [documents.size, @max_write_batch_size].min docs = documents.dup until docs.empty? batch = docs.take(@write_batch_size) begin batch_to_send = batch #(op_type == :insert && !ordered.nil?) ? batch.collect{|doc|doc[:d]} : batch if @collection.db.connection.use_write_command?(write_concern) # TODO - polymorphic send_write including legacy insert response = send_bulk_write_command(op_type, batch_to_send, check_keys, opts) else response = send_write_operation(op_type, nil, batch_to_send, check_keys, opts, write_concern) end exchanges << {:op_type => op_type, :batch => batch, :opts => opts, :response => response} docs = docs.drop(batch.size) @write_batch_size = [(@write_batch_size*1097) >> 10, @write_batch_size+1].max unless docs.empty? # 2**(1/10) multiplicative increase @write_batch_size = @max_write_batch_size if @write_batch_size > @max_write_batch_size rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex if @write_batch_size > 1 # decrease batch size @write_batch_size = (@write_batch_size+1) >> 1 # 2**(-1) multiplicative decrease next end # error on a single document bulk_message = "Bulk write error - #{ex.message} - examine result for complete information" ex = BulkWriteError.new(bulk_message, Mongo::ErrorCode::INVALID_BSON, {:op_type => op_type, :batch => batch, :ord => batch.first[:ord], :opts => opts, :error => ex}) unless ordered.nil? 
error_docs << docs.shift next if collect_on_error errors << ex break unless continue_on_error rescue Mongo::WriteConcernError => ex write_concern_errors << ex exchanges << {:op_type => op_type, :batch => batch, :opts => opts, :response => ex.result} docs = docs.drop(batch.size) rescue Mongo::OperationFailure => ex errors << ex exchanges << {:op_type => op_type, :batch => batch, :opts => opts, :response => ex.result} docs = docs.drop(batch.size) break if !continue_on_error && !collect_on_error end end [error_docs, errors, write_concern_errors, exchanges] end alias :batch_write :batch_write_incremental def send_bulk_write_command(op_type, documents, check_keys, opts, collection_name=@name) if op_type == :insert documents = documents.collect{|doc| doc[:d]} if opts.key?(:ordered) documents.each do |doc| # TODO - @pk_factory.create_pk(doc) if check_keys doc.each_key do |key| key = key.to_s raise BSON::InvalidKeyName.new("key #{key} must not start with '$'") if key[0] == ?$ raise BSON::InvalidKeyName.new("key #{key} must not contain '.'") if key.include? ?.
end end end #elsif op_type == :update # TODO - check keys #elsif op_type == :delete #else # raise ArgumentError, "Write operation type must be :insert, :update or :delete" end request = BSON::OrderedHash[op_type, collection_name].merge!( Mongo::CollectionWriter::WRITE_COMMAND_ARG_KEY[op_type] => documents, :writeConcern => get_write_concern(opts, @collection), :ordered => opts[:ordered] || !opts[:continue_on_error] ) @db.command(request) end private def sort_by_first_sym(pairs) pairs = pairs.collect{|first, rest| [first.to_s, rest]} #stringify_first pairs = pairs.sort{|x,y| x.first <=> y.first } pairs.collect{|first, rest| [first.to_sym, rest]} #symbolize_first end def ordered_group_by_first(pairs) pairs.inject([[], nil]) do |memo, pair| result, previous_value = memo current_value = pair.first result << [current_value, []] if previous_value != current_value result.last.last << pair.last [result, current_value] end.first end end class CollectionOperationWriter < CollectionWriter def initialize(collection) super(collection) end def send_write_operation(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name=@name) message = BSON::ByteBuffer.new("", @connection.max_message_size) message.put_int((op_type == :insert && !!opts[:continue_on_error]) ? 
1 : 0) BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{collection_name}") if op_type == :update update_options = 0 update_options += 1 if opts[:upsert] update_options += 2 if opts[:multi] message.put_int(update_options) elsif op_type == :delete delete_options = 0 delete_options += 1 if opts[:limit] && opts[:limit] != 0 message.put_int(delete_options) end message.put_binary(BSON::BSON_CODER.serialize(selector, false, true, @connection.max_bson_size).to_s) if selector [doc_or_docs].flatten(1).compact.each do |document| message.put_binary(BSON::BSON_CODER.serialize(document, check_keys, true, @connection.max_bson_size).to_s) if message.size > @connection.max_message_size raise BSON::InvalidDocument, "Message is too large. This message is limited to #{@connection.max_message_size} bytes." end end instrument(op_type, :database => @db.name, :collection => collection_name, :selector => selector, :documents => doc_or_docs) do op_code = OPCODE[op_type] if Mongo::WriteConcern.gle?(write_concern) @connection.send_message_with_gle(op_code, message, @db.name, nil, write_concern) else @connection.send_message(op_code, message) end end end def bulk_execute(ops, options, opts = {}) write_concern = get_write_concern(opts, @collection) errors = [] write_concern_errors = [] exchanges = [] ops.each do |op_type, doc| doc = {:d => @collection.pk_factory.create_pk(doc[:d]), :ord => doc[:ord]} if op_type == :insert doc_opts = doc.merge(opts) d = doc_opts.delete(:d) q = doc_opts.delete(:q) u = doc_opts.delete(:u) begin # use single and NOT batch inserts since there is no index for an error response = @collection.operation_writer.send_write_operation(op_type, q, d || u, check_keys = false, doc_opts, write_concern) exchanges << {:op_type => op_type, :batch => [doc], :opts => opts, :response => response} rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex bulk_message = "Bulk write error - #{ex.message} - examine result for complete information" ex =
BulkWriteError.new(bulk_message, Mongo::ErrorCode::INVALID_BSON, {:op_type => op_type, :serialize => doc, :ord => doc[:ord], :error => ex}) errors << ex break if options[:ordered] rescue Mongo::WriteConcernError => ex write_concern_errors << ex exchanges << {:op_type => op_type, :batch => [doc], :opts => opts, :response => ex.result} rescue Mongo::OperationFailure => ex errors << ex exchanges << {:op_type => op_type, :batch => [doc], :opts => opts, :response => ex.result} break if options[:ordered] && ex.result["err"] != "norepl" end end [errors, write_concern_errors, exchanges] end private def batch_message_initialize(message, op_type, continue_on_error, write_concern) message.clear!.clear message.put_int(continue_on_error ? 1 : 0) BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@name}") end def batch_message_append(message, serialized_doc, write_concern) message.put_binary(serialized_doc.to_s) end def batch_message_send(message, op_type, batch_docs, write_concern, continue_on_error) instrument(:insert, :database => @db.name, :collection => @name, :documents => batch_docs) do if Mongo::WriteConcern.gle?(write_concern) @connection.send_message_with_gle(Mongo::Constants::OP_INSERT, message, @db.name, nil, write_concern) else @connection.send_message(Mongo::Constants::OP_INSERT, message) end end end def batch_write_max_sizes(write_concern) [@connection.max_message_size, @connection.max_message_size, @connection.max_bson_size] end end class CollectionCommandWriter < CollectionWriter def initialize(collection) super(collection) end def send_write_command(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name=@name) if op_type == :insert argument = [doc_or_docs].flatten(1).compact elsif op_type == :update argument = [{:q => selector, :u => doc_or_docs, :multi => !!opts[:multi]}] argument.first.merge!(:upsert => opts[:upsert]) if opts[:upsert] elsif op_type == :delete argument = [{:q => selector, :limit => (opts[:limit] || 0)}] else raise 
ArgumentError, "Write operation type must be :insert, :update or :delete" end request = BSON::OrderedHash[op_type, collection_name, WRITE_COMMAND_ARG_KEY[op_type], argument] request.merge!(:writeConcern => write_concern, :ordered => !opts[:continue_on_error]) request.merge!(opts) instrument(op_type, :database => @db.name, :collection => collection_name, :selector => selector, :documents => doc_or_docs) do @db.command(request) end end def bulk_execute(ops, options, opts = {}) errors = [] write_concern_errors = [] exchanges = [] ops = (options[:ordered] == false) ? sort_by_first_sym(ops) : ops # sort by write-type ordered_group_by_first(ops).each do |op_type, documents| documents.collect! {|doc| {:d => @collection.pk_factory.create_pk(doc[:d]), :ord => doc[:ord]} } if op_type == :insert error_docs, batch_errors, batch_write_concern_errors, batch_exchanges = batch_write(op_type, documents, check_keys = false, opts.merge(:ordered => options[:ordered])) errors += batch_errors write_concern_errors += batch_write_concern_errors exchanges += batch_exchanges break if options[:ordered] && !batch_errors.empty? end [errors, write_concern_errors, exchanges] end private def batch_message_initialize(message, op_type, continue_on_error, write_concern) message.clear!.clear @bson_empty ||= BSON::BSON_CODER.serialize({}) message.put_binary(@bson_empty.to_s) message.unfinish!.array!(WRITE_COMMAND_ARG_KEY[op_type]) end def batch_message_append(message, serialized_doc, write_concern) message.push_doc!(serialized_doc) end def batch_message_send(message, op_type, batch_docs, write_concern, continue_on_error) message.finish! 
request = BSON::OrderedHash[op_type, @name, :bson, message] request.merge!(:writeConcern => write_concern, :ordered => !continue_on_error) instrument(:insert, :database => @db.name, :collection => @name, :documents => batch_docs) do @db.command(request) end end def batch_write_max_sizes(write_concern) [MongoClient::COMMAND_HEADROOM, MongoClient::APPEND_HEADROOM, MongoClient::SERIALIZE_HEADROOM].collect{|h| @connection.max_bson_size + h} end end end # File: lib/mongo/connection.rb # Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/connection/socket' require 'mongo/connection/node' require 'mongo/connection/pool' require 'mongo/connection/pool_manager' require 'mongo/connection/sharding_pool_manager' # File: lib/mongo/connection/node.rb # Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Node attr_accessor :host, :port, :address, :client, :socket, :last_state def initialize(client, host_port) @client = client @manager = @client.local_manager @host, @port = Support.normalize_seeds(host_port) @address = "#{@host}:#{@port}" @config = nil @socket = nil @node_mutex = Mutex.new end def eql?(other) (other.is_a?(Node) && @address == other.address) end alias :== :eql? def =~(other) if other.is_a?(String) h, p = Support.normalize_seeds(other) h == @host && p == @port else false end end def host_string address end def config connect unless connected? set_config unless @config || !connected? @config end def inspect "" end # Create a connection to the provided node, # and, if successful, return the socket. Otherwise, # return nil. def connect @node_mutex.synchronize do begin @socket = @client.socket_class.new(@host, @port, @client.op_timeout, @client.connect_timeout, @client.socket_opts) rescue ConnectionTimeoutError, OperationTimeout, ConnectionFailure, OperationFailure, SocketError, SystemCallError, IOError => ex @client.log(:debug, "Failed connection to #{host_string} with #{ex.class}, #{ex.message}.") close end end end # This should only be called within a mutex def close if @socket && !@socket.closed? @socket.close end @socket = nil @config = nil end def connected? @socket != nil && !@socket.closed? end def active? 
begin result = @client['admin'].command({:ping => 1}, :socket => @socket) rescue OperationFailure, SocketError, SystemCallError, IOError return nil end result['ok'] == 1 end # Get the configuration for the provided node as returned by the # ismaster command. Additionally, check that the replica set name # matches with the name provided. def set_config @node_mutex.synchronize do begin if @config @last_state = @config['ismaster'] ? :primary : :other end if @client.connect_timeout Timeout::timeout(@client.connect_timeout, OperationTimeout) do @config = @client['admin'].command({:ismaster => 1}, :socket => @socket) end else @config = @client['admin'].command({:ismaster => 1}, :socket => @socket) end update_max_sizes if @config['msg'] @client.log(:warn, "#{config['msg']}") end unless @client.mongos? check_set_membership(@config) check_set_name(@config) end rescue ConnectionFailure, OperationFailure, OperationTimeout, SocketError, SystemCallError, IOError => ex @client.log(:warn, "Attempted connection to node #{host_string} raised " + "#{ex.class}: #{ex.message}") # Socket may already be nil from issuing command close end end end # Return a list of replica set nodes from the config. # Note: this excludes arbiters. def node_list nodes = [] nodes += config['hosts'] if config['hosts'] nodes += config['passives'] if config['passives'] nodes += ["#{@host}:#{@port}"] if @client.mongos? nodes end def arbiters return [] unless config['arbiters'] config['arbiters'].map do |arbiter| Support.normalize_seeds(arbiter) end end def primary? config['ismaster'] == true || config['ismaster'] == 1 end def secondary? config['secondary'] == true || config['secondary'] == 1 end def tags config['tags'] || {} end def host_port [@host, @port] end def hash address.hash end def healthy? connected? 
&& config end def max_bson_size @max_bson_size || DEFAULT_MAX_BSON_SIZE end def max_message_size @max_message_size || max_bson_size * MESSAGE_SIZE_FACTOR end def max_wire_version @max_wire_version || 0 end def min_wire_version @min_wire_version || 0 end def wire_version_feature?(feature) min_wire_version <= feature && feature <= max_wire_version end def max_write_batch_size @max_write_batch_size || Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE end protected # Ensure that this node is a healthy member of a replica set. def check_set_membership(config) if !config.has_key?('hosts') message = "Will not connect to #{host_string} because it's not a member " + "of a replica set." raise ConnectionFailure, message elsif config['hosts'].length == 1 && !config['ismaster'] && !config['secondary'] message = "Attempting to connect to an unhealthy, single-node replica set." raise ConnectionFailure, message end end # Ensure that this node is part of a replica set of the expected name. def check_set_name(config) if @client.replica_set_name if !config['setName'] @client.log(:warn, "Could not verify replica set name for member #{host_string} " + "because ismaster does not return name in this version of MongoDB") elsif @client.replica_set_name != config['setName'] message = "Attempting to connect to replica set '#{config['setName']}' on member #{host_string} " + "but expected '#{@client.replica_set_name}'" raise ReplicaSetConnectionError, message end end end private def update_max_sizes @max_bson_size = config['maxBsonObjectSize'] || DEFAULT_MAX_BSON_SIZE @max_message_size = config['maxMessageSizeBytes'] || @max_bson_size * MESSAGE_SIZE_FACTOR @max_wire_version = config['maxWireVersion'] || 0 @min_wire_version = config['minWireVersion'] || 0 @max_write_batch_size = config['maxWriteBatchSize'] || Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE end end end ruby-mongo-1.10.0/lib/mongo/connection/pool.rb000066400000000000000000000226641233461006100212170ustar00rootroot00000000000000# 
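Node#wire_version_feature? above gates driver features on the wire-version range a node advertises in its ismaster reply. The same inclusive range test can be sketched standalone (plain Ruby; the free function below is a hypothetical illustration, not the driver's API):

```ruby
# A feature is identified by the wire version that introduced it; a node
# supports it only when that version falls inside the node's advertised
# [min_wire_version, max_wire_version] range, inclusive on both ends.
def wire_version_feature?(feature, min_wire_version, max_wire_version)
  min_wire_version <= feature && feature <= max_wire_version
end

# A node advertising the range 0..2 supports a feature introduced at wire
# version 2, but not one introduced at wire version 3.
wire_version_feature?(2, 0, 2) # => true
wire_version_feature?(3, 0, 2) # => false
```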
ruby-mongo-1.10.0/lib/mongo/connection/pool.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

module Mongo
  class Pool
    PING_ATTEMPTS  = 6
    MAX_PING_TIME  = 1_000_000
    PRUNE_INTERVAL = 10_000

    attr_accessor :host, :port, :address, :size, :timeout, :checked_out, :client, :node

    # Create a new pool of connections.
    def initialize(client, host, port, opts={})
      @client = client
      @host, @port = host, port

      # A Mongo::Node object.
      @node = opts[:node]

      # The string address
      @address = "#{@host}:#{@port}"

      # Pool size and timeout.
      @size    = opts.fetch(:size, 20)
      @timeout = opts.fetch(:timeout, 30)

      # Mutex for synchronizing pool access
      @connection_mutex = Mutex.new

      # Mutex for synchronizing pings
      @ping_mutex = Mutex.new

      # Condition variable for signal and wait
      @queue = ConditionVariable.new

      @sockets               = []
      @checked_out           = []
      @ping_time             = nil
      @last_ping             = nil
      @closed                = false
      @thread_ids_to_sockets = {}
      @checkout_counter      = 0
    end

    # Close this pool.
    #
    # @option opts [Boolean] :soft (false) If true,
    #   close only those sockets that are not checked out.
    def close(opts={})
      @connection_mutex.synchronize do
        if opts[:soft] && !@checked_out.empty?
          @closing = true
          close_sockets(@sockets - @checked_out)
        else
          close_sockets(@sockets)
          @closed = true
        end
        @node.close if @node
      end
      true
    end

    def tags
      @node.tags
    end

    def healthy?
      close if @sockets.all?(&:closed?)
      !closed? && node.healthy?
    end

    def closed?
      @closed
    end

    def up?
      !@closed
    end

    def inspect
      "#<Mongo::Pool:0x#{self.object_id.to_s} @host=#{@host} @port=#{@port} " +
        "#{@checked_out.size}/#{@size} sockets checked out>"
    end

    def host_string
      "#{@host}:#{@port}"
    end

    def host_port
      [@host, @port]
    end

    # Refresh ping time only if we haven't
    # checked within the last five minutes.
    def ping_time
      @ping_mutex.synchronize do
        if !@last_ping || (Time.now - @last_ping) > 300
          @ping_time = refresh_ping_time
          @last_ping = Time.now
        end
      end
      @ping_time
    end

    # Return the time it takes on average
    # to do a round-trip against this node.
    def refresh_ping_time
      trials = []
      PING_ATTEMPTS.times do
        t1 = Time.now
        if !self.ping
          return MAX_PING_TIME
        end
        trials << (Time.now - t1) * 1000
      end

      trials.sort!

      # Delete shortest and longest times
      trials.delete_at(trials.length-1)
      trials.delete_at(0)

      total = 0.0
      trials.each { |t| total += t }

      (total / trials.length).ceil
    end

    def ping
      begin
        return self.client['admin'].command({:ping => 1}, :socket => @node.socket, :timeout => MAX_PING_TIME)
      rescue ConnectionFailure, OperationFailure, SocketError, SystemCallError, IOError
        return false
      end
    end

    # Return a socket to the pool.
    def checkin(socket)
      @connection_mutex.synchronize do
        if @checked_out.delete(socket)
          @queue.broadcast
        else
          return false
        end
      end
      true
    end

    # Adds a new socket to the pool and checks it out.
    #
    # This method is called exclusively from #checkout;
    # therefore, it runs within a mutex.
    def checkout_new_socket
      begin
        socket = @client.socket_class.new(@host, @port, @client.op_timeout,
                                          @client.connect_timeout,
                                          @client.socket_opts)
        socket.pool = self
      rescue => ex
        socket.close if socket
        @node.close if @node
        raise ConnectionFailure, "Failed to connect to host #{@host} and port #{@port}: #{ex}"
      end

      # If any saved authentications exist, we want to apply those
      # when creating new sockets and process logouts.
      check_auths(socket)

      @sockets << socket
      @checked_out << socket
      @thread_ids_to_sockets[Thread.current.object_id] = socket
      socket
    end

    # If a user calls DB#authenticate, and several sockets exist,
    # then we need a way to apply the authentication on each socket.
    # So we store the apply_authentication method, and this will be
    # applied right before the next use of each socket.
    #
    # @deprecated This method has been replaced by Pool#check_auths (private)
    #   and it isn't necessary to ever invoke this method directly.
    def authenticate_existing
      @connection_mutex.synchronize do
        @sockets.each do |socket|
          check_auths(socket)
        end
      end
    end

    # Store the logout op for each existing socket to be applied before
    # the next use of each socket.
    #
    # @deprecated This method has been replaced by Pool#check_auths (private)
    #   and it isn't necessary to ever invoke this method directly.
    def logout_existing(database)
      @connection_mutex.synchronize do
        @sockets.each do |socket|
          check_auths(socket)
        end
      end
    end

    # Checks out the first available socket from the pool.
    #
    # If the pid has changed, remove the socket and check out
    # new one.
    #
    # This method is called exclusively from #checkout;
    # therefore, it runs within a mutex.
    def checkout_existing_socket(socket=nil)
      if !socket
        available = @sockets - @checked_out
        socket = available[rand(available.length)]
      end

      if socket.pid != Process.pid
        @sockets.delete(socket)
        if socket
          socket.close unless socket.closed?
        end
        checkout_new_socket
      else
        @checked_out << socket
        @thread_ids_to_sockets[Thread.current.object_id] = socket
        socket
      end
    end

    def prune_threads
      live_threads = Thread.list.map(&:object_id)
      @thread_ids_to_sockets.reject! do |key, value|
        !live_threads.include?(key)
      end
    end

    def check_prune
      if @checkout_counter > PRUNE_INTERVAL
        @checkout_counter = 0
        prune_threads
      else
        @checkout_counter += 1
      end
    end

    # Check out an existing socket or create a new socket if the maximum
    # pool size has not been exceeded. Otherwise, wait for the next
    # available socket.
    def checkout
      @client.connect if !@client.connected?
      start_time = Time.now
      loop do
        if (Time.now - start_time) > @timeout
          raise ConnectionTimeoutError, "could not obtain connection within " +
            "#{@timeout} seconds. The max pool size is currently #{@size}; " +
            "consider increasing the pool size or timeout."
        end

        @connection_mutex.synchronize do
          check_prune
          socket = nil
          if socket_for_thread = @thread_ids_to_sockets[Thread.current.object_id]
            if !@checked_out.include?(socket_for_thread)
              socket = checkout_existing_socket(socket_for_thread)
            end
          else
            if @sockets.size < @size
              socket = checkout_new_socket
            elsif @checked_out.size < @sockets.size
              socket = checkout_existing_socket
            end
          end

          if socket
            check_auths(socket)

            if socket.closed?
              @checked_out.delete(socket)
              @sockets.delete(socket)
              @thread_ids_to_sockets.delete(Thread.current.object_id)
              socket = checkout_new_socket
            end

            return socket
          else
            # Otherwise, wait
            @queue.wait(@connection_mutex)
          end
        end
      end
    end

    private

    # Helper method to handle keeping track of auths/logouts for sockets.
    #
    # @param socket [Socket] The socket instance to be checked.
    #
    # @return [Socket] The authenticated socket instance.
    def check_auths(socket)
      # find and handle logouts
      (socket.auths - @client.auths).each do |auth|
        @client.issue_logout(auth[:source], :socket => socket)
        socket.auths.delete(auth)
      end

      # find and handle new auths
      (@client.auths - socket.auths).each do |auth|
        @client.issue_authentication(auth, :socket => socket)
        socket.auths.add(auth)
      end
      socket
    end

    def close_sockets(sockets)
      sockets.each do |socket|
        @sockets.delete(socket)
        begin
          socket.close unless socket.closed?
        rescue IOError => ex
          warn "IOError when attempting to close socket connected to #{@host}:#{@port}: #{ex.inspect}"
        end
      end
    end
  end
end
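The checkout/checkin protocol in Mongo::Pool above is a classic mutex-plus-ConditionVariable pattern: a mutex guards the free list and a condition variable parks callers when every socket is checked out, until Pool#checkin broadcasts. A minimal self-contained sketch of that pattern (TinyPool and its string "sockets" are hypothetical stand-ins for the driver's real socket objects):

```ruby
require 'thread'

# A toy pool illustrating the wait/broadcast handoff used by Mongo::Pool.
class TinyPool
  def initialize(size)
    @mutex       = Mutex.new
    @queue       = ConditionVariable.new
    @free        = (1..size).map { |i| "socket-#{i}" }
    @checked_out = []
  end

  def checkout
    @mutex.synchronize do
      # Park until a checkin signals; the while-loop guards against
      # spurious wakeups, just as Pool#checkout loops around @queue.wait.
      @queue.wait(@mutex) while @free.empty?
      socket = @free.shift
      @checked_out << socket
      socket
    end
  end

  def checkin(socket)
    @mutex.synchronize do
      @checked_out.delete(socket)
      @free << socket
      @queue.broadcast # wake all waiters, as Pool#checkin does
    end
  end
end

pool = TinyPool.new(2)
a = pool.checkout
b = pool.checkout
t = Thread.new { pool.checkout } # blocks: the pool is exhausted
pool.checkin(a)                  # frees a slot and wakes the waiter
c = t.value                      # the waiter receives the returned socket
```

The driver uses `broadcast` rather than `signal` so every parked thread re-checks availability; losers simply wait again on the next loop iteration.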
ruby-mongo-1.10.0/lib/mongo/connection/pool_manager.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

module Mongo
  class PoolManager
    include ThreadLocalVariableManager

    attr_reader :client,
                :primary,
                :primary_pool,
                :seeds,
                :max_bson_size,
                :max_message_size,
                :max_wire_version,
                :min_wire_version

    # Create a new set of connection pools.
    #
    # The pool manager will by default use the original seed list passed
    # to the connection objects, accessible via connection.seeds. In addition,
    # the user may pass an additional list of seeds nodes discovered in real
    # time. The union of these lists will be used when attempting to connect,
    # with the newly-discovered nodes being used first.
    def initialize(client, seeds=[])
      @client = client
      @seeds  = seeds

      @pools            = Set.new
      @primary          = nil
      @primary_pool     = nil
      @secondaries      = Set.new
      @secondary_pools  = []
      @hosts            = Set.new
      @members          = Set.new
      @refresh_required = false
      @max_bson_size    = DEFAULT_MAX_BSON_SIZE
      @max_message_size = @max_bson_size * MESSAGE_SIZE_FACTOR
      @max_wire_version = 0
      @min_wire_version = 0
      @connect_mutex    = Mutex.new
      thread_local[:locks][:connecting_manager] = false
    end

    def inspect
      "<Mongo::PoolManager:0x#{self.object_id.to_s} @seeds=#{@seeds}>"
    end

    def connect
      @connect_mutex.synchronize do
        begin
          thread_local[:locks][:connecting_manager] = true
          @refresh_required = false
          disconnect_old_members
          connect_to_members
          initialize_pools(@members)
          update_max_sizes
          @seeds = discovered_seeds
        ensure
          thread_local[:locks][:connecting_manager] = false
        end
      end
    end

    def refresh!(additional_seeds)
      @seeds |= additional_seeds
      connect
    end

    # We're healthy if all members are pingable and if the view
    # of the replica set returned by isMaster is equivalent
    # to our view. If any of these isn't the case,
    # set @refresh_required to true, and return.
    def check_connection_health
      return if thread_local[:locks][:connecting_manager]
      members = copy_members

      begin
        seed = get_valid_seed_node
      rescue ConnectionFailure
        @refresh_required = true
        return
      end

      unless current_config = seed.config
        @refresh_required = true
        seed.close
        return
      end

      if current_config['hosts'].length != members.length
        @refresh_required = true
        seed.close
        return
      end

      current_config['hosts'].each do |host|
        member = members.detect do |m|
          m.address == host
        end

        if member && validate_existing_member(current_config, member)
          next
        else
          @refresh_required = true
          seed.close
          return
        end
      end

      seed.close
    end

    # The replica set connection should initiate a full refresh.
    def refresh_required?
      @refresh_required
    end

    def closed?
      pools.all? { |pool| pool.closed? }
    end

    def close(opts={})
      begin
        pools.each { |pool| pool.close(opts) }
      rescue ConnectionFailure
      end
    end

    def read
      read_pool.host_port
    end

    def hosts
      @connect_mutex.synchronize do
        @hosts.nil? ? nil : @hosts.clone
      end
    end

    def pools
      @connect_mutex.synchronize do
        @pools.nil? ? nil : @pools.clone
      end
    end

    def secondaries
      @connect_mutex.synchronize do
        @secondaries.nil? ? nil : @secondaries.clone
      end
    end

    def secondary_pools
      @connect_mutex.synchronize do
        @secondary_pools.nil? ? nil : @secondary_pools.clone
      end
    end

    def arbiters
      @connect_mutex.synchronize do
        @arbiters.nil? ? nil : @arbiters.clone
      end
    end

    def state_snapshot
      @connect_mutex.synchronize do
        { :pools           => @pools.nil? ? nil : @pools.clone,
          :secondaries     => @secondaries.nil? ? nil : @secondaries.clone,
          :secondary_pools => @secondary_pools.nil? ? nil : @secondary_pools.clone,
          :hosts           => @hosts.nil? ? nil : @hosts.clone,
          :arbiters        => @arbiters.nil? ? nil : @arbiters.clone }
      end
    end

    private

    def update_max_sizes
      unless @members.size == 0
        @max_bson_size    = @members.map(&:max_bson_size).min
        @max_message_size = @members.map(&:max_message_size).min
        @max_wire_version = @members.map(&:max_wire_version).min
        @min_wire_version = @members.map(&:min_wire_version).max
      end
    end

    def validate_existing_member(current_config, member)
      if current_config['ismaster'] && member.last_state != :primary
        return false
      elsif member.last_state != :other
        return false
      end
      return true
    end

    # For any existing members, close and remove any that are unhealthy or already closed.
    def disconnect_old_members
      @pools.reject!   { |pool| !pool.healthy? }
      @members.reject! { |node| !node.healthy? }
    end

    # Connect to each member of the replica set
    # as reported by the given seed node.
    def connect_to_members
      seed = get_valid_seed_node

      seed.node_list.each do |host|
        if existing = @members.detect { |node| node =~ host }
          if existing.healthy?
            # Refresh this node's configuration
            existing.set_config
            # If we are unhealthy after refreshing our config, drop from the set.
            if !existing.healthy?
              @members.delete(existing)
            else
              next
            end
          else
            existing.close
            @members.delete(existing)
          end
        end

        node = Mongo::Node.new(self.client, host)
        node.connect
        @members << node if node.healthy?
      end
      seed.close

      if @members.empty?
        raise ConnectionFailure, "Failed to connect to any given member."
      end
    end

    # Initialize the connection pools for the primary and secondary nodes.
    def initialize_pools(members)
      @primary_pool = nil
      @primary      = nil
      @secondaries.clear
      @secondary_pools.clear
      @hosts.clear

      members.each do |member|
        member.last_state = nil
        @hosts << member.host_string
        if member.primary?
          assign_primary(member)
        elsif member.secondary?
          # member could be not primary but secondary still is false
          assign_secondary(member)
        end
      end

      @arbiters = members.first.arbiters
    end

    def assign_primary(member)
      member.last_state = :primary
      @primary = member.host_port
      if existing = @pools.detect { |pool| pool.node == member }
        @primary_pool = existing
      else
        @primary_pool = Pool.new(self.client,
                                 member.host,
                                 member.port,
                                 :size    => self.client.pool_size,
                                 :timeout => self.client.pool_timeout,
                                 :node    => member)
        @pools << @primary_pool
      end
    end

    def assign_secondary(member)
      member.last_state = :secondary
      @secondaries << member.host_port
      if existing = @pools.detect { |pool| pool.node == member }
        @secondary_pools << existing
      else
        pool = Pool.new(self.client,
                        member.host,
                        member.port,
                        :size    => self.client.pool_size,
                        :timeout => self.client.pool_timeout,
                        :node    => member)
        @secondary_pools << pool
        @pools << pool
      end
    end

    # Iterate through the list of provided seed
    # nodes until we've gotten a response from the
    # replica set we're trying to connect to.
    #
    # If we don't get a response, raise an exception.
    def get_valid_seed_node
      @seeds.each do |seed|
        node = Mongo::Node.new(self.client, seed)
        node.connect
        return node if node.healthy?
      end

      raise ConnectionFailure, "Cannot connect to a replica set using seeds " +
        "#{@seeds.map { |s| "#{s[0]}:#{s[1]}" }.join(', ')}"
    end

    def discovered_seeds
      @members.map(&:host_port)
    end

    def copy_members
      members = Set.new
      @connect_mutex.synchronize do
        @members.map do |m|
          members << m.dup
        end
      end
      members
    end
  end
end
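PoolManager#update_max_sizes above reconciles the per-member limits: it takes the smallest maximums (so every message fits on every member) but the largest minimum wire version (so the negotiated feature range is valid for all members). A standalone sketch of that aggregation, where `Member` is a hypothetical stand-in for Mongo::Node:

```ruby
# Only the four limit accessors of a replica-set member matter here.
Member = Struct.new(:max_bson_size, :max_message_size, :max_wire_version, :min_wire_version)

# Mirror of the min/min/min/max aggregation in PoolManager#update_max_sizes.
def aggregate_limits(members)
  {
    :max_bson_size    => members.map(&:max_bson_size).min,
    :max_message_size => members.map(&:max_message_size).min,
    :max_wire_version => members.map(&:max_wire_version).min,
    :min_wire_version => members.map(&:min_wire_version).max
  }
end

# A mixed-version set: the stricter member wins on every axis.
members = [Member.new(16_777_216, 48_000_000, 2, 0),
           Member.new(4_194_304,  16_000_000, 3, 1)]
limits = aggregate_limits(members)
# limits[:max_bson_size]    => 4_194_304  (smallest max)
# limits[:min_wire_version] => 1          (largest min)
```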
ruby-mongo-1.10.0/lib/mongo/connection/sharding_pool_manager.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

module Mongo
  class ShardingPoolManager < PoolManager

    def inspect
      "<Mongo::ShardingPoolManager:0x#{self.object_id.to_s} @seeds=#{@seeds}>"
    end

    # "Best" should be the member with the fastest ping time
    # but connect/connect_to_members reinitializes @members
    def best(members)
      Array(members.first)
    end

    def connect
      @connect_mutex.synchronize do
        begin
          thread_local[:locks][:connecting_manager] = true
          @refresh_required = false
          disconnect_old_members
          connect_to_members
          initialize_pools best(@members)
          update_max_sizes
          @seeds = discovered_seeds
        ensure
          thread_local[:locks][:connecting_manager] = false
        end
      end
    end

    # Checks that each node is healthy (via check_is_master) and that each
    # node is in fact a mongos. If either criteria are not true, a refresh is
    # set to be triggered and close() is called on the node.
    #
    # @return [Boolean] indicating if a refresh is required.
    def check_connection_health
      @refresh_required = false
      @members.each do |member|
        begin
          config = @client.check_is_master([member.host, member.port])
          unless config && config.has_key?('msg')
            @refresh_required = true
            member.close
          end
        rescue OperationTimeout
          @refresh_required = true
          member.close
        end
        break if @refresh_required
      end
      @refresh_required
    end
  end
end

ruby-mongo-1.10.0/lib/mongo/connection/socket.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

require 'mongo/connection/socket/socket_util.rb'
require 'mongo/connection/socket/ssl_socket.rb'
require 'mongo/connection/socket/tcp_socket.rb'
require 'mongo/connection/socket/unix_socket.rb'

ruby-mongo-1.10.0/lib/mongo/connection/socket/socket_util.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

require 'socket'
require 'timeout'

module SocketUtil

  attr_accessor :pool, :pid, :auths

  def checkout
    @pool.checkout if @pool
  end

  def checkin
    @pool.checkin(self) if @pool
  end

  def close
    @socket.close unless closed?
  end

  def closed?
    @socket.closed?
  end
end
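Pool#refresh_ping_time above computes a trimmed mean: it samples PING_ATTEMPTS round-trips, discards the fastest and slowest samples, and averages the rest so a single outlier does not skew node selection. A standalone sketch of that computation (the helper name is hypothetical):

```ruby
# Trimmed mean as done in Pool#refresh_ping_time: sort the sampled
# round-trip times (ms), drop the extremes, average the remainder,
# and round up with #ceil as the driver does.
def trimmed_mean_ping(trials_ms)
  trials = trials_ms.sort
  trials = trials[1..-2] # delete shortest and longest times
  total = trials.reduce(0.0) { |sum, t| sum + t }
  (total / trials.length).ceil
end

# One slow outlier (100ms) barely moves the result:
trimmed_mean_ping([5.0, 1.0, 100.0, 7.0, 6.0, 9.0]) # => 7
```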
ruby-mongo-1.10.0/lib/mongo/connection/socket/ssl_socket.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

require 'openssl'

module Mongo

  # A basic wrapper over Ruby's SSLSocket that initiates
  # a TCP connection over SSL and then provides a basic interface
  # mirroring Ruby's TCPSocket, vis., TCPSocket#send and TCPSocket#read.
  class SSLSocket
    include SocketUtil

    def initialize(host, port, op_timeout=nil, connect_timeout=nil, opts={})
      @op_timeout      = op_timeout
      @connect_timeout = connect_timeout
      @pid             = Process.pid
      @auths           = Set.new

      @tcp_socket = ::TCPSocket.new(host, port)
      @tcp_socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)

      @context = OpenSSL::SSL::SSLContext.new

      if opts[:cert]
        @context.cert = OpenSSL::X509::Certificate.new(File.open(opts[:cert]))
      end

      if opts[:key]
        if opts[:key_pass_phrase]
          @context.key = OpenSSL::PKey::RSA.new(File.open(opts[:key]), opts[:key_pass_phrase])
        else
          @context.key = OpenSSL::PKey::RSA.new(File.open(opts[:key]))
        end
      end

      if opts[:verify]
        @context.ca_file     = opts[:ca_cert]
        @context.verify_mode = OpenSSL::SSL::VERIFY_PEER
      end

      begin
        @socket = OpenSSL::SSL::SSLSocket.new(@tcp_socket, @context)
        @socket.sync_close = true
        connect
      rescue OpenSSL::SSL::SSLError
        raise ConnectionFailure, "SSL handshake failed. MongoDB may " +
          "not be configured with SSL support."
      end

      if opts[:verify]
        unless OpenSSL::SSL.verify_certificate_identity(@socket.peer_cert, host)
          raise ConnectionFailure, "SSL handshake failed. Hostname mismatch."
        end
      end

      self
    end

    def connect
      if @connect_timeout
        Timeout::timeout(@connect_timeout, ConnectionTimeoutError) do
          @socket.connect
        end
      else
        @socket.connect
      end
    end

    def send(data)
      @socket.syswrite(data)
    end

    def read(length, buffer)
      if @op_timeout
        Timeout::timeout(@op_timeout, OperationTimeout) do
          @socket.sysread(length, buffer)
        end
      else
        @socket.sysread(length, buffer)
      end
    end
  end
end

ruby-mongo-1.10.0/lib/mongo/connection/socket/tcp_socket.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

module Mongo

  # Wrapper class for Socket
  #
  # Emulates TCPSocket with operation and connection timeout
  # sans Timeout::timeout
  #
  class TCPSocket
    include SocketUtil

    def initialize(host, port, op_timeout=nil, connect_timeout=nil, opts={})
      @op_timeout      = op_timeout
      @connect_timeout = connect_timeout
      @pid             = Process.pid
      @auths           = Set.new

      @socket = handle_connect(host, port)
    end

    def handle_connect(host, port)
      error = nil
      # Following python's lead (see PYTHON-356)
      family = host == 'localhost' ? Socket::AF_INET : Socket::AF_UNSPEC
      addr_info = Socket.getaddrinfo(host, nil, family, Socket::SOCK_STREAM)
      addr_info.each do |info|
        begin
          sock = Socket.new(info[4], Socket::SOCK_STREAM, 0)
          sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
          socket_address = Socket.pack_sockaddr_in(port, info[3])
          connect(sock, socket_address)
          return sock
        rescue IOError, SystemCallError => e
          error = e
          sock.close if sock
        end
      end
      raise error
    end

    def connect(socket, socket_address)
      if @connect_timeout
        Timeout::timeout(@connect_timeout, ConnectionTimeoutError) do
          socket.connect(socket_address)
        end
      else
        socket.connect(socket_address)
      end
    end

    def send(data)
      @socket.write(data)
    end

    def read(maxlen, buffer)
      # Block on data to read for @op_timeout seconds
      begin
        ready = IO.select([@socket], nil, [@socket], @op_timeout)
        unless ready
          raise OperationTimeout
        end
      rescue IOError
        raise ConnectionFailure
      end

      # Read data from socket
      begin
        @socket.sysread(maxlen, buffer)
      rescue SystemCallError, IOError => ex
        raise ConnectionFailure, ex
      end
    end
  end
end
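TCPSocket#handle_connect above resolves the host with Socket.getaddrinfo, pinning 'localhost' to IPv4, and then packs each candidate address into a sockaddr struct for Socket#connect. A standalone sketch of both steps (`family_for` is a hypothetical helper isolating the driver's ternary, and the example hostname is made up):

```ruby
require 'socket'

# 'localhost' is pinned to IPv4 (AF_INET), following PYTHON-356; any other
# host leaves the family unspecified so getaddrinfo may return both IPv4
# and IPv6 candidates.
def family_for(host)
  host == 'localhost' ? Socket::AF_INET : Socket::AF_UNSPEC
end

# pack_sockaddr_in builds the sockaddr struct handed to Socket#connect
# for each getaddrinfo candidate; unpack reverses it.
sockaddr   = Socket.pack_sockaddr_in(27017, '127.0.0.1')
port, addr = Socket.unpack_sockaddr_in(sockaddr)
# port => 27017, addr => "127.0.0.1"
```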
ruby-mongo-1.10.0/lib/mongo/connection/socket/unix_socket.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

module Mongo
  attr_accessor :auths

  # Wrapper class for Socket
  #
  # Emulates UNIXSocket with operation and connection timeout
  # sans Timeout::timeout
  #
  class UNIXSocket < TCPSocket
    def initialize(socket_path, port=:socket, op_timeout=nil, connect_timeout=nil, opts={})
      @op_timeout      = op_timeout
      @connect_timeout = connect_timeout
      @pid             = Process.pid
      @auths           = Set.new

      @address = socket_path
      @port    = :socket # purposely override input

      @socket_address = Socket.pack_sockaddr_un(@address)
      @socket = Socket.new(Socket::AF_UNIX, Socket::SOCK_STREAM, 0)
      connect
    end
  end
end

ruby-mongo-1.10.0/lib/mongo/cursor.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

module Mongo

  # A cursor over query results. Returned objects are hashes.
  class Cursor
    include Enumerable
    include Mongo::Constants
    include Mongo::Conversions
    include Mongo::Logging
    include Mongo::ReadPreference

    attr_reader :collection, :selector, :fields,
                :order, :hint, :snapshot, :timeout,
                :full_collection_name, :transformer,
                :options, :cursor_id, :show_disk_loc,
                :comment, :compile_regex, :read,
                :tag_sets, :acceptable_latency

    # Create a new cursor.
    #
    # Note: cursors are created when executing queries using [Collection#find] and other
    # similar methods. Application developers shouldn't have to create cursors manually.
    #
    # @return [Cursor]
    def initialize(collection, opts={})
      opts = opts.dup
      @cursor_id  = opts.delete(:cursor_id)
      @db         = collection.db
      @collection = collection
      @connection = @db.connection
      @logger     = @connection.logger

      # Query selector
      @selector = opts.delete(:selector) || {}

      # Query pre-serialized bson to append
      @bson = @selector.delete(:bson)

      # Special operators that form part of $query
      @order         = opts.delete(:order)
      @explain       = opts.delete(:explain)
      @hint          = opts.delete(:hint)
      @snapshot      = opts.delete(:snapshot)
      @max_scan      = opts.delete(:max_scan)
      @return_key    = opts.delete(:return_key)
      @show_disk_loc = opts.delete(:show_disk_loc)
      @comment       = opts.delete(:comment)
      @compile_regex = opts.key?(:compile_regex) ? opts.delete(:compile_regex) : true

      # Wire-protocol settings
      @fields   = convert_fields_for_query(opts.delete(:fields))
      @skip     = opts.delete(:skip) || 0
      @limit    = opts.delete(:limit) || 0
      @tailable = opts.delete(:tailable)
      @timeout  = opts.key?(:timeout) ? opts.delete(:timeout) : true
      @options  = 0

      # Use this socket for the query
      @socket = opts.delete(:socket)
      @pool   = opts.delete(:pool)

      @closed    = false
      @query_run = false

      @transformer        = opts.delete(:transformer)
      @read               = opts.delete(:read) || @collection.read
      Mongo::ReadPreference::validate(@read)
      @tag_sets           = opts.delete(:tag_sets) || @collection.tag_sets
      @acceptable_latency = opts.delete(:acceptable_latency) || @collection.acceptable_latency

      batch_size(opts.delete(:batch_size) || 0)

      @full_collection_name = "#{@collection.db.name}.#{@collection.name}"
      @cache    = opts.delete(:first_batch) || []
      @returned = 0

      if(!@timeout)
        add_option(OP_QUERY_NO_CURSOR_TIMEOUT)
      end
      if(@read != :primary)
        add_option(OP_QUERY_SLAVE_OK)
      end
      if(@tailable)
        add_option(OP_QUERY_TAILABLE)
      end

      # If a cursor_id is provided, this is a cursor for a command
      if @cursor_id
        @command_cursor = true
        @query_run      = true
      end

      if @collection.name =~ /^\$cmd/ || @collection.name =~ /^system/
        @command = true
      else
        @command = false
      end

      @opts = opts
    end

    # Guess whether the cursor is alive on the server.
    #
    # Note that this method only checks whether we have
    # a cursor id. The cursor may still have timed out
    # on the server. This will be indicated in the next
    # call to Cursor#next.
    #
    # @return [Boolean]
    def alive?
      @cursor_id && @cursor_id != 0
    end

    # Get the next document specified by the cursor options.
    #
    # @return [Hash, Nil] the next document or Nil if no documents remain.
    def next
      if @cache.length == 0
        if @query_run && exhaust?
          close
          return nil
        else
          refresh
        end
      end
      doc = @cache.shift

      if doc && (err = doc['errmsg'] || doc['$err']) # assignment
        code = doc['code']

        # If the server has stopped being the master (e.g., it's one of a
        # pair but it has died or something like that) then we close that
        # connection. The next request will re-open on master server.
        if err.include?("not master")
          @connection.close
          raise ConnectionFailure.new(err, code, doc)
        end

        # Handle server side operation execution timeout
        if code == 50
          raise ExecutionTimeout.new(err, code, doc)
        end

        raise OperationFailure.new(err, code, doc)
      elsif doc && (write_concern_error = doc['writeConcernError']) # assignment
        raise WriteConcernError.new(write_concern_error['errmsg'], write_concern_error['code'], doc)
      end

      if @transformer.nil?
        doc
      else
        @transformer.call(doc) if doc
      end
    end
    alias :next_document :next

    # Reset this cursor on the server. Cursor options, such as the
    # query string and the values for skip and limit, are preserved.
    def rewind!
      check_command_cursor
      close
      @cache.clear
      @cursor_id  = nil
      @closed     = false
      @query_run  = false
      @n_received = nil
      true
    end

    # Determine whether this cursor has any remaining results.
    #
    # @return [Boolean]
    def has_next?
      num_remaining > 0
    end

    # Get the size of the result set for this query.
    #
    # @param [Boolean] skip_and_limit whether or not to take skip or limit into account.
    #
    # @return [Integer] the number of objects in the result set for this query.
    #
    # @raise [OperationFailure] on a database error.
    def count(skip_and_limit = false)
      check_command_cursor
      command = BSON::OrderedHash["count", @collection.name, "query", @selector]

      if skip_and_limit
        command.merge!(BSON::OrderedHash["limit", @limit]) if @limit != 0
        command.merge!(BSON::OrderedHash["skip", @skip]) if @skip != 0
      end

      command.merge!(BSON::OrderedHash["fields", @fields])

      response = @db.command(command, :read => @read, :comment => @comment)
      return response['n'].to_i if Mongo::Support.ok?(response)
      return 0 if response['errmsg'] == "ns missing"
      raise OperationFailure.new("Count failed: #{response['errmsg']}",
                                 response['code'], response)
    end

    # Sort this cursor's results.
    #
    # This method overrides any sort order specified in the Collection#find
    # method, and only the last sort applied has an effect.
    #
    # @param [Symbol, Array, Hash, OrderedHash] order either 1) a key to sort by 2)
    #   an array of [key, direction] pairs to sort by or 3) a hash of
    #   field => direction pairs to sort by. Direction should be specified as
    #   Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING
    #   (or :descending / :desc)
    #
    # @raise [InvalidOperation] if this cursor has already been used.
    #
    # @raise [InvalidSortValueError] if the specified order is invalid.
    def sort(order, direction=nil)
      check_modifiable
      order = [[order, direction]] unless direction.nil?

      @order = order
      self
    end

    # Limit the number of results to be returned by this cursor.
    #
    # This method overrides any limit specified in the Collection#find method,
    # and only the last limit applied has an effect.
    #
    # @return [Integer] the current number_to_return if no parameter is given.
    #
    # @raise [InvalidOperation] if this cursor has already been used.
    def limit(number_to_return=nil)
      return @limit unless number_to_return
      check_modifiable

      if (number_to_return != 0) && exhaust?
        raise MongoArgumentError, "Limit is incompatible with exhaust option."
      end

      @limit = number_to_return
      self
    end

    # Skips the first +number_to_skip+ results of this cursor.
    # Returns the current number_to_skip if no parameter is given.
    #
    # This method overrides any skip specified in the Collection#find method,
    # and only the last skip applied has an effect.
    #
    # @return [Integer]
    #
    # @raise [InvalidOperation] if this cursor has already been used.
    def skip(number_to_skip=nil)
      return @skip unless number_to_skip
      check_modifiable

      @skip = number_to_skip
      self
    end

    # Instruct the server to abort queries after they exceed the specified
    # wall-clock execution time.
    #
    # A query that completes in under its time limit will "roll over"
    # remaining time to the first getmore op (which will then "roll over"
    # its remaining time to the second getmore op and so on, until the
    # time limit is hit).
    #
    # Cursors returned by successful time-limited queries will still obey
    # the default cursor idle timeout (unless the "no cursor idle timeout"
    # flag has been set).
    #
    # @note This will only have an effect in MongoDB 2.5+
    #
    # @param max_time_ms [Fixnum] max execution time (in milliseconds)
    #
    # @return [Fixnum, Cursor] either the current max_time_ms or cursor
    def max_time_ms(max_time_ms=nil)
      return @max_time_ms unless max_time_ms
      check_modifiable

      @max_time_ms = max_time_ms
      self
    end

    # Set the batch size for server responses.
    #
    # Note that the batch size will take effect only on queries
    # where the number to be returned is greater than 100.
    #
    # This cannot override MongoDB's limit on the amount of data it will
    # return to the client. Depending on server version this can be 4-16mb.
    #
    # @param [Integer] size either 0 or some integer greater than 1. If 0,
    #   the server will determine the batch size.
    #
    # @return [Cursor]
    def batch_size(size=nil)
      return @batch_size unless size
      check_modifiable
      if size < 0 || size == 1
        raise ArgumentError, "Invalid value for batch_size #{size}; must be 0 or > 1."
      else
        @batch_size = @limit != 0 && size > @limit ? @limit : size
      end
      self
    end

    # Iterate over each document in this cursor, yielding it to the given
    # block, if provided.
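The validation and clamping done by #batch_size above can be replicated standalone. A minimal sketch, assuming only the rules visible in that method (0 means server-chosen, 1 and negatives are invalid, and a non-zero limit caps the batch size); the function name is illustrative.

```ruby
# Standalone sketch of Cursor#batch_size's rules: 0 lets the server
# choose, 1 and negative values are rejected, and a size larger than
# a non-zero limit is clamped down to that limit.
def effective_batch_size(size, limit)
  raise ArgumentError, "must be 0 or > 1" if size < 0 || size == 1
  (limit != 0 && size > limit) ? limit : size
end
```

Clamping to the limit avoids asking the server for more documents than the cursor will ever return.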
    # An Enumerator is returned if no block is given.
    #
    # Iterating over an entire cursor will close it.
    #
    # @yield passes each document to a block for processing.
    #
    # @example if 'comments' represents a collection of comments:
    #   comments.find.each do |doc|
    #     puts doc['user']
    #   end
    def each
      if block_given? || !defined?(Enumerator)
        while doc = self.next
          yield doc
        end
      else
        Enumerator.new do |yielder|
          while doc = self.next
            yielder.yield doc
          end
        end
      end
    end

    # Receive all the documents from this cursor as an array of hashes.
    #
    # Notes:
    #
    # If you've already started iterating over the cursor, the array returned
    # by this method contains only the remaining documents. See Cursor#rewind! if you
    # need to reset the cursor.
    #
    # Use of this method is discouraged - in most cases, it's much more
    # efficient to retrieve documents as you need them by iterating over the cursor.
    #
    # @return [Array] an array of documents.
    def to_a
      super
    end

    # Get the explain plan for this cursor.
    #
    # @return [Hash] a document containing the explain plan for this cursor.
    def explain
      check_command_cursor
      c = Cursor.new(@collection,
        query_options_hash.merge(:limit => -@limit.abs, :explain => true))
      explanation = c.next_document
      c.close

      explanation
    end

    # Close the cursor.
    #
    # Note: if a cursor is read until exhausted (read until Mongo::Constants::OP_QUERY or
    # Mongo::Constants::OP_GETMORE returns zero for the cursor id), there is no need to
    # close it manually.
    #
    # Note also: Collection#find takes an optional block argument which can be used to
    # ensure that your cursors get closed.
    #
    # @return [True]
    def close
      if @cursor_id && @cursor_id != 0
        message = BSON::ByteBuffer.new([0, 0, 0, 0])
        message.put_int(1)
        message.put_long(@cursor_id)
        log(:debug, "Cursor#close #{@cursor_id}")
        @connection.send_message(
          Mongo::Constants::OP_KILL_CURSORS,
          message,
          :pool => @pool
        )
      end
      @cursor_id = 0
      @closed    = true
    end

    # Is this cursor closed?
    #
    # @return [Boolean]
    def closed?
      @closed
    end

    # Returns an integer indicating which query options have been selected.
    #
    # @return [Integer]
    #
    # @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY
    #   The MongoDB wire protocol.
    def query_opts
      warn "The method Cursor#query_opts has been deprecated " +
        "and will be removed in v2.0. Use Cursor#options instead."
      @options
    end

    # Add an option to the query options bitfield.
    #
    # @param opt a valid query option
    #
    # @raise InvalidOperation if this method is run after the cursor has been
    #   iterated for the first time.
    #
    # @return [Integer] the current value of the options bitfield for this cursor.
    #
    # @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY
    def add_option(opt)
      check_modifiable

      if exhaust?(opt)
        if @limit != 0
          raise MongoArgumentError, "Exhaust is incompatible with limit."
        elsif @connection.mongos?
          raise MongoArgumentError, "Exhaust is incompatible with mongos."
        end
      end

      @options |= opt
      @options
    end

    # Remove an option from the query options bitfield.
    #
    # @param opt a valid query option
    #
    # @raise InvalidOperation if this method is run after the cursor has been
    #   iterated for the first time.
    #
    # @return [Integer] the current value of the options bitfield for this cursor.
    #
    # @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY
    def remove_option(opt)
      check_modifiable
      @options &= ~opt

      @options
    end

    # Get the query options for this Cursor.
    #
    # @return [Hash]
    def query_options_hash
      BSON::OrderedHash[
        :selector => @selector,
        :fields   => @fields,
        :skip     => @skip,
        :limit    => @limit,
        :order    => @order,
        :hint     => @hint,
        :snapshot => @snapshot,
        :timeout  => @timeout,
        :max_scan => @max_scan,
        :return_key    => @return_key,
        :show_disk_loc => @show_disk_loc,
        :comment  => @comment ]
    end

    # Clean output for inspect.
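The add_option/remove_option pair above is plain bitfield arithmetic over the OP_QUERY wire-protocol flags. A standalone sketch: the flag values below are the documented OP_QUERY bits (tailable = 2, slave-ok = 4, no-cursor-timeout = 16, exhaust = 64), while the variable names are illustrative.

```ruby
# Standalone sketch of the query-options bitfield manipulated by
# add_option, remove_option and exhaust?. Values are OP_QUERY wire bits.
OP_QUERY_TAILABLE          = 2
OP_QUERY_SLAVE_OK          = 4
OP_QUERY_NO_CURSOR_TIMEOUT = 16
OP_QUERY_EXHAUST           = 64

options = 0
options |= OP_QUERY_SLAVE_OK           # add_option: set a flag
options |= OP_QUERY_NO_CURSOR_TIMEOUT
options &= ~OP_QUERY_SLAVE_OK          # remove_option: clear it again

# exhaust? tests a single bit without disturbing the others
exhaust = !(options & OP_QUERY_EXHAUST).zero?
```

Because each flag occupies its own bit, set, clear, and test are all single bitwise operations and never interfere with the other options.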
    def inspect
      "<Mongo::Cursor:0x#{object_id.to_s} namespace='#{@db.name}.#{@collection.name}' " +
        "@selector=#{@selector.inspect} @cursor_id=#{@cursor_id}>"
    end

    private

    # Convert the +:fields+ parameter from a single field name or an array
    # of field names to a hash, with the field names for keys and '1' for each
    # value.
    def convert_fields_for_query(fields)
      case fields
        when String, Symbol
          {fields => 1}
        when Array
          return nil if fields.length.zero?
          fields.inject({}) do |hash, field|
            field.is_a?(Hash) ? hash.merge!(field) : hash[field] = 1
            hash
          end
        when Hash
          return fields
      end
    end

    # Return the number of documents remaining for this cursor.
    def num_remaining
      if @cache.length == 0
        if @query_run && exhaust?
          close
          return 0
        else
          refresh
        end
      end

      @cache.length
    end

    # Refresh the documents in @cache. This means either
    # sending the initial query or sending a GET_MORE operation.
    def refresh
      if !@query_run
        send_initial_query
      elsif !@cursor_id.zero?
        send_get_more
      end
    end

    # Sends initial query -- which is always a read unless it is a command
    #
    # Upon ConnectionFailure, tries query 3 times if socket was not provided
    # and the query is either not a command or is a secondary_ok command.
    #
    # Pins pools upon successful read and unpins pool upon ConnectionFailure
    #
    def send_initial_query
      tries = 0
      instrument(:find, instrument_payload) do
        begin
          message = construct_query_message
          socket = @socket || checkout_socket_from_connection
          results, @n_received, @cursor_id = @connection.receive_message(
            Mongo::Constants::OP_QUERY, message, nil, socket, @command,
            nil, exhaust?, compile_regex?)
        rescue ConnectionFailure => ex
          socket.close if socket
          @pool = nil
          @connection.unpin_pool
          @connection.refresh
          if tries < 3 && !@socket && (!@command || Mongo::ReadPreference::secondary_ok?(@selector))
            tries += 1
            retry
          else
            raise ex
          end
        rescue OperationFailure, OperationTimeout => ex
          raise ex
        ensure
          socket.checkin unless @socket || socket.nil?
        end
        if !@socket && !@command
          @connection.pin_pool(socket.pool, read_preference)
        end
        @returned += @n_received
        @cache += results
        @query_run = true
        close_cursor_if_query_complete
      end
    end

    def send_get_more
      message = BSON::ByteBuffer.new([0, 0, 0, 0])

      # DB name.
      BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@collection.name}")

      # Number of results to return.
      if @limit > 0
        limit = @limit - @returned
        if @batch_size > 0
          limit = limit < @batch_size ? limit : @batch_size
        end
        message.put_int(limit)
      else
        message.put_int(@batch_size)
      end

      # Cursor id.
      message.put_long(@cursor_id)
      log(:debug, "cursor.refresh() for cursor #{@cursor_id}") if @logger

      socket = @pool.checkout
      begin
        results, @n_received, @cursor_id = @connection.receive_message(
          Mongo::Constants::OP_GET_MORE, message, nil, socket, @command,
          nil, exhaust?, compile_regex?)
      ensure
        socket.checkin
      end

      @returned += @n_received
      @cache += results
      close_cursor_if_query_complete
    end

    def checkout_socket_from_connection
      begin
        if @pool
          socket = @pool.checkout
        elsif @command && !Mongo::ReadPreference::secondary_ok?(@selector)
          socket = @connection.checkout_reader({:mode => :primary})
        else
          socket = @connection.checkout_reader(read_preference)
        end
      rescue SystemStackError, NoMemoryError, SystemCallError => ex
        @connection.close
        raise ex
      end

      @pool = socket.pool
      socket
    end

    def checkin_socket(sock)
      @connection.checkin(sock)
    end

    def construct_query_message
      message = BSON::ByteBuffer.new("", @connection.max_bson_size + MongoClient::COMMAND_HEADROOM)
      message.put_int(@options)
      BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@collection.name}")
      message.put_int(@skip)
      @batch_size > 1 ? message.put_int(@batch_size) : message.put_int(@limit)
      if query_contains_special_fields? && @bson # costs two serialize calls
        query_message = BSON::BSON_CODER.serialize(@selector, false, false,
          @connection.max_bson_size + MongoClient::APPEND_HEADROOM)
        query_message.grow(@bson)
        query_spec = construct_query_spec
        query_spec.delete('$query')
        query_message.grow(BSON::BSON_CODER.serialize(query_spec, false, false,
          @connection.max_bson_size))
      else # costs only one serialize call
        spec = query_contains_special_fields? ? construct_query_spec : @selector
        spec.merge!(@opts)
        query_message = BSON::BSON_CODER.serialize(spec, false, false,
          @connection.max_bson_size + MongoClient::APPEND_HEADROOM)
        query_message.grow(@bson) if @bson
      end
      message.put_binary(query_message.to_s)
      message.put_binary(BSON::BSON_CODER.serialize(@fields, false, false,
        @connection.max_bson_size).to_s) if @fields
      message
    end

    def instrument_payload
      log = { :database => @db.name, :collection => @collection.name, :selector => selector }
      log[:fields] = @fields if @fields
      log[:skip]   = @skip   if @skip && (@skip != 0)
      log[:limit]  = @limit  if @limit && (@limit != 0)
      log[:order]  = @order  if @order
      log
    end

    def construct_query_spec
      return @selector if @selector.has_key?('$query')
      spec = BSON::OrderedHash.new
      spec['$query']    = @selector
      spec['$orderby']  = Mongo::Support.format_order_clause(@order) if @order
      spec['$hint']     = @hint if @hint && @hint.length > 0
      spec['$explain']  = true if @explain
      spec['$snapshot'] = true if @snapshot
      spec['$maxScan']  = @max_scan if @max_scan
      spec['$returnKey']   = true if @return_key
      spec['$showDiskLoc'] = true if @show_disk_loc
      spec['$comment']  = @comment if @comment
      spec['$maxTimeMS'] = @max_time_ms if @max_time_ms
      if needs_read_pref?
        read_pref = Mongo::ReadPreference::mongos(@read, @tag_sets)
        spec['$readPreference'] = read_pref if read_pref
      end
      spec
    end

    def needs_read_pref?
      @connection.mongos? && @read != :primary
    end

    def query_contains_special_fields?
      @order || @explain || @hint || @snapshot || @show_disk_loc ||
        @max_scan || @return_key || @comment || @max_time_ms || needs_read_pref?
    end

    def close_cursor_if_query_complete
      if @limit > 0 && @returned >= @limit
        close
      end
    end

    # Check whether the exhaust option is set
    #
    # @return [true, false] The state of the exhaust flag.
    def exhaust?(opts = options)
      !(opts & OP_QUERY_EXHAUST).zero?
    end

    def check_modifiable
      if @query_run || @closed
        raise InvalidOperation, "Cannot modify the query once it has been run or closed."
      end
    end

    def check_command_cursor
      if @command_cursor
        raise InvalidOperation, "Cannot call #{caller.first} on command cursors"
      end
    end

    def compile_regex?
      @compile_regex
    end
  end
end

# --- ruby-mongo-1.10.0/lib/mongo/db.rb ---

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo

  # A MongoDB database.
  class DB
    include Mongo::WriteConcern

    SYSTEM_NAMESPACE_COLLECTION = 'system.namespaces'
    SYSTEM_INDEX_COLLECTION     = 'system.indexes'
    SYSTEM_PROFILE_COLLECTION   = 'system.profile'
    SYSTEM_USER_COLLECTION      = 'system.users'
    SYSTEM_JS_COLLECTION        = 'system.js'
    SYSTEM_COMMAND_COLLECTION   = '$cmd'
    MAX_TIME_MS_CODE            = 50

    PROFILE_LEVEL = {
      :off       => 0,
      :slow_only => 1,
      :all       => 2
    }

    # Counter for generating unique request ids.
    @@current_request_id = 0

    # Strict mode enforces collection existence checks. When +true+,
    # asking for a collection that does not exist, or trying to create a
    # collection that already exists, raises an error.
    #
    # Strict mode is disabled by default, but can be enabled (+true+) at any time.
    #
    # @deprecated Support for strict will be removed in version 2.0 of the driver.
    def strict=(value)
      unless ENV['TEST_MODE']
        warn "Support for strict mode has been deprecated and will be " +
             "removed in version 2.0 of the driver."
      end
      @strict = value
    end

    # Returns the value of the +strict+ flag.
    #
    # @deprecated Support for strict will be removed in version 2.0 of the driver.
    def strict?
      @strict
    end

    # The name of the database and the local write concern options.
    attr_reader :name, :write_concern

    # The Mongo::MongoClient instance connecting to the MongoDB server.
    attr_reader :client

    # for backward compatibility
    alias_method :connection, :client

    # The length of time that Collection.ensure_index should cache index calls
    attr_accessor :cache_time

    # Read Preference
    attr_accessor :read, :tag_sets, :acceptable_latency

    # Instances of DB are normally obtained by calling Mongo#db.
    #
    # @param [String] name the database name.
    # @param [Mongo::MongoClient] client a connection object pointing to MongoDB. Note
    #   that databases are usually instantiated via the MongoClient class. See the examples below.
    #
    # @option opts [Boolean] :strict (False) [DEPRECATED] If true, collection existence checks are
    #   performed during a number of relevant operations. See DB#collection, DB#create_collection and
    #   DB#drop_collection.
    #
    # @option opts [Object, #create_pk(doc)] :pk (BSON::ObjectId) A primary key factory object,
    #   which should take a hash and return a hash which merges the original hash with any primary key
    #   fields the factory wishes to inject. (NOTE: if the object already has a primary key,
    #   the factory should not inject a new key).
    #
    # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
    #   should be acknowledged.
    # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
    # @option opts [Boolean] :j (false) If true, block until write operations have been committed
    #   to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
    #   ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
    #   fail with an exception if this option is used when the server is running without journaling.
    # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
    #   the server has synced all data files to disk. If the server is running with journaling, this acts the same as
    #   the 'j' option, blocking until write operations have been committed to the journal.
    #   Cannot be used in combination with 'j'.
    #
    # Notes on write concern:
    #   These write concern options are propagated to Collection objects instantiated off of this DB. If no
    #   options are provided, the default write concern set on this instance's MongoClient object will be used. This
    #   default can be overridden upon instantiation of any collection by explicitly setting write concern options
    #   on initialization or at the time of an operation.
    #
    # @option opts [Integer] :cache_time (300) Set the time that all ensure_index calls should cache the command.
    def initialize(name, client, opts={})
      # A database name of '$external' is permitted for some auth types
      Support.validate_db_name(name) unless name == '$external'

      @name       = name
      @client     = client
      @strict     = opts[:strict]
      @pk_factory = opts[:pk]

      @write_concern = get_write_concern(opts, client)

      @read = opts[:read] || @client.read
      ReadPreference::validate(@read)

      @tag_sets           = opts.fetch(:tag_sets, @client.tag_sets)
      @acceptable_latency = opts.fetch(:acceptable_latency, @client.acceptable_latency)
      @cache_time         = opts[:cache_time] || 300 # 5 minutes.
    end

    # Authenticate with the given username and password.
    #
    # @param username [String] The username.
    # @param password [String] The user's password. This is not required for
    #   some authentication mechanisms.
    # @param save_auth [Boolean]
    #   Save this authentication to the client object using
    #   MongoClient#add_auth. This will ensure that the authentication will
    #   be applied to all sockets and upon database reconnect.
    # @param source [String] Database with user credentials. This should be
    #   used to authenticate against a database when the credentials exist
    #   elsewhere.
    # @param mechanism [String] The authentication mechanism to be used.
    # @param extra [Hash] An optional hash of extra options to be stored with
    #   the credential set.
    #
    # @note The ability to disable the save_auth option has been deprecated.
    #   With save_auth=false specified, driver authentication behavior during
    #   failovers and reconnections becomes unreliable. This option still
    #   exists for API compatibility, but it no longer has any effect if
    #   disabled and now always uses the default behavior (save_auth=true).
    #
    # @raise [AuthenticationError] Raised if authentication fails.
    # @return [Boolean] The result of the authentication operation.
    def authenticate(username, password=nil, save_auth=nil, source=nil, mechanism=nil, extra=nil)
      warn "[DEPRECATED] Disabling the 'save_auth' option no longer has " +
           "any effect. Please see the API documentation for more details " +
           "on this change." unless save_auth.nil?
      @client.add_auth(self.name, username, password, source, mechanism, extra)
      true
    end

    # Deauthorizes use for this database for this client connection. Also removes
    # the saved authentication in the MongoClient class associated with this
    # database.
    #
    # @return [Boolean]
    def logout(opts={})
      @client.remove_auth(self.name)
      true
    end

    # Adds a stored Javascript function to the database which can be executed
    # server-side in map_reduce, db.eval and $where clauses.
    #
    # @param [String] function_name
    # @param [String] code
    #
    # @return [String] the function name saved to the database
    def add_stored_function(function_name, code)
      self[SYSTEM_JS_COLLECTION].save(
        {
          "_id"  => function_name,
          :value => BSON::Code.new(code)
        }
      )
    end

    # Removes a stored Javascript function from the database. Returns
    # false if the function does not exist.
    #
    # @param [String] function_name
    #
    # @return [Boolean]
    def remove_stored_function(function_name)
      return false unless self[SYSTEM_JS_COLLECTION].find_one({"_id" => function_name})
      self[SYSTEM_JS_COLLECTION].remove({"_id" => function_name}, :w => 1)
    end

    # Adds a user to this database for use with authentication. If the user already
    # exists in the system, the password and any additional fields provided in opts
    # will be updated.
    #
    # @param [String] username
    # @param [String] password
    # @param [Boolean] read_only
    #   Create a read-only user.
    #
    # @param [Hash] opts
    #   Optional fields for the user document (e.g. +userSource+, or +roles+)
    #
    #   See {http://docs.mongodb.org/manual/reference/privilege-documents}
    #   for more information.
    #
    # @note The use of the opts argument to provide or update additional fields
    #   on the user document requires MongoDB >= 2.4.0
    #
    # @return [Hash] an object representing the user.
    def add_user(username, password=nil, read_only=false, opts={})
      begin
        user_info = command(:usersInfo => username)
      # MongoDB >= 2.5.3 requires the use of commands to manage users.
      # "Command not found" error didn't return an error code (59) before
      # MongoDB 2.4.7 so we assume that a nil error code means the usersInfo
      # command doesn't exist and we should fall back to the legacy add user code.
      rescue OperationFailure => ex
        raise ex unless ex.error_code == Mongo::ErrorCode::COMMAND_NOT_FOUND || ex.error_code.nil?
        return legacy_add_user(username, password, read_only, opts)
      end

      if user_info.key?('users') && !user_info['users'].empty?
        create_or_update_user(:updateUser, username, password, read_only, opts)
      else
        create_or_update_user(:createUser, username, password, read_only, opts)
      end
    end

    # Remove the given user from this database. Returns false if the user
    # doesn't exist in the system.
    #
    # @param [String] username
    #
    # @return [Boolean]
    def remove_user(username)
      begin
        command(:dropUser => username)
      rescue OperationFailure => ex
        raise ex unless ex.error_code == Mongo::ErrorCode::COMMAND_NOT_FOUND || ex.error_code.nil?
        response = self[SYSTEM_USER_COLLECTION].remove({:user => username}, :w => 1)
        response.key?('n') && response['n'] > 0 ? response : false
      end
    end

    # Get an array of collection names in this database.
    #
    # @return [Array]
    def collection_names
      names = collections_info.collect { |doc| doc['name'] || '' }
      names = names.delete_if {|name| name.index(@name).nil? || name.index('$')}
      names.map {|name| name.sub(@name + '.', '')}
    end

    # Get an array of Collection instances, one for each collection in this database.
    #
    # @return [Array]
    def collections
      collection_names.map do |name|
        Collection.new(name, self)
      end
    end

    # Get info on system namespaces (collections). This method returns
    # a cursor which can be iterated over. For each collection, a hash
    # will be yielded containing a 'name' string and, optionally, an 'options' hash.
    #
    # @param [String] coll_name return info for the specified collection only.
    #
    # @return [Mongo::Cursor]
    def collections_info(coll_name=nil)
      selector = {}
      selector[:name] = full_collection_name(coll_name) if coll_name
      Cursor.new(Collection.new(SYSTEM_NAMESPACE_COLLECTION, self), :selector => selector)
    end

    # Create a collection.
    #
    # If +strict+ is true, raises an error if a
    # collection named +name+ already exists.
    #
    # @param [String, Symbol] name the name of the new collection.
    #
    # @option opts [Boolean] :capped (False) create a capped collection.
    #
    # @option opts [Integer] :size (Nil) If +capped+ is +true+,
    #   specifies the maximum number of bytes for the capped collection.
    #   If +false+, specifies the number of bytes allocated
    #   for the initial extent of the collection.
    #
    # @option opts [Integer] :max (Nil) If +capped+ is +true+, indicates
    #   the maximum number of records in a capped collection.
    #
    # @raise [MongoDBError] raised under two conditions:
    #   either we're in +strict+ mode and the collection
    #   already exists or collection creation fails on the server.
    #
    # @return [Mongo::Collection]
    def create_collection(name, opts={})
      name = name.to_s
      if strict? && collection_names.include?(name)
        raise MongoDBError, "Collection '#{name}' already exists. (strict=true)"
      end

      begin
        cmd = BSON::OrderedHash.new
        cmd[:create] = name
        doc = command(cmd.merge(opts || {}))
        return Collection.new(name, self, :pk => @pk_factory) if ok?(doc)
      rescue OperationFailure => e
        return Collection.new(name, self, :pk => @pk_factory) if e.message =~ /exists/
        raise e
      end
      raise MongoDBError, "Error creating collection: #{doc.inspect}"
    end

    # Get a collection by name.
    #
    # @param [String, Symbol] name the collection name.
    # @param [Hash] opts any valid options that can be passed to Collection#new.
    #
    # @raise [MongoDBError] if collection does not already exist and we're in
    #   +strict+ mode.
    #
    # @return [Mongo::Collection]
    def collection(name, opts={})
      if strict? && !collection_names.include?(name.to_s)
        raise MongoDBError, "Collection '#{name}' doesn't exist. (strict=true)"
      else
        opts = opts.dup
        opts.merge!(:pk => @pk_factory) unless opts[:pk]
        Collection.new(name, self, opts)
      end
    end
    alias_method :[], :collection

    # Drop a collection by +name+.
    #
    # @param [String, Symbol] name
    #
    # @return [Boolean] +true+ on success or +false+ if the collection name doesn't exist.
    def drop_collection(name)
      return false if strict? && !collection_names.include?(name.to_s)
      begin
        ok?(command(:drop => name))
      rescue OperationFailure
        false
      end
    end

    # Run the getlasterror command with the specified replication options.
    #
    # @option opts [Boolean] :fsync (false)
    # @option opts [Integer] :w (nil)
    # @option opts [Integer] :wtimeout (nil)
    # @option opts [Boolean] :j (false)
    #
    # @return [Hash] the entire response to getlasterror.
    #
    # @raise [MongoDBError] if the operation fails.
    def get_last_error(opts={})
      cmd = BSON::OrderedHash.new
      cmd[:getlasterror] = 1
      cmd.merge!(opts)
      doc = command(cmd, :check_response => false)
      raise MongoDBError, "Error retrieving last error: #{doc.inspect}" unless ok?(doc)
      doc
    end

    # Return +true+ if an error was caused by the most recently executed
    # database operation.
    #
    # @return [Boolean]
    def error?
      get_last_error['err'] != nil
    end

    # Get the most recent error to have occurred on this database.
    #
    # This command only returns errors that have occurred since the last call to
    # DB#reset_error_history - returns +nil+ if there is no such error.
    #
    # @return [String, Nil] the text of the error or +nil+ if no error has occurred.
    def previous_error
      error = command(:getpreverror => 1)
      error["err"] ? error : nil
    end

    # Reset the error history of this database.
    #
    # Calls to DB#previous_error will only return errors that have occurred
    # since the most recent call to this method.
    #
    # @return [Hash]
    def reset_error_history
      command(:reseterror => 1)
    end

    # Dereference a DBRef, returning the document it points to.
    #
    # @param [Mongo::DBRef] dbref
    #
    # @return [Hash] the document indicated by the db reference.
    #
    # @see http://www.mongodb.org/display/DOCS/DB+Ref MongoDB DBRef spec.
    def dereference(dbref)
      collection(dbref.namespace).find_one("_id" => dbref.object_id)
    end

    # Evaluate a JavaScript expression in MongoDB.
    #
    # @param [String, Code] code a JavaScript expression to evaluate server-side.
# @param [Integer, Hash] args any additional argument to be passed to the +code+ expression when # it's run on the server. # # @return [String] the return value of the function. def eval(code, *args) unless code.is_a?(BSON::Code) code = BSON::Code.new(code) end cmd = BSON::OrderedHash.new cmd[:$eval] = code cmd.merge!(args.pop) if args.last.respond_to?(:keys) && args.last.key?(:nolock) cmd[:args] = args doc = command(cmd) doc['retval'] end # Rename a collection. # # @param [String] from original collection name. # @param [String] to new collection name. # # @return [True] returns +true+ on success. # # @raise MongoDBError if there's an error renaming the collection. def rename_collection(from, to) cmd = BSON::OrderedHash.new cmd[:renameCollection] = "#{@name}.#{from}" cmd[:to] = "#{@name}.#{to}" doc = DB.new('admin', @client).command(cmd, :check_response => false) ok?(doc) || raise(MongoDBError, "Error renaming collection: #{doc.inspect}") end # Drop an index from a given collection. Normally called from # Collection#drop_index or Collection#drop_indexes. # # @param [String] collection_name # @param [String] index_name # # @return [True] returns +true+ on success. # # @raise MongoDBError if there's an error dropping the index. def drop_index(collection_name, index_name) cmd = BSON::OrderedHash.new cmd[:deleteIndexes] = collection_name cmd[:index] = index_name.to_s doc = command(cmd, :check_response => false) ok?(doc) || raise(MongoDBError, "Error with drop_index command: #{doc.inspect}") end # Get information on the indexes for the given collection. # Normally called by Collection#index_information. # # @param [String] collection_name # # @return [Hash] keys are index names and the values are lists of [key, type] pairs # defining the index. 
def index_information(collection_name) sel = {:ns => full_collection_name(collection_name)} info = {} Cursor.new(Collection.new(SYSTEM_INDEX_COLLECTION, self), :selector => sel).each do |index| info[index['name']] = index end info end # Return stats on this database. Uses MongoDB's dbstats command. # # @return [Hash] def stats self.command(:dbstats => 1) end # Return +true+ if the supplied +doc+ contains an 'ok' field with the value 1. # # @param [Hash] doc # # @return [Boolean] def ok?(doc) Mongo::Support.ok?(doc) end # Send a command to the database. # # Note: DB commands must start with the "command" key. For this reason, # any selector containing more than one key must be an OrderedHash. # # Note also that a command in MongoDB is just a kind of query # that occurs on the system command collection ($cmd). Examine this method's implementation # to see how it works. # # @param [OrderedHash, Hash] selector an OrderedHash, or a standard Hash with just one # key, specifying the command to be performed. In Ruby 1.9 and above, OrderedHash isn't necessary # because hashes are ordered by default. # # @option opts [Boolean] :check_response (true) If +true+, raises an exception if the # command fails. # @option opts [Socket] :socket a socket to use for sending the command. This is mainly for internal use. # @option opts [:primary, :secondary] :read Read preference for this command. See Collection#find for # more details. # @option opts [String] :comment (nil) a comment to include in profiling logs # @option opts [Boolean] :compile_regex (true) whether BSON regex objects should be compiled into Ruby regexes. # If false, a BSON::Regex object will be returned instead. # # @return [Hash] def command(selector, opts={}) raise MongoArgumentError, "Command must be given a selector" unless selector.respond_to?(:keys) && !selector.empty? 
opts = opts.dup # deletes :check_response and returns the value, if nil defaults to the block result check_response = opts.delete(:check_response) { true } # build up the command hash command = opts.key?(:socket) ? { :socket => opts.delete(:socket) } : {} command.merge!(:comment => opts.delete(:comment)) if opts.key?(:comment) command.merge!(:compile_regex => opts.delete(:compile_regex)) if opts.key?(:compile_regex) command[:limit] = -1 command[:read] = Mongo::ReadPreference::cmd_read_pref(opts.delete(:read), selector) if opts.key?(:read) if RUBY_VERSION < '1.9' && selector.class != BSON::OrderedHash if selector.keys.length > 1 raise MongoArgumentError, "DB#command requires an OrderedHash when hash contains multiple keys" end if opts.keys.size > 0 # extra opts will be merged into the selector, so make sure it's an OH in versions < 1.9 selector = selector.dup selector = BSON::OrderedHash.new.merge!(selector) end end # arbitrary opts are merged into the selector command[:selector] = selector.merge!(opts) begin result = Cursor.new(system_command_collection, command).next_document rescue OperationFailure => ex if check_response raise ex.class.new("Database command '#{selector.keys.first}' failed: #{ex.message}", ex.error_code, ex.result) else result = ex.result end end raise OperationFailure, "Database command '#{selector.keys.first}' failed: returned null." unless result if check_response && (!ok?(result) || result['writeErrors'] || result['writeConcernError']) message = "Database command '#{selector.keys.first}' failed: (" message << result.map do |key, value| "#{key}: '#{value}'" end.join('; ') message << ').' code = result['code'] || result['assertionCode'] raise ExecutionTimeout.new(message, code, result) if code == MAX_TIME_MS_CODE raise OperationFailure.new(message, code, result) end result end # A shortcut returning db plus dot plus collection name. 
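As the documentation for #command notes, the command key must be the first key in the selector, and #command raises unless the selector is a non-empty hash-like object. A small standalone sketch of both points — the guard mirrors the one in #command, but the helper name is illustrative, and a literal Hash is enough on Ruby 1.9+ because insertion order is preserved:

```ruby
# Mirror of the selector guard applied at the top of DB#command.
def check_command_selector(selector)
  raise ArgumentError, 'Command must be given a selector' unless
    selector.respond_to?(:keys) && !selector.empty?
  selector
end

# Literal hashes preserve insertion order on 1.9+, so the command name
# written first stays first on the wire:
selector = check_command_selector(:mapreduce => 'events', :out => { :inline => 1 })
puts selector.keys.first
```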
# # @param [String] collection_name # # @return [String] def full_collection_name(collection_name) "#{@name}.#{collection_name}" end # The primary key factory object (or +nil+). # # @return [Object, Nil] def pk_factory @pk_factory end # Specify a primary key factory if not already set. # # @raise [MongoArgumentError] if the primary key factory has already been set. def pk_factory=(pk_factory) raise MongoArgumentError, "Cannot change primary key factory once it's been set" if @pk_factory @pk_factory = pk_factory end # Return the current database profiling level. If profiling is enabled, you can # get the results using DB#profiling_info. # # @return [Symbol] :off, :slow_only, or :all def profiling_level cmd = BSON::OrderedHash.new cmd[:profile] = -1 doc = command(cmd, :check_response => false) raise "Error with profile command: #{doc.inspect}" unless ok?(doc) level_sym = PROFILE_LEVEL.invert[doc['was'].to_i] raise "Error: illegal profiling level value #{doc['was']}" unless level_sym level_sym end # Set this database's profiling level. If profiling is enabled, you can # get the results using DB#profiling_info. # # @param [Symbol] level acceptable options are +:off+, +:slow_only+, or +:all+. def profiling_level=(level) cmd = BSON::OrderedHash.new cmd[:profile] = PROFILE_LEVEL[level] doc = command(cmd, :check_response => false) ok?(doc) || raise(MongoDBError, "Error with profile command: #{doc.inspect}") end # Get the current profiling information. # # @return [Array] a list of documents containing profiling information. def profiling_info Cursor.new(Collection.new(SYSTEM_PROFILE_COLLECTION, self), :selector => {}).to_a end # Validate a named collection. # # @param [String] name the collection name. # # @return [Hash] validation information. # # @raise [MongoDBError] if the command fails or there's a problem with the validation # data, or if the collection is invalid. 
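The profiling_level method above maps the server's numeric 'was' field back to a symbol by inverting PROFILE_LEVEL. A standalone sketch of that decoding — the 0/1/2 values match MongoDB's documented profiler levels, and the constant is assumed to mirror the driver's PROFILE_LEVEL:

```ruby
# Assumed to mirror the driver's PROFILE_LEVEL constant (MongoDB's profiler levels).
PROFILE_LEVELS = { :off => 0, :slow_only => 1, :all => 2 }

# Decode the 'was' field of a profile command response, as profiling_level does.
def profiling_level_from(doc)
  level_sym = PROFILE_LEVELS.invert[doc['was'].to_i]
  raise "Error: illegal profiling level value #{doc['was']}" unless level_sym
  level_sym
end

puts profiling_level_from('was' => 1)
```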
def validate_collection(name) cmd = BSON::OrderedHash.new cmd[:validate] = name cmd[:full] = true doc = command(cmd, :check_response => false) raise MongoDBError, "Error with validate command: #{doc.inspect}" unless ok?(doc) if (doc.has_key?('valid') && !doc['valid']) || (doc['result'] =~ /\b(exception|corrupt)\b/i) raise MongoDBError, "Error: invalid collection #{name}: #{doc.inspect}" end doc end private def system_command_collection Collection.new(SYSTEM_COMMAND_COLLECTION, self) end # Create a new user. # # @param username [String] The username. # @param password [String] The user's password. # @param read_only [Boolean] Create a read-only user (deprecated in MongoDB >= 2.6) # @param opts [Hash] # # @private def create_or_update_user(command, username, password, read_only, opts) if read_only || !opts.key?(:roles) warn "Creating a user with the read_only option or without roles is " + "deprecated in MongoDB >= 2.6" end # The password is always salted and hashed by the driver. if opts.key?(:digestPassword) raise MongoArgumentError, "The digestPassword option is not available via DB#add_user. " + "Use DB#command(:createUser => ...) instead for this option." end opts = opts.dup pwd = Mongo::Authentication.hash_password(username, password) if password cmd_opts = pwd ? { :pwd => pwd } : {} # specify that the server shouldn't digest the password because the driver does cmd_opts[:digestPassword] = false unless opts.key?(:roles) if name == 'admin' roles = read_only ? ['readAnyDatabase'] : ['root'] else roles = read_only ? ['read'] : ["dbOwner"] end cmd_opts[:roles] = roles end cmd_opts[:writeConcern] = opts.key?(:writeConcern) ? opts.delete(:writeConcern) : { :w => 1 } cmd_opts.merge!(opts) command({ command => username }, cmd_opts) end # Create a user in MongoDB versions < 2.5.3. # Called by #add_user if the 'usersInfo' command fails. # # @param username [String] The username. # @param password [String] (nil) The user's password. 
# @param read_only [Boolean] (false) Create a read-only user. # @param opts [Hash] # # @private def legacy_add_user(username, password=nil, read_only=false, opts={}) users = self[SYSTEM_USER_COLLECTION] user = users.find_one(:user => username) || {:user => username} user['pwd'] = Mongo::Authentication.hash_password(username, password) if password user['readOnly'] = true if read_only user.merge!(opts) begin users.save(user) rescue OperationFailure => ex # adding first admin user fails GLE in MongoDB 2.2 raise ex unless ex.message =~ /login/ end user end end end ruby-mongo-1.10.0/lib/mongo/exception.rb000066400000000000000000000056651233461006100201060ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Generic Mongo Ruby Driver exception class. class MongoRubyError < StandardError; end # Raised when MongoDB itself has returned an error. class MongoDBError < RuntimeError # @return The entire failed command's response object, if available. attr_reader :result # @return The failed command's error code, if available. attr_reader :error_code def initialize(message=nil, error_code=nil, result=nil) @error_code = error_code @result = result super(message) end end # Raised on fatal errors to GridFS. class GridError < MongoRubyError; end # Raised when a GridFS file cannot be found. class GridFileNotFound < GridError; end # Raised when a GridFS file fails its MD5 integrity check.
class GridMD5Failure < GridError; end # Raised when invalid arguments are sent to Mongo Ruby methods. class MongoArgumentError < MongoRubyError; end # Raised on failures in connection to the database server. class ConnectionError < MongoRubyError; end # Raised on failures in connection to the database server. class ReplicaSetConnectionError < ConnectionError; end # Raised on failures in connection to the database server. class ConnectionTimeoutError < MongoRubyError; end # Raised when no tags in a read preference maps to a given connection. class NodeWithTagsNotFound < MongoRubyError; end # Raised when a connection operation fails. class ConnectionFailure < MongoDBError; end # Raised when authentication fails. class AuthenticationError < MongoDBError; end # Raised when a database operation fails. class OperationFailure < MongoDBError; end # Raised when a database operation exceeds maximum specified time. class ExecutionTimeout < OperationFailure; end # Raised when a database operation has a write concern error. class WriteConcernError < OperationFailure; end # Raised when a socket read operation times out. class OperationTimeout < SocketError; end # Raised when a client attempts to perform an invalid operation. class InvalidOperation < MongoDBError; end # Raised when an invalid collection or database name is used (invalid namespace name). class InvalidNSName < RuntimeError; end # Raised when the client supplies an invalid value to sort by. class InvalidSortValueError < MongoRubyError; end # Raised for bulk write errors. class BulkWriteError < OperationFailure; end end ruby-mongo-1.10.0/lib/mongo/functional.rb000066400000000000000000000015211233461006100202360ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/functional/authentication' require 'mongo/functional/logging' require 'mongo/functional/read_preference' require 'mongo/functional/write_concern' require 'mongo/functional/uri_parser' require 'mongo/functional/sasl_java' if RUBY_PLATFORM =~ /java/ ruby-mongo-1.10.0/lib/mongo/functional/000077500000000000000000000000001233461006100177125ustar00rootroot00000000000000ruby-mongo-1.10.0/lib/mongo/functional/authentication.rb000066400000000000000000000266171233461006100232720ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'digest/md5' module Mongo module Authentication DEFAULT_MECHANISM = 'MONGODB-CR' MECHANISMS = ['GSSAPI', 'MONGODB-CR', 'MONGODB-X509', 'PLAIN'] EXTRA = { 'GSSAPI' => [:gssapi_service_name, :canonicalize_host_name] } # authentication module methods class << self # Helper to validate an authentication mechanism and optionally # raise an error if invalid. 
# # @param mechanism [String] The authentication mechanism to validate. # @param raise_error [Boolean] Whether to raise an error if the mechanism is invalid. # # @raise [ArgumentError] if raise_error and not a valid auth mechanism. # @return [Boolean] returns the validation result. def validate_mechanism(mechanism, raise_error=false) return true if MECHANISMS.include?(mechanism.upcase) if raise_error raise ArgumentError, "Invalid authentication mechanism provided. Must be one of " + "#{Mongo::Authentication::MECHANISMS.join(', ')}." end false end # Helper to validate and normalize credential sets. # # @param auth [Hash] A hash containing the credential set. # # @raise [MongoArgumentError] if the credential set is invalid. # @return [Hash] The validated credential set. def validate_credentials(auth) # set the default auth mechanism if not defined auth[:mechanism] ||= DEFAULT_MECHANISM # set the default auth source if not defined auth[:source] = auth[:source] || auth[:db_name] || 'admin' if (auth[:mechanism] == 'MONGODB-CR' || auth[:mechanism] == 'PLAIN') && !auth[:password] raise MongoArgumentError, "When using the authentication mechanism #{auth[:mechanism]} " + "both username and password are required." end # if extra opts exist, validate them (no extra opts are allowed for mechanisms absent from EXTRA) allowed_keys = EXTRA[auth[:mechanism]] || [] if auth[:extra] && !auth[:extra].empty? invalid_opts = [] auth[:extra].keys.each { |k| invalid_opts << k unless allowed_keys.include?(k) } raise MongoArgumentError, "Invalid extra option(s): #{invalid_opts} found. Please check the extra options" + " passed and try again." unless invalid_opts.empty? end auth end # Generate an MD5 for authentication. # # @param username [String] The username. # @param password [String] The user's password. # @param nonce [String] The nonce value. # # @return [String] MD5 key for db authentication. def auth_key(username, password, nonce) Digest::MD5.hexdigest("#{nonce}#{username}#{hash_password(username, password)}") end # Return a hashed password for auth. # # @param username [String] The username.
# @param password [String] The user's password. # # @return [String] The hashed password value. def hash_password(username, password) Digest::MD5.hexdigest("#{username}:mongo:#{password}") end end # Saves a cache of authentication credentials to the current # client instance. This method is called automatically by DB#authenticate. # # @param db_name [String] The current database name. # @param username [String] The current username. # @param password [String] (nil) The user's password (not required for # all authentication mechanisms). # @param source [String] (nil) The authentication source database # (if different from the current database). # @param mechanism [String] (nil) The authentication mechanism being used # (default: 'MONGODB-CR'). # @param extra [Hash] (nil) An optional hash of extra options to be stored with # the credential set. # # @raise [MongoArgumentError] Raised if the database has already been used # for authentication. A log out is required before additional auths can # be issued against a given database. # @raise [AuthenticationError] Raised if authentication fails. # @return [Hash] a hash representing the authentication just added. def add_auth(db_name, username, password=nil, source=nil, mechanism=nil, extra=nil) auth = Authentication.validate_credentials({ :db_name => db_name, :username => username, :password => password, :source => source, :mechanism => mechanism, :extra => extra }) if @auths.any? {|a| a[:source] == auth[:source]} raise MongoArgumentError, "Another user has already authenticated to the database " + "'#{auth[:source]}' and multiple authentications are not " + "permitted. Please logout first." end begin socket = self.checkout_reader(:mode => :primary_preferred) self.issue_authentication(auth, :socket => socket) ensure socket.checkin if socket end @auths << auth auth end # Remove a saved authentication for this connection. # # @param db_name [String] The database name. # # @return [Boolean] The result of the operation.
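The two digest helpers above are self-contained, so the MONGODB-CR key derivation used later by issue_cr can be shown end-to-end. The formulas below are copied from hash_password and auth_key; the command document mirrors the one issue_cr builds, with a plain Hash standing in for BSON::OrderedHash:

```ruby
require 'digest/md5'

# Same formula as Authentication.hash_password.
def hash_password(username, password)
  Digest::MD5.hexdigest("#{username}:mongo:#{password}")
end

# Same formula as Authentication.auth_key.
def auth_key(username, password, nonce)
  Digest::MD5.hexdigest("#{nonce}#{username}#{hash_password(username, password)}")
end

# The authenticate command issue_cr would send after fetching a nonce:
def cr_authenticate_command(username, password, nonce)
  { 'authenticate' => 1, 'user' => username, 'nonce' => nonce,
    'key' => auth_key(username, password, nonce) }
end
```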
def remove_auth(db_name) return false unless @auths @auths.reject! { |a| a[:source] == db_name } ? true : false end # Remove all authentication information stored in this connection. # # @return [Boolean] result of the operation. def clear_auths @auths = Set.new true end # Method to handle and issue logout commands. # # @note This method should not be called directly. Use DB#logout. # # @param db_name [String] The database name. # @param opts [Hash] Hash of optional settings and configuration values. # # @option opts [Socket] socket (nil) Optional socket instance to use. # # @raise [MongoDBError] Raised if the logout operation fails. # @return [Boolean] The result of the logout operation. def issue_logout(db_name, opts={}) doc = db(db_name).command({:logout => 1}, :socket => opts[:socket]) unless Support.ok?(doc) raise MongoDBError, "Error logging out on DB #{db_name}." end true # somewhat pointless, but here to preserve the existing API end # Method to handle and issue authentication commands. # # @note This method should not be called directly. Use DB#authenticate. # # @param auth [Hash] The authentication credentials to be used. # @param opts [Hash] Hash of optional settings and configuration values. # # @option opts [Socket] socket (nil) Optional socket instance to use. # # @raise [AuthenticationError] Raised if the authentication fails. # @return [Boolean] Result of the authentication operation. def issue_authentication(auth, opts={}) result = case auth[:mechanism] when 'MONGODB-CR' issue_cr(auth, opts) when 'MONGODB-X509' issue_x509(auth, opts) when 'PLAIN' issue_plain(auth, opts) when 'GSSAPI' issue_gssapi(auth, opts) end unless Support.ok?(result) raise AuthenticationError, "Failed to authenticate user '#{auth[:username]}' " + "on db '#{auth[:source]}'." end true end private # Handles issuing authentication commands for the MONGODB-CR auth mechanism. # # @param auth [Hash] The authentication credentials to be used. 
# @param opts [Hash] Hash of optional settings and configuration values. # # @option opts [Socket] socket (nil) Optional socket instance to use. # # @return [Boolean] Result of the authentication operation. # # @private def issue_cr(auth, opts={}) database = db(auth[:source]) nonce = get_nonce(database, opts) # build auth command document cmd = BSON::OrderedHash.new cmd['authenticate'] = 1 cmd['user'] = auth[:username] cmd['nonce'] = nonce cmd['key'] = Authentication.auth_key(auth[:username], auth[:password], nonce) database.command(cmd, :check_response => false, :socket => opts[:socket]) end # Handles issuing authentication commands for the MONGODB-X509 auth mechanism. # # @param auth [Hash] The authentication credentials to be used. # @param opts [Hash] Hash of optional settings and configuration values. # # @private def issue_x509(auth, opts={}) database = db('$external') cmd = BSON::OrderedHash.new cmd[:authenticate] = 1 cmd[:mechanism] = auth[:mechanism] cmd[:user] = auth[:username] database.command(cmd, :check_response => false, :socket => opts[:socket]) end # Handles issuing authentication commands for the PLAIN auth mechanism. # # @param auth [Hash] The authentication credentials to be used. # @param opts [Hash] Hash of optional settings and configuration values. # # @option opts [Socket] socket (nil) Optional socket instance to use. # # @return [Boolean] Result of the authentication operation. # # @private def issue_plain(auth, opts={}) database = db(auth[:source]) payload = "\x00#{auth[:username]}\x00#{auth[:password]}" cmd = BSON::OrderedHash.new cmd[:saslStart] = 1 cmd[:mechanism] = auth[:mechanism] cmd[:payload] = BSON::Binary.new(payload) cmd[:autoAuthorize] = 1 database.command(cmd, :check_response => false, :socket => opts[:socket]) end # Handles issuing authentication commands for the GSSAPI auth mechanism. # # @param auth [Hash] The authentication credentials to be used. # @param opts [Hash] Hash of optional settings and configuration values. 
# # @private def issue_gssapi(auth, opts={}) raise NotImplementedError, "The #{auth[:mechanism]} authentication mechanism is only supported " + "for JRuby." unless RUBY_PLATFORM =~ /java/ Mongo::Sasl::GSSAPI.authenticate(auth[:username], self, opts[:socket], auth[:extra] || {}) end # Helper to fetch a nonce value from a given database instance. # # @param database [Mongo::DB] The DB instance to use for issue the nonce command. # @param opts [Hash] Hash of optional settings and configuration values. # # @option opts [Socket] socket (nil) Optional socket instance to use. # # @raise [MongoDBError] Raised if there is an error executing the command. # @return [String] Returns the nonce value. # # @private def get_nonce(database, opts={}) doc = database.command({:getnonce => 1}, :check_response => false, :socket => opts[:socket]) unless Support.ok?(doc) raise MongoDBError, "Error retrieving nonce: #{doc}" end doc['nonce'] end end end ruby-mongo-1.10.0/lib/mongo/functional/logging.rb000066400000000000000000000051051233461006100216660ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Logging module Instrumenter def self.instrument(name, payload = {}) yield end end @instrumenter = Instrumenter def write_logging_startup_message log(:debug, "Logging level is currently :debug which could negatively impact " + "client-side performance. 
You should set your logging level no lower than " + ":info in production.") end # Log a message with the given level. def log(level, msg) return unless @logger case level when :fatal then @logger.fatal "MONGODB [FATAL] #{msg}" when :error then @logger.error "MONGODB [ERROR] #{msg}" when :warn then @logger.warn "MONGODB [WARNING] #{msg}" when :info then @logger.info "MONGODB [INFO] #{msg}" when :debug then @logger.debug "MONGODB [DEBUG] #{msg}" else @logger.debug "MONGODB [DEBUG] #{msg}" end end # Execute the block and log the operation described by name and payload. def instrument(name, payload = {}) start_time = Time.now res = Logging.instrumenter.instrument(name, payload) do yield end duration = Time.now - start_time log_operation(name, payload, duration) res end def self.instrumenter @instrumenter end def self.instrumenter=(instrumenter) @instrumenter = instrumenter end protected def log_operation(name, payload, duration) @logger && @logger.debug do msg = "MONGODB " msg << "(%.1fms) " % (duration * 1000) msg << "#{payload[:database]}['#{payload[:collection]}'].#{name}(" msg << payload.values_at(:selector, :document, :documents, :fields ).compact.map(&:inspect).join(', ') + ")" msg << ".skip(#{payload[:skip]})" if payload[:skip] msg << ".limit(#{payload[:limit]})" if payload[:limit] msg << ".sort(#{payload[:order]})" if payload[:order] msg end end end end ruby-mongo-1.10.0/lib/mongo/functional/read_preference.rb000066400000000000000000000125021233461006100233500ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. module Mongo module ReadPreference READ_PREFERENCES = [ :primary, :primary_preferred, :secondary, :secondary_preferred, :nearest ] MONGOS_MODES = { :primary => 'primary', :primary_preferred => 'primaryPreferred', :secondary => 'secondary', :secondary_preferred => 'secondaryPreferred', :nearest => 'nearest' } # Commands that may be sent to replica-set secondaries, depending on # read preference and tags. All other commands are always run on the primary. SECONDARY_OK_COMMANDS = [ 'group', 'aggregate', 'collstats', 'dbstats', 'count', 'distinct', 'geonear', 'geosearch', 'geowalk', 'mapreduce', 'replsetgetstatus', 'ismaster', 'parallelcollectionscan' ] def self.mongos(mode, tag_sets) if mode != :secondary_preferred || !tag_sets.empty? mongos_read_preference = BSON::OrderedHash[:mode => MONGOS_MODES[mode]] mongos_read_preference[:tags] = tag_sets if !tag_sets.empty? end mongos_read_preference end def self.validate(value) if READ_PREFERENCES.include?(value) return true else raise MongoArgumentError, "#{value} is not a valid read preference. " + "Please specify one of the following read preferences as a symbol: #{READ_PREFERENCES}" end end # Returns true if it's ok to run the command on a secondary def self.secondary_ok?(selector) command = selector.keys.first.to_s.downcase if command == 'mapreduce' out = selector.select { |k, v| k.to_s.downcase == 'out' }.first.last # the server only looks at the first key in the out object return out.respond_to?(:keys) && out.keys.first.to_s.downcase == 'inline' elsif command == 'aggregate' return selector['pipeline'].none? { |op| op.key?('$out') || op.key?(:$out) } end SECONDARY_OK_COMMANDS.member?(command) end # Returns true if the command should be rerouted to the primary. 
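The secondary_ok? predicate below has two special cases worth seeing concretely: mapReduce is secondary-safe only with inline output, and aggregate only when the pipeline has no $out stage. A standalone re-creation of that logic, with the command list trimmed to a few entries for brevity:

```ruby
# Trimmed stand-in for SECONDARY_OK_COMMANDS.
SECONDARY_OK = %w[count distinct group mapreduce aggregate dbstats collstats]

def secondary_ok?(selector)
  command = selector.keys.first.to_s.downcase
  if command == 'mapreduce'
    # only inline map/reduce output may run on a secondary
    out = selector.find { |k, _| k.to_s.downcase == 'out' }.last
    return out.respond_to?(:keys) && out.keys.first.to_s.downcase == 'inline'
  elsif command == 'aggregate'
    # a $out stage writes to a collection, so it must run on the primary
    return selector['pipeline'].none? { |op| op.key?('$out') || op.key?(:$out) }
  end
  SECONDARY_OK.include?(command)
end
```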
def self.reroute_cmd_primary?(read_pref, selector) return false if read_pref == :primary !secondary_ok?(selector) end # Given a command and read preference, possibly reroute to primary. def self.cmd_read_pref(read_pref, selector) ReadPreference::validate(read_pref) if reroute_cmd_primary?(read_pref, selector) warn "Database command '#{selector.keys.first}' rerouted to primary node" read_pref = :primary end read_pref end def read_preference { :mode => @read, :tags => @tag_sets, :latency => @acceptable_latency } end def read_pool(read_preference_override={}) return primary_pool if mongos? read_pref = read_preference.merge(read_preference_override) if pinned_pool && pinned_pool[:read_preference] == read_pref pool = pinned_pool[:pool] else unpin_pool pool = select_pool(read_pref) end unless pool raise ConnectionFailure, "No replica set member available for query " + "with read preference matching mode #{read_pref[:mode]} and tags " + "matching #{read_pref[:tags]}." end pool end def select_pool(read_pref) if read_pref[:mode] == :primary && !read_pref[:tags].empty? raise MongoArgumentError, "Read preference :primary cannot be combined with tags" end case read_pref[:mode] when :primary primary_pool when :primary_preferred primary_pool || select_secondary_pool(secondary_pools, read_pref) when :secondary select_secondary_pool(secondary_pools, read_pref) when :secondary_preferred select_secondary_pool(secondary_pools, read_pref) || primary_pool when :nearest select_near_pool(pools, read_pref) end end def select_secondary_pool(candidates, read_pref) tag_sets = read_pref[:tags] if !tag_sets.empty? matches = [] tag_sets.detect do |tag_set| matches = candidates.select do |candidate| tag_set.none? { |k,v| candidate.tags[k.to_s] != v } && candidate.ping_time end !matches.empty? end else matches = candidates end matches.empty? ? 
nil : select_near_pool(matches, read_pref) end def select_near_pool(candidates, read_pref) latency = read_pref[:latency] nearest_pool = candidates.min_by { |candidate| candidate.ping_time } near_pools = candidates.select do |candidate| (candidate.ping_time - nearest_pool.ping_time) <= latency end near_pools[ rand(near_pools.length) ] end end end ruby-mongo-1.10.0/lib/mongo/functional/sasl_java.rb000066400000000000000000000035641233461006100222120ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'jruby' include Java jar_dir = File.expand_path(File.join(File.dirname(__FILE__), '../../../ext/jsasl')) require File.join(jar_dir, 'target/jsasl.jar') module Mongo module Sasl module GSSAPI def self.authenticate(username, client, socket, opts={}) db = client.db('$external') hostname = socket.pool.host servicename = opts[:gssapi_service_name] || 'mongodb' canonicalize = opts[:canonicalize_host_name] ? 
opts[:canonicalize_host_name] : false authenticator = org.mongodb.sasl.GSSAPIAuthenticator.new(JRuby.runtime, username, hostname, servicename, canonicalize) token = BSON::Binary.new(authenticator.initialize_challenge) cmd = BSON::OrderedHash['saslStart', 1, 'mechanism', 'GSSAPI', 'payload', token, 'autoAuthorize', 1] response = db.command(cmd, :check_response => false, :socket => socket) until response['done'] do token = BSON::Binary.new(authenticator.evaluate_challenge(response['payload'].to_s)) cmd = BSON::OrderedHash['saslContinue', 1, 'conversationId', response['conversationId'], 'payload', token] response = db.command(cmd, :check_response => false, :socket => socket) end response end end end end ruby-mongo-1.10.0/lib/mongo/functional/uri_parser.rb000066400000000000000000000323571233461006100224240ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
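The URI parser below splits each comma-separated node into a host and port. A simplified sketch of that split for the common case — the bracketed IPv6 form handled later in parse_hosts is ignored here, and 27017 is MongoDB's standard default port:

```ruby
DEFAULT_PORT = 27017

# Split a 'host[:port]' node string; port defaults to 27017 (IPv6 brackets not handled).
def parse_node(node)
  host, port = node.split(':')
  [host, (port || DEFAULT_PORT).to_i]
end

puts parse_node('db1.example.com:27018').inspect
```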
require 'cgi' require 'uri' module Mongo class URIParser AUTH_REGEX = /((.+)@)?/ HOST_REGEX = /([-.\w]+)|(\[[^\]]+\])/ PORT_REGEX = /(?::(\w+))?/ NODE_REGEX = /((#{HOST_REGEX}#{PORT_REGEX},?)+)/ PATH_REGEX = /(?:\/([-\w]+))?/ MONGODB_URI_MATCHER = /#{AUTH_REGEX}#{NODE_REGEX}#{PATH_REGEX}/ MONGODB_URI_SPEC = "mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]" SPEC_ATTRS = [:nodes, :auths] READ_PREFERENCES = { 'primary' => :primary, 'primarypreferred' => :primary_preferred, 'secondary' => :secondary, 'secondarypreferred' => :secondary_preferred, 'nearest' => :nearest } OPT_ATTRS = [ :authmechanism, :authsource, :canonicalizehostname, :connect, :connecttimeoutms, :fsync, :gssapiservicename, :journal, :pool_size, :readpreference, :replicaset, :safe, :slaveok, :sockettimeoutms, :ssl, :w, :wtimeout, :wtimeoutms ] OPT_VALID = { :authmechanism => lambda { |arg| Mongo::Authentication.validate_mechanism(arg) }, :authsource => lambda { |arg| arg.length > 0 }, :canonicalizehostname => lambda { |arg| ['true', 'false'].include?(arg) }, :connect => lambda { |arg| [ 'direct', 'replicaset', 'true', 'false', true, false ].include?(arg) }, :connecttimeoutms => lambda { |arg| arg =~ /^\d+$/ }, :fsync => lambda { |arg| ['true', 'false'].include?(arg) }, :gssapiservicename => lambda { |arg| arg.length > 0 }, :journal => lambda { |arg| ['true', 'false'].include?(arg) }, :pool_size => lambda { |arg| arg.to_i > 0 }, :readpreference => lambda { |arg| READ_PREFERENCES.keys.include?(arg) }, :replicaset => lambda { |arg| arg.length > 0 }, :safe => lambda { |arg| ['true', 'false'].include?(arg) }, :slaveok => lambda { |arg| ['true', 'false'].include?(arg) }, :sockettimeoutms => lambda { |arg| arg =~ /^\d+$/ }, :ssl => lambda { |arg| ['true', 'false'].include?(arg) }, :w => lambda { |arg| arg =~ /^\w+$/ }, :wtimeout => lambda { |arg| arg =~ /^\d+$/ }, :wtimeoutms => lambda { |arg| arg =~ /^\d+$/ } } OPT_ERR = { :authmechanism => "must be one 
of #{Mongo::Authentication::MECHANISMS.join(', ')}", :authsource => "must be a string containing the name of the database being used for authentication", :canonicalizehostname => "must be 'true' or 'false'", :connect => "must be 'direct', 'replicaset', 'true', or 'false'", :connecttimeoutms => "must be an integer specifying milliseconds", :fsync => "must be 'true' or 'false'", :gssapiservicename => "must be a string containing the name of the GSSAPI service", :journal => "must be 'true' or 'false'", :pool_size => "must be an integer greater than zero", :readpreference => "must be one of #{READ_PREFERENCES.keys.map(&:inspect).join(",")}", :replicaset => "must be a string containing the name of the replica set to connect to", :safe => "must be 'true' or 'false'", :slaveok => "must be 'true' or 'false'", :sockettimeoutms => "must be an integer specifying milliseconds", :ssl => "must be 'true' or 'false'", :w => "must be an integer indicating number of nodes to replicate to or a string " + "specifying that replication is required to the majority or nodes with a " + "particular getLastErrorMode.", :wtimeout => "must be an integer specifying milliseconds", :wtimeoutms => "must be an integer specifying milliseconds" } OPT_CONV = { :authmechanism => lambda { |arg| arg.upcase }, :authsource => lambda { |arg| arg }, :canonicalizehostname => lambda { |arg| arg == 'true' ? true : false }, :connect => lambda { |arg| arg == 'false' ? false : arg }, # convert 'false' to FalseClass :connecttimeoutms => lambda { |arg| arg.to_f / 1000 }, # stored as seconds :fsync => lambda { |arg| arg == 'true' ? true : false }, :gssapiservicename => lambda { |arg| arg }, :journal => lambda { |arg| arg == 'true' ? true : false }, :pool_size => lambda { |arg| arg.to_i }, :readpreference => lambda { |arg| READ_PREFERENCES[arg] }, :replicaset => lambda { |arg| arg }, :safe => lambda { |arg| arg == 'true' ? true : false }, :slaveok => lambda { |arg| arg == 'true' ?
true : false }, :sockettimeoutms => lambda { |arg| arg.to_f / 1000 }, # stored as seconds :ssl => lambda { |arg| arg == 'true' ? true : false }, :w => lambda { |arg| Mongo::Support.is_i?(arg) ? arg.to_i : arg.to_sym }, :wtimeout => lambda { |arg| arg.to_i }, :wtimeoutms => lambda { |arg| arg.to_i } } attr_reader :auths, :authmechanism, :authsource, :canonicalizehostname, :connect, :connecttimeoutms, :db_name, :fsync, :gssapiservicename, :journal, :nodes, :pool_size, :readpreference, :replicaset, :safe, :slaveok, :sockettimeoutms, :ssl, :w, :wtimeout, :wtimeoutms # Parse a MongoDB URI. This method is used by MongoClient.from_uri. # Returns an array of nodes and an array of db authorizations, if applicable. # # @note Passwords can contain any character except for ',' # # @param [String] uri The MongoDB URI string. def initialize(uri) if uri.start_with?('mongodb://') uri = uri[10..-1] else raise MongoArgumentError, "MongoDB URI must match this spec: #{MONGODB_URI_SPEC}" end hosts, opts = uri.split('?') parse_options(opts) parse_hosts(hosts) validate_connect end # Create a Mongo::MongoClient or a Mongo::MongoReplicaSetClient based on the URI. # # @note Don't confuse this with attribute getter method #connect. # # @return [MongoClient,MongoReplicaSetClient] def connection(extra_opts={}, legacy = false, sharded = false) opts = connection_options.merge!(extra_opts) if(legacy) if replicaset? ReplSetConnection.new(node_strings, opts) else Connection.new(host, port, opts) end else if sharded MongoShardedClient.new(node_strings, opts) elsif replicaset? MongoReplicaSetClient.new(node_strings, opts) else MongoClient.new(host, port, opts) end end end # Whether this represents a replica set. # @return [true,false] def replicaset? replicaset.is_a?(String) || nodes.length > 1 end # Whether to immediately connect to the MongoDB node[s]. Defaults to true. # @return [true, false] def connect? connect != false end # Whether this represents a direct connection. 
# # @note Specifying :connect => 'direct' has no effect... other than to raise an exception if other variables suggest a replicaset. # # @return [true,false] def direct? !replicaset? end # For direct connections, the host of the (only) node. # @return [String] def host nodes[0][0] end # For direct connections, the port of the (only) node. # @return [Integer] def port nodes[0][1].to_i end # Options that can be passed to MongoClient.new or MongoReplicaSetClient.new # @return [Hash] def connection_options opts = {} if @wtimeout warn "Using wtimeout in a URI is deprecated, please use wtimeoutMS. It will be removed in v2.0." opts[:wtimeout] = @wtimeout end opts[:wtimeout] = @wtimeoutms if @wtimeoutms opts[:w] = 1 if @safe opts[:w] = @w if @w opts[:j] = @journal if @journal opts[:fsync] = @fsync if @fsync opts[:connect_timeout] = @connecttimeoutms if @connecttimeoutms opts[:op_timeout] = @sockettimeoutms if @sockettimeoutms opts[:pool_size] = @pool_size if @pool_size opts[:read] = @readpreference if @readpreference if @slaveok && !@readpreference unless replicaset? opts[:slave_ok] = true else opts[:read] = :secondary_preferred end end if replicaset.is_a?(String) opts[:name] = replicaset end opts[:db_name] = @db_name if @db_name opts[:auths] = @auths if @auths opts[:ssl] = @ssl if @ssl opts[:connect] = connect? opts end def node_strings nodes.map { |node| node.join(':') } end private def parse_hosts(uri_without_protocol) @nodes = [] @auths = Set.new unless matches = MONGODB_URI_MATCHER.match(uri_without_protocol) raise MongoArgumentError, "MongoDB URI must match this spec: #{MONGODB_URI_SPEC}" end user_info = matches[2].split(':') if matches[2] host_info = matches[3].split(',') @db_name = matches[8] host_info.each do |host| if host[0,1] == '[' host, port = host.split(']:') << MongoClient::DEFAULT_PORT host = host.end_with?(']') ? 
host[1...-1] : host[1..-1] else host, port = host.split(':') << MongoClient::DEFAULT_PORT end unless port.to_s =~ /^\d+$/ raise MongoArgumentError, "Invalid port #{port}; port must be specified as digits." end @nodes << [host, port.to_i] end if @nodes.empty? raise MongoArgumentError, "No nodes specified. Please ensure that you've provided at " + "least one node." end # no user info to parse, exit here return unless user_info # check for url encoding for username and password username, password = user_info if user_info.size > 2 || (username && username.include?('@')) || (password && password.include?('@')) raise MongoArgumentError, "The characters ':' and '@' in a username or password " + "must be escaped (RFC 2396)." end # if username exists, proceed adding to auth set unless username.nil? || username.empty? auth = Authentication.validate_credentials({ :db_name => @db_name, :username => URI.unescape(username), :password => password ? URI.unescape(password) : nil, :source => @authsource, :mechanism => @authmechanism }) auth[:extra] = @canonicalizehostname ? { :canonicalize_host_name => @canonicalizehostname } : {} auth[:extra].merge!(:gssapi_service_name => @gssapiservicename) if @gssapiservicename @auths << auth end end # This method uses the lambdas defined in OPT_VALID and OPT_CONV to validate # and convert the given options. def parse_options(string_opts) # initialize instance variables for available options OPT_VALID.keys.each { |k| instance_variable_set("@#{k}", nil) } string_opts ||= '' return if string_opts.empty? 
if string_opts.include?(';') and string_opts.include?('&') raise MongoArgumentError, 'must not mix URL separators ; and &' end opts = CGI.parse(string_opts).inject({}) do |memo, (key, value)| value = value.first memo[key.downcase.to_sym] = value.strip.downcase memo end opts.each do |key, value| if !OPT_ATTRS.include?(key) raise MongoArgumentError, "Invalid Mongo URI option #{key}" end if OPT_VALID[key].call(value) instance_variable_set("@#{key}", OPT_CONV[key].call(value)) else raise MongoArgumentError, "Invalid value #{value.inspect} for #{key}: #{OPT_ERR[key]}" end end end def validate_connect if replicaset? and @connect == 'direct' # Make sure the user doesn't specify something contradictory raise MongoArgumentError, "connect=direct conflicts with setting a replicaset name" end end end end ruby-mongo-1.10.0/lib/mongo/functional/write_concern.rb000066400000000000000000000042161233461006100231030ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module WriteConcern VALID_KEYS = [:w, :j, :fsync, :wtimeout] DEFAULT_WRITE_CONCERN = {:w => 1} attr_reader :legacy_write_concern @@safe_warn = nil def write_concern_from_legacy(opts) # Warn if 'safe' parameter is being used, if opts.key?(:safe) && !@@safe_warn && !ENV['TEST_MODE'] warn "[DEPRECATED] The 'safe' write concern option has been deprecated in favor of 'w'." 
@@safe_warn = true end # nil: set :w => 0 # false: set :w => 0 # true: set :w => 1 # hash: set :w => 0 and merge with opts unless opts.has_key?(:w) opts[:w] = 0 # legacy default, unacknowledged safe = opts.delete(:safe) if(safe && safe.is_a?(Hash)) opts.merge!(safe) elsif(safe == true) opts[:w] = 1 end end end # todo: throw exception for conflicting write concern options def get_write_concern(opts, parent=nil) write_concern_from_legacy(opts) if opts.key?(:safe) || legacy_write_concern write_concern = DEFAULT_WRITE_CONCERN.dup write_concern.merge!(parent.write_concern) if parent write_concern.merge!(opts.reject {|k,v| !VALID_KEYS.include?(k)}) write_concern[:w] = write_concern[:w].to_s if write_concern[:w].is_a?(Symbol) write_concern end def self.gle?(write_concern) (write_concern[:w].is_a? Symbol) || (write_concern[:w].is_a? String) || write_concern[:w] > 0 || write_concern[:j] || write_concern[:fsync] || write_concern[:wtimeout] end end end ruby-mongo-1.10.0/lib/mongo/gridfs.rb000066400000000000000000000013141233461006100173520ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
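# The legacy :safe => write-concern mapping documented above (nil or false
# become :w => 0, true becomes :w => 1, and a Hash is merged over :w => 0)
# can be sketched as a standalone method. This is an illustrative model,
# not part of the driver; the method name is hypothetical.

```ruby
# Minimal sketch of the legacy :safe option conversion. Mirrors the logic
# in Mongo::WriteConcern#write_concern_from_legacy: the conversion only
# runs when no modern :w key is already present.
def legacy_safe_to_write_concern(opts)
  opts = opts.dup
  unless opts.key?(:w)
    opts[:w] = 0                  # legacy default, unacknowledged
    safe = opts.delete(:safe)
    if safe.is_a?(Hash)
      opts.merge!(safe)           # e.g. :safe => { :w => 2, :fsync => true }
    elsif safe == true
      opts[:w] = 1                # acknowledged
    end
  end
  opts
end

legacy_safe_to_write_concern(:safe => true)                 # => {:w=>1}
legacy_safe_to_write_concern(:safe => false)                # => {:w=>0}
legacy_safe_to_write_concern(:safe => { :w => 2 })          # => {:w=>2}
legacy_safe_to_write_concern(:w => 'majority')              # => {:w=>"majority"}, untouched
```

# Note that an explicit :w always wins: a :safe key passed alongside :w is
# left in place by this sketch, matching the guard in the driver's version.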
require 'mongo/gridfs/grid_ext' require 'mongo/gridfs/grid' require 'mongo/gridfs/grid_file_system' require 'mongo/gridfs/grid_io' ruby-mongo-1.10.0/lib/mongo/gridfs/000077500000000000000000000000001233461006100170265ustar00rootroot00000000000000ruby-mongo-1.10.0/lib/mongo/gridfs/grid.rb000066400000000000000000000102701233461006100203000ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Implementation of the MongoDB GridFS specification. A file store. class Grid include GridExt::InstanceMethods DEFAULT_FS_NAME = 'fs' # Initialize a new Grid instance, consisting of a MongoDB database # and a filesystem prefix if not using the default. # # @see GridFileSystem def initialize(db, fs_name=DEFAULT_FS_NAME) raise MongoArgumentError, "db must be a Mongo::DB." unless db.is_a?(Mongo::DB) @db = db @files = @db["#{fs_name}.files"] @chunks = @db["#{fs_name}.chunks"] @fs_name = fs_name # This will create indexes only if we're connected to a primary node. begin @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true) rescue Mongo::ConnectionFailure end end # Store a file in the file store. This method is designed only for writing new files; # if you need to update a given file, first delete it using Grid#delete. # # Note that arbitrary metadata attributes can be saved to the file by passing # them in as options. 
# # @param [String, #read] data a string or io-like object to store. # # @option opts [String] :filename (nil) a name for the file. # @option opts [Hash] :metadata ({}) any additional data to store with the file. # @option opts [ObjectId] :_id (ObjectId) a unique id for # the file to be use in lieu of an automatically generated one. # @option opts [String] :content_type ('binary/octet-stream') If no content type is specified, # the content type will may be inferred from the filename extension if the mime-types gem can be # loaded. Otherwise, the content type 'binary/octet-stream' will be used. # @option opts [Integer] (261120) :chunk_size size of file chunks in bytes. # @option opts [String, Integer, Symbol] :w (1) Set write concern # # Notes on write concern: # When :w > 0, the chunks sent to the server are validated using an md5 hash. # If validation fails, an exception will be raised. # # @return [BSON::ObjectId] the file's id. def put(data, opts={}) begin # Ensure there is an index on files_id and n, as state may have changed since instantiation of self. # Recall that index definitions are cached with ensure_index so this statement won't unneccesarily repeat index creation. @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true) opts = opts.dup filename = opts.delete(:filename) opts.merge!(default_grid_io_opts) file = GridIO.new(@files, @chunks, filename, 'w', opts) file.write(data) file.close file.files_id rescue Mongo::ConnectionFailure => e raise e, "Failed to create necessary index and write data." end end # Read a file from the file store. # # @param id the file's unique id. # # @return [Mongo::GridIO] def get(id) opts = {:query => {'_id' => id}}.merge!(default_grid_io_opts) GridIO.new(@files, @chunks, nil, 'r', opts) end # Delete a file from the store. # # Note that deleting a GridFS file can result in read errors if another process # is attempting to read a file while it's being deleted. 
While the odds for this # kind of race condition are small, it's important to be aware of. # # @param id # # @return [Boolean] def delete(id) @files.remove({"_id" => id}) @chunks.remove({"files_id" => id}) end private def default_grid_io_opts {:fs_name => @fs_name} end end end ruby-mongo-1.10.0/lib/mongo/gridfs/grid_ext.rb000066400000000000000000000041271233461006100211640ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module GridExt module InstanceMethods # Check the existence of a file matching the given query selector. # # Note that this method can be used with both the Grid and GridFileSystem classes. Also # keep in mind that if you're going to be performing lots of existence checks, you should # keep an instance of Grid or GridFileSystem handy rather than instantiating for each existence # check. Alternatively, simply keep a reference to the proper files collection and query that # as needed. That's exactly how this methods works. # # @param [Hash] selector a query selector. 
# # @example # # # Check for the existence of a given filename # @grid = Mongo::GridFileSystem.new(@db) # @grid.exist?(:filename => 'foo.txt') # # # Check for existence filename and content type # @grid = Mongo::GridFileSystem.new(@db) # @grid.exist?(:filename => 'foo.txt', :content_type => 'image/jpg') # # # Check for existence by _id # @grid = Mongo::Grid.new(@db) # @grid.exist?(:_id => BSON::ObjectId.from_string('4bddcd24beffd95a7db9b8c8')) # # # Check for existence by an arbitrary attribute. # @grid = Mongo::Grid.new(@db) # @grid.exist?(:tags => {'$in' => ['nature', 'zen', 'photography']}) # # @return [nil, Hash] either nil for the file's metadata as a hash. def exist?(selector) @files.find_one(selector) end end end end ruby-mongo-1.10.0/lib/mongo/gridfs/grid_file_system.rb000066400000000000000000000146751233461006100227200ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # A file store built on the GridFS specification featuring # an API and behavior similar to that of a traditional file system. class GridFileSystem include GridExt::InstanceMethods # Initialize a new GridFileSystem instance, consisting of a MongoDB database # and a filesystem prefix if not using the default. # # @param [Mongo::DB] db a MongoDB database. # @param [String] fs_name A name for the file system. The default name, based on # the specification, is 'fs'. 
def initialize(db, fs_name=Grid::DEFAULT_FS_NAME) raise MongoArgumentError, "db must be a Mongo::DB." unless db.is_a?(Mongo::DB) @db = db @files = @db["#{fs_name}.files"] @chunks = @db["#{fs_name}.chunks"] @fs_name = fs_name @default_query_opts = {:sort => [['filename', 1], ['uploadDate', -1]], :limit => 1} # This will create indexes only if we're connected to a primary node. begin @files.ensure_index([['filename', 1], ['uploadDate', -1]]) @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true) rescue Mongo::ConnectionFailure end end # Open a file for reading or writing. Note that the options for this method only apply # when opening in 'w' mode. # # Note that arbitrary metadata attributes can be saved to the file by passing # them is as options. # # @param [String] filename the name of the file. # @param [String] mode either 'r' or 'w' for reading from # or writing to the file. # @param [Hash] opts see GridIO#new # # @option opts [Hash] :metadata ({}) any additional data to store with the file. # @option opts [ObjectId] :_id (ObjectId) a unique id for # the file to be use in lieu of an automatically generated one. # @option opts [String] :content_type ('binary/octet-stream') If no content type is specified, # the content type will may be inferred from the filename extension if the mime-types gem can be # loaded. Otherwise, the content type 'binary/octet-stream' will be used. # @option opts [Integer] (261120) :chunk_size size of file chunks in bytes. # @option opts [Boolean] :delete_old (false) ensure that old versions of the file are deleted. This option # only work in 'w' mode. Certain precautions must be taken when deleting GridFS files. See the notes under # GridFileSystem#delete. # @option opts [String, Integer, Symbol] :w (1) Set write concern # # Notes on write concern: # When :w > 0, the chunks sent to the server # will be validated using an md5 hash. If validation fails, an exception will be raised. 
# @option opts [Integer] :versions (false) retains only the specified number of the most recent versions, # ordered by uploadDate, and deletes all older versions. This option only works in 'w' mode. Certain precautions must be taken when # deleting GridFS files. See the notes under GridFileSystem#delete. # # @example # # # Store the text "Hello, world!" in the grid file system. # @grid = Mongo::GridFileSystem.new(@db) # @grid.open('filename', 'w') do |f| # f.write "Hello, world!" # end # # # Output "Hello, world!" # @grid = Mongo::GridFileSystem.new(@db) # @grid.open('filename', 'r') do |f| # puts f.read # end # # # Write a file on disk to the GridFileSystem # @file = File.open('image.jpg') # @grid = Mongo::GridFileSystem.new(@db) # @grid.open('image.jpg', 'w') do |f| # f.write @file # end # # @return [Mongo::GridIO] def open(filename, mode, opts={}) opts = opts.dup opts.merge!(default_grid_io_opts(filename)) if mode == 'w' begin # Ensure there are the appropriate indexes, as state may have changed since instantiation of self. # Recall that index definitions are cached with ensure_index so this statement won't unnecessarily repeat index creation. @files.ensure_index([['filename', 1], ['uploadDate', -1]]) @chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true) versions = opts.delete(:versions) if opts.delete(:delete_old) || (versions && versions < 1) versions = 1 end rescue Mongo::ConnectionFailure => e raise e, "Failed to create necessary indexes and write data." end end file = GridIO.new(@files, @chunks, filename, mode, opts) return file unless block_given? result = nil begin result = yield file ensure id = file.close if versions self.delete do @files.find({'filename' => filename, '_id' => {'$ne' => id}}, :fields => ['_id'], :sort => ['uploadDate', -1], :skip => (versions - 1)) end end end result end # Delete the file with the given filename. Note that this will delete # all versions of the file. # # Be careful with this.
Deleting a GridFS file can result in read errors if another process # is attempting to read a file while it's being deleted. While the odds for this # kind of race condition are small, it's important to be aware of. # # @param [String] filename # # @yield [] pass a block that returns an array of documents to be deleted. # # @return [Boolean] def delete(filename=nil) if block_given? files = yield else files = @files.find({'filename' => filename}, :fields => ['_id']) end files.each do |file| @files.remove({'_id' => file['_id']}) @chunks.remove({'files_id' => file['_id']}) end end alias_method :unlink, :delete private def default_grid_io_opts(filename=nil) {:fs_name => @fs_name, :query => {'filename' => filename}, :query_opts => @default_query_opts} end end end ruby-mongo-1.10.0/lib/mongo/gridfs/grid_io.rb000066400000000000000000000366111233461006100207760ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'digest/md5' module Mongo # GridIO objects represent files in the GridFS specification. This class # manages the reading and writing of file chunks and metadata. 
class GridIO include Mongo::WriteConcern DEFAULT_CHUNK_SIZE = 255 * 1024 DEFAULT_CONTENT_TYPE = 'binary/octet-stream' PROTECTED_ATTRS = [:files_id, :file_length, :client_md5, :server_md5] attr_reader :content_type, :chunk_size, :upload_date, :files_id, :filename, :metadata, :server_md5, :client_md5, :file_length, :file_position # Create a new GridIO object. Note that most users will not need to use this class directly; # the Grid and GridFileSystem classes will instantiate this class. # # @param [Mongo::Collection] files a collection for storing file metadata. # @param [Mongo::Collection] chunks a collection for storing file chunks. # @param [String] filename the name of the file to open or write. # @param [String] mode 'r' or 'w' for reading or creating a file. # # @option opts [Hash] :query a query selector used when opening the file in 'r' mode. # @option opts [Hash] :query_opts any query options to be used when opening the file in 'r' mode. # @option opts [String] :fs_name the file system prefix. # @option opts [Integer] :chunk_size (261120) size of file chunks in bytes. # @option opts [Hash] :metadata ({}) any additional data to store with the file. # @option opts [ObjectId] :_id (ObjectId) a unique id for # the file to be used in lieu of an automatically generated one. # @option opts [String] :content_type ('binary/octet-stream') If no content type is specified, # the content type may be inferred from the filename extension if the mime-types gem can be # loaded. Otherwise, the content type 'binary/octet-stream' will be used. # @option opts [String, Integer, Symbol] :w (1) Set the write concern # # Notes on write concern: # When :w > 0, the chunks sent to the server # will be validated using an md5 hash. If validation fails, an exception will be raised.
def initialize(files, chunks, filename, mode, opts={}) @files = files @chunks = chunks @filename = filename @mode = mode opts = opts.dup @query = opts.delete(:query) || {} @query_opts = opts.delete(:query_opts) || {} @fs_name = opts.delete(:fs_name) || Grid::DEFAULT_FS_NAME @write_concern = get_write_concern(opts) @local_md5 = Digest::MD5.new if Mongo::WriteConcern.gle?(@write_concern) @custom_attrs = {} case @mode when 'r' then init_read when 'w' then init_write(opts) else raise GridError, "Invalid file mode #{@mode}. Mode should be 'r' or 'w'." end end def [](key) @custom_attrs[key] || instance_variable_get("@#{key.to_s}") end def []=(key, value) if PROTECTED_ATTRS.include?(key.to_sym) warn "Attempting to overwrite protected value." return nil else @custom_attrs[key] = value end end # Read the data from the file. If a length if specified, will read from the # current file position. # # @param [Integer] length # # @return [String] # the data in the file def read(length=nil) return '' if @file_length.zero? if length == 0 return '' elsif length.nil? && @file_position.zero? read_all else read_length(length) end end alias_method :data, :read # Write the given string (binary) data to the file. # # @param [String] io the data to write. # # @return [Integer] the number of bytes written. def write(io) raise GridError, "file not opened for write" unless @mode[0] == ?w if io.is_a? String if Mongo::WriteConcern.gle?(@write_concern) @local_md5.update(io) end write_string(io) else length = 0 if Mongo::WriteConcern.gle?(@write_concern) while(string = io.read(@chunk_size)) @local_md5.update(string) length += write_string(string) end else while(string = io.read(@chunk_size)) length += write_string(string) end end length end end # Position the file pointer at the provided location. # # @param [Integer] pos # the number of bytes to advance the file pointer. this can be a negative # number. 
# @param [Integer] whence # one of IO::SEEK_CUR, IO::SEEK_END, or IO::SEEK_SET # # @return [Integer] the new file position def seek(pos, whence=IO::SEEK_SET) raise GridError, "Seek is only allowed in read mode." unless @mode == 'r' target_pos = case whence when IO::SEEK_CUR @file_position + pos when IO::SEEK_END @file_length + pos when IO::SEEK_SET pos end new_chunk_number = (target_pos / @chunk_size).to_i if new_chunk_number != @current_chunk['n'] save_chunk(@current_chunk) if @mode[0] == ?w @current_chunk = get_chunk(new_chunk_number) end @file_position = target_pos @chunk_position = @file_position % @chunk_size @file_position end # The current position of the file. # # @return [Integer] def tell @file_position end alias :pos :tell # Rewind the file. This is equivalent to seeking to the zeroth position. # # @return [Integer] the position of the file after rewinding (always zero). def rewind raise GridError, "file not opened for read" unless @mode[0] == ?r seek(0) end # Return a boolean indicating whether the position pointer is # at the end of the file. # # @return [Boolean] def eof raise GridError, "file not opened for read #{@mode}" unless @mode[0] == ?r @file_position >= @file_length end alias :eof? :eof # Return the next line from a GridFS file. This probably # makes sense only if you're storing plain text. This method # has a somewhat tricky API, which it inherits from Ruby's # StringIO#gets. # # @param [String, Integer] separator or length. If a separator, # read up to the separator. If a length, read the +length+ number # of bytes. If nil, read the entire file. # @param [Integer] length If a separator is provided, then # read until either finding the separator or # passing over the +length+ number of bytes. # # @return [String] def gets(separator="\n", length=nil) if separator.nil? 
read_all elsif separator.is_a?(Integer) read_length(separator) elsif separator.length > 1 read_to_string(separator, length) else read_to_character(separator, length) end end # Return the next byte from the GridFS file. # # @return [String] def getc read_length(1) end # Creates or updates the document from the files collection that # stores the chunks' metadata. The file becomes available only after # this method has been called. # # This method will be invoked automatically when # a block is passed to GridIO#open. Otherwise, it must be called manually. # # @return [BSON::ObjectId] def close if @mode[0] == ?w if @current_chunk['n'].zero? && @chunk_position.zero? warn "Warning: Storing a file with zero length." end @upload_date = Time.now.utc id = @files.insert(to_mongo_object) end id end # Read a chunk of the data from the file and yield it to the given # block. # # Note that this method reads from the current file position. # # @yield Yields one chunk per iteration as defined by this file's # chunk size. # # @return [Mongo::GridIO] self def each return read_all unless block_given? while chunk = read(chunk_size) yield chunk break if chunk.empty? end self end def inspect "#<GridIO _id: #{@files_id}>" end private def create_chunk(n) chunk = BSON::OrderedHash.new chunk['_id'] = BSON::ObjectId.new chunk['n'] = n chunk['files_id'] = @files_id chunk['data'] = '' @chunk_position = 0 chunk end def save_chunk(chunk) @chunks.save(chunk) end def get_chunk(n) chunk = @chunks.find({'files_id' => @files_id, 'n' => n}).next_document @chunk_position = 0 chunk end # Read a file in its entirety. def read_all buf = '' if @current_chunk buf << @current_chunk['data'].to_s while buf.size < @file_length @current_chunk = get_chunk(@current_chunk['n'] + 1) break if @current_chunk.nil? buf << @current_chunk['data'].to_s end @file_position = @file_length end buf end # Read a file incrementally. def read_length(length) cache_chunk_data remaining = (@file_length - @file_position) if length.nil?
to_read = remaining else to_read = length > remaining ? remaining : length end return nil unless remaining > 0 buf = '' while to_read > 0 if @chunk_position == @chunk_data_length @current_chunk = get_chunk(@current_chunk['n'] + 1) cache_chunk_data end chunk_remainder = @chunk_data_length - @chunk_position size = (to_read >= chunk_remainder) ? chunk_remainder : to_read buf << @current_chunk_data[@chunk_position, size] to_read -= size @chunk_position += size @file_position += size end buf end def read_to_character(character="\n", length=nil) result = '' len = 0 while char = getc result << char len += 1 break if char == character || (length ? len >= length : false) end result.length > 0 ? result : nil end def read_to_string(string="\n", length=nil) result = '' len = 0 match_idx = 0 match_num = string.length - 1 to_match = string[match_idx].chr if length matcher = lambda {|idx, num| idx < num && len < length } else matcher = lambda {|idx, num| idx < num} end while matcher.call(match_idx, match_num) && char = getc result << char len += 1 if char == to_match while match_idx < match_num do match_idx += 1 to_match = string[match_idx].chr char = getc result << char if char != to_match match_idx = 0 to_match = string[match_idx].chr break end end end end result.length > 0 ? result : nil end def cache_chunk_data @current_chunk_data = @current_chunk['data'].to_s if @current_chunk_data.respond_to?(:force_encoding) @current_chunk_data.force_encoding("binary") end @chunk_data_length = @current_chunk['data'].length end def write_string(string) # Since Ruby 1.9.1 doesn't necessarily store one character per byte. if string.respond_to?(:force_encoding) string.force_encoding("binary") end to_write = string.length while (to_write > 0) do if @current_chunk && @chunk_position == @chunk_size next_chunk_number = @current_chunk['n'] + 1 @current_chunk = create_chunk(next_chunk_number) end chunk_available = @chunk_size - @chunk_position step_size = (to_write > chunk_available) ? 
chunk_available : to_write @current_chunk['data'] = BSON::Binary.new((@current_chunk['data'].to_s << string[-to_write, step_size]).unpack("c*")) @chunk_position += step_size to_write -= step_size save_chunk(@current_chunk) end string.length - to_write end # Initialize the class for reading a file. def init_read doc = @files.find(@query, @query_opts).next_document raise GridFileNotFound, "Could not open file matching #{@query.inspect} #{@query_opts.inspect}" unless doc @files_id = doc['_id'] @content_type = doc['contentType'] @chunk_size = doc['chunkSize'] @upload_date = doc['uploadDate'] @aliases = doc['aliases'] @file_length = doc['length'] @metadata = doc['metadata'] @md5 = doc['md5'] @filename = doc['filename'] @custom_attrs = doc @current_chunk = get_chunk(0) @file_position = 0 end # Initialize the class for writing a file. def init_write(opts) opts = opts.dup @files_id = opts.delete(:_id) || BSON::ObjectId.new @content_type = opts.delete(:content_type) || (defined? MIME) && get_content_type || DEFAULT_CONTENT_TYPE @chunk_size = opts.delete(:chunk_size) || DEFAULT_CHUNK_SIZE @metadata = opts.delete(:metadata) @aliases = opts.delete(:aliases) @file_length = 0 opts.each {|k, v| self[k] = v} check_existing_file if Mongo::WriteConcern.gle?(@write_concern) @current_chunk = create_chunk(0) @file_position = 0 end def check_existing_file if @files.find_one('_id' => @files_id) raise GridError, "Attempting to overwrite with Grid#put. You must delete the file first." end end def to_mongo_object h = BSON::OrderedHash.new h['_id'] = @files_id h['filename'] = @filename if @filename h['contentType'] = @content_type h['length'] = @current_chunk ? 
@current_chunk['n'] * @chunk_size + @chunk_position : 0 h['chunkSize'] = @chunk_size h['uploadDate'] = @upload_date h['aliases'] = @aliases if @aliases h['metadata'] = @metadata if @metadata h['md5'] = get_md5 h.merge!(@custom_attrs) h end # Get a server-side md5 and validate against the client if running with acknowledged writes def get_md5 md5_command = BSON::OrderedHash.new md5_command['filemd5'] = @files_id md5_command['root'] = @fs_name @server_md5 = @files.db.command(md5_command)['md5'] if Mongo::WriteConcern.gle?(@write_concern) @client_md5 = @local_md5.hexdigest if @local_md5 == @server_md5 @server_md5 else raise GridMD5Failure, "File on server failed MD5 check" end else @server_md5 end end # Determine the content type based on the filename. def get_content_type if @filename if types = MIME::Types.type_for(@filename) types.first.simplified unless types.empty? end end end end end
ruby-mongo-1.10.0/lib/mongo/legacy.rb
# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
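The get_md5 check above compares a digest accumulated on the client against the server's filemd5 result. A minimal standalone sketch of that comparison, using only Ruby's Digest::MD5 and made-up chunk data (the chunk contents below are illustrative, not a real GridFS file):

```ruby
require 'digest/md5'

# Stand-in for a file stored as GridFS-style chunks.
chunks = ['hello ', 'grid', 'fs']

# Client side: update a running digest as each chunk is written,
# the way @local_md5 is maintained while chunks are flushed.
local_md5 = Digest::MD5.new
chunks.each { |chunk| local_md5.update(chunk) }

# Server side: the filemd5 command hashes the reassembled file;
# simulate that here by hashing the concatenated chunks.
server_md5 = Digest::MD5.hexdigest(chunks.join)

# Digest::MD5#== accepts a hex string, so the comparison in get_md5
# above works directly against the server's hex reply.
raise 'File on server failed MD5 check' unless local_md5 == server_md5
```

The same property makes the driver's `@local_md5 == @server_md5` comparison valid even though one side is a Digest object and the other a String.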
module Mongo module LegacyWriteConcern @legacy_write_concern = true def safe=(value) @write_concern = value end def safe if @write_concern[:w] == 0 return false elsif @write_concern[:w] == 1 return true else return @write_concern end end def self.from_uri(uri = ENV['MONGODB_URI'], extra_opts={}) parser = URIParser.new uri parser.connection(extra_opts, true) end end end module Mongo # @deprecated Use Mongo::MongoClient instead. Support will be removed after # v2.0. Please see old documentation for the Connection class. class Connection < MongoClient include Mongo::LegacyWriteConcern def initialize(*args) if args.last.is_a?(Hash) opts = args.pop write_concern_from_legacy(opts) args.push(opts) end super end end # @deprecated Use Mongo::MongoReplicaSetClient instead. Support will be # removed after v2.0. Please see old documentation for the # ReplSetConnection class. class ReplSetConnection < MongoReplicaSetClient include Mongo::LegacyWriteConcern def initialize(*args) if args.last.is_a?(Hash) opts = args.pop write_concern_from_legacy(opts) args.push(opts) end super end end # @deprecated Use Mongo::MongoShardedClient instead. Support will be removed # after v2.0. Please see old documentation for the ShardedConnection class. class ShardedConnection < MongoShardedClient include Mongo::LegacyWriteConcern def initialize(*args) if args.last.is_a?(Hash) opts = args.pop write_concern_from_legacy(opts) args.push(opts) end super end end class MongoClient # @deprecated This method is no longer in use and never needs to be called # directly. Support will be removed after v2.0 def authenticate_pools @primary_pool.authenticate_existing end # @deprecated This method is no longer in use and never needs to be called # directly. Support will be removed after v2.0 def logout_pools(database) @primary_pool.logout_existing(database) end # @deprecated This method is no longer in use and never needs to be called # directly. 
Support will be removed after v2.0 def apply_saved_authentication true end end class MongoReplicaSetClient # @deprecated This method is no longer in use and never needs to be called # directly. Support will be removed after v2.0 def authenticate_pools @manager.pools.each { |pool| pool.authenticate_existing } end # @deprecated This method is no longer in use and never needs to be called # directly. Support will be removed after v2.0 def logout_pools(database) @manager.pools.each { |pool| pool.logout_existing(database) } end end class DB # @deprecated Please use MongoClient#issue_authentication instead. Support # will be removed after v2.0 def issue_authentication(username, password, save_auth=true, opts={}) auth = Authentication.validate_credentials({ :db_name => self.name, :username => username, :password => password }) opts[:save_auth] = save_auth @client.issue_authentication(auth, opts) end # @deprecated Please use MongoClient#issue_logout instead. Support will be # removed after v2.0 def issue_logout(opts={}) @client.issue_logout(self.name, opts) end end end
ruby-mongo-1.10.0/lib/mongo/mongo_client.rb
# Copyright (C) 2009-2013 MongoDB, Inc. # Licensed under the Apache License, Version 2.0 (the "License"). module Mongo # Instantiates and manages connections to MongoDB.
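The safe accessor in LegacyWriteConcern above collapses a write-concern hash back into the old safe-mode values: :w of 0 was "unsafe", :w of 1 was plain safe mode, and anything richer is returned as the full hash. A standalone sketch of that mapping (legacy_safe is a hypothetical helper for illustration, not part of the driver):

```ruby
# Hypothetical helper mirroring LegacyWriteConcern#safe above:
# :w == 0 -> false (unacknowledged), :w == 1 -> true (acknowledged),
# anything else -> the full write-concern hash.
def legacy_safe(write_concern)
  case write_concern[:w]
  when 0 then false
  when 1 then true
  else write_concern
  end
end

legacy_safe(:w => 0)                    # => false
legacy_safe(:w => 1)                    # => true
legacy_safe(:w => 2, :wtimeout => 200)  # => the original hash
```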
class MongoClient include Mongo::Logging include Mongo::Networking include Mongo::WriteConcern include Mongo::Authentication # Wire version RELEASE_2_4_AND_BEFORE = 0 # Everything before we started tracking. AGG_RETURNS_CURSORS = 1 # The aggregation command may now be requested to return cursors. BATCH_COMMANDS = 2 # insert, update, and delete batch command MAX_WIRE_VERSION = BATCH_COMMANDS # supported by this client implementation MIN_WIRE_VERSION = RELEASE_2_4_AND_BEFORE # supported by this client implementation # Server command headroom COMMAND_HEADROOM = 16_384 APPEND_HEADROOM = COMMAND_HEADROOM / 2 SERIALIZE_HEADROOM = APPEND_HEADROOM / 2 DEFAULT_MAX_WRITE_BATCH_SIZE = 1000 Mutex = ::Mutex ConditionVariable = ::ConditionVariable DEFAULT_HOST = 'localhost' DEFAULT_PORT = 27017 DEFAULT_DB_NAME = 'test' GENERIC_OPTS = [:auths, :logger, :connect, :db_name] TIMEOUT_OPTS = [:timeout, :op_timeout, :connect_timeout] SSL_OPTS = [:ssl, :ssl_key, :ssl_cert, :ssl_verify, :ssl_ca_cert, :ssl_key_pass_phrase] POOL_OPTS = [:pool_size, :pool_timeout] READ_PREFERENCE_OPTS = [:read, :tag_sets, :secondary_acceptable_latency_ms] WRITE_CONCERN_OPTS = [:w, :j, :fsync, :wtimeout] CLIENT_ONLY_OPTS = [:slave_ok] mongo_thread_local_accessor :connections attr_reader :logger, :size, :auths, :primary, :write_concern, :host_to_try, :pool_size, :connect_timeout, :pool_timeout, :primary_pool, :socket_class, :socket_opts, :op_timeout, :tag_sets, :acceptable_latency, :read, :max_wire_version, :min_wire_version, :max_write_batch_size # Create a connection to single MongoDB instance. # # If no args are provided, it will check ENV["MONGODB_URI"]. # # You may specify whether connection to slave is permitted. # In all cases, the default host is "localhost" and the default port is 27017. # # If you're connecting to a replica set, you'll need to use MongoReplicaSetClient.new instead. 
# # Once connected to a replica set, you can find out which nodes are primary, secondary, and # arbiters with the corresponding accessors: MongoClient#primary, MongoClient#secondaries, and # MongoClient#arbiters. This is useful if your application needs to connect manually to nodes other # than the primary. # # @overload initialize(host, port, opts={}) # @param [String] host hostname for the target MongoDB server. # @param [Integer] port specify a port number here if only one host is being specified. # @param [Hash] opts hash of optional settings and configuration values. # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged. # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout. # @option opts [Boolean] :j (false) If true, block until write operations have been committed # to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was # ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will # fail with an exception if this option is used when the server is running without journaling. # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until # the server has synced all data files to disk. If the server is running with journaling, this acts the same as # the 'j' option, blocking until write operations have been committed to the journal. # Cannot be used in combination with 'j'. # # Notes about Write-Concern Options: # Write concern options are propagated to objects instantiated from this MongoClient. # These defaults can be overridden upon instantiation of any object by explicitly setting an options hash # on initialization. # # @option opts [Boolean] :ssl (false) If true, create the connection to the server using SSL. # @option opts [String] :ssl_cert (nil) The certificate file used to identify the local connection against MongoDB. 
# @option opts [String] :ssl_key (nil) The private keyfile used to identify the local connection against MongoDB. # Note that even if the key is stored in the same file as the certificate, both need to be explicitly specified. # @option opts [String] :ssl_key_pass_phrase (nil) A passphrase for the private key. # @option opts [Boolean] :ssl_verify (nil) Specifies whether or not peer certificate validation should occur. # @option opts [String] :ssl_ca_cert (nil) The ca_certs file contains a set of concatenated "certification authority" # certificates, which are used to validate certificates passed from the other end of the connection. # Required for :ssl_verify. # @option opts [Boolean] :slave_ok (false) Must be set to +true+ when connecting # to a single, slave node. # @option opts [Logger, #debug] :logger (nil) A Logger instance for debugging driver ops. Note that # logging negatively impacts performance; therefore, it should not be used for high-performance apps. # @option opts [Integer] :pool_size (1) The maximum number of socket connections allowed per # connection pool. Note: this setting is relevant only for multi-threaded applications. # @option opts [Float] :pool_timeout (5.0) When all of the connections in a pool are checked out, # this is the number of seconds to wait for a new connection to be released before throwing an exception. # Note: this setting is relevant only for multi-threaded applications. # @option opts [Float] :op_timeout (nil) The number of seconds to wait for a read operation to time out. # Disabled by default. # @option opts [Float] :connect_timeout (30) The number of seconds to wait before timing out a # connection attempt. # # @example localhost, 27017 (or ENV["MONGODB_URI"] if available) # MongoClient.new # # @example localhost, 27017 # MongoClient.new("localhost") # # @example localhost, 3000, max 5 connections, with max 5 seconds of wait time.
# MongoClient.new("localhost", 3000, :pool_size => 5, :pool_timeout => 5) # # @example localhost, 3000, where this node may be a slave # MongoClient.new("localhost", 3000, :slave_ok => true) # # @example Unix Domain Socket # MongoClient.new("/var/run/mongodb.sock") # # @see http://api.mongodb.org/ruby/current/file.REPLICA_SETS.html Replica sets in Ruby # # @raise [ReplicaSetConnectionError] This is raised if a replica set name is specified and the # driver fails to connect to a replica set with that name. # # @raise [MongoArgumentError] If called with no arguments and ENV["MONGODB_URI"] implies a replica set. def initialize(*args) opts = args.last.is_a?(Hash) ? args.pop : {} @host, @port = parse_init(args[0], args[1], opts) # Lock for request ids. @id_lock = Mutex.new # Connection pool for primary node @primary = nil @primary_pool = nil @mongos = false # Not set for direct connection @tag_sets = [] @acceptable_latency = 15 @max_bson_size = nil @max_message_size = nil @max_wire_version = nil @min_wire_version = nil @max_write_batch_size = nil check_opts(opts) setup(opts.dup) end # DEPRECATED # # Initialize a connection to a MongoDB replica set using an array of seed nodes. # # The seed nodes specified will be used on the initial connection to the replica set, but note # that this list of nodes will be replaced by the list of canonical nodes returned by running the # is_master command on the replica set. # # @param nodes [Array] An array of arrays, each of which specifies a host and port. # @param opts [Hash] Any of the available options that can be passed to MongoClient.new. # # @option opts [String] :rs_name (nil) The name of the replica set to connect to. An exception will be # raised if unable to connect to a replica set with this name. # @option opts [Boolean] :read_secondary (false) When true, this connection object will pick a random slave # to send reads to. 
# # @example # Mongo::MongoClient.multi([["db1.example.com", 27017], ["db2.example.com", 27017]]) # # @example This connection will read from a random secondary node. # Mongo::MongoClient.multi([["db1.example.com", 27017], ["db2.example.com", 27017], ["db3.example.com", 27017]], # :read_secondary => true) # # @return [Mongo::MongoClient] # # @deprecated def self.multi(nodes, opts={}) warn 'MongoClient.multi is now deprecated and will be removed in v2.0. Please use MongoReplicaSetClient.new instead.' MongoReplicaSetClient.new(nodes, opts) end # Initialize a connection to MongoDB using the MongoDB URI spec. # # Since MongoClient.new cannot be used with any ENV["MONGODB_URI"] that has multiple hosts (implying a replicaset), # you may use this when the type of your connection varies by environment and should be determined solely from ENV["MONGODB_URI"]. # # @param uri [String] # A string of the format mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database] # # @param [Hash] extra_opts Any of the options available for MongoClient.new # # @return [Mongo::MongoClient, Mongo::MongoReplicaSetClient] def self.from_uri(uri = ENV['MONGODB_URI'], extra_opts={}) parser = URIParser.new(uri) parser.connection(extra_opts) end # The host name used for this connection. # # @return [String] def host @primary_pool.host end # The port used for this connection. # # @return [Integer] def port @primary_pool.port end def host_port [@host, @port] end # Flush all pending writes to datafiles. # # @return [BSON::OrderedHash] the command response def lock! cmd = BSON::OrderedHash.new cmd[:fsync] = 1 cmd[:lock] = true self['admin'].command(cmd) end # Is this database locked against writes? # # @return [Boolean] def locked? [1, true].include? self['admin']['$cmd.sys.inprog'].find_one['fsyncLock'] end # Unlock a previously fsync-locked mongod process. # # @return [BSON::OrderedHash] command response def unlock! 
self['admin']['$cmd.sys.unlock'].find_one end # Return a hash with all database names # and their respective sizes on disk. # # @return [Hash] def database_info doc = self['admin'].command({:listDatabases => 1}) doc['databases'].inject({}) do |info, db| info[db['name']] = db['sizeOnDisk'].to_i info end end # Return an array of database names. # # @return [Array] def database_names database_info.keys end # Return a database with the given name. # See DB#new for valid options hash parameters. # # @param name [String] The name of the database. # @param opts [Hash] A hash of options to be passed to the DB constructor. # # @return [DB] The DB instance. def db(name = nil, opts = {}) DB.new(name || @db_name || DEFAULT_DB_NAME, self, opts) end # Shortcut for returning a database. Use MongoClient#db to accept options. # # @param name [String] The name of the database. # # @return [DB] The DB instance. def [](name) DB.new(name, self) end def refresh; end def pinned_pool @primary_pool end def pin_pool(pool, read_prefs); end def unpin_pool; end # Drop a database. # # @param database [String] name of an existing database. def drop_database(database) self[database].command(:dropDatabase => 1) end # Copy the database +from+ to +to+ on localhost. The +from+ database is # assumed to be on localhost, but an alternate host can be specified. # # @param from [String] name of the database to copy from. # @param to [String] name of the database to copy to. # @param from_host [String] host of the 'from' database. # @param username [String] username (applies to 'from' db) # @param password [String] password (applies to 'from' db) # # @note This command only supports the MONGODB-CR authentication mechanism. 
def copy_database(from, to, from_host=DEFAULT_HOST, username=nil, password=nil) oh = BSON::OrderedHash.new oh[:copydb] = 1 oh[:fromhost] = from_host oh[:fromdb] = from oh[:todb] = to if username || password unless username && password raise MongoArgumentError, 'Both username and password must be supplied for authentication.' end nonce_cmd = BSON::OrderedHash.new nonce_cmd[:copydbgetnonce] = 1 nonce_cmd[:fromhost] = from_host result = self['admin'].command(nonce_cmd) oh[:nonce] = result['nonce'] oh[:username] = username oh[:key] = Mongo::Authentication.auth_key(username, password, oh[:nonce]) end self['admin'].command(oh) end # Checks if a server is alive. This command will return immediately # even if the server is in a lock. # # @return [Hash] def ping self['admin'].command({:ping => 1}) end # Get the build information for the current connection. # # @return [Hash] def server_info self['admin'].command({:buildinfo => 1}) end # Get the build version of the current server. # # @return [Mongo::ServerVersion] # object allowing easy comparability of version. def server_version ServerVersion.new(server_info['version']) end # Is it okay to connect to a slave? # # @return [Boolean] def slave_ok? @slave_ok end def mongos? @mongos end # Create a new socket and attempt to connect to master. # If successful, sets host and port to master and returns the socket. # # If connecting to a replica set, this method will replace the # initially-provided seed list with any nodes known to the set. # # @raise [ConnectionFailure] if unable to connect to any host or port. 
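copy_database above fetches a nonce via copydbgetnonce and derives oh[:key] through Mongo::Authentication.auth_key. A sketch of the MONGODB-CR key derivation that call performs — two nested MD5 digests — with illustrative credentials (auth_key below is a local stand-in, and the username, password, and nonce values are made up):

```ruby
require 'digest/md5'

# Local stand-in for Mongo::Authentication.auth_key (MONGODB-CR):
# the password is first reduced to md5("user:mongo:pass"), then the
# key is md5(nonce + username + hashed_password).
def auth_key(username, password, nonce)
  hashed_password = Digest::MD5.hexdigest("#{username}:mongo:#{password}")
  Digest::MD5.hexdigest("#{nonce}#{username}#{hashed_password}")
end

# Illustrative values only; a real nonce comes from copydbgetnonce.
key = auth_key('copier', 'secret', '2375531c32080ae8')
key.length # => 32
```

The nonce makes the key single-use: replaying a captured key against a fresh nonce produces a different digest, which is why copy_database requests a nonce immediately before sending the command.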
def connect close config = check_is_master(host_port) if config if config['ismaster'] == 1 || config['ismaster'] == true @read_primary = true elsif @slave_ok @read_primary = false end if config.has_key?('msg') && config['msg'] == 'isdbgrid' @mongos = true end @max_bson_size = config['maxBsonObjectSize'] @max_message_size = config['maxMessageSizeBytes'] @max_wire_version = config['maxWireVersion'] @min_wire_version = config['minWireVersion'] @max_write_batch_size = config['maxWriteBatchSize'] check_wire_version_in_range set_primary(host_port) end unless connected? raise ConnectionFailure, "Failed to connect to a master node at #{host_port.join(":")}" end true end alias :reconnect :connect # It's possible that we defined connected as all nodes being connected??? # NOTE: Do check if this needs to be more stringent. # Probably not since if any node raises a connection failure, all nodes will be closed. def connected? !!(@primary_pool && !@primary_pool.closed?) end # Determine if the connection is active. In a normal case the *server_info* operation # will be performed without issues, but if the connection was dropped by the server or # for some reason the sockets are unsynchronized, a ConnectionFailure will be raised and # the return will be false. # # @return [Boolean] def active? return false unless connected? ping true rescue ConnectionFailure false end # Determine whether we're reading from a primary node. If false, # this connection connects to a secondary node and @slave_ok is true. # # @return [Boolean] def read_primary? @read_primary end alias :primary? :read_primary? # The socket pool that this connection reads from. # # @return [Mongo::Pool] def read_pool @primary_pool end # Close the connection to the database. def close @primary_pool.close if @primary_pool @primary_pool = nil @primary = nil end # Returns the maximum BSON object size as returned by the core server. # Use the 4MB default when the server doesn't report this. 
# # @return [Integer] def max_bson_size @max_bson_size || DEFAULT_MAX_BSON_SIZE end def max_message_size @max_message_size || max_bson_size * MESSAGE_SIZE_FACTOR end def max_wire_version @max_wire_version || 0 end def min_wire_version @min_wire_version || 0 end def max_write_batch_size @max_write_batch_size || DEFAULT_MAX_WRITE_BATCH_SIZE end def wire_version_feature?(feature) min_wire_version <= feature && feature <= max_wire_version end def primary_wire_version_feature?(feature) min_wire_version <= feature && feature <= max_wire_version end def use_write_command?(write_concern) write_concern[:w] != 0 && primary_wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS) end # Checkout a socket for reading (i.e., a secondary node). # Note: this is overridden in MongoReplicaSetClient. def checkout_reader(read_preference) connect unless connected? @primary_pool.checkout end # Checkout a socket for writing (i.e., a primary node). # Note: this is overridden in MongoReplicaSetClient. def checkout_writer connect unless connected? @primary_pool.checkout end # Check a socket back into its pool. # Note: this is overridden in MongoReplicaSetClient. def checkin(socket) if @primary_pool && socket && socket.pool socket.checkin end end # Internal method for checking isMaster() on a given node. # # @param node [Array] Port and host for the target node # @return [Hash] Response from isMaster() # # @private def check_is_master(node) begin host, port = *node config = nil socket = @socket_class.new(host, port, @op_timeout, @connect_timeout, @socket_opts) if @connect_timeout Timeout::timeout(@connect_timeout, OperationTimeout) do config = self['admin'].command({:isMaster => 1}, :socket => socket) end else config = self['admin'].command({:isMaster => 1}, :socket => socket) end rescue OperationFailure, SocketError, SystemCallError, IOError close ensure socket.close unless socket.nil? || socket.closed? 
end config end protected def valid_opts GENERIC_OPTS + CLIENT_ONLY_OPTS + POOL_OPTS + READ_PREFERENCE_OPTS + WRITE_CONCERN_OPTS + TIMEOUT_OPTS + SSL_OPTS end def check_opts(opts) bad_opts = opts.keys.reject { |opt| valid_opts.include?(opt) } unless bad_opts.empty? bad_opts.each {|opt| warn "#{opt} is not a valid option for #{self.class}"} end end # Parse option hash def setup(opts) @slave_ok = opts.delete(:slave_ok) @ssl = opts.delete(:ssl) @unix = @host ? @host.end_with?('.sock') : false # if ssl options are present, but ssl is nil/false raise for misconfig ssl_opts = opts.keys.select { |k| k.to_s.start_with?('ssl') } if ssl_opts.size > 0 && !@ssl raise MongoArgumentError, "SSL has not been enabled (:ssl=false) " + "but the following SSL related options were " + "specified: #{ssl_opts.join(', ')}" end @socket_opts = {} if @ssl # construct ssl socket opts @socket_opts[:key] = opts.delete(:ssl_key) @socket_opts[:cert] = opts.delete(:ssl_cert) @socket_opts[:verify] = opts.delete(:ssl_verify) @socket_opts[:ca_cert] = opts.delete(:ssl_ca_cert) @socket_opts[:key_pass_phrase] = opts.delete(:ssl_key_pass_phrase) # verify peer requires ca_cert, raise if only one is present if @socket_opts[:verify] && !@socket_opts[:ca_cert] raise MongoArgumentError, 'If :ssl_verify_mode has been specified, then you must include ' + ':ssl_ca_cert in order to perform server validation.' end # if we have a keyfile passphrase but no key file, raise if @socket_opts[:key_pass_phrase] && !@socket_opts[:key] raise MongoArgumentError, 'If :ssl_key_pass_phrase has been specified, then you must include ' + ':ssl_key, the passphrase-protected keyfile.' end @socket_class = Mongo::SSLSocket elsif @unix @socket_class = Mongo::UNIXSocket else @socket_class = Mongo::TCPSocket end @db_name = opts.delete(:db_name) @auths = opts.delete(:auths) || Set.new # Pool size and timeout. 
@pool_size = opts.delete(:pool_size) || 1 if opts[:timeout] warn 'The :timeout option has been deprecated ' + 'and will be removed in the 2.0 release. ' + 'Use :pool_timeout instead.' end @pool_timeout = opts.delete(:pool_timeout) || opts.delete(:timeout) || 5.0 # Timeout on socket read operation. @op_timeout = opts.delete(:op_timeout) # Timeout on socket connect. @connect_timeout = opts.delete(:connect_timeout) || 30 @logger = opts.delete(:logger) if @logger write_logging_startup_message end # Determine read preference if defined?(@slave_ok) && (@slave_ok) || defined?(@read_secondary) && @read_secondary @read = :secondary_preferred else @read = opts.delete(:read) || :primary end Mongo::ReadPreference::validate(@read) @tag_sets = opts.delete(:tag_sets) || [] @acceptable_latency = opts.delete(:secondary_acceptable_latency_ms) || 15 # Connection level write concern options. @write_concern = get_write_concern(opts) connect if opts.fetch(:connect, true) end private # Parses client initialization info from MONGODB_URI env variable def parse_init(host, port, opts) if host.nil? && port.nil? && ENV.has_key?('MONGODB_URI') parser = URIParser.new(ENV['MONGODB_URI']) if parser.replicaset? raise MongoArgumentError, 'ENV[\'MONGODB_URI\'] implies a replica set.' 
end opts.merge!(parser.connection_options) [parser.host, parser.port] else host = host[1...-1] if host && host[0,1] == '[' # ipv6 support [host || DEFAULT_HOST, port || DEFAULT_PORT] end end # Set the specified node as primary def set_primary(node) host, port = *node @primary = [host, port] @primary_pool = Pool.new(self, host, port, :size => @pool_size, :timeout => @pool_timeout) end # Ensure the client's supported wire-version range overlaps the server's advertised range def check_wire_version_in_range unless MIN_WIRE_VERSION <= max_wire_version && MAX_WIRE_VERSION >= min_wire_version close raise ConnectionFailure, "Client wire-version range #{MIN_WIRE_VERSION} to " + "#{MAX_WIRE_VERSION} does not support server range " + "#{min_wire_version} to #{max_wire_version}, please update " + "clients or servers" end end end end
ruby-mongo-1.10.0/lib/mongo/mongo_replica_set_client.rb
# Copyright (C) 2009-2013 MongoDB, Inc. # Licensed under the Apache License, Version 2.0 (the "License"). module Mongo # Instantiates and manages connections to a MongoDB replica set. class MongoReplicaSetClient < MongoClient include ReadPreference include ThreadLocalVariableManager REPL_SET_OPTS = [ :refresh_mode, :refresh_interval, :read_secondary, :rs_name, :name ] attr_reader :replica_set_name, :seeds, :refresh_interval, :refresh_mode, :refresh_version, :manager # Create a connection to a MongoDB replica set. # # If no args are provided, it will check ENV["MONGODB_URI"].
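check_wire_version_in_range above accepts a server only when the client's [MIN_WIRE_VERSION, MAX_WIRE_VERSION] range overlaps the server's advertised [minWireVersion, maxWireVersion] range. The overlap test can be sketched in isolation (the constants mirror the values defined earlier in MongoClient):

```ruby
# Client range from the MongoClient constants:
# RELEASE_2_4_AND_BEFORE (0) through BATCH_COMMANDS (2).
MIN_WIRE_VERSION = 0
MAX_WIRE_VERSION = 2

# Two closed ranges overlap exactly when each one starts at or before
# the other one ends -- the same condition the unless-guard tests above.
def wire_version_in_range?(server_min, server_max)
  MIN_WIRE_VERSION <= server_max && MAX_WIRE_VERSION >= server_min
end

wire_version_in_range?(0, 2) # => true   (identical ranges)
wire_version_in_range?(2, 4) # => true   (partial overlap)
wire_version_in_range?(3, 5) # => false  (server requires a newer client)
```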
# # Once connected to a replica set, you can find out which nodes are primary, secondary, and # arbiters with the corresponding accessors: MongoClient#primary, MongoClient#secondaries, and # MongoClient#arbiters. This is useful if your application needs to connect manually to nodes other # than the primary. # # @overload initialize(seeds=ENV["MONGODB_URI"], opts={}) # @param [Array<String>, Array<Array(String, Integer)>] seeds # # @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write # should be acknowledged. # @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout. # @option opts [Boolean] :j (false) If true, block until write operations have been committed # to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was # ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will # fail with an exception if this option is used when the server is running without journaling. # @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until # the server has synced all data files to disk. If the server is running with journaling, this acts the same as # the 'j' option, blocking until write operations have been committed to the journal. # Cannot be used in combination with 'j'. # # Notes about write concern options: # Write concern options are propagated to objects instantiated from this MongoReplicaSetClient. # These defaults can be overridden upon instantiation of any object by explicitly setting an options hash # on initialization. # @option opts [:primary, :primary_preferred, :secondary, :secondary_preferred, :nearest] :read (:primary) # A "read preference" determines the candidate replica set members to which a query or command can be sent. # [:primary] # * Read from primary only. # * Cannot be combined with tags. # [:primary_preferred] # * Read from primary if available, otherwise read from a secondary.
# [:secondary] # * Read from secondary if available. # [:secondary_preferred] # * Read from a secondary if available, otherwise read from the primary. # [:nearest] # * Read from any member. # @option opts [Array<Hash{ String, Symbol => Tag Value }>] :tag_sets ([]) # Read from replica-set members with these tags. # @option opts [Integer] :secondary_acceptable_latency_ms (15) The maximum acceptable latency, in milliseconds, # relative to the nearest available member for a member to be considered "near". # @option opts [Logger] :logger (nil) Logger instance to receive driver operation log. # @option opts [Integer] :pool_size (1) The maximum number of socket connections allowed per # connection pool. Note: this setting is relevant only for multi-threaded applications. # @option opts [Float] :pool_timeout (5.0) When all of the connections in a pool are checked out, # this is the number of seconds to wait for a new connection to be released before throwing an exception. # Note: this setting is relevant only for multi-threaded applications. # @option opts [Float] :op_timeout (nil) The number of seconds to wait for a read operation to time out. # @option opts [Float] :connect_timeout (30) The number of seconds to wait before timing out a # connection attempt. # @option opts [Boolean] :ssl (false) If true, create the connection to the server using SSL. # @option opts [String] :ssl_cert (nil) The certificate file used to identify the local connection against MongoDB. # @option opts [String] :ssl_key (nil) The private keyfile used to identify the local connection against MongoDB. # Note that even if the key is stored in the same file as the certificate, both need to be explicitly specified. # @option opts [String] :ssl_key_pass_phrase (nil) A passphrase for the private key. # @option opts [Boolean] :ssl_verify (nil) Specifies whether or not peer certificate validation should occur.
# @option opts [String] :ssl_ca_cert (nil) The ca_certs file contains a set of concatenated "certification authority" # certificates, which are used to validate certificates passed from the other end of the connection. # Required for :ssl_verify. # @option opts [Boolean] :refresh_mode (false) Set this to :sync to periodically update the # state of the connection every :refresh_interval seconds. Replica set connection failures # will always trigger a complete refresh. This option is useful when you want to add new nodes # or remove replica set nodes not currently in use by the driver. # @option opts [Integer] :refresh_interval (90) If :refresh_mode is enabled, this is the number of seconds # between calls to check the replica set's state. # @note the number of seed nodes does not have to be equal to the number of replica set members. # The purpose of seed nodes is to permit the driver to find at least one replica set member even if a member is down. # # @example Connect to a replica set and provide two seed nodes. # MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001']) # # @example Connect to a replica set providing two seed nodes and ensuring a connection to the replica set named 'prod': # MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'], :name => 'prod') # # @example Connect to a replica set providing two seed nodes and allowing reads from a secondary node: # MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'], :read => :secondary) # # @see http://api.mongodb.org/ruby/current/file.REPLICA_SETS.html Replica sets in Ruby # # @raise [MongoArgumentError] This is raised for usage errors. # # @raise [ConnectionFailure] This is raised for the various connection failures. def initialize(*args) opts = args.last.is_a?(Hash) ? args.pop : {} nodes = args.shift || [] raise MongoArgumentError, "Too many arguments" unless args.empty? 
# This is temporary until support for the old format is dropped @seeds = nodes.collect do |node| if node.is_a?(Array) warn "Initiating a MongoReplicaSetClient with seeds passed as individual [host, port] array arguments is deprecated." warn "Please specify hosts as an array of 'host:port' strings; the old format will be removed in v2.0" node elsif node.is_a?(String) Support.normalize_seeds(node) else raise MongoArgumentError, "Bad seed format!" end end if @seeds.empty? && ENV.has_key?('MONGODB_URI') parser = URIParser.new ENV['MONGODB_URI'] if parser.direct? raise MongoArgumentError, "ENV['MONGODB_URI'] implies a direct connection." end opts = parser.connection_options.merge! opts @seeds = parser.nodes end if @seeds.length.zero? raise MongoArgumentError, "A MongoReplicaSetClient requires at least one seed node." end @seeds.freeze # Refresh @last_refresh = Time.now @refresh_version = 0 # No connection manager by default. @manager = nil # Lock for request ids. @id_lock = Mutex.new @connected = false @connect_mutex = Mutex.new @mongos = false check_opts(opts) setup(opts.dup) end def valid_opts super + REPL_SET_OPTS - CLIENT_ONLY_OPTS end def inspect "<Mongo::MongoReplicaSetClient:0x#{self.object_id.to_s(16)} @seeds=#{@seeds.inspect} @connected=#{@connected}>" end # Initiate a connection to the replica set. def connect(force = !connected?) return unless force log(:info, "Connecting...") # Prevent recursive connection attempts from the same thread. # This is done rather than using a Monitor to prevent potentially recursing # infinitely while attempting to connect and continually failing. Instead, fail fast. raise ConnectionFailure, "Failed to get node data." 
if thread_local[:locks][:connecting] == true current_version = @refresh_version @connect_mutex.synchronize do # don't try to connect if another thread has done so while we were waiting for the lock return unless current_version == @refresh_version begin thread_local[:locks][:connecting] = true if @manager ensure_manager @manager.refresh!(@seeds) else @manager = PoolManager.new(self, @seeds) ensure_manager @manager.connect end ensure thread_local[:locks][:connecting] = false end @refresh_version += 1 if @manager.pools.empty? close raise ConnectionFailure, "Failed to connect to any node." end check_wire_version_in_range @connected = true end end # Determine whether a replica set refresh is # required. If so, run a hard refresh. You can # force a hard refresh by running # MongoReplicaSetClient#hard_refresh! # # @return [Boolean] +true+ unless a hard refresh # is run and the refresh lock can't be acquired. def refresh(opts={}) if !connected? log(:info, "Trying to check replica set health but not " + "connected...") return hard_refresh! end log(:debug, "Checking replica set connection health...") ensure_manager @manager.check_connection_health if @manager.refresh_required? return hard_refresh! end return true end # Force a hard refresh of this connection's view # of the replica set. # # @return [Boolean] +true+ if hard refresh # occurred. +false+ is returned when unable # to get the refresh lock. def hard_refresh! log(:info, "Initiating hard refresh...") connect(true) return true end def connected? @connected && !@manager.pools.empty? end # @deprecated def connecting? warn "MongoReplicaSetClient#connecting? is deprecated and will be removed in v2.0." false end # The replica set primary's host name. # # @return [String] def host @manager.primary_pool.host end # The replica set primary's port. # # @return [Integer] def port @manager.primary_pool.port end def nodes warn "MongoReplicaSetClient#nodes is DEPRECATED and will be removed in v2.0. 
" + "Please use MongoReplicaSetClient#seeds instead." @seeds end # Determine whether we're reading from a primary node. If false, # this connection connects to a secondary node and @read_secondaries is true. # # @return [Boolean] def read_primary? read_pool == primary_pool end alias :primary? :read_primary? # Close the connection to the database. def close(opts={}) if opts[:soft] @manager.close(:soft => true) if @manager else @manager.close if @manager end # Clear the reference to this object. thread_local[:managers].delete(self) unpin_pool @connected = false end # If a ConnectionFailure is raised, this method will be called # to close the connection and reset connection values. # @deprecated def reset_connection close warn "MongoReplicaSetClient#reset_connection is now deprecated and will be removed in v2.0. " + "Use MongoReplicaSetClient#close instead." end # Returns +true+ if it's okay to read from a secondary node. # # This method exist primarily so that Cursor objects will # generate query messages with a slaveOkay value of +true+. # # @return [Boolean] +true+ def slave_ok? @read != :primary end # Generic socket checkout # Takes a block that returns a socket from pool def checkout ensure_manager connected? ? sync_refresh : connect begin socket = yield rescue => ex checkin(socket) if socket raise ex end if socket return socket else @connected = false raise ConnectionFailure.new("Could not checkout a socket.") end end def checkout_reader(read_pref={}) checkout do pool = read_pool(read_pref) get_socket_from_pool(pool) end end # Checkout a socket for writing (i.e., a primary node). def checkout_writer checkout do get_socket_from_pool(primary_pool) end end # Checkin a socket used for reading. 
def checkin(socket) if socket && socket.pool socket.checkin end sync_refresh end def ensure_manager thread_local[:managers][self] = @manager end def pinned_pool thread_local[:pinned_pools][@manager.object_id] if @manager end def pin_pool(pool, read_preference) if @manager thread_local[:pinned_pools][@manager.object_id] = { :pool => pool, :read_preference => read_preference } end end def unpin_pool thread_local[:pinned_pools].delete @manager.object_id if @manager end def get_socket_from_pool(pool) begin pool.checkout if pool rescue ConnectionFailure nil end end def local_manager thread_local[:managers][self] end def arbiters local_manager.arbiters.nil? ? [] : local_manager.arbiters end def primary local_manager ? local_manager.primary : nil end # Note: might want to freeze these after connecting. def secondaries local_manager ? local_manager.secondaries : [] end def hosts local_manager ? local_manager.hosts : [] end def primary_pool local_manager ? local_manager.primary_pool : nil end def secondary_pool local_manager ? local_manager.secondary_pool : nil end def secondary_pools local_manager ? local_manager.secondary_pools : [] end def pools local_manager ? local_manager.pools : [] end def tag_map local_manager ? 
local_manager.tag_map : {} end def max_bson_size return local_manager.max_bson_size if local_manager DEFAULT_MAX_BSON_SIZE end def max_message_size return local_manager.max_message_size if local_manager max_bson_size * MESSAGE_SIZE_FACTOR end def max_wire_version return local_manager.max_wire_version if local_manager 0 end def min_wire_version return local_manager.min_wire_version if local_manager 0 end def primary_wire_version_feature?(feature) local_manager && local_manager.primary_pool && local_manager.primary_pool.node.wire_version_feature?(feature) end def max_write_batch_size local_manager && local_manager.primary_pool && local_manager.primary_pool.node.max_write_batch_size end private # Parse option hash def setup(opts) # Refresh @refresh_mode = opts.delete(:refresh_mode) || false @refresh_interval = opts.delete(:refresh_interval) || 90 if @refresh_mode && @refresh_interval < 60 @refresh_interval = 60 unless ENV['TEST_MODE'] == 'TRUE' end if @refresh_mode == :async warn ":async refresh mode has been deprecated. Refresh mode will be disabled." elsif ![:sync, false].include?(@refresh_mode) raise MongoArgumentError, "Refresh mode must be either :sync or false." end if opts[:read_secondary] warn ":read_secondary option has been deprecated and will " + "be removed in driver v2.0. Use the :read option instead." @read_secondary = opts.delete(:read_secondary) || false end # Replica set name if opts[:rs_name] warn ":rs_name option has been deprecated and will be removed in v2.0. " + "Please use :name instead." @replica_set_name = opts.delete(:rs_name) else @replica_set_name = opts.delete(:name) end super opts end def sync_refresh if @refresh_mode == :sync && ((Time.now - @last_refresh) > @refresh_interval) @last_refresh = Time.now refresh end end end end ruby-mongo-1.10.0/lib/mongo/mongo_sharded_client.rb000066400000000000000000000110771233461006100222520ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Instantiates and manages connections to a MongoDB sharded cluster for high availability. class MongoShardedClient < MongoReplicaSetClient include ThreadLocalVariableManager SHARDED_CLUSTER_OPTS = [:refresh_mode, :refresh_interval, :tag_sets, :read] attr_reader :seeds, :refresh_interval, :refresh_mode, :refresh_version, :manager def initialize(*args) opts = args.last.is_a?(Hash) ? args.pop : {} nodes = args.flatten if nodes.empty? and ENV.has_key?('MONGODB_URI') parser = URIParser.new ENV['MONGODB_URI'] opts = parser.connection_options.merge! opts nodes = parser.node_strings end unless nodes.length > 0 raise MongoArgumentError, "A MongoShardedClient requires at least one seed node." end @seeds = nodes.map do |host_port| Support.normalize_seeds(host_port) end # TODO: add a method for replacing this list of nodes. @seeds.freeze # Refresh @last_refresh = Time.now @refresh_version = 0 # No connection manager by default. @manager = nil # Lock for request ids. @id_lock = Mutex.new @connected = false @connect_mutex = Mutex.new @mongos = true check_opts(opts) setup(opts) end def valid_opts super + SHARDED_CLUSTER_OPTS end def inspect "<Mongo::MongoShardedClient:0x#{self.object_id.to_s(16)} @seeds=#{@seeds.inspect} @connected=#{@connected}>" end # Initiate a connection to the sharded cluster. def connect(force = !connected?) return unless force log(:info, "Connecting...") # Prevent recursive connection attempts from the same thread. 
# This is done rather than using a Monitor to prevent potentially recursing # infinitely while attempting to connect and continually failing. Instead, fail fast. raise ConnectionFailure, "Failed to get node data." if thread_local[:locks][:connecting] @connect_mutex.synchronize do begin thread_local[:locks][:connecting] = true if @manager thread_local[:managers][self] = @manager @manager.refresh! @seeds else @manager = ShardingPoolManager.new(self, @seeds) ensure_manager @manager.connect check_wire_version_in_range end ensure thread_local[:locks][:connecting] = false end @refresh_version += 1 @last_refresh = Time.now @connected = true end end # Force a hard refresh of this connection's view # of the sharded cluster. # # @return [Boolean] +true+ if hard refresh # occurred. +false+ is returned when unable # to get the refresh lock. def hard_refresh! log(:info, "Initiating hard refresh...") connect(true) return true end def connected? !!(@connected && @manager.primary_pool) end # Returns +false+: reads must always be routed through mongos. # Since this is a sharded cluster, this must always be false. # # This method exists primarily so that Cursor objects will # generate query messages with a slaveOkay value of +false+. # # @return [Boolean] +false+ def slave_ok? false end def checkout(&block) tries = 0 begin super(&block) rescue ConnectionFailure tries += 1 tries < 2 ? retry : raise end end # Initialize a connection to MongoDB using the MongoDB URI spec. # # @param uri [ String ] string of the format: # mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database] # # @param options [ Hash ] Any of the options available for MongoShardedClient.new # # @return [ Mongo::MongoShardedClient ] The sharded client. 
def self.from_uri(uri, options={}) uri ||= ENV['MONGODB_URI'] URIParser.new(uri).connection(options, false, true) end end end ruby-mongo-1.10.0/lib/mongo/networking.rb000066400000000000000000000310131233461006100202620ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Networking STANDARD_HEADER_SIZE = 16 RESPONSE_HEADER_SIZE = 20 # Counter for generating unique request ids. @@current_request_id = 0 # Send a message to MongoDB, adding the necessary headers. # # @param [Integer] operation a MongoDB opcode. # @param [BSON::ByteBuffer] message a message to send to the database. # # @option opts [Symbol] :connection (:writer) The connection to which # this message should be sent. Valid options are :writer and :reader. # # @return [Integer] number of bytes sent def send_message(operation, message, opts={}) if opts.is_a?(String) warn "MongoClient#send_message no longer takes a string log message. " + "Logging is now handled within the Collection and Cursor classes." 
opts = {} end add_message_headers(message, operation) packed_message = message.to_s sock = nil pool = opts.fetch(:pool, nil) begin if pool #puts "send_message pool.port:#{pool.port}" sock = pool.checkout else sock ||= checkout_writer end send_message_on_socket(packed_message, sock) rescue SystemStackError, NoMemoryError, SystemCallError => ex close raise ex ensure if sock sock.checkin end end true end # Sends a message to the database, waits for a response, and raises # an exception if the operation has failed. # # @param [Integer] operation a MongoDB opcode. # @param [BSON::ByteBuffer] message a message to send to the database. # @param [String] db_name the name of the database. used on call to get_last_error. # @param [String] log_message this is currently a no-op and will be removed. # @param [Hash] write_concern write concern. # # @see DB#get_last_error for valid last error params. # # @return [Hash] The document returned by the call to getlasterror. def send_message_with_gle(operation, message, db_name, log_message=nil, write_concern=false) docs = num_received = cursor_id = '' add_message_headers(message, operation) last_error_message = build_get_last_error_message(db_name, write_concern) last_error_id = add_message_headers(last_error_message, Mongo::Constants::OP_QUERY) packed_message = message.append!(last_error_message).to_s sock = nil begin sock = checkout_writer send_message_on_socket(packed_message, sock) docs, num_received, cursor_id = receive(sock, last_error_id) checkin(sock) rescue ConnectionFailure, OperationFailure, OperationTimeout => ex checkin(sock) raise ex rescue SystemStackError, NoMemoryError, SystemCallError => ex close raise ex end if num_received == 1 error = docs[0]['err'] || docs[0]['errmsg'] if error && error.include?("not master") close raise ConnectionFailure.new(docs[0]['code'].to_s + ': ' + error, docs[0]['code'], docs[0]) elsif (note = docs[0]['jnote'] || docs[0]['wnote']) # assignment code = docs[0]['code'] || 
Mongo::ErrorCode::BAD_VALUE # as of server version 2.5.5 raise WriteConcernError.new(code.to_s + ': ' + note, code, docs[0]) elsif error code = docs[0]['code'] || Mongo::ErrorCode::UNKNOWN_ERROR error = "wtimeout" if error == "timeout" raise WriteConcernError.new(code.to_s + ': ' + error, code, docs[0]) if error == "wtimeout" raise OperationFailure.new(code.to_s + ': ' + error, code, docs[0]) end end docs[0] end # Sends a message to the database and waits for the response. # # @param [Integer] operation a MongoDB opcode. # @param [BSON::ByteBuffer] message a message to send to the database. # @param [String] log_message this is currently a no-op and will be removed. # @param [Socket] socket a socket to use in lieu of checking out a new one. # @param [Boolean] command (false) indicate whether this is a command. If this is a command, # the message will be sent to the primary node. # @param [Symbol] read the read preference. # @param [Boolean] exhaust (false) indicate whether the cursor should be exhausted. Set # this to true only when the OP_QUERY_EXHAUST flag is set. # @param [Boolean] compile_regex whether BSON regex objects should be compiled into Ruby regexes. # # @return [Array] # An array whose indexes include [0] the documents returned, [1] the number of documents received, # and [2] a cursor_id. 
def receive_message(operation, message, log_message=nil, socket=nil, command=false, read=:primary, exhaust=false, compile_regex=true) request_id = add_message_headers(message, operation) packed_message = message.to_s opts = { :exhaust => exhaust, :compile_regex => compile_regex } result = '' begin send_message_on_socket(packed_message, socket) result = receive(socket, request_id, opts) rescue ConnectionFailure => ex socket.close checkin(socket) raise ex rescue SystemStackError, NoMemoryError, SystemCallError => ex close raise ex rescue Exception => ex if defined?(IRB) close if ex.class == IRB::Abort end raise ex end result end private def receive(sock, cursor_id, opts={}) exhaust = !!opts.delete(:exhaust) if exhaust docs = [] num_received = 0 while(cursor_id != 0) do receive_header(sock, cursor_id, exhaust) number_received, cursor_id = receive_response_header(sock) new_docs, n = read_documents(number_received, sock, opts) docs += new_docs num_received += n end return [docs, num_received, cursor_id] else receive_header(sock, cursor_id, exhaust) number_received, cursor_id = receive_response_header(sock) docs, num_received = read_documents(number_received, sock, opts) return [docs, num_received, cursor_id] end end def receive_header(sock, expected_response, exhaust=false) header = receive_message_on_socket(16, sock) # unpacks to size, request_id, response_to response_to = header.unpack('VVV')[2] if !exhaust && expected_response != response_to raise Mongo::ConnectionFailure, "Expected response #{expected_response} but got #{response_to}" end unless header.size == STANDARD_HEADER_SIZE raise "Short read for DB response header: " + "expected #{STANDARD_HEADER_SIZE} bytes, saw #{header.size}" end nil end def receive_response_header(sock) header_buf = receive_message_on_socket(RESPONSE_HEADER_SIZE, sock) if header_buf.length != RESPONSE_HEADER_SIZE raise "Short read for DB response header; " + "expected #{RESPONSE_HEADER_SIZE} bytes, saw #{header_buf.length}" end # unpacks 
to flags, cursor_id_a, cursor_id_b, starting_from, number_remaining flags, cursor_id_a, cursor_id_b, _, number_remaining = header_buf.unpack('VVVVV') check_response_flags(flags) cursor_id = (cursor_id_b << 32) + cursor_id_a [number_remaining, cursor_id] end def check_response_flags(flags) if flags & Mongo::Constants::REPLY_CURSOR_NOT_FOUND != 0 raise Mongo::OperationFailure, "Query response returned CURSOR_NOT_FOUND. " + "Either an invalid cursor was specified, or the cursor may have timed out on the server." elsif flags & Mongo::Constants::REPLY_QUERY_FAILURE != 0 # Mongo query reply failures are handled in Cursor#next. end end def read_documents(number_received, sock, opts) docs = [] number_remaining = number_received while number_remaining > 0 do buf = receive_message_on_socket(4, sock) size = buf.unpack('V')[0] buf << receive_message_on_socket(size - 4, sock) number_remaining -= 1 docs << BSON::BSON_CODER.deserialize(buf, opts) end [docs, number_received] end def build_command_message(db_name, query, projection=nil, skip=0, limit=-1) message = BSON::ByteBuffer.new("", max_message_size) message.put_int(0) BSON::BSON_RUBY.serialize_cstr(message, "#{db_name}.$cmd") message.put_int(skip) message.put_int(limit) message.put_binary(BSON::BSON_CODER.serialize(query, false, false, max_bson_size).to_s) message.put_binary(BSON::BSON_CODER.serialize(projection, false, false, max_bson_size).to_s) if projection message end # Constructs a getlasterror message. This method is used exclusively by # MongoClient#send_message_with_gle. def build_get_last_error_message(db_name, write_concern) gle = BSON::OrderedHash.new gle[:getlasterror] = 1 if write_concern.is_a?(Hash) write_concern.assert_valid_keys(:w, :wtimeout, :fsync, :j) gle.merge!(write_concern) gle.delete(:w) if gle[:w] == 1 end gle[:w] = gle[:w].to_s if gle[:w].is_a?(Symbol) build_command_message(db_name, gle) end # Prepares a message for transmission to MongoDB by # constructing a valid message header. 
# # Note: this method modifies message by reference. # # @return [Integer] the request id used in the header def add_message_headers(message, operation) headers = [ # Message size. 16 + message.size, # Unique request id. request_id = get_request_id, # Response id. 0, # Opcode. operation ].pack('VVVV') message.prepend!(headers) request_id end # Increment and return the next available request id. # # return [Integer] def get_request_id request_id = '' @id_lock.synchronize do request_id = @@current_request_id += 1 end request_id end # Low-level method for sending a message on a socket. # Requires a packed message and an available socket, # # @return [Integer] number of bytes sent def send_message_on_socket(packed_message, socket) begin total_bytes_sent = socket.send(packed_message) if total_bytes_sent != packed_message.size packed_message.slice!(0, total_bytes_sent) while packed_message.size > 0 byte_sent = socket.send(packed_message) total_bytes_sent += byte_sent packed_message.slice!(0, byte_sent) end end total_bytes_sent rescue => ex socket.close raise ConnectionFailure, "Operation failed with the following exception: #{ex}:#{ex.message}" end end # Low-level method for receiving data from socket. # Requires length and an available socket. def receive_message_on_socket(length, socket) begin message = receive_data(length, socket) rescue OperationTimeout, ConnectionFailure => ex socket.close if ex.class == OperationTimeout raise OperationTimeout, "Timed out waiting on socket read." 
else raise ConnectionFailure, "Operation failed with the following exception: #{ex}" end end message end def receive_data(length, socket) message = new_binary_string socket.read(length, message) raise ConnectionFailure, "connection closed" unless message && message.length > 0 if message.length < length chunk = new_binary_string while message.length < length socket.read(length - message.length, chunk) raise ConnectionFailure, "connection closed" unless chunk.length > 0 message << chunk end end message end if defined?(Encoding) BINARY_ENCODING = Encoding.find("binary") def new_binary_string "".force_encoding(BINARY_ENCODING) end else def new_binary_string "" end end end end ruby-mongo-1.10.0/lib/mongo/utils.rb000066400000000000000000000014011233461006100172310ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/utils/conversions' require 'mongo/utils/core_ext' require 'mongo/utils/server_version' require 'mongo/utils/support' require 'mongo/utils/thread_local_variable_manager' ruby-mongo-1.10.0/lib/mongo/utils/000077500000000000000000000000001233461006100167105ustar00rootroot00000000000000ruby-mongo-1.10.0/lib/mongo/utils/conversions.rb000066400000000000000000000074361233461006100216170ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo #:nodoc: # Utility module to include when needing to convert certain types of # objects to mongo-friendly parameters. module Conversions ASCENDING_CONVERSION = ["ascending", "asc", "1"] DESCENDING_CONVERSION = ["descending", "desc", "-1"] # Allows sort parameters to be defined as a Hash. # Does not allow usage of un-ordered hashes, therefore # Ruby 1.8.x users must use BSON::OrderedHash. # # Example: # # hash_as_sort_parameters({:field1 => :asc, "field2" => :desc}) => # { "field1" => 1, "field2" => -1} def hash_as_sort_parameters(value) if RUBY_VERSION < '1.9' && !value.is_a?(BSON::OrderedHash) raise InvalidSortValueError.new( "Hashes used to supply sort order must maintain ordering." + "Use BSON::OrderedHash." ) else order_by = value.inject({}) do |memo, (key, direction)| memo[key.to_s] = sort_value(direction) memo end end order_by end # Converts the supplied +Array+ to a +Hash+ to pass to mongo as # sorting parameters. The returned +Hash+ will vary depending # on whether the passed +Array+ is one or two dimensional. # # Example: # # array_as_sort_parameters([["field1", :asc], ["field2", :desc]]) => # { "field1" => 1, "field2" => -1} def array_as_sort_parameters(value) order_by = BSON::OrderedHash.new if value.first.is_a? Array value.each do |param| if (param.class.name == "String") order_by[param] = 1 else order_by[param[0]] = sort_value(param[1]) unless param[1].nil? end end elsif !value.empty? 
if order_by.size == 1 order_by[value.first] = 1 else order_by[value.first] = sort_value(value[1]) end end order_by end # Converts the supplied +String+ or +Symbol+ to a +Hash+ to pass to mongo as # a sorting parameter with ascending order. If the +String+ # is empty then an empty +Hash+ will be returned. # # Example: # # *DEPRECATED # # string_as_sort_parameters("field") => { "field" => 1 } # string_as_sort_parameters("") => {} def string_as_sort_parameters(value) return {} if (str = value.to_s).empty? { str => 1 } end # Converts the +String+, +Symbol+, or +Integer+ to the # corresponding sort value in MongoDB. # # Valid conversions (case-insensitive): # # ascending, asc, :ascending, :asc, 1 => 1 # descending, desc, :descending, :desc, -1 => -1 # # If the value is invalid then an error will be raised. def sort_value(value) return value if value.is_a?(Hash) val = value.to_s.downcase return 1 if ASCENDING_CONVERSION.include?(val) return -1 if DESCENDING_CONVERSION.include?(val) raise InvalidSortValueError.new( "#{self} was supplied as a sort direction when acceptable values are: " + "Mongo::ASCENDING, 'ascending', 'asc', :ascending, :asc, 1, Mongo::DESCENDING, " + "'descending', 'desc', :descending, :desc, -1.") end end end ruby-mongo-1.10.0/lib/mongo/utils/core_ext.rb000066400000000000000000000026661233461006100210570ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
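The Conversions module above accepts several spellings for a sort direction ("asc", :descending, -1, and so on) and collapses them to the integers MongoDB expects. A standalone sketch of that mapping follows; the method names `sort_value` and `array_as_sort_parameters` mirror the module above, but this is an illustrative re-implementation, not the driver's API:

```ruby
# Minimal sketch of the sort-direction mapping used by Mongo::Conversions.
# Everything in the "ascending" family becomes 1, the "descending" family -1.
ASCENDING_CONVERSION  = %w[ascending asc 1]
DESCENDING_CONVERSION = %w[descending desc -1]

def sort_value(direction)
  val = direction.to_s.downcase
  return 1  if ASCENDING_CONVERSION.include?(val)
  return -1 if DESCENDING_CONVERSION.include?(val)
  raise ArgumentError, "#{direction.inspect} is not a valid sort direction"
end

# Convert a two-dimensional array sort spec into an ordered field => direction hash.
def array_as_sort_parameters(spec)
  spec.each_with_object({}) do |(field, dir), memo|
    memo[field.to_s] = sort_value(dir)
  end
end

p array_as_sort_parameters([["field1", :asc], ["field2", :desc]])
# => {"field1"=>1, "field2"=>-1}
```

Because Ruby 1.9+ hashes preserve insertion order, the resulting hash keeps the caller's field order, which is why the driver only requires BSON::OrderedHash on 1.8.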
#:nodoc: class Object #:nodoc: def tap yield self self end unless respond_to? :tap end #:nodoc: class Hash #:nodoc: def assert_valid_keys(*valid_keys) unknown_keys = keys - [valid_keys].flatten raise(ArgumentError, "Unknown key(s): #{unknown_keys.join(", ")}") unless unknown_keys.empty? end end #:nodoc: class String #:nodoc: def to_bson_code BSON::Code.new(self) end end #:nodoc: class Class def mongo_thread_local_accessor name, options = {} m = Module.new m.module_eval do class_variable_set :"@@#{name}", Hash.new {|h,k| h[k] = options[:default] } end m.module_eval %{ def #{name} @@#{name}[Thread.current.object_id] end def #{name}=(val) @@#{name}[Thread.current.object_id] = val end } class_eval do include m extend m end end end ruby-mongo-1.10.0/lib/mongo/utils/server_version.rb000066400000000000000000000034611233461006100223140ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Simple class for comparing server versions. class ServerVersion include Comparable def initialize(version) @version = version end # Implements comparable. def <=>(new) local, new = self.to_a, to_array(new) for n in 0...local.size do break if elements_include_mods?(local[n], new[n]) if local[n] < new[n].to_i result = -1 break; elsif local[n] > new[n].to_i result = 1 break; end end result || 0 end # Return an array representation of this server version. 
def to_a to_array(@version) end # Return a string representation of this server version. def to_s @version end private # Returns true if any elements include mod symbols (-, +) def elements_include_mods?(*elements) elements.any? { |n| n =~ /[\-\+]/ } end # Converts argument to an array of integers, # appending any mods as the final element. def to_array(version) array = version.split(".").map {|n| (n =~ /^\d+$/) ? n.to_i : n } if array.last =~ /(\d+)([\-\+])/ array[array.length-1] = $1.to_i array << $2 end array end end end ruby-mongo-1.10.0/lib/mongo/utils/support.rb000066400000000000000000000050541233461006100207550ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Support include Mongo::Conversions extend self def validate_db_name(db_name) unless [String, Symbol].include?(db_name.class) raise TypeError, "db_name must be a string or symbol" end [" ", ".", "$", "/", "\\"].each do |invalid_char| if db_name.include? invalid_char raise Mongo::InvalidNSName, "database names cannot contain the character '#{invalid_char}'" end end raise Mongo::InvalidNSName, "database name cannot be the empty string" if db_name.empty? 
db_name end def format_order_clause(order) case order when Hash, BSON::OrderedHash then hash_as_sort_parameters(order) when String, Symbol then string_as_sort_parameters(order) when Array then array_as_sort_parameters(order) else raise InvalidSortValueError, "Illegal sort clause, '#{order.class.name}'; must be of the form " + "[['field1', '(ascending|descending)'], ['field2', '(ascending|descending)']]" end end def normalize_seeds(seeds) pairs = Array(seeds) pairs = [ seeds ] if pairs.last.is_a?(Fixnum) pairs = pairs.collect do |hostport| if hostport.is_a?(String) if hostport[0,1] == '[' host, port = hostport.split(']:') << MongoClient::DEFAULT_PORT host = host.end_with?(']') ? host[1...-1] : host[1..-1] else host, port = hostport.split(':') << MongoClient::DEFAULT_PORT end [ host, port.to_i ] else hostport end end pairs.length > 1 ? pairs : pairs.first end def is_i?(value) return !!(value =~ /^\d+$/) end # Determine if a database command has succeeded by # checking the document response. # # @param [Hash] doc # # @return [Boolean] true if the 'ok' key is either 1 or *true*. def ok?(doc) ok = doc['ok'] ok == 1 || ok == 1.0 || ok == true end end end ruby-mongo-1.10.0/lib/mongo/utils/thread_local_variable_manager.rb000066400000000000000000000014761233461006100252250ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
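Support.normalize_seeds above turns a 'host:port' seed string into a [host, port] pair, defaulting the port and taking care to keep the colons inside a bracketed IPv6 literal. A self-contained sketch of that parsing logic, with a local `DEFAULT_PORT` constant standing in for `MongoClient::DEFAULT_PORT` and a single-seed `normalize_seed` method standing in for the driver's array-aware version:

```ruby
DEFAULT_PORT = 27017 # stands in for MongoClient::DEFAULT_PORT

# Split one seed string into [host, port], defaulting the port when absent.
# Bracketed IPv6 literals like "[::1]:30000" keep their internal colons.
def normalize_seed(hostport)
  if hostport.start_with?('[')
    # Split on "]:" so the IPv6 address is untouched, then strip the brackets.
    host, port = hostport.split(']:') << DEFAULT_PORT
    host = host.end_with?(']') ? host[1...-1] : host[1..-1]
  else
    host, port = hostport.split(':') << DEFAULT_PORT
  end
  [host, port.to_i]
end

p normalize_seed('localhost:30000')  # => ["localhost", 30000]
p normalize_seed('localhost')        # => ["localhost", 27017]
p normalize_seed('[::1]:30000')      # => ["::1", 30000]
```

Appending the default port before destructuring is the same trick the module above uses: when the split yields only a host, the pushed default becomes the port; when a port is present, the extra element is simply ignored.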
#:nodoc:
module Mongo
  module ThreadLocalVariableManager
    def thread_local
      Thread.current[:mongo_thread_locals] ||= Hash.new do |hash, key|
        hash[key] = Hash.new unless hash.key? key
        hash[key]
      end
    end
  end
end

ruby-mongo-1.10.0/metadata.gz.sig  (binary gzip signature; unprintable content omitted)

ruby-mongo-1.10.0/metadata.yml

--- !ruby/object:Gem::Specification
name: mongo
version: !ruby/object:Gem::Version
  version: 1.10.0
platform: ruby
authors:
- Tyler Brock
- Gary Murakami
- Emily Stolfo
- Brandon Black
- Durran Jordan
autorequire:
bindir: bin
cert_chain:
- |
  -----BEGIN CERTIFICATE-----
  MIIDfDCCAmSgAwIBAgIBATANBgkqhkiG9w0BAQUFADBCMRQwEgYDVQQDDAtkcml2
  ZXItcnVieTEVMBMGCgmSJomT8ixkARkWBTEwZ2VuMRMwEQYKCZImiZPyLGQBGRYD
  Y29tMB4XDTE0MDIxOTE1MTEyNloXDTE1MDIxOTE1MTEyNlowQjEUMBIGA1UEAwwL
  ZHJpdmVyLXJ1YnkxFTATBgoJkiaJk/IsZAEZFgUxMGdlbjETMBEGCgmSJomT8ixk
  ARkWA2NvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANFdSAa8fRm1
  bAM9za6Z0fAH4g02bqM1NGnw8zJQrE/PFrFfY6IFCT2AsLfOwr1maVm7iU1+kdVI
  IQ+iI/9+E+ArJ+rbGV3dDPQ+SLl3mLT+vXjfjcxMqI2IW6UuVtt2U3Rxd4QU0kdT
  JxmcPYs5fDN6BgYc6XXgUjy3m+Kwha2pGctdciUOwEfOZ4RmNRlEZKCMLRHdFP8j
  4WTnJSGfXDiuoXICJb5yOPOZPuaapPSNXp93QkUdsqdKC32I+KMpKKYGBQ6yisfA
  5MyVPPCzLR1lP5qXVGJPnOqUAkvEUfCahg7EP9tI20qxiXrR6TSEraYhIFXL0EGY
  u8KAcPHm5KkCAwEAAaN9MHswCQYDVR0TBAIwADALBgNVHQ8EBAMCBLAwHQYDVR0O
  BBYEFFt3WbF+9JpUjAoj62cQBgNb8HzXMCAGA1UdEQQZMBeBFWRyaXZlci1ydWJ5
  QDEwZ2VuLmNvbTAgBgNVHRIEGTAXgRVkcml2ZXItcnVieUAxMGdlbi5jb20wDQYJ
  KoZIhvcNAQEFBQADggEBALGvdxHF+CnH6QO4PeIce3S8EHuHsYiGLk4sWgNGZkjD
  V3C4XjlI8rQZxalwQwcauacOGj9x94flWUXruEF7+rjUtig7OIrQK2+uVg86vl8r
  xy1n2s1d31KsuazEVExe5o19tnVbI9+30P9qPkS+NgaellXpj5c5qnJUGn5BJtzo
  3D001zXpVnuZvCcE/A4fQ+BEM0zm0oOmA/gWIAFrufOL9oYg1881dRZ+kQytF/9c
  JrZM8w8wGbIOeLtoQqa7HB/jOYbTahH7KMNh2LHAbOR93hNIJxVRa4iwxiMQ75tN
  9WUIAJ4AEtjwRg1Bz0OwDo3aucPCBpx77+/FWhv7JYY=
  -----END CERTIFICATE-----
date: 2014-04-03 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: bson
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: 1.10.0
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: 1.10.0
description: A Ruby driver for MongoDB. For more information about Mongo, see http://www.mongodb.org.
email: mongodb-dev@googlegroups.com
executables:
- mongo_console
extensions: []
extra_rdoc_files: []
files:
- LICENSE
- README.md
- Rakefile
- VERSION
- bin/mongo_console
- lib/mongo.rb
- lib/mongo/bulk_write_collection_view.rb
- lib/mongo/collection.rb
- lib/mongo/collection_writer.rb
- lib/mongo/connection.rb
- lib/mongo/connection/node.rb
- lib/mongo/connection/pool.rb
- lib/mongo/connection/pool_manager.rb
- lib/mongo/connection/sharding_pool_manager.rb
- lib/mongo/connection/socket.rb
- lib/mongo/connection/socket/socket_util.rb
- lib/mongo/connection/socket/ssl_socket.rb
- lib/mongo/connection/socket/tcp_socket.rb
- lib/mongo/connection/socket/unix_socket.rb
- lib/mongo/cursor.rb
- lib/mongo/db.rb
- lib/mongo/exception.rb
- lib/mongo/functional.rb
- lib/mongo/functional/authentication.rb
- lib/mongo/functional/logging.rb
- lib/mongo/functional/read_preference.rb
- lib/mongo/functional/sasl_java.rb
- lib/mongo/functional/uri_parser.rb
- lib/mongo/functional/write_concern.rb
- lib/mongo/gridfs.rb
- lib/mongo/gridfs/grid.rb
- lib/mongo/gridfs/grid_ext.rb
- lib/mongo/gridfs/grid_file_system.rb
- lib/mongo/gridfs/grid_io.rb
- lib/mongo/legacy.rb
- lib/mongo/mongo_client.rb
- lib/mongo/mongo_replica_set_client.rb
- lib/mongo/mongo_sharded_client.rb
- lib/mongo/networking.rb
- lib/mongo/utils.rb
- lib/mongo/utils/conversions.rb
- lib/mongo/utils/core_ext.rb
- lib/mongo/utils/server_version.rb
- lib/mongo/utils/support.rb
- lib/mongo/utils/thread_local_variable_manager.rb
- mongo.gemspec
- test/functional/authentication_test.rb
- test/functional/bulk_api_stress_test.rb
- test/functional/bulk_write_collection_view_test.rb
- test/functional/client_test.rb
- test/functional/collection_test.rb
- test/functional/collection_writer_test.rb
- test/functional/conversions_test.rb
- test/functional/cursor_fail_test.rb
- test/functional/cursor_message_test.rb
- test/functional/cursor_test.rb
- test/functional/db_api_test.rb
- test/functional/db_connection_test.rb
- test/functional/db_test.rb
- test/functional/grid_file_system_test.rb
- test/functional/grid_io_test.rb
- test/functional/grid_test.rb
- test/functional/pool_test.rb
- test/functional/safe_test.rb
- test/functional/ssl_test.rb
- test/functional/support_test.rb
- test/functional/timeout_test.rb
- test/functional/uri_test.rb
- test/functional/write_concern_test.rb
- test/helpers/general.rb
- test/helpers/test_unit.rb
- test/replica_set/authentication_test.rb
- test/replica_set/basic_test.rb
- test/replica_set/client_test.rb
- test/replica_set/complex_connect_test.rb
- test/replica_set/connection_test.rb
- test/replica_set/count_test.rb
- test/replica_set/cursor_test.rb
- test/replica_set/insert_test.rb
- test/replica_set/max_values_test.rb
- test/replica_set/pinning_test.rb
- test/replica_set/query_test.rb
- test/replica_set/read_preference_test.rb
- test/replica_set/refresh_test.rb
- test/replica_set/replication_ack_test.rb
- test/replica_set/ssl_test.rb
- test/sharded_cluster/basic_test.rb
- test/shared/authentication/basic_auth_shared.rb
- test/shared/authentication/bulk_api_auth_shared.rb
- test/shared/authentication/gssapi_shared.rb
- test/shared/authentication/sasl_plain_shared.rb
- test/shared/ssl_shared.rb
- test/test_helper.rb
- test/threading/basic_test.rb
- test/tools/mongo_config.rb
- test/tools/mongo_config_test.rb
- test/unit/client_test.rb
- test/unit/collection_test.rb
- test/unit/connection_test.rb
- test/unit/cursor_test.rb
- test/unit/db_test.rb
- test/unit/grid_test.rb
- test/unit/mongo_sharded_client_test.rb
- test/unit/node_test.rb
- test/unit/pool_manager_test.rb
- test/unit/read_pref_test.rb
- test/unit/read_test.rb
- test/unit/safe_test.rb
- test/unit/sharding_pool_manager_test.rb
- test/unit/write_concern_test.rb
homepage: http://www.mongodb.org
licenses:
- Apache License Version 2.0
metadata: {}
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubyforge_project: mongo
rubygems_version: 2.2.2
signing_key:
specification_version: 4
summary: Ruby driver for MongoDB
test_files:
- test/functional/authentication_test.rb
- test/functional/bulk_api_stress_test.rb
- test/functional/bulk_write_collection_view_test.rb
- test/functional/client_test.rb
- test/functional/collection_test.rb
- test/functional/collection_writer_test.rb
- test/functional/conversions_test.rb
- test/functional/cursor_fail_test.rb
- test/functional/cursor_message_test.rb
- test/functional/cursor_test.rb
- test/functional/db_api_test.rb
- test/functional/db_connection_test.rb
- test/functional/db_test.rb
- test/functional/grid_file_system_test.rb
- test/functional/grid_io_test.rb
- test/functional/grid_test.rb
- test/functional/pool_test.rb
- test/functional/safe_test.rb
- test/functional/ssl_test.rb
- test/functional/support_test.rb
- test/functional/timeout_test.rb
- test/functional/uri_test.rb
- test/functional/write_concern_test.rb
- test/helpers/general.rb
- test/helpers/test_unit.rb
- test/replica_set/authentication_test.rb
- test/replica_set/basic_test.rb
- test/replica_set/client_test.rb
- test/replica_set/complex_connect_test.rb
- test/replica_set/connection_test.rb
- test/replica_set/count_test.rb
- test/replica_set/cursor_test.rb
- test/replica_set/insert_test.rb
- test/replica_set/max_values_test.rb
- test/replica_set/pinning_test.rb
- test/replica_set/query_test.rb
- test/replica_set/read_preference_test.rb
- test/replica_set/refresh_test.rb
- test/replica_set/replication_ack_test.rb
- test/replica_set/ssl_test.rb
- test/sharded_cluster/basic_test.rb
- test/shared/authentication/basic_auth_shared.rb
- test/shared/authentication/bulk_api_auth_shared.rb
- test/shared/authentication/gssapi_shared.rb
- test/shared/authentication/sasl_plain_shared.rb
- test/shared/ssl_shared.rb
- test/test_helper.rb
- test/threading/basic_test.rb
- test/tools/mongo_config.rb
- test/tools/mongo_config_test.rb
- test/unit/client_test.rb
- test/unit/collection_test.rb
- test/unit/connection_test.rb
- test/unit/cursor_test.rb
- test/unit/db_test.rb
- test/unit/grid_test.rb
- test/unit/mongo_sharded_client_test.rb
- test/unit/node_test.rb
- test/unit/pool_manager_test.rb
- test/unit/read_pref_test.rb
- test/unit/read_test.rb
- test/unit/safe_test.rb
- test/unit/sharding_pool_manager_test.rb
- test/unit/write_concern_test.rb
has_rdoc: yard

ruby-mongo-1.10.0/mongo.gemspec

Gem::Specification.new do |s|
  s.name              = 'mongo'
  s.version           = File.read(File.join(File.dirname(__FILE__), 'VERSION'))
  s.platform          = Gem::Platform::RUBY
  s.authors           = ['Tyler Brock', 'Gary Murakami', 'Emily Stolfo', 'Brandon Black', 'Durran Jordan']
  s.email             = 'mongodb-dev@googlegroups.com'
  s.homepage          = 'http://www.mongodb.org'
  s.summary           = 'Ruby driver for MongoDB'
  s.description       = 'A Ruby driver for MongoDB. For more information about Mongo, see http://www.mongodb.org.'
  s.rubyforge_project = 'mongo'
  s.license           = 'Apache License Version 2.0'

  if File.exists?('gem-private_key.pem')
    s.signing_key = 'gem-private_key.pem'
    s.cert_chain  = ['gem-public_cert.pem']
  else
    warn 'Warning: No private key present, creating unsigned gem.'
  end

  s.files  = ['mongo.gemspec', 'LICENSE', 'VERSION']
  s.files += ['README.md', 'Rakefile', 'bin/mongo_console']
  s.files += ['lib/mongo.rb'] + Dir['lib/mongo/**/*.rb']

  s.test_files    = Dir['test/**/*.rb'] - Dir['test/bson/*']
  s.executables   = ['mongo_console']
  s.require_paths = ['lib']
  s.has_rdoc      = 'yard'

  s.add_dependency('bson', "~> #{s.version}")
end

ruby-mongo-1.10.0/test/functional/authentication_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.

require 'test_helper'
require 'shared/authentication/basic_auth_shared'
require 'shared/authentication/sasl_plain_shared'
require 'shared/authentication/bulk_api_auth_shared'
require 'shared/authentication/gssapi_shared'

class AuthenticationTest < Test::Unit::TestCase
  include Mongo
  include BasicAuthTests
  include SASLPlainTests
  include BulkAPIAuthTests
  include GSSAPITests

  def setup
    @client    = MongoClient.new(TEST_HOST, TEST_PORT)
    @version   = @client.server_version
    @db        = @client[TEST_DB]
    @host_info = host_port
  end
end

ruby-mongo-1.10.0/test/functional/bulk_api_stress_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
# # Licensed under the Apache License, Version 2.0 (the "License") # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class BulkApiStressTest < Test::Unit::TestCase # Generate a large string of 'size' MB (estimated # by a string of 'size' * 1024 * 1024 characters). def generate_large_string(size) s = "a" * (size * 1024 * 1024) end def setup @client = standard_connection @db = @client[TEST_DB] @coll = @db["bulk-api-stress-tests"] @coll.remove end def test_ordered_batch_large_inserts bulk = @coll.initialize_ordered_bulk_op s = generate_large_string(4) for i in 0..5 bulk.insert({:_id => i, :msg => s}) end bulk.insert({:_id => 3}) # error bulk.insert({:_id => 100}) ex = assert_raise BulkWriteError do bulk.execute end error_details = ex.result assert_equal 6, error_details["nInserted"] assert_equal 1, error_details["writeErrors"].length error = error_details["writeErrors"][0] assert_equal 11000, error["code"] # duplicate key error assert error["errmsg"].kind_of? 
String assert_equal 6, error["index"] assert_equal 6, @coll.count() end def test_unordered_batch_large_inserts bulk = @coll.initialize_unordered_bulk_op s = generate_large_string(4) for i in 0..5 bulk.insert({:_id => i, :msg => s}) end bulk.insert({:_id => 3}) # error bulk.insert({:_id => 100}) ex = assert_raise BulkWriteError do bulk.execute end error_details = ex.result assert_equal 7, error_details["nInserted"] assert_equal 1, error_details["writeErrors"].length error = error_details["writeErrors"][0] assert_equal 11000, error["code"] # duplicate key error assert error["errmsg"].kind_of? String assert_equal 6, error["index"] assert_equal 7, @coll.count() end def test_large_single_insert bulk = @coll.initialize_unordered_bulk_op s = generate_large_string(17) bulk.insert({:a => s}) # RUBY-730: # ex = assert_raise BulkWriteError do # bulk.execute # end end def test_ordered_batch_large_batch bulk = @coll.initialize_ordered_bulk_op bulk.insert({:_id => 1600}) for i in 0..2000 bulk.insert({:_id => i}) end ex = assert_raise BulkWriteError do bulk.execute end error_details = ex.result assert_equal 1601, error_details["nInserted"] assert_equal 1, error_details["writeErrors"].length error = error_details["writeErrors"][0] assert_equal 11000, error["code"] # duplicate key error assert error["errmsg"].kind_of? String assert_equal 1601, error["index"] assert_equal 1601, @coll.count() end def test_unordered_batch_large_batch bulk = @coll.initialize_unordered_bulk_op bulk.insert({:_id => 1600}) for i in 0..2000 bulk.insert({:_id => i}) end ex = assert_raise BulkWriteError do bulk.execute end error_details = ex.result assert_equal 2001, error_details["nInserted"] assert_equal 1, error_details["writeErrors"].length error = error_details["writeErrors"][0] assert_equal 11000, error["code"] # duplicate key error assert error["errmsg"].kind_of? 
String assert_equal 1601, error["index"] assert_equal 2001, @coll.count() end end ruby-mongo-1.10.0/test/functional/bulk_write_collection_view_test.rb000066400000000000000000001222731233461006100260130ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License") # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' require 'json' module Mongo class Collection public :batch_write end class BulkWriteCollectionView public :update_doc?, :replace_doc?, :nil_tally, :merge_result # for reference and future server direction def generate_batch_commands(groups, write_concern) groups.collect do |op_type, documents| { op_type => @collection.name, Mongo::CollectionWriter::WRITE_COMMAND_ARG_KEY[op_type] => documents, :ordered => @options[:ordered], :writeConcern => write_concern } end end end class MongoDBError def inspect "#{self.class.name}.new(#{message.inspect},#{error_code.inspect},#{result.inspect})" end end end module BSON class InvalidDocument def inspect "#{self.class.name}.new(#{message.inspect})" end end end class BulkWriteCollectionViewTest < Test::Unit::TestCase @@client ||= standard_connection(:op_timeout => 10) @@db = @@client.db(TEST_DB) @@test = @@db.collection("test") @@version = @@client.server_version DATABASE_NAME = 'ruby_test_bulk_write_collection_view' COLLECTION_NAME = 'test' DUPLICATE_KEY_ERROR_CODE_SET = [11000, 11001, 12582, 16460].to_set def assert_bulk_op_pushed(expected, view) assert_equal expected, view.ops.last end def 
assert_is_bulk_write_collection_view(view) assert_equal Mongo::BulkWriteCollectionView, view.class end def assert_bulk_exception(expected, message = '') ex = assert_raise BulkWriteError, message do pp yield end assert_equal(Mongo::ErrorCode::MULTIPLE_ERRORS_OCCURRED, ex.error_code, message) assert_match_document(expected, ex.result, message) end def default_setup @client = MongoClient.new @db = @client[DATABASE_NAME] @collection = @db[COLLECTION_NAME] @collection.drop @bulk = @collection.initialize_ordered_bulk_op @q = {:a => 1} @u = {"$inc" => {:x => 1}} @r = {:b => 2} end def sort_docs(docs) docs.sort{|a,b| [a.keys, a.values] <=> [b.keys, b.values]} end def generate_sized_doc(size) doc = {"_id" => BSON::ObjectId.new, "x" => "y"} serialize_doc = BSON::BSON_CODER.serialize(doc, false, false, size) doc = {"_id" => BSON::ObjectId.new, "x" => "y" * (1 + size - serialize_doc.size)} assert_equal size, BSON::BSON_CODER.serialize(doc, false, false, size).size doc end context "Bulk API Collection" do setup do default_setup end should "inspect" do assert_equal String, @bulk.inspect.class end should "check first key is operation for #update_doc?" do assert_not_nil @bulk.update_doc?({"$inc" => {:x => 1}}) assert_false @bulk.update_doc?({}) assert_nil @bulk.update_doc?({:x => 1}) end should "check no top-level key is operation for #replace_doc?" 
do assert_true @bulk.replace_doc?({:x => 1}) assert_true @bulk.replace_doc?({}) assert_false @bulk.replace_doc?({"$inc" => {:x => 1}}) assert_false @bulk.replace_doc?({:a => 1, "$inc" => {:x => 1}}) end should "generate_batch_commands" do groups = [ [:insert, [{:n => 0}]], [:update, [{:n => 1}, {:n => 2}]], [:delete, [{:n => 3}]], [:insert, [{:n => 5}, {:n => 6}, {:n => 7}]], [:update, [{:n => 8}]], [:delete, [{:n => 9}, {:n => 10}]] ] write_concern = {:w => 1} result = @bulk.generate_batch_commands(groups, write_concern) expected = [ {:insert => COLLECTION_NAME, :documents => [{:n => 0}], :ordered => true, :writeConcern => {:w => 1}}, {:update => COLLECTION_NAME, :updates => [{:n => 1}, {:n => 2}], :ordered => true, :writeConcern => {:w => 1}}, {:delete => COLLECTION_NAME, :deletes => [{:n => 3}], :ordered => true, :writeConcern => {:w => 1}}, {:insert => COLLECTION_NAME, :documents => [{:n => 5}, {:n => 6}, {:n => 7}], :ordered => true, :writeConcern => {:w => 1}}, {:update => COLLECTION_NAME, :updates => [{:n => 8}], :ordered => true, :writeConcern => {:w => 1}}, {:delete => COLLECTION_NAME, :deletes => [{:n => 9}, {:n => 10}], :ordered => true, :writeConcern => {:w => 1}} ] assert_equal expected, result end should "Initialize an unordered bulk op - spec Bulk Operation Builder" do @bulk = @collection.initialize_unordered_bulk_op assert_is_bulk_write_collection_view(@bulk) assert_equal @collection, @bulk.collection assert_equal false, @bulk.options[:ordered] end should "Initialize an ordered bulk op - spec Bulk Operation Builder" do assert_is_bulk_write_collection_view(@bulk) assert_equal @collection, @bulk.collection assert_equal true, @bulk.options[:ordered] end end def big_example(bulk) bulk.insert({:a => 1}) bulk.insert({:a => 2}) bulk.insert({:a => 3}) bulk.insert({:a => 4}) bulk.insert({:a => 5}) # Update one document matching the selector bulk.find({:a => 1}).update_one({"$inc" => {:x => 1}}) # Update all documents matching the selector bulk.find({:a => 
2}).update({"$inc" => {:x => 2}}) # Replace entire document (update with whole doc replace) bulk.find({:a => 3}).replace_one({:x => 3}) # Update one document matching the selector or upsert bulk.find({:a => 1}).upsert.update_one({"$inc" => {:x => 1}}) # Update all documents matching the selector or upsert bulk.find({:a => 2}).upsert.update({"$inc" => {:x => 2}}) # Replaces a single document matching the selector or upsert bulk.find({:a => 3}).upsert.replace_one({:x => 3}) # Remove a single document matching the selector bulk.find({:a => 4}).remove_one() # Remove all documents matching the selector bulk.find({:a => 5}).remove() # Insert a document bulk.insert({:x => 4}) end def nil_tally_responses(responses, key) result = {} responses.each do |response| @bulk.nil_tally(result, key, response[key]) end result end context "Bulk API CollectionView" do setup do default_setup end # ----- INSERT ----- should "set :insert, :documents, terminate and return view for #insert" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove document = {:a => 5} result = @bulk.insert(document) assert_is_bulk_write_collection_view(result) assert_bulk_op_pushed [:insert, {:d => document}], @bulk result = @bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nInserted" => 1 }, result, "wire_version:#{wire_version}") assert_equal 1, @collection.count end end should "error out on $-prefixed keys with #insert" do assert_raise BulkWriteError do @bulk.insert({ "$key" => 1 }) @bulk.execute end end should "attempt to run #insert with find() and succeed, ignoring find()" do @bulk.find({}).insert({}) @bulk.execute end # ----- FIND ----- should "set :q and return view for #find" do result = @bulk.find(@q) assert_is_bulk_write_collection_view(result) assert_equal @q, @bulk.op_args[:q] @bulk.find({}) assert_equal({}, @bulk.op_args[:q]) @bulk.find(:a => 1) assert_equal({:a => 1}, @bulk.op_args[:q]) end should "raise an exception for empty #find" do assert_raise 
MongoArgumentError do @bulk.find({}) @bulk.execute end end # ----- UPDATE ----- should "set :upsert for #upsert" do result = @bulk.find(@q).upsert assert_is_bulk_write_collection_view(result) assert_true result.op_args[:upsert] end should "check arg for update, set :update, :u, :multi, terminate and return view for #update" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1, :b => 1}) @collection.insert({:a => 2, :b => 1}) @collection.insert({:a => 2, :b => 1}) bulk = @collection.initialize_ordered_bulk_op u = {"$inc" => {:b => 1}} q = {:a => 2} assert_raise_error(MongoArgumentError, "non-nil query must be set via find") do bulk.update(u) end assert_raise_error(MongoArgumentError, "document must start with an operator") do bulk.find(q).update(q) end result = bulk.find({:a => 2}).update(u) assert_is_bulk_write_collection_view(result) assert_bulk_op_pushed [:update, {:q => q, :u => u, :multi => true}], bulk result = bulk.execute assert_match_document( { "ok" => 1, "n" => 2, "nMatched" => 2, "nModified" => batch_commands?(wire_version) ? 
2 : nil, }, result, "wire_version:#{wire_version}") assert_equal 1, @collection.find({:b => 1}).count end end # ----- UPDATE_ONE ----- should "check arg for update, set :update, :u, :multi, terminate and return view for #update_one" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 1, :b => 2}) bulk = @collection.initialize_ordered_bulk_op assert_raise_error(MongoArgumentError, "non-nil query must be set via find") do bulk.update_one(@u) end assert_raise_error(MongoArgumentError, "document must start with an operator") do bulk.find(@q).update_one(@r) end result = bulk.find(@q).update_one(@u) assert_is_bulk_write_collection_view(result) assert_bulk_op_pushed [:update, {:q => @q, :u => @u, :multi => false}], bulk result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 1, "nModified" => batch_commands?(wire_version) ? 1 : nil, }, result, "wire_version:#{wire_version}") assert_equal 2, @collection.count end end should "error-out in server when $-prefixed key is passed to #update_one" do assert_raise BulkWriteError do oh = BSON::OrderedHash.new oh["$key"] = 1 oh[:a] = 1 @bulk.find(@q).update(oh) @bulk.execute end end should "error-out in driver when first field passed to #update_one is not operator" do assert_raise_error(MongoArgumentError, "document must start with an operator") do oh = BSON::OrderedHash.new oh[:a] = 1 oh["$key"] = 1 @bulk.find(@q).update(oh) end end # ----- REPLACE_ONE ----- should "check arg for replacement, set :update, :u, :multi, terminate and return view for #replace_one" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 1}) bulk = @collection.initialize_ordered_bulk_op q = {:a => 1} r = {:a => 2} assert_raise_error(MongoArgumentError, "non-nil query must be set via find") do bulk.replace_one(q) end 
assert_raise_error(MongoArgumentError, "document must not contain any operators") do bulk.find(q).replace_one(@u) end result = bulk.find(q).replace_one(r) assert_is_bulk_write_collection_view(result) assert_bulk_op_pushed [:update, {:q => q, :u => r, :multi => false}], bulk result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 1, "nModified" => batch_commands?(wire_version) ? 1 : nil, }, result, "wire_version:#{wire_version}") assert_equal 1, @collection.find(q).count end end # ----- REMOVE ----- should "remove all documents when empty selector is passed to #remove" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.insert({:a => 1}) @collection.insert({:a => 2}) @bulk.find({}).remove result = @bulk.execute assert_equal 0, @collection.count end end should "#remove only documents that match selector" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 2}) @bulk.find({:a => 1}).remove result = @bulk.execute assert_equal 1, @collection.count # should fail if we re-execute assert_raise_error(MongoArgumentError, "batch is empty") do @bulk.execute end end end should "set :remove, :q, :limit, terminate and return view for #remove" do assert_raise_error(MongoArgumentError, "non-nil query must be set via find") do @bulk.remove end result = @bulk.find(@q).remove assert_is_bulk_write_collection_view(result) assert_bulk_op_pushed [:delete, {:q => @q, :limit => 0}], @bulk end # ----- REMOVE_ONE ----- should "set :remove, :q, :limit, terminate and return view for #remove_one" do assert_raise_error(MongoArgumentError, "non-nil query must be set via find") do @bulk.remove_one end result = @bulk.find(@q).remove_one assert_is_bulk_write_collection_view(result) assert_bulk_op_pushed [:delete, {:q => @q, :limit => 1}], @bulk end should "remove only one of several matching documents for #remove_one" do 
with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1, :b => 1}) @collection.insert({:a => 1, :b => 2}) @bulk.find({:a => 1}).remove_one result = @bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nRemoved" => 1, "nModified" => nil, }, result, "wire_version:#{wire_version}") assert_equal 1, @collection.count end end # ----- UPSERT-UPDATE ----- should "handle single upsert - spec Handling upserts" do # chose array always for upserted value with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) bulk = @collection.initialize_ordered_bulk_op assert_raise_error(MongoArgumentError, "non-nil query must be set via find") do @bulk.upsert.update({"$set" => {:a => 1}}) end bulk.find({:a => 1}).upsert.update({'$set' => {:a => 2}}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 0, "nUpserted" => 1, "nModified" => batch_commands?(wire_version) ? 0 : nil, "upserted" => [ {"_id" => BSON::ObjectId('52a16767bb67fbc77e26a310'), "index" => 0} ] }, result, "wire_version:#{wire_version}") end end should "run #upsert.update without affecting non-upsert updates" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove bulk = @collection.initialize_unordered_bulk_op bulk.find({:a => 1}).update({"$set" => {:x => 1}}) bulk.find({:a => 2}).upsert.update({"$set" => {:x => 2}}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 0, "nModified" => batch_commands?(wire_version) ? 
0 : nil, "nUpserted" => 1, "upserted" => [ {"_id" => BSON::ObjectId('52a16767bb67fbc77e26a310'), "index" => 1} ] }, result, "wire_version:#{wire_version}") # Repeat the batch and nMatched = 1, nUpserted = 0 bulk2 = @collection.initialize_unordered_bulk_op bulk2.find({:a => 1}).update({"$set" => {:x => 1}}) bulk2.find({:a => 2}).upsert.update({"$set" => {:x => 2}}) result2 = bulk2.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 1, "nModified" => batch_commands?(wire_version) ? 0 : nil }, result2, "wire_version:#{wire_version}") end end # ----- UPSERT-UPDATE_ONE ----- should "#upsert a document without affecting non-upsert update_ones" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove bulk = @collection.initialize_unordered_bulk_op bulk.find({:a => 1}).update_one({"$set" => {:x => 1}}) bulk.find({:a => 2}).upsert.update_one({"$set" => {:x => 2}}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 0, "nUpserted" => 1, "nModified" => batch_commands?(wire_version) ? 0 : nil, "upserted" => [ {"_id" => BSON::ObjectId('52a16767bb67fbc77e26a310'), "index" => 1} ] }, result, "wire_version:#{wire_version}") end end should "only update one matching document with #upsert-update_one" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 1}) bulk = @collection.initialize_unordered_bulk_op bulk.find({:a => 1}).update_one({"$set" => {:x => 1}}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 1, "nModified" => batch_commands?(wire_version) ? 
1 : nil, }, result, "wire_version:#{wire_version}") end end # ----- UPSERT-REPLACE_ONE ----- should "not affect non-upsert replace_ones in same batch as #upsert-replace_one" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove bulk = @collection.initialize_unordered_bulk_op bulk.find({:a => 1}).replace_one({:x => 1}) bulk.find({:a => 2}).upsert.replace_one({:x => 2}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 1, "nMatched" => 0, "nUpserted" => 1, "nModified" => batch_commands?(wire_version) ? 0 : nil, "upserted" => [ {"_id" => BSON::ObjectId('52a16767bb67fbc77e26a310'), "index" => 1} ] }, result, "wire_version:#{wire_version}") assert_equal 1, @collection.count end end should "only replace one matching document with #upsert-replace_one" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 1}) bulk = @collection.initialize_unordered_bulk_op bulk.find({:a => 1}).replace_one({:x => 1}) bulk.find({:a => 2}).upsert.replace_one({:x => 2}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 2, "nMatched" => 1, "nUpserted" => 1, "nModified" => batch_commands?(wire_version) ? 
1 : nil,
            "upserted" => [
              {"_id" => BSON::ObjectId('52a16767bb67fbc77e26a310'), "index" => 1}
            ]
          }, result, "wire_version:#{wire_version}")
        assert_equal 3, @collection.count
      end
    end

    should "tally given all numbers or return nil for #nil_tally" do
      assert_equal({"nM" => 6}, nil_tally_responses([{"nM" => 1}, {"nM" => 2}, {"nM" => 3}], "nM"))
      assert_equal({"nM" => nil}, nil_tally_responses([{"nM" => 1}, { }, {"nM" => 3}], "nM"))
      assert_equal({"nM" => nil}, nil_tally_responses([{"nM" => 1}, {"nM" => nil}, {"nM" => 3}], "nM"))
    end

    # ----- MIXED OPS, ORDERED -----

    should "execute, return result and reset @ops for #execute" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @collection.remove
        @bulk.insert({:x => 1})
        @bulk.insert({:x => 2})
        write_concern = {:w => 1}
        result = @bulk.execute(write_concern)
        assert_equal({"ok" => 1, "n" => 2, "nInserted" => 2}, result, "wire_version:#{wire_version}")
        assert_equal 2, @collection.count
        assert_equal [], @bulk.ops
      end
    end

    should "run ordered big example" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @collection.remove
        big_example(@bulk)
        write_concern = {:w => 1} # {:w => 1, :j => 1} (no journaling for tests)
        result = @bulk.execute(write_concern)
        assert_match_document(
          {
            "ok" => 1,
            "n" => 14,
            "nInserted" => 6,
            "nMatched" => 5,
            "nUpserted" => 1,
            "nModified" => batch_commands?(wire_version) ?
5 : nil,
            "nRemoved" => 2,
            "upserted" => [
              { "index" => 10, "_id" => BSON::ObjectId('52a1e4a4bb67fbc77e26a340') }
            ]
          }, result, "wire_version:#{wire_version}")
        assert_equal(batch_commands?(wire_version), result.has_key?("nModified"), "wire_version:#{wire_version}")
        assert_false(@collection.find.to_a.empty?, "wire_version:#{wire_version}")
        assert_equal [{"a"=>1, "x"=>2}, {"a"=>2, "x"=>4}, {"x"=>3}, {"x"=>3}, {"x"=>4}],
                     sort_docs(@collection.find.to_a.collect { |doc| doc.delete("_id"); doc })
      end
    end

    should "run spec Ordered Bulk Operations" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @bulk.insert({:a => 1})
        @bulk.insert({:a => 2})
        @bulk.insert({:a => 3})
        @bulk.find({:a => 2}).upsert.update({'$set' => {:a => 4}})
        @bulk.find({:a => 1}).remove_one
        @bulk.insert({:a => 5})
        result = @bulk.execute({:w => 1})
        assert_match_document(
          {
            "ok" => 1,
            "n" => 6,
            "nInserted" => 4,
            "nMatched" => 1,
            "nModified" => batch_commands?(wire_version) ? 1 : nil,
            "nRemoved" => 1,
          }, result, "wire_version:#{wire_version}")
      end
    end

    # ----- MIXED OPS, UNORDERED -----

    should "run unordered big example" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @collection.remove
        @bulk = @collection.initialize_unordered_bulk_op
        big_example(@bulk)
        write_concern = {:w => 1} # {:w => 1, :j => 1} (no journaling for tests)
        result = @bulk.execute(write_concern)
        assert_equal(6, result["nInserted"])
        assert_true(result["n"] > 0, "wire_version:#{wire_version}")
        assert_equal(batch_commands?(wire_version), result.has_key?("nModified"), "wire_version:#{wire_version}")
        assert_false(@collection.find.to_a.empty?, "wire_version:#{wire_version}")
      end
    end

    should "run unordered big example with w 0" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @collection.remove
        @bulk = @collection.initialize_unordered_bulk_op
        big_example(@bulk)
        write_concern = {:w => 0}
        result = @bulk.execute(write_concern)
        assert_equal(true, result, "wire_version:#{wire_version}")
        assert_false(@collection.find.to_a.empty?, "wire_version:#{wire_version}")
      end
    end

    should "run unordered bulk operations in one batch per write-type" do
      with_write_commands(@db.connection) do
        @collection.expects(:batch_write).at_most(3).returns([[], [], [], []])
        bulk = @collection.initialize_unordered_bulk_op
        bulk.insert({:_id => 1, :a => 1})
        bulk.find({:_id => 1, :a => 1}).update({"$inc" => {:x => 1}})
        bulk.find({:_id => 1, :a => 1}).remove
        bulk.insert({:_id => 2, :a => 2})
        bulk.find({:_id => 2, :a => 2}).update({"$inc" => {:x => 2}})
        bulk.find({:_id => 2, :a => 2}).remove
        bulk.insert({:_id => 3, :a => 3})
        bulk.find({:_id => 3, :a => 3}).update({"$inc" => {:x => 3}})
        bulk.find({:_id => 3, :a => 3}).remove
        result = bulk.execute # unordered varies, don't use assert_match_document
      end
    end

    should "run spec Unordered Bulk Operations" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        bulk = @collection.initialize_unordered_bulk_op
        bulk.insert({:_id => 1})
        bulk.find({:_id => 2}).update_one({'$inc' => { :x => 1 }})
        bulk.find({:_id => 3}).remove_one
        bulk.insert({:_id => 4})
        bulk.find({:_id => 5}).update_one({'$inc' => { :x => 1 }})
        bulk.find({:_id => 6}).remove_one
        result = nil
        begin
          result = bulk.execute
        rescue => ex
          result = ex.result
        end
        # For write commands the driver will internally execute 3 batches:
        # one each for the inserts, updates and removes.
      end
    end

    # ----- EMPTY BATCH -----

    should "handle empty bulk op" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        assert_raise_error(MongoArgumentError, Mongo::BulkWriteCollectionView::EMPTY_BATCH_MSG) do
          @bulk.execute
        end
      end
    end

    should "handle insert of overly large document" do
      large_doc = {"a" => "y"*(2*@@client.max_message_size)}
      with_write_commands_and_operations(@db.connection) do |wire_version|
        ex = assert_raise Mongo::BulkWriteError do
          @collection.remove
          bulk = @collection.initialize_unordered_bulk_op
          bulk.insert(large_doc)
          puts "bulk.execute:#{bulk.execute.inspect}"
        end
        assert_equal 22, ex.result["writeErrors"].first["code"]
      end
    end

    # ----- ORDERED, WITH ERRORS -----

    should "handle error for duplicate key with offset" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @collection.remove
        @bulk.find({:a => 1}).update_one({"$inc" => {:x => 1}})
        @bulk.insert({:_id => 1, :a => 1})
        @bulk.insert({:_id => 1, :a => 2})
        @bulk.insert({:_id => 3, :a => 3})
        ex = assert_raise BulkWriteError do
          @bulk.execute
        end
        result = ex.result
        assert_match_document(
          {
            "ok" => 1,
            "n" => 1,
            "writeErrors" => [{ "index" => 2, "code" => 11000, "errmsg" => /duplicate key error/ }],
            "code" => 65,
            "errmsg" => "batch item errors occurred",
            "nInserted" => 1,
            "nMatched" => 0,
            "nModified" => batch_commands?(wire_version) ?
0 : nil }, result, "wire_version:#{wire_version}") end end should "handle error for serialization with offset" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove assert_equal 16777216, @@client.max_bson_size @bulk.find({:a => 1}).update_one({"$inc" => {:x => 1}}) @bulk.insert({:_id => 1, :a => 1}) @bulk.insert(generate_sized_doc(@@client.max_message_size + 1)) @bulk.insert({:_id => 3, :a => 3}) ex = assert_raise BulkWriteError do @bulk.execute end result = ex.result assert_match_document( { "ok" => 1, "n" => 1, "writeErrors" => [{ "index" => 2, "code" => 22, "errmsg" => /too large/ }], "code" => 65, "errmsg" => "batch item errors occurred", "nInserted" => 1, "nMatched" => 0, "nModified" => batch_commands?(wire_version) ? 0 : nil }, result, "wire_version:#{wire_version}") end end should "run ordered bulk op - spec Modes of Execution" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) @bulk.insert({:a => 1}) @bulk.insert({:a => 2}) @bulk.find({:a => 2}).update({'$set' => {:a => 1}}) # Clashes with unique index @bulk.find({:a => 1}).remove ex = assert_raise BulkWriteError do @bulk.execute end assert_equal(2, @collection.count) end end should "handle duplicate key error - spec Merging Results" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) bulk = @collection.initialize_ordered_bulk_op bulk.insert({:a => 1}) bulk.insert({:a => 2}) bulk.find({:a => 2}).upsert.update({'$set' => {:a => 1}}) bulk.insert({:a => 3}) ex = assert_raise BulkWriteError do bulk.execute end result = ex.result assert_match_document( { "ok" => 1, "n" => 2, "writeErrors" => [{ "index" => 2, "code" => DUPLICATE_KEY_ERROR_CODE_SET, "errmsg" => /duplicate key error/ }], "code" => 65, "errmsg" => "batch item errors 
occurred", "nInserted" => 2, "nMatched" => 0, "nModified" => batch_commands?(wire_version) ? 0 : nil }, result, "wire_version:#{wire_version}") end end should "report user index - spec Merging errors" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) bulk = @collection.initialize_ordered_bulk_op bulk.insert({:a => 1}) bulk.insert({:a => 2}) bulk.find({:a => 2}).update_one({'$set' => {:a => 1}}); bulk.find({:a => 4}).remove_one(); ex = assert_raise BulkWriteError do bulk.execute({:w => 1}) end result = ex.result assert_match_document( { "ok" => 1, "n" => 2, "writeErrors" => [{ "index" => 2, "code" => DUPLICATE_KEY_ERROR_CODE_SET, "errmsg" => /duplicate key error/ }], "code" => 65, "errmsg" => "batch item errors occurred", "nInserted" => 2, "nMatched" => 0, "nModified" => batch_commands?(wire_version) ? 0 : nil }, result, "wire_version:#{wire_version}") end end should "handle multiple upsert - spec Handling upserts" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) bulk = @collection.initialize_ordered_bulk_op bulk.find({:a => 1}).upsert.update({'$set' => {:a => 2}}) bulk.find({:a => 3}).upsert.update({'$set' => {:a => 4}}) result = bulk.execute assert_match_document( { "ok" => 1, "n" => 2, "nMatched" => 0, "nUpserted" => 2, "nModified" => batch_commands?(wire_version) ? 
0 : nil, "upserted" => [ {"index" => 0, "_id" => BSON::ObjectId('52a1e37cbb67fbc77e26a338')}, {"index" => 1, "_id" => BSON::ObjectId('52a1e37cbb67fbc77e26a339')} ] }, result, "wire_version:#{wire_version}") end end should "handle replication usage error" do with_no_replication(@db.connection) do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @bulk = @collection.initialize_ordered_bulk_op @bulk.insert({:_id => 1, :a => 1}) write_concern = {:w => 5} ex = assert_raise BulkWriteError do @bulk.execute(write_concern) end result = ex.result if @@version >= "2.5.5" assert_match_document( { "ok" => 0, "n" => 0, "code" => 65, "errmsg" => "batch item errors occurred", "writeErrors" => [ { "errmsg" => "cannot use 'w' > 1 when a host is not replicated", "code" => 2, "index" => 0} ], "nInserted" => 0, }, result, "wire_version:#{wire_version}") else assert_match_document( { "ok" => 1, "n" => 1, "code" => 65, "errmsg" => "batch item errors occurred", "writeConcernError" => [ { "errmsg" => /no replication has been enabled/, "code" => 64, "index" => 0 } ], "nInserted" => 1, }, result, "wire_version:#{wire_version}") end end end end # ----- UNORDERED, WITH ERRORS ----- should "handle error for unordered multiple duplicate key with offset" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @bulk = @collection.initialize_unordered_bulk_op @bulk.find({:a => 1}).remove @bulk.insert({:_id => 1, :a => 1}) @bulk.insert({:_id => 1, :a => 2}) @bulk.insert({:_id => 3, :a => 3}) @bulk.insert({:_id => 3, :a => 3}) ex = assert_raise BulkWriteError do @bulk.execute end result = ex.result assert_true (0 < result["nInserted"] && result["nInserted"] < 3) assert_not_nil(result["writeErrors"], "wire_version:#{wire_version}") end end should "run unordered bulk op - spec Modes of Execution" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove 
@collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) bulk = @collection.initialize_unordered_bulk_op bulk.insert({:a => 1}) bulk.insert({:a => 2}) bulk.find({:a => 2}).update({'$set' => {:a => 1}}) # Clashes with unique index bulk.find({:a => 3}).remove bulk.find({:a => 2}).update({'$set' => {:a => 1}}) # Clashes with unique index ex = assert_raise BulkWriteError do bulk.execute end result = ex.result assert(result["writeErrors"].size > 1, "wire_version:#{wire_version}") end end should "handle unordered errors - spec Merging Results" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @collection.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true}) bulk = @collection.initialize_unordered_bulk_op bulk.insert({:a => 1}) bulk.find({:a => 1}).upsert.update({'$set' => {:a => 2}}) bulk.insert({:a => 2}) ex = assert_raise BulkWriteError do bulk.execute end result = ex.result # unordered varies, don't use assert_bulk_exception assert_equal(1, result['ok'], "wire_version:#{wire_version}") assert_equal(2, result['n'], "wire_version:#{wire_version}") err_details = result['writeErrors'] assert_match(/duplicate key error/, err_details.first['errmsg'], "wire_version:#{wire_version}") end end should "handle multiple errors for unordered bulk write" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @bulk = @collection.initialize_unordered_bulk_op @bulk.insert({:_id => 1, :a => 1}) @bulk.insert({:_id => 1, :a => 2}) @bulk.insert(generate_sized_doc(@@client.max_message_size + 1)) @bulk.insert({:_id => 3, :a => 3}) @bulk.find({:a => 4}).upsert.replace_one({:x => 3}) ex = assert_raise BulkWriteError do @bulk.execute end result = ex.result # unordered varies, don't use assert_bulk_exception assert_equal(1, result['ok'], "wire_version:#{wire_version}") assert_equal(3, result['n'], "wire_version:#{wire_version}") err_details = result['writeErrors'] 
assert_match(/duplicate key error/, err_details.find { |e| e['code']==11000 }['errmsg'], "wire_version:#{wire_version}") assert_match(/too large/, err_details.find { |e| e['index']==2 }['errmsg'], "wire_version:#{wire_version}") assert_not_nil(result['upserted'].find { |e| e['index']==4 }, "wire_version:#{wire_version}") end end # ----- NO_JOURNAL ----- should "handle journaling error" do with_no_journaling(@db.connection) do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove @bulk = @collection.initialize_ordered_bulk_op @bulk.insert({:_id => 1, :a => 1}) write_concern = {:w => 1, :j => 1} ex = assert_raise BulkWriteError do @bulk.execute(write_concern) end result = ex.result if @@version >= "2.5.5" assert_match_document( { "ok" => 0, "n" => 0, "writeErrors" => [ { "code" => 2, "errmsg" => "cannot use 'j' option when a host does not have journaling enabled", "index" => 0 } ], "code" => 65, "errmsg" => "batch item errors occurred", "nInserted" => 0 }, result, "wire_version:#{wire_version}") else assert_match_document( { "ok" => 1, "n" => 1, "writeConcernError" => [ { "code" => 2, "errmsg" => "journaling not enabled on this server", "index" => 0 } ], "code" => 65, "errmsg" => "batch item errors occurred", "nInserted" => 1 }, result, "wire_version:#{wire_version}") end end end end # ----- W = 0 ----- should "run ordered big example with w 0" do with_write_commands_and_operations(@db.connection) do |wire_version| @collection.remove big_example(@bulk) result = @bulk.execute({:w => 0}) assert_equal(true, result, "wire_version:#{wire_version}") assert_false(@collection.find.to_a.empty?, "wire_version:#{wire_version}") assert_equal [{"a"=>1, "x"=>2}, {"a"=>2, "x"=>4}, {"x"=>3}, {"x"=>3}, {"x"=>4}], sort_docs(@collection.find.to_a.collect { |doc| doc.delete("_id"); doc }) end end should "running with w 0 should not report write errors" do with_write_commands_and_operations(@db.connection) do @bulk.insert({:_id => 1, :a => 1 }) 
@bulk.insert({:_id => 1, :a => 2 })
        @bulk.execute({:w => 0}) # should raise no duplicate key error
      end
    end

    # ----- W > 0 WITH STANDALONE -----

    should "disallow w > 0 against a standalone" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @collection.remove
        @bulk.insert({:_id => 1, :a => 1 })
        @bulk.insert({:_id => 2, :a => 1 })
        @bulk.insert({:_id => 3, :a => 1 })
        assert_raise_error BulkWriteError do
          @bulk.execute({:w => 2})
        end
        assert (@collection.count == batch_commands?(wire_version) ? 0 : 1)
      end
    end
  end
end

# ----- ruby-mongo-1.10.0/test/functional/client_test.rb -----

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
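An aside on the `#nil_tally` assertions in the bulk-write suite above: a reply field (such as `nM`) tallies to a number only when every batch response reports it, and any missing or nil value collapses the whole tally to nil. A minimal standalone sketch of that rule (the helper name mirrors the test's `nil_tally_responses`; this is an illustration, not the driver's actual implementation):

```ruby
# Sketch of the "nil poisons the tally" rule from the #nil_tally test above.
# Illustrative only; not the driver's real helper.
def nil_tally_responses(responses, key)
  tally = 0
  responses.each do |response|
    value = response[key]
    if value.nil?  # key missing or explicitly nil
      tally = nil  # one gap makes the total unknowable
      break
    end
    tally += value
  end
  { key => tally }
end

nil_tally_responses([{"nM" => 1}, {"nM" => 2}, {"nM" => 3}], "nM") # {"nM" => 6}
nil_tally_responses([{"nM" => 1}, {}, {"nM" => 3}], "nM")          # {"nM" => nil}
```

This matches the mixed-mode test runs: legacy write operations cannot report `nModified`, so merging their responses with write-command responses must surrender the count rather than report a wrong number.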
require 'test_helper' require 'logger' class ClientTest < Test::Unit::TestCase include Mongo include BSON def setup @client = standard_connection end def teardown @client.close end def test_connection_failure assert_raise Mongo::ConnectionFailure do MongoClient.new('localhost', 27347) end end def test_host_port_accessors assert_equal @client.host, TEST_HOST assert_equal @client.port, TEST_PORT end def test_server_info server_info = @client.server_info assert server_info.keys.include?("version") assert Mongo::Support.ok?(server_info) end def test_ping ping = @client.ping assert ping['ok'] end def test_ipv6 with_ipv6_enabled(@client) do assert client = MongoClient.new('[::1]') end end def test_ipv6_uri_no_opts with_ipv6_enabled(@client) do uri = 'mongodb://[::1]:27017' with_preserved_env_uri(uri) do assert MongoClient.new end end end def test_ipv6_uri_opts with_ipv6_enabled(@client) do uri = 'mongodb://[::1]:27017/?slaveOk=true' with_preserved_env_uri(uri) do assert MongoClient.new end end end def test_connection_uri con = MongoClient.from_uri("mongodb://#{host_port}") assert_equal mongo_host, con.primary_pool.host assert_equal mongo_port, con.primary_pool.port end def test_uri_with_extra_opts con = MongoClient.from_uri("mongodb://#{host_port}", :pool_size => 10, :slave_ok => true) assert_equal 10, con.pool_size assert con.slave_ok? 
end def test_env_mongodb_uri uri = "mongodb://#{host_port}" with_preserved_env_uri(uri) do con = MongoClient.new assert_equal mongo_host, con.primary_pool.host assert_equal mongo_port, con.primary_pool.port end end def test_from_uri_implicit_mongodb_uri uri = "mongodb://#{host_port}" with_preserved_env_uri(uri) do con = MongoClient.from_uri assert_equal mongo_host, con.primary_pool.host assert_equal mongo_port, con.primary_pool.port end end def test_db_from_uri_exists_no_options db_name = "_database" uri = "mongodb://#{host_port}/#{db_name}" with_preserved_env_uri(uri) do con = MongoClient.from_uri db = con.db assert_equal db.name, db_name end end def test_db_from_uri_exists_options db_name = "_database" uri = "mongodb://#{host_port}/#{db_name}?" with_preserved_env_uri(uri) do con = MongoClient.from_uri db = con.db assert_equal db.name, db_name end end def test_db_from_uri_exists_no_db_name uri = "mongodb://#{host_port}/" with_preserved_env_uri(uri) do con = MongoClient.from_uri db = con.db assert_equal db.name, MongoClient::DEFAULT_DB_NAME end end def test_db_from_uri_from_string_param db_name = "_database" db = MongoClient.from_uri("mongodb://#{host_port}/#{db_name}").db assert_equal db.name, db_name end def test_db_from_uri_from_string_param_no_db_name db = MongoClient.from_uri("mongodb://#{host_port}").db assert_equal db.name, MongoClient::DEFAULT_DB_NAME end def test_from_uri_write_concern con = MongoClient.from_uri("mongodb://#{host_port}") db = con.db coll = db.collection('from-uri-test') assert_equal BSON::ObjectId, coll.insert({'a' => 1}).class [con, db, coll].each do |component| component.write_concern.each do |key,value| assert_not_nil(value, "component #{component.class.inspect} should not have write concern #{key.inspect} field with nil value") end end assert_equal({:w => 1}, con.write_concern, "write concern should not have extra pairs that were not specified by the user") assert_equal({:w => 1}, db.write_concern, "write concern should not have extra 
pairs that were not specified by the user") assert_equal({:w => 1}, coll.write_concern, "write concern should not have extra pairs that were not specified by the user") end def test_server_version assert_match(/\d\.\d+(\.\d+)?/, @client.server_version.to_s) end def test_invalid_database_names assert_raise TypeError do @client.db(4) end assert_raise Mongo::InvalidNSName do @client.db('') end assert_raise Mongo::InvalidNSName do @client.db('te$t') end assert_raise Mongo::InvalidNSName do @client.db('te.t') end assert_raise Mongo::InvalidNSName do @client.db('te\\t') end assert_raise Mongo::InvalidNSName do @client.db('te/t') end assert_raise Mongo::InvalidNSName do @client.db('te st') end end def test_options_passed_to_db @pk_mock = Object.new db = @client.db('test', :pk => @pk_mock, :strict => true) assert_equal @pk_mock, db.pk_factory assert db.strict? end def test_database_info @client.drop_database(TEST_DB) @client.db(TEST_DB).collection('info-test').insert('a' => 1) info = @client.database_info assert_not_nil info assert_kind_of Hash, info assert_not_nil info[TEST_DB] assert info[TEST_DB] > 0 @client.drop_database(TEST_DB) end def test_copy_database old_name = TEST_DB + '_old' new_name = TEST_DB + '_new' @client.drop_database(new_name) @client.db(old_name).collection('copy-test').insert('a' => 1) @client.copy_database(old_name, new_name, host_port) old_object = @client.db(old_name).collection('copy-test').find.next_document new_object = @client.db(new_name).collection('copy-test').find.next_document assert_equal old_object, new_object end def test_database_names @client.drop_database(TEST_DB) @client.db(TEST_DB).collection('info-test').insert('a' => 1) names = @client.database_names assert_not_nil names assert_kind_of Array, names assert names.length >= 1 assert names.include?(TEST_DB) end def test_logging output = StringIO.new logger = Logger.new(output) logger.level = Logger::DEBUG standard_connection(:logger => logger).db(TEST_DB) assert 
output.string.include?("admin['$cmd'].find") end def test_logging_duration output = StringIO.new logger = Logger.new(output) logger.level = Logger::DEBUG standard_connection(:logger => logger).db(TEST_DB) assert_match(/\(\d+.\d{1}ms\)/, output.string) assert output.string.include?("admin['$cmd'].find") end def test_connection_logger output = StringIO.new logger = Logger.new(output) logger.level = Logger::DEBUG connection = standard_connection(:logger => logger) assert_equal logger, connection.logger connection.logger.debug 'testing' assert output.string.include?('testing') end def test_drop_database db = @client.db(TEST_DB + '_drop_test') coll = db.collection('temp') coll.remove coll.insert(:name => 'temp') assert_equal 1, coll.count() assert @client.database_names.include?(TEST_DB + '_drop_test') @client.drop_database(TEST_DB + '_drop_test') assert !@client.database_names.include?(TEST_DB + '_drop_test') end def test_nodes silently do @client = MongoClient.multi([['foo', 27017], ['bar', 27018]], :connect => false) end seeds = @client.seeds assert_equal 2, seeds.length assert_equal ['foo', 27017], seeds[0] assert_equal ['bar', 27018], seeds[1] end def test_fsync_lock assert !@client.locked? @client.lock! assert @client.locked? assert [1, true].include?(@client['admin']['$cmd.sys.inprog'].find_one['fsyncLock']) assert_match(/unlock/, @client.unlock!['info']) unlocked = false counter = 0 while counter < 100 if @client['admin']['$cmd.sys.inprog'].find_one['fsyncLock'].nil? unlocked = true break else counter += 1 end end assert !@client.locked? 
assert unlocked, "mongod failed to unlock" end def test_max_bson_size_value conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1, 'maxBsonObjectSize' => 15_000_000}) conn.expects(:[]).with('admin').returns(admin_db) conn.connect assert_equal 15_000_000, conn.max_bson_size conn = standard_connection if conn.server_version > "1.7.2" assert_equal conn['admin'].command({:ismaster => 1})['maxBsonObjectSize'], conn.max_bson_size end end def test_max_message_size_value conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1, 'maxMessageSizeBytes' => 20_000_000}) conn.expects(:[]).with('admin').returns(admin_db) conn.connect assert_equal 20_000_000, conn.max_message_size conn = standard_connection maxMessageSizeBytes = conn['admin'].command({:ismaster => 1})['maxMessageSizeBytes'] if conn.server_version.to_s[/([^-]+)/,1] >= "2.4.0" assert_equal 48_000_000, maxMessageSizeBytes elsif conn.server_version > "2.3.2" assert_equal conn.max_bson_size, maxMessageSizeBytes end end def test_max_bson_size_with_no_reported_max_size conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) conn.expects(:[]).with('admin').returns(admin_db) conn.connect assert_equal Mongo::DEFAULT_MAX_BSON_SIZE, conn.max_bson_size end def test_max_message_size_with_no_reported_max_size conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) conn.expects(:[]).with('admin').returns(admin_db) conn.connect assert_equal Mongo::DEFAULT_MAX_BSON_SIZE * Mongo::MESSAGE_SIZE_FACTOR, conn.max_message_size end def test_max_wire_version_and_min_wire_version_values conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1, 'maxWireVersion' => 1, 
'minWireVersion' => 1, 'maxWriteBatchSize' => 999}) conn.expects(:[]).with('admin').returns(admin_db) conn.connect assert_equal 1, conn.max_wire_version assert_equal 1, conn.min_wire_version assert_equal 999, conn.max_write_batch_size end def test_max_wire_version_and_min_wire_version_values_with_no_reported_values conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) conn.expects(:[]).with('admin').returns(admin_db) conn.connect assert_equal 0, conn.max_wire_version assert_equal 0, conn.min_wire_version assert_equal Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE, conn.max_write_batch_size end def test_wire_version_feature conn = standard_connection(:connect => false) conn.stubs(:min_wire_version).returns(0) conn.stubs(:max_wire_version).returns(1) assert_true conn.wire_version_feature?(0) assert_true conn.wire_version_feature?(1) assert_false conn.wire_version_feature?(2) assert_false conn.wire_version_feature?(-1) end def test_wire_version_not_in_range [ [Mongo::MongoClient::MAX_WIRE_VERSION+1, Mongo::MongoClient::MAX_WIRE_VERSION+1], [Mongo::MongoClient::MIN_WIRE_VERSION-1, Mongo::MongoClient::MIN_WIRE_VERSION-1] ].each do |min_wire_version, max_wire_version| conn = standard_connection(:connect => false) admin_db = Object.new admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1, 'maxWireVersion' => max_wire_version, 'minWireVersion' => min_wire_version}) conn.expects(:[]).with('admin').returns(admin_db) assert_raises Mongo::ConnectionFailure do conn.connect end end end def test_use_write_command with_write_commands(@client) do assert_true @client.use_write_command?({:w => 1}) assert_false @client.use_write_command?({:w => 0}) end with_write_operations(@client) do assert_false @client.use_write_command?({:w => 1}) assert_false @client.use_write_command?({:w => 0}) end end def test_connection_activity conn = standard_connection assert conn.active? 
conn.primary_pool.close assert !conn.active? # Simulate a dropped connection. dropped_socket = mock('dropped_socket') dropped_socket.stubs(:read).raises(Errno::ECONNRESET) dropped_socket.stubs(:send).raises(Errno::ECONNRESET) dropped_socket.stub_everything conn.primary_pool.host = 'localhost' conn.primary_pool.port = Mongo::MongoClient::DEFAULT_PORT conn.primary_pool.instance_variable_set("@pids", {dropped_socket => Process.pid}) conn.primary_pool.instance_variable_set("@sockets", [dropped_socket]) assert !conn.active? end context "Saved authentications" do setup do @client = standard_connection @auth = { :db_name => TEST_DB, :username => 'bob', :password => 'secret', :source => TEST_DB, :mechanism => 'MONGODB-CR' } @client.auths << @auth end teardown do @client.clear_auths end should "save and validate the authentication" do assert_equal Authentication.validate_credentials(@auth), @client.auths.first end should "not allow multiple authentications for the same db" do auth = { :db_name => TEST_DB, :username => 'mickey', :password => 'm0u53', :source => nil, :mechanism => nil } assert_raise Mongo::MongoArgumentError do @client.add_auth( auth[:db_name], auth[:username], auth[:password], auth[:source], auth[:mechanism]) end end should "remove auths by database" do @client.remove_auth('non-existent database') assert_equal 1, @client.auths.length @client.remove_auth(TEST_DB) assert_equal 0, @client.auths.length end should "remove all auths" do @client.clear_auths assert_equal 0, @client.auths.length end end context "Socket pools" do context "checking out writers" do setup do @con = standard_connection(:pool_size => 10, :pool_timeout => 10) @coll = @con[TEST_DB]['test-connection-exceptions'] end should "close the connection on send_message for major exceptions" do @con.stubs(:checkout_writer).raises(SystemStackError) @con.stubs(:checkout_reader).raises(SystemStackError) @con.expects(:close) begin @coll.insert({:foo => "bar"}) rescue SystemStackError end end should "close 
the connection on send_message_with_gle for major exceptions" do
        @con.stubs(:checkout_writer).raises(SystemStackError)
        @con.stubs(:checkout_reader).raises(SystemStackError)
        @con.expects(:close)
        begin
          @coll.insert({:foo => "bar"}, :w => 1)
        rescue SystemStackError
        end
      end

      should "close the connection on receive_message for major exceptions" do
        @con.expects(:checkout_reader).raises(SystemStackError)
        @con.expects(:close)
        begin
          @coll.find.next
        rescue SystemStackError
        end
      end
    end
  end

  context "Connection exceptions" do
    setup do
      @con = standard_connection(:pool_size => 10, :pool_timeout => 10)
      @coll = @con[TEST_DB]['test-connection-exceptions']
    end

    should "release connection if an exception is raised on send_message" do
      @con.stubs(:send_message_on_socket).raises(ConnectionFailure)
      assert_equal 0, @con.primary_pool.checked_out.size
      assert_raise ConnectionFailure do
        @coll.insert({:test => "insert"})
      end
      assert_equal 0, @con.primary_pool.checked_out.size
    end

    should "release connection if an exception is raised on write concern :w => 1" do
      @con.stubs(:receive).raises(ConnectionFailure)
      assert_equal 0, @con.primary_pool.checked_out.size
      assert_raise ConnectionFailure do
        @coll.insert({:test => "insert"}, :w => 1)
      end
      assert_equal 0, @con.primary_pool.checked_out.size
    end

    should "release connection if an exception is raised on receive_message" do
      @con.stubs(:receive).raises(ConnectionFailure)
      assert_equal 0, @con.read_pool.checked_out.size
      assert_raise ConnectionFailure do
        @coll.find.to_a
      end
      assert_equal 0, @con.read_pool.checked_out.size
    end

    should "show a proper exception message if an IOError is raised while closing a socket" do
      TCPSocket.any_instance.stubs(:close).raises(IOError.new)
      @con.primary_pool.checkout_new_socket
      @con.primary_pool.expects(:warn)
      assert @con.primary_pool.close
    end
  end
end

# ----- ruby-mongo-1.10.0/test/functional/collection_test.rb -----

# Copyright (C) 2009-2013 MongoDB, Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'rbconfig' require 'test_helper' class CollectionTest < Test::Unit::TestCase @@client ||= standard_connection(:op_timeout => 10) @@db = @@client.db(TEST_DB) @@test = @@db.collection("test") @@version = @@client.server_version LIMITED_MAX_BSON_SIZE = 1024 LIMITED_MAX_MESSAGE_SIZE = 3 * LIMITED_MAX_BSON_SIZE LIMITED_TEST_HEADROOM = 50 LIMITED_VALID_VALUE_SIZE = LIMITED_MAX_BSON_SIZE - LIMITED_TEST_HEADROOM LIMITED_INVALID_VALUE_SIZE = LIMITED_MAX_BSON_SIZE + Mongo::MongoClient::COMMAND_HEADROOM + 1 def setup @@test.remove end @@wv0 = Mongo::MongoClient::RELEASE_2_4_AND_BEFORE @@wv2 = Mongo::MongoClient::BATCH_COMMANDS @@a_h = Mongo::MongoClient::APPEND_HEADROOM @@s_h = Mongo::MongoClient::SERIALIZE_HEADROOM MAX_SIZE_EXCEPTION_TEST = [ #[@@wv0, @@client.max_bson_size, nil, /xyzzy/], # succeeds standalone, fails whole suite ] MAX_SIZE_EXCEPTION_CRUBY_TEST = [ [@@wv0, @@client.max_bson_size + 1, BSON::InvalidDocument, /Document.* too large/] ] MAX_SIZE_EXCEPTION_JRUBY_TEST = [ [@@wv0, @@client.max_bson_size + 1, Mongo::OperationFailure, /object to insert too large/] ] MAX_SIZE_EXCEPTION_COMMANDS_TEST = [ #[@@wv2, @@client.max_bson_size, nil, /xyzzy/], # succeeds standalone, fails whole suite [@@wv2, @@client.max_bson_size + 1, Mongo::OperationFailure, /object to insert too large/], [@@wv2, @@client.max_bson_size + @@s_h, Mongo::OperationFailure, /object to insert too large/], [@@wv2, @@client.max_bson_size + @@a_h, 
     BSON::InvalidDocument, /Document.* too large/]
  ]

  @@max_size_exception_test = MAX_SIZE_EXCEPTION_TEST
  @@max_size_exception_test += MAX_SIZE_EXCEPTION_CRUBY_TEST unless RUBY_PLATFORM == 'java'
  #@@max_size_exception_test += MAX_SIZE_EXCEPTION_JRUBY_TEST if RUBY_PLATFORM == 'java'
  @@max_size_exception_test += MAX_SIZE_EXCEPTION_COMMANDS_TEST if @@version >= "2.5.2"

  def generate_sized_doc(size)
    doc = {"_id" => BSON::ObjectId.new, "x" => "y"}
    serialize_doc = BSON::BSON_CODER.serialize(doc, false, false, size)
    doc = {"_id" => BSON::ObjectId.new, "x" => "y" * (1 + size - serialize_doc.size)}
    assert_equal size, BSON::BSON_CODER.serialize(doc, false, false, size).size
    doc
  end

  def with_max_wire_version(client, wire_version) # does not support replica sets
    if client.wire_version_feature?(wire_version)
      client.class.class_eval(%Q{
        alias :old_max_wire_version :max_wire_version
        def max_wire_version
          #{wire_version}
        end
      })
      yield wire_version
      client.class.class_eval(%Q{
        alias :max_wire_version :old_max_wire_version
      })
    end
  end

  def test_insert_batch_max_sizes
    @@max_size_exception_test.each do |wire_version, size, exc, regexp|
      with_max_wire_version(@@client, wire_version) do
        @@test.remove
        doc = generate_sized_doc(size)
        begin
          @@test.insert([doc.dup])
          assert_equal nil, exc
        rescue => e
          assert_equal exc, e.class, "wire_version:#{wire_version}, size:#{size}, exc:#{exc} e:#{e.message.inspect} @@version:#{@@version}"
          assert_match regexp, e.message
        end
      end
    end
  end

  if @@version >= '2.5.4'
    def test_single_delete_write_command
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 1 }])
      command = BSON::OrderedHash['delete', @@test.name,
                                  :deletes, [{ :q => { :a => 1 }, :limit => 1 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 1, result['n']
      assert_equal 1, result['ok']
      assert_equal 1, @@test.count
    end

    def test_multi_ordered_delete_write_command
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 1 }])
      command = BSON::OrderedHash['delete', @@test.name,
                                  :deletes,
                                  [{ :q => { :a => 1 }, :limit => 0 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, true]
      result = @@db.command(command)
      assert_equal 2, result['n']
      assert_equal 1, result['ok']
      assert_equal 0, @@test.count
    end

    def test_multi_unordered_delete_write_command
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 1 }])
      command = BSON::OrderedHash['delete', @@test.name,
                                  :deletes, [{ :q => { :a => 1 }, :limit => 0 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 2, result['n']
      assert_equal 1, result['ok']
      assert_equal 0, @@test.count
    end

    def test_delete_write_command_with_no_concern
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 1 }])
      command = BSON::OrderedHash['delete', @@test.name,
                                  :deletes, [{ :q => { :a => 1 }, :limit => 0 }],
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 2, result['n']
      assert_equal 1, result['ok']
      assert_equal 0, @@test.count
    end

    def test_delete_write_command_with_error
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 1 }])
      command = BSON::OrderedHash['delete', @@test.name,
                                  :deletes, [{ :q => { '$set' => { :a => 1 }}, :limit => 0 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      assert_raise Mongo::OperationFailure do
        @@db.command(command)
      end
    end

    def test_single_insert_write_command
      @@test.drop
      command = BSON::OrderedHash['insert', @@test.name,
                                  :documents, [{ :a => 1 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 1, @@test.count
    end

    def test_multi_ordered_insert_write_command
      @@test.drop
      command = BSON::OrderedHash['insert', @@test.name,
                                  :documents, [{ :a => 1 }, { :a => 2 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, true]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 2, @@test.count
    end

    def test_multi_unordered_insert_write_command
      @@test.drop
      command = BSON::OrderedHash['insert', @@test.name,
                                  :documents, [{ :a => 1 }, { :a => 2 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      result =
        @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 2, @@test.count
    end

    def test_insert_write_command_with_no_concern
      @@test.drop
      command = BSON::OrderedHash['insert', @@test.name,
                                  :documents, [{ :a => 1 }, { :a => 2 }],
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 2, @@test.count
    end

    def test_insert_write_command_with_error
      @@test.drop
      @@test.ensure_index([[:a, 1]], { :unique => true })
      command = BSON::OrderedHash['insert', @@test.name,
                                  :documents, [{ :a => 1 }, { :a => 1 }],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      assert_raise Mongo::OperationFailure do
        @@db.command(command)
      end
    end

    def test_single_update_write_command
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 2 }])
      command = BSON::OrderedHash['update', @@test.name,
                                  :updates, [{ :q => { :a => 1 }, :u => { '$set' => { :a => 2 }}}],
                                  :writeConcern, { :w => 1 }]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 1, result['n']
      assert_equal 2, @@test.find({ :a => 2 }).count
    end

    def test_multi_ordered_update_write_command
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 3 }])
      command = BSON::OrderedHash['update', @@test.name,
                                  :updates, [
                                    { :q => { :a => 1 }, :u => { '$set' => { :a => 2 }}},
                                    { :q => { :a => 3 }, :u => { '$set' => { :a => 4 }}}
                                  ],
                                  :writeConcern, { :w => 1 },
                                  :ordered, true]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 2, result['n']
      assert_equal 1, @@test.find({ :a => 2 }).count
      assert_equal 1, @@test.find({ :a => 4 }).count
    end

    def test_multi_unordered_update_write_command
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 3 }])
      command = BSON::OrderedHash['update', @@test.name,
                                  :updates, [
                                    { :q => { :a => 1 }, :u => { '$set' => { :a => 2 }}},
                                    { :q => { :a => 3 }, :u => { '$set' => { :a => 4 }}}
                                  ],
                                  :writeConcern, { :w => 1 },
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 2, result['n']
      assert_equal 1, @@test.find({ :a => 2 }).count
      assert_equal 1,
                   @@test.find({ :a => 4 }).count
    end

    def test_update_write_command_with_no_concern
      @@test.drop
      @@test.insert([{ :a => 1 }, { :a => 3 }])
      command = BSON::OrderedHash['update', @@test.name,
                                  :updates, [
                                    { :q => { :a => 1 }, :u => { '$set' => { :a => 2 }}},
                                    { :q => { :a => 3 }, :u => { '$set' => { :a => 4 }}}
                                  ],
                                  :ordered, false]
      result = @@db.command(command)
      assert_equal 1, result['ok']
      assert_equal 2, result['n']
      assert_equal 1, @@test.find({ :a => 2 }).count
      assert_equal 1, @@test.find({ :a => 4 }).count
    end

    def test_update_write_command_with_error
      @@test.drop
      @@test.ensure_index([[:a, 1]], { :unique => true })
      @@test.insert([{ :a => 1 }, { :a => 2 }])
      command = BSON::OrderedHash['update', @@test.name,
                                  :updates, [
                                    { :q => { :a => 2 }, :u => { '$set' => { :a => 1 }}}
                                  ],
                                  :ordered, false]
      assert_raise Mongo::OperationFailure do
        @@db.command(command)
      end
    end
  end

  if @@version >= '2.5.1'
    def test_aggregation_cursor
      [10, 1000].each do |size|
        @@test.drop
        size.times {|i| @@test.insert({ :_id => i }) }
        expected_sum = size.times.reduce(:+)

        cursor = @@test.aggregate(
          [{ :$project => {:_id => '$_id'}} ],
          :cursor => {}
        )

        assert_equal Mongo::Cursor, cursor.class

        cursor_sum = cursor.reduce(0) do |sum, doc|
          sum += doc['_id']
        end
        assert_equal expected_sum, cursor_sum
      end
      @@test.drop
    end

    def test_aggregation_array
      @@test.drop
      100.times {|i| @@test.insert({ :_id => i }) }
      agg = @@test.aggregate([{ :$project => {:_id => '$_id'}} ])

      assert agg.kind_of?(Array)

      @@test.drop
    end

    def test_aggregation_cursor_invalid_ops
      cursor = @@test.aggregate([], :cursor => {})
      assert_raise(Mongo::InvalidOperation) { cursor.rewind!
 }
      assert_raise(Mongo::InvalidOperation) { cursor.explain }
      assert_raise(Mongo::InvalidOperation) { cursor.count }
    end
  end

  def test_aggregation_invalid_read_pref
    assert_raise Mongo::MongoArgumentError do
      @@test.aggregate([], :read => :invalid_read_pref)
    end
  end

  if @@version >= '2.5.3'
    def test_aggregation_supports_explain
      @@db.expects(:command).with do |selector, opts|
        opts[:explain] == true
      end.returns({ 'ok' => 1 })
      @@test.aggregate([], :explain => true)
    end

    def test_aggregation_explain_returns_raw_result
      response = @@test.aggregate([], :explain => true)
      assert response['stages']
    end
  end

  def test_capped_method
    @@db.create_collection('normal')
    assert !@@db['normal'].capped?
    @@db.drop_collection('normal')

    @@db.create_collection('c', :capped => true, :size => 100_000)
    assert @@db['c'].capped?
    @@db.drop_collection('c')
  end

  def test_optional_pk_factory
    @coll_default_pk = @@db.collection('stuff')
    assert_equal BSON::ObjectId, @coll_default_pk.pk_factory
    @coll_default_pk = @@db.create_collection('more-stuff')
    assert_equal BSON::ObjectId, @coll_default_pk.pk_factory

    # Create a db with a pk_factory.
    @db = MongoClient.new(ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost',
                          ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT).db(TEST_DB, :pk => Object.new)
    @coll = @db.collection('coll-with-pk')
    assert @coll.pk_factory.is_a?(Object)

    @coll = @db.create_collection('created_coll_with_pk')
    assert @coll.pk_factory.is_a?(Object)
  end

  class PKTest
    def self.create_pk
    end
  end

  def test_pk_factory_on_collection
    silently do
      @coll = Collection.new('foo', @@db, PKTest)
      assert_equal PKTest, @coll.pk_factory
    end

    @coll2 = Collection.new('foo', @@db, :pk => PKTest)
    assert_equal PKTest, @coll2.pk_factory
  end

  def test_valid_names
    assert_raise Mongo::InvalidNSName do
      @@db["te$t"]
    end

    assert_raise Mongo::InvalidNSName do
      @@db['$main']
    end

    assert @@db['$cmd']
    assert @@db['oplog.$main']
  end

  def test_collection
    assert_kind_of Collection, @@db["test"]
    assert_equal @@db["test"].name(), @@db.collection("test").name()
    assert_equal @@db["test"].name(), @@db[:test].name()

    assert_kind_of Collection, @@db["test"]["foo"]
    assert_equal @@db["test"]["foo"].name(), @@db.collection("test.foo").name()
    assert_equal @@db["test"]["foo"].name(), @@db["test.foo"].name()

    @@db["test"]["foo"].remove
    @@db["test"]["foo"].insert("x" => 5)
    assert_equal 5, @@db.collection("test.foo").find_one()["x"]
  end

  def test_rename_collection
    @@db.drop_collection('foo1')
    @@db.drop_collection('bar1')
    @col = @@db.create_collection('foo1')
    assert_equal 'foo1', @col.name

    @col.rename('bar1')
    assert_equal 'bar1', @col.name
  end

  def test_nil_id
    assert_equal 5, @@test.insert({"_id" => 5, "foo" => "bar"})
    assert_equal 5, @@test.save({"_id" => 5, "foo" => "baz"})
    assert_equal nil, @@test.find_one("foo" => "bar")
    assert_equal "baz", @@test.find_one(:_id => 5)["foo"]

    assert_raise OperationFailure do
      @@test.insert({"_id" => 5, "foo" => "bar"})
    end

    assert_equal nil, @@test.insert({"_id" => nil, "foo" => "bar"})
    assert_equal nil, @@test.save({"_id" => nil, "foo" => "baz"})
    assert_equal nil, @@test.find_one("foo" => "bar")
    assert_equal "baz",
                 @@test.find_one(:_id => nil)["foo"]

    assert_raise OperationFailure do
      @@test.insert({"_id" => nil, "foo" => "bar"})
    end

    assert_raise OperationFailure do
      @@test.insert({:_id => nil, "foo" => "bar"})
    end
  end

  if @@version > "1.1"
    def setup_for_distinct
      @@test.remove
      @@test.insert([{:a => 0, :b => {:c => "a"}},
                     {:a => 1, :b => {:c => "b"}},
                     {:a => 1, :b => {:c => "c"}},
                     {:a => 2, :b => {:c => "a"}},
                     {:a => 3},
                     {:a => 3}])
    end

    def test_distinct_queries
      setup_for_distinct
      assert_equal [0, 1, 2, 3], @@test.distinct(:a).sort
      assert_equal ["a", "b", "c"], @@test.distinct("b.c").sort
    end

    if @@version >= "1.2"
      def test_filter_collection_with_query
        setup_for_distinct
        assert_equal [2, 3], @@test.distinct(:a, {:a => {"$gt" => 1}}).sort
      end

      def test_filter_nested_objects
        setup_for_distinct
        assert_equal ["a", "b"], @@test.distinct("b.c", {"b.c" => {"$ne" => "c"}}).sort
      end
    end
  end

  def test_safe_insert
    @@test.create_index("hello", :unique => true)
    begin
      a = {"hello" => "world"}
      @@test.insert(a)
      @@test.insert(a, :w => 0)
      assert(@@db.get_last_error['err'].include?("11000"))

      assert_raise OperationFailure do
        @@test.insert(a)
      end
    ensure
      @@test.drop_indexes
    end
  end

  def test_bulk_insert
    docs = []
    docs << {:foo => 1}
    docs << {:foo => 2}
    docs << {:foo => 3}
    response = @@test.insert(docs)
    assert_equal 3, response.length
    assert response.all? {|id| id.is_a?(BSON::ObjectId)}
    assert_equal 3, @@test.count
  end

  def test_bulk_insert_with_continue_on_error
    if @@version >= "2.0"
      @@test.create_index([["foo", 1]], :unique => true)
      begin
        docs = []
        docs << {:foo => 1}
        docs << {:foo => 1}
        docs << {:foo => 2}
        docs << {:foo => 3}
        assert_raise OperationFailure do
          @@test.insert(docs)
        end
        assert_equal 1, @@test.count
        @@test.remove

        docs = []
        docs << {:foo => 1}
        docs << {:foo => 1}
        docs << {:foo => 2}
        docs << {:foo => 3}
        assert_raise OperationFailure do
          @@test.insert(docs, :continue_on_error => true)
        end
        assert_equal 3, @@test.count

        @@test.remove
      ensure
        @@test.drop_index("foo_1")
      end
    end
  end

  def test_bson_valid_with_collect_on_error
    docs = []
    docs << {:foo => 1}
    docs << {:bar => 1}
    doc_ids, error_docs = @@test.insert(docs, :collect_on_error => true)
    assert_equal 2, @@test.count
    assert_equal 2, doc_ids.count
    assert_equal error_docs, []
  end

  def test_bson_invalid_key_serialize_error_with_collect_on_error
    docs = []
    docs << {:foo => 1}
    docs << {:bar => 1}
    invalid_docs = []
    invalid_docs << {'$invalid-key' => 1}
    invalid_docs << {'invalid.key' => 1}
    docs += invalid_docs
    assert_raise BSON::InvalidKeyName do
      @@test.insert(docs, :collect_on_error => false)
    end
    assert_equal 2, @@test.count

    doc_ids, error_docs = @@test.insert(docs, :collect_on_error => true)
    assert_equal 2, @@test.count
    assert_equal 2, doc_ids.count
    assert_equal error_docs, invalid_docs
  end

  def test_bson_invalid_encoding_serialize_error_with_collect_on_error
    # Broken for current JRuby
    if RUBY_PLATFORM == 'java' then return end
    docs = []
    docs << {:foo => 1}
    docs << {:bar => 1}
    invalid_docs = []
    invalid_docs << {"\223\372\226}" => 1} # non utf8 encoding
    docs += invalid_docs

    assert_raise BSON::InvalidStringEncoding do
      @@test.insert(docs, :collect_on_error => false)
    end
    assert_equal 2, @@test.count

    doc_ids, error_docs = @@test.insert(docs, :collect_on_error => true)
    assert_equal 2, @@test.count
    assert_equal 2, doc_ids.count
    assert_equal error_docs, invalid_docs
  end

  def test_insert_one_error_doc_with_collect_on_error
    invalid_doc = {'$invalid-key' => 1}
    invalid_docs = [invalid_doc]
    doc_ids, error_docs = @@test.insert(invalid_docs, :collect_on_error => true)
    assert_equal [], doc_ids
    assert_equal [invalid_doc], error_docs
  end

  def test_insert_empty_docs_raises_exception
    assert_raise OperationFailure do
      @@test.insert([])
    end
  end

  def test_insert_empty_docs_with_collect_on_error_raises_exception
    assert_raise OperationFailure do
      @@test.insert([], :collect_on_error => true)
    end
  end

  def limited_collection
    conn = standard_connection(:connect => false)
    admin_db = Object.new
    admin_db.expects(:command).returns({
      'ok' => 1,
      'ismaster' => 1,
      'maxBsonObjectSize' => LIMITED_MAX_BSON_SIZE,
      'maxMessageSizeBytes' => LIMITED_MAX_MESSAGE_SIZE
    })
    conn.expects(:[]).with('admin').returns(admin_db)
    conn.connect
    return conn.db(TEST_DB)["test"]
  end

  def test_non_operation_failure_halts_insertion_with_continue_on_error
    coll = limited_collection
    coll.db.connection.stubs(:send_message_with_gle).raises(OperationTimeout).times(1)
    docs = []
    10.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    assert_raise OperationTimeout do
      coll.insert(docs, :continue_on_error => true)
    end
  end

  def test_chunking_batch_insert
    docs = []
    10.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    limited_collection.insert(docs)
    assert_equal 10, limited_collection.count
  end

  def test_chunking_batch_insert_without_collect_on_error
    docs = []
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    invalid_docs = []
    invalid_docs << {'$invalid-key' => 1} # invalid key
    docs += invalid_docs
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    assert_raise BSON::InvalidKeyName do
      limited_collection.insert(docs, :collect_on_error => false)
    end
  end

  def test_chunking_batch_insert_with_collect_on_error
    # Broken for current JRuby
    if RUBY_PLATFORM == 'java' then return end
    docs = []
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    invalid_docs = []
    invalid_docs << {'$invalid-key' => 1} # invalid key
    docs += invalid_docs
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end

    doc_ids, error_docs = limited_collection.insert(docs, :collect_on_error => true)
    assert_equal 8, doc_ids.count
    assert_equal doc_ids.count, limited_collection.count
    assert_equal error_docs, invalid_docs
  end

  def test_chunking_batch_insert_with_continue_on_error
    docs = []
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    docs << {'_id' => 'b', 'foo' => 'a'}
    docs << {'_id' => 'b', 'foo' => 'c'}
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    assert_raise OperationFailure do
      limited_collection.insert(docs, :continue_on_error => true)
    end
    assert limited_collection.count >= 6,
           "write commands need headroom for doc wrapping overhead - count:#{limited_collection.count}"
  end

  def test_chunking_batch_insert_without_continue_on_error
    docs = []
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    docs << {'_id' => 'b', 'foo' => 'a'}
    docs << {'_id' => 'b', 'foo' => 'c'}
    4.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    assert_raise OperationFailure do
      limited_collection.insert(docs, :continue_on_error => false)
    end
    assert_equal 5, limited_collection.count
  end

  def test_maximum_insert_size
    docs = []
    3.times do
      docs << {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    end
    assert_equal limited_collection.insert(docs).length, 3
  end

  def test_maximum_document_size
    assert_raise InvalidDocument do
      limited_collection.insert({'foo' => 'a' * LIMITED_MAX_BSON_SIZE})
    end
  end

  def test_maximum_save_size
    assert limited_collection.save({'foo' => 'a' * LIMITED_VALID_VALUE_SIZE})
    assert_raise InvalidDocument do
      limited_collection.save({'foo' => 'a' * LIMITED_MAX_BSON_SIZE})
    end
  end

  def test_maximum_remove_size
    assert limited_collection.remove({'foo' => 'a' * LIMITED_VALID_VALUE_SIZE})
    assert_raise InvalidDocument do
      limited_collection.remove({'foo' => 'a' * LIMITED_MAX_BSON_SIZE})
    end
  end
  def test_maximum_update_size
    assert_raise InvalidDocument do
      limited_collection.update(
        {'foo' => 'a' * LIMITED_MAX_BSON_SIZE},
        {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
      )
    end

    assert_raise InvalidDocument do
      limited_collection.update(
        {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE},
        {'foo' => 'a' * LIMITED_MAX_BSON_SIZE}
      )
    end

    assert_raise InvalidDocument do
      limited_collection.update(
        {'foo' => 'a' * LIMITED_MAX_BSON_SIZE},
        {'foo' => 'a' * LIMITED_MAX_BSON_SIZE}
      )
    end

    assert limited_collection.update(
      {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE},
      {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}
    )
  end

  def test_maximum_query_size
    assert limited_collection.find({'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}).to_a
    assert limited_collection.find(
      {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE},
      {:fields => {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE}}
    ).to_a

    assert_raise InvalidDocument do
      limited_collection.find({'foo' => 'a' * LIMITED_INVALID_VALUE_SIZE}).to_a
    end

    assert_raise InvalidDocument do
      limited_collection.find(
        {'foo' => 'a' * LIMITED_VALID_VALUE_SIZE},
        {:fields => {'foo' => 'a' * LIMITED_MAX_BSON_SIZE}}
      ).to_a
    end
  end

  #if @@version >= "1.5.1"
  #  def test_safe_mode_with_advanced_safe_with_invalid_options
  #    assert_raise_error ArgumentError, "Unknown key(s): wtime" do
  #      @@test.insert({:foo => 1}, :w => 2, :wtime => 1, :fsync => true)
  #    end
  #    assert_raise_error ArgumentError, "Unknown key(s): wtime" do
  #      @@test.update({:foo => 1}, {:foo => 2}, :w => 2, :wtime => 1, :fsync => true)
  #    end
  #
  #    assert_raise_error ArgumentError, "Unknown key(s): wtime" do
  #      @@test.remove({:foo => 2}, :w => 2, :wtime => 1, :fsync => true)
  #    end
  #  end
  #end

  def test_safe_mode_with_journal_commit_option
    with_default_journaling(@@client) do
      @@test.insert({:foo => 1}, :j => true)
      @@test.update({:foo => 1}, {:foo => 2}, :j => true)
      @@test.remove({:foo => 2}, :j => true)
    end
  end

  if @@version < "2.5.3"
    def test_jnote_raises_exception
      with_no_journaling(@@client) do
        ex = assert_raise Mongo::WriteConcernError do
          @@test.insert({:foo => 1}, :j => true)
        end
        result = ex.result
        assert_true result.has_key?("jnote")
      end
    end

    def test_wnote_raises_exception
      ex = assert_raise Mongo::WriteConcernError do
        @@test.insert({:foo => 1}, :w => 2)
      end
      result = ex.result
      assert_true result.has_key?("wnote")
    end
  end

  def test_update
    id1 = @@test.save("x" => 5)
    @@test.update({}, {"$inc" => {"x" => 1}})
    assert_equal 1, @@test.count()
    assert_equal 6, @@test.find_one(:_id => id1)["x"]

    id2 = @@test.save("x" => 1)
    @@test.update({"x" => 6}, {"$inc" => {"x" => 1}})
    assert_equal 7, @@test.find_one(:_id => id1)["x"]
    assert_equal 1, @@test.find_one(:_id => id2)["x"]
  end

  if @@version < "2.5.3"
    def test_update_check_keys
      @@test.save("x" => 1)
      @@test.update({"x" => 1}, {"$set" => {"a.b" => 2}})
      assert_equal 2, @@test.find_one("x" => 1)["a"]["b"]

      assert_raise_error BSON::InvalidKeyName do
        @@test.update({"x" => 1}, {"a.b" => 3})
      end
    end
  end

  if @@version >= "1.1.3"
    def test_multi_update
      @@test.save("num" => 10)
      @@test.save("num" => 10)
      @@test.save("num" => 10)
      assert_equal 3, @@test.count

      @@test.update({"num" => 10}, {"$set" => {"num" => 100}}, :multi => true)
      @@test.find.each do |doc|
        assert_equal 100, doc["num"]
      end
    end
  end

  def test_upsert
    @@test.update({"page" => "/"}, {"$inc" => {"count" => 1}}, :upsert => true)
    @@test.update({"page" => "/"}, {"$inc" => {"count" => 1}}, :upsert => true)

    assert_equal 1, @@test.count()
    assert_equal 2, @@test.find_one()["count"]
  end

  if @@version < "1.1.3"
    def test_safe_update
      @@test.create_index("x")
      @@test.insert("x" => 5)

      @@test.update({}, {"$inc" => {"x" => 1}})
      assert @@db.error?

      # Can't change an index.
      assert_raise OperationFailure do
        @@test.update({}, {"$inc" => {"x" => 1}})
      end
      @@test.drop
    end
  else
    def test_safe_update
      @@test.create_index("x", :unique => true)
      @@test.insert("x" => 5)
      @@test.insert("x" => 10)

      # Can update an indexed collection.
      @@test.update({}, {"$inc" => {"x" => 1}})
      assert !@@db.error?

      # Can't duplicate an index.
      assert_raise OperationFailure do
        @@test.update({}, {"x" => 10})
      end
      @@test.drop
    end
  end

  def test_safe_save
    @@test.create_index("hello", :unique => true)

    @@test.save("hello" => "world")
    @@test.save({"hello" => "world"}, :w => 0)

    assert_raise OperationFailure do
      @@test.save({"hello" => "world"})
    end
    @@test.drop
  end

  def test_mocked_safe_remove
    @client = standard_connection
    @db = @client[TEST_DB]
    @test = @db['test-safe-remove']
    @test.save({:a => 20})
    @client.stubs(:receive).returns([[{'ok' => 0, 'err' => 'failed'}], 1, 0])

    assert_raise OperationFailure do
      @test.remove({})
    end
    @test.drop
  end

  def test_safe_remove
    @client = standard_connection
    @db = @client[TEST_DB]
    @test = @db['test-safe-remove']
    @test.remove
    @test.save({:a => 50})
    assert_equal 1, @test.remove({})["n"]
    @test.drop
  end

  def test_remove_return_value
    assert_equal true, @@test.remove({}, :w => 0)
  end

  def test_remove_with_limit
    @@test.insert([{:n => 1},{:n => 2},{:n => 3}])
    @@test.remove({}, :limit => 1)
    assert_equal 2, @@test.count
    @@test.remove({}, :limit => 0)
    assert_equal 0, @@test.count
  end

  def test_count
    @@test.drop

    assert_equal 0, @@test.count
    @@test.save(:x => 1)
    @@test.save(:x => 2)
    assert_equal 2, @@test.count

    assert_equal 1, @@test.count(:query => {:x => 1})
    assert_equal 1, @@test.count(:limit => 1)
    assert_equal 0, @@test.count(:skip => 2)
  end

  # Note: #size is just an alias for #count.
  def test_size
    @@test.drop

    assert_equal 0, @@test.count
    assert_equal @@test.size, @@test.count
    @@test.save("x" => 1)
    @@test.save("x" => 2)
    assert_equal @@test.size, @@test.count
  end

  def test_no_timeout_option
    @@test.drop

    assert_raise ArgumentError, "Timeout can be set to false only when #find is invoked with a block." do
      @@test.find({}, :timeout => false)
    end

    @@test.find({}, :timeout => false) do |cursor|
      assert_equal 0, cursor.count
    end

    @@test.save("x" => 1)
    @@test.save("x" => 2)
    @@test.find({}, :timeout => false) do |cursor|
      assert_equal 2, cursor.count
    end
  end

  def test_default_timeout
    cursor = @@test.find
    assert_equal true, cursor.timeout
  end

  def test_fields_as_hash
    @@test.save(:a => 1, :b => 1, :c => 1)

    doc = @@test.find_one({:a => 1}, :fields => {:b => 0})
    assert_nil doc['b']
    assert doc['a']
    assert doc['c']

    doc = @@test.find_one({:a => 1}, :fields => {:a => 1, :b => 1})
    assert_nil doc['c']
    assert doc['a']
    assert doc['b']

    assert_raise Mongo::OperationFailure do
      @@test.find_one({:a => 1}, :fields => {:a => 1, :b => 0})
    end
  end

  if @@version >= '2.5.5'
    def test_meta_field_projection
      @@test.save({ :t => 'spam eggs and spam'})
      @@test.save({ :t => 'spam'})
      @@test.save({ :t => 'egg sausage and bacon'})

      @@test.ensure_index([[:t, 'text']])
      assert @@test.find_one({ :$text => { :$search => 'spam' }},
                             { :fields => [:t, { :score => { :$meta => 'textScore' } }] })
    end

    def test_sort_by_meta
      @@test.save({ :t => 'spam eggs and spam'})
      @@test.save({ :t => 'spam'})
      @@test.save({ :t => 'egg sausage and bacon'})

      @@test.ensure_index([[:t, 'text']])
      assert @@test.find({ :$text => { :$search => 'spam' }}).sort([:score, { '$meta' => 'textScore' }])
      assert @@test.find({ :$text => { :$search => 'spam' }}).sort(:score => { '$meta' =>'textScore' })
    end
  end

  if @@version >= "1.5.1"
    def test_fields_with_slice
      @@test.save({:foo => [1, 2, 3, 4, 5, 6], :test => 'slice'})

      doc = @@test.find_one({:test => 'slice'}, :fields => {'foo' => {'$slice' => [0, 3]}})
      assert_equal [1, 2, 3], doc['foo']
      @@test.remove
    end
  end

  def test_find_one
    id = @@test.save("hello" => "world", "foo" => "bar")

    assert_equal "world", @@test.find_one()["hello"]
    assert_equal @@test.find_one(id), @@test.find_one()
    assert_equal @@test.find_one(nil), @@test.find_one()
    assert_equal @@test.find_one({}), @@test.find_one()
    assert_equal @@test.find_one("hello" => "world"), @@test.find_one()
    assert_equal @@test.find_one(BSON::OrderedHash["hello", "world"]), @@test.find_one()

    assert @@test.find_one(nil, :fields => ["hello"]).include?("hello")
    assert !@@test.find_one(nil, :fields => ["foo"]).include?("hello")
    assert_equal ["_id"], @@test.find_one(nil, :fields => []).keys()

    assert_equal nil, @@test.find_one("hello" => "foo")
    assert_equal nil, @@test.find_one(BSON::OrderedHash["hello", "foo"])
    assert_equal nil, @@test.find_one(ObjectId.new)

    assert_raise TypeError do
      @@test.find_one(6)
    end
  end

  def test_find_one_with_max_time_ms
    with_forced_timeout(@@client) do
      assert_raise ExecutionTimeout do
        @@test.find_one({}, { :max_time_ms => 100 })
      end
    end
  end

  def test_find_one_with_compile_regex_option
    regex = /.*/
    @@test.insert('r' => /.*/)
    assert_kind_of Regexp, @@test.find_one({})['r']
    assert_kind_of Regexp, @@test.find_one({}, :compile_regex => true)['r']
    assert_equal BSON::Regex, @@test.find_one({}, :compile_regex => false)['r'].class
  end

  def test_insert_adds_id
    doc = {"hello" => "world"}
    @@test.insert(doc)
    assert(doc.include?(:_id))

    docs = [{"hello" => "world"}, {"hello" => "world"}]
    @@test.insert(docs)
    docs.each do |d|
      assert(d.include?(:_id))
    end
  end

  def test_save_adds_id
    doc = {"hello" => "world"}
    @@test.save(doc)
    assert(doc.include?(:_id))
  end

  def test_optional_find_block
    10.times do |i|
      @@test.save("i" => i)
    end

    x = nil
    @@test.find("i" => 2) { |cursor|
      x = cursor.count()
    }
    assert_equal 1, x

    i = 0
    @@test.find({}, :skip => 5) do |cursor|
      cursor.each do |doc|
        i = i + 1
      end
    end
    assert_equal 5, i

    c = nil
    @@test.find() do |cursor|
      c = cursor
    end
    assert c.closed?
  end

  def setup_aggregate_data
    # save some data
    @@test.save( {
        "_id" => 1,
        "title" => "this is my title",
        "author" => "bob",
        "posted" => Time.utc(2000),
        "pageViews" => 5 ,
        "tags" => [ "fun" , "good" , "fun" ],
        "comments" => [
            { "author" => "joe", "text" => "this is cool" },
            { "author" => "sam", "text" => "this is bad" }
        ],
        "other" => { "foo" => 5 }
    } )

    @@test.save( {
        "_id" => 2,
        "title" => "this is your title",
        "author" => "dave",
        "posted" => Time.utc(2001),
        "pageViews" => 7,
        "tags" => [ "fun" , "nasty" ],
        "comments" => [
            { "author" => "barbara" , "text" => "this is interesting" },
            { "author" => "jenny", "text" => "i like to play pinball", "votes" => 10 }
        ],
        "other" => { "bar" => 14 }
    })

    @@test.save( {
        "_id" => 3,
        "title" => "this is some other title",
        "author" => "jane",
        "posted" => Time.utc(2002),
        "pageViews" => 6 ,
        "tags" => [ "nasty", "filthy" ],
        "comments" => [
            { "author" => "will" , "text" => "i don't like the color" } ,
            { "author" => "jenny" , "text" => "can i get that in green?" }
        ],
        "other" => { "bar" => 14 }
    })
  end

  if @@version > '2.1.1'
    def test_reponds_to_aggregate
      assert_respond_to @@test, :aggregate
    end

    def test_aggregate_requires_arguments
      assert_raise MongoArgumentError do
        @@test.aggregate()
      end
    end

    def test_aggregate_requires_valid_arguments
      assert_raise MongoArgumentError do
        @@test.aggregate({})
      end
    end

    def test_aggregate_pipeline_operator_format
      assert_raise Mongo::OperationFailure do
        @@test.aggregate([{"$project" => "_id"}])
      end
    end

    def test_aggregate_pipeline_operators_using_strings
      setup_aggregate_data
      desired_results = [ {"_id"=>1, "pageViews"=>5, "tags"=>["fun", "good", "fun"]},
                          {"_id"=>2, "pageViews"=>7, "tags"=>["fun", "nasty"]},
                          {"_id"=>3, "pageViews"=>6, "tags"=>["nasty", "filthy"]} ]
      results = @@test.aggregate([{"$project" => {"tags" => 1, "pageViews" => 1}}])

      assert_equal desired_results, results
    end

    def test_aggregate_pipeline_operators_using_symbols
      setup_aggregate_data
      desired_results = [ {"_id"=>1, "pageViews"=>5, "tags"=>["fun",
"good", "fun"]}, {"_id"=>2, "pageViews"=>7, "tags"=>["fun", "nasty"]}, {"_id"=>3, "pageViews"=>6, "tags"=>["nasty", "filthy"]} ] results = @@test.aggregate([{"$project" => {:tags => 1, :pageViews => 1}}]) assert_equal desired_results, results end def test_aggregate_pipeline_multiple_operators setup_aggregate_data results = @@test.aggregate([{"$project" => {"tags" => 1, "pageViews" => 1}}, {"$match" => {"pageViews" => 7}}]) assert_equal 1, results.length end def test_aggregate_pipeline_unwind setup_aggregate_data desired_results = [ {"_id"=>1, "title"=>"this is my title", "author"=>"bob", "posted"=>Time.utc(2000), "pageViews"=>5, "tags"=>"fun", "comments"=>[{"author"=>"joe", "text"=>"this is cool"}, {"author"=>"sam", "text"=>"this is bad"}], "other"=>{"foo"=>5 } }, {"_id"=>1, "title"=>"this is my title", "author"=>"bob", "posted"=>Time.utc(2000), "pageViews"=>5, "tags"=>"good", "comments"=>[{"author"=>"joe", "text"=>"this is cool"}, {"author"=>"sam", "text"=>"this is bad"}], "other"=>{"foo"=>5 } }, {"_id"=>1, "title"=>"this is my title", "author"=>"bob", "posted"=>Time.utc(2000), "pageViews"=>5, "tags"=>"fun", "comments"=>[{"author"=>"joe", "text"=>"this is cool"}, {"author"=>"sam", "text"=>"this is bad"}], "other"=>{"foo"=>5 } }, {"_id"=>2, "title"=>"this is your title", "author"=>"dave", "posted"=>Time.utc(2001), "pageViews"=>7, "tags"=>"fun", "comments"=>[{"author"=>"barbara", "text"=>"this is interesting"}, {"author"=>"jenny", "text"=>"i like to play pinball", "votes"=>10 }], "other"=>{"bar"=>14 } }, {"_id"=>2, "title"=>"this is your title", "author"=>"dave", "posted"=>Time.utc(2001), "pageViews"=>7, "tags"=>"nasty", "comments"=>[{"author"=>"barbara", "text"=>"this is interesting"}, {"author"=>"jenny", "text"=>"i like to play pinball", "votes"=>10 }], "other"=>{"bar"=>14 } }, {"_id"=>3, "title"=>"this is some other title", "author"=>"jane", "posted"=>Time.utc(2002), "pageViews"=>6, "tags"=>"nasty", "comments"=>[{"author"=>"will", "text"=>"i don't like the 
color"}, {"author"=>"jenny", "text"=>"can i get that in green?"}], "other"=>{"bar"=>14 } }, {"_id"=>3, "title"=>"this is some other title", "author"=>"jane", "posted"=>Time.utc(2002), "pageViews"=>6, "tags"=>"filthy", "comments"=>[{"author"=>"will", "text"=>"i don't like the color"}, {"author"=>"jenny", "text"=>"can i get that in green?"}], "other"=>{"bar"=>14 } } ] results = @@test.aggregate([{"$unwind"=> "$tags"}]) assert_equal desired_results, results end def test_aggregate_with_compile_regex_option # see SERVER-6470 return unless @@version >= '2.3.2' @@test.insert({ 'r' => /.*/ }) result1 = @@test.aggregate([]) assert_kind_of Regexp, result1.first['r'] result2 = @@test.aggregate([], :compile_regex => false) assert_kind_of BSON::Regex, result2.first['r'] return unless @@version >= '2.5.1' result = @@test.aggregate([], :compile_regex => false, :cursor => {}) assert_kind_of BSON::Regex, result.first['r'] end end if @@version >= "2.5.2" def test_out_aggregate out_collection = 'test_out' @@db.drop_collection(out_collection) setup_aggregate_data docs = @@test.find.to_a pipeline = [{:$out => out_collection}] @@test.aggregate(pipeline) assert_equal docs, @@db.collection(out_collection).find.to_a end def test_out_aggregate_nonprimary_sym_warns ReadPreference::expects(:warn).with(regexp_matches(/rerouted to primary/)) pipeline = [{:$out => 'test_out'}] @@test.aggregate(pipeline, :read => :secondary) end def test_out_aggregate_nonprimary_string_warns ReadPreference::expects(:warn).with(regexp_matches(/rerouted to primary/)) pipeline = [{'$out' => 'test_out'}] @@test.aggregate(pipeline, :read => :secondary) end def test_out_aggregate_string_returns_raw_response pipeline = [{'$out' => 'test_out'}] response = @@test.aggregate(pipeline) assert response.respond_to?(:keys) end def test_out_aggregate_sym_returns_raw_response pipeline = [{:$out => 'test_out'}] response = @@test.aggregate(pipeline) assert response.respond_to?(:keys) end end if @@version > "1.1.1" def 
test_map_reduce @@test << { "user_id" => 1 } @@test << { "user_id" => 2 } m = "function() { emit(this.user_id, 1); }" r = "function(k,vals) { return 1; }" res = @@test.map_reduce(m, r, :out => 'foo') assert res.find_one({"_id" => 1}) assert res.find_one({"_id" => 2}) end def test_map_reduce_with_code_objects @@test << { "user_id" => 1 } @@test << { "user_id" => 2 } m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :out => 'foo') assert res.find_one({"_id" => 1}) assert res.find_one({"_id" => 2}) end def test_map_reduce_with_options @@test.remove @@test << { "user_id" => 1 } @@test << { "user_id" => 2 } @@test << { "user_id" => 3 } m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :query => {"user_id" => {"$gt" => 1}}, :out => 'foo') assert_equal 2, res.count assert res.find_one({"_id" => 2}) assert res.find_one({"_id" => 3}) end def test_map_reduce_with_raw_response m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :raw => true, :out => 'foo') assert res["result"] assert res["counts"] assert res["timeMillis"] end def test_map_reduce_with_output_collection output_collection = "test-map-coll" m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") res = @@test.map_reduce(m, r, :raw => true, :out => output_collection) assert_equal output_collection, res["result"] assert res["counts"] assert res["timeMillis"] end def test_map_reduce_nonprimary_output_collection_reroutes output_collection = "test-map-coll" m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") Mongo::ReadPreference.expects(:warn).with(regexp_matches(/rerouted to primary/)) res = @@test.map_reduce(m, r, :raw => true, :out => output_collection, :read => :secondary) end if 
@@version >= "1.8.0" def test_map_reduce_with_collection_merge @@test << {:user_id => 1} @@test << {:user_id => 2} output_collection = "test-map-coll" m = Code.new("function() { emit(this.user_id, {count: 1}); }") r = Code.new("function(k,vals) { var sum = 0;" + " vals.forEach(function(v) { sum += v.count;} ); return {count: sum}; }") res = @@test.map_reduce(m, r, :out => output_collection) @@test.remove @@test << {:user_id => 3} res = @@test.map_reduce(m, r, :out => {:merge => output_collection}) assert res.find.to_a.any? {|doc| doc["_id"] == 3 && doc["value"]["count"] == 1} @@test.remove @@test << {:user_id => 3} res = @@test.map_reduce(m, r, :out => {:reduce => output_collection}) assert res.find.to_a.any? {|doc| doc["_id"] == 3 && doc["value"]["count"] == 2} assert_raise ArgumentError do @@test.map_reduce(m, r, :out => {:inline => 1}) end res = @@test.map_reduce(m, r, :raw => true, :out => {:inline => 1}) assert res["results"] end def test_map_reduce_with_collection_output_to_other_db @@test << {:user_id => 1} @@test << {:user_id => 2} m = Code.new("function() { emit(this.user_id, 1); }") r = Code.new("function(k,vals) { return 1; }") oh = BSON::OrderedHash.new oh[:replace] = 'foo' oh[:db] = TEST_DB res = @@test.map_reduce(m, r, :out => (oh)) assert res["result"] assert res["counts"] assert res["timeMillis"] assert res.find.to_a.any? 
{|doc| doc["_id"] == 2 && doc["value"] == 1} end end end if @@version >= '2.5.5' def test_aggregation_allow_disk_use @@db.expects(:command).with do |selector, opts| opts[:allowDiskUse] == true end.returns({ 'ok' => 1 }) @@test.aggregate([], :allowDiskUse => true) end def test_parallel_scan 8000.times { |i| @@test.insert({ :_id => i }) } lock = Mutex.new doc_ids = Set.new threads = [] cursors = @@test.parallel_scan(3) cursors.each_with_index do |cursor, i| threads << Thread.new do docs = cursor.to_a lock.synchronize do docs.each do |doc| doc_ids << doc['_id'] end end end end threads.each(&:join) assert_equal 8000, doc_ids.count end end if @@version > "1.3.0" def test_find_and_modify @@test << { :a => 1, :processed => false } @@test << { :a => 2, :processed => false } @@test << { :a => 3, :processed => false } @@test.find_and_modify(:query => {}, :sort => [['a', -1]], :update => {"$set" => {:processed => true}}) assert @@test.find_one({:a => 3})['processed'] end def test_find_and_modify_with_invalid_options @@test << { :a => 1, :processed => false } @@test << { :a => 2, :processed => false } @@test << { :a => 3, :processed => false } assert_raise Mongo::OperationFailure do @@test.find_and_modify(:blimey => {}) end end def test_find_and_modify_with_full_response @@test << { :a => 1, :processed => false } @@test << { :a => 2, :processed => false } @@test << { :a => 3, :processed => false } doc = @@test.find_and_modify(:query => {}, :sort => [['a', -1]], :update => {"$set" => {:processed => true}}, :full_response => true, :new => true) assert doc['value']['processed'] assert ['ok', 'value', 'lastErrorObject'].all? 
{ |key| doc.key?(key) } end end if @@version >= "1.3.5" def test_coll_stats @@test << {:n => 1} @@test.create_index("n") assert_equal "#{TEST_DB}.test", @@test.stats['ns'] @@test.drop end end def test_saving_dates_pre_epoch if RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/ then return true end begin @@test.save({'date' => Time.utc(1600)}) assert_in_delta Time.utc(1600), @@test.find_one()["date"], 2 rescue ArgumentError # See note in test_date_before_epoch (BSONTest) end end def test_save_symbol_find_string @@test.save(:foo => :mike) assert_equal :mike, @@test.find_one(:foo => :mike)["foo"] assert_equal :mike, @@test.find_one("foo" => :mike)["foo"] # TODO enable these tests conditionally based on server version (if >1.0) # assert_equal :mike, @@test.find_one(:foo => "mike")["foo"] # assert_equal :mike, @@test.find_one("foo" => "mike")["foo"] end def test_batch_size n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.save(:foo => i) end doc_count = 0 cursor = @@test.find({}, :batch_size => batch_size) cursor.next assert_equal batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size batch_size.times { cursor.next } assert_equal doc_count + batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size assert_equal n_docs, doc_count end def test_batch_size_with_smaller_limit n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.insert(:foo => i) end cursor = @@test.find({}, :batch_size => batch_size, :limit => 2) cursor.next assert_equal 2, cursor.instance_variable_get(:@returned) end def test_batch_size_with_larger_limit n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.insert(:foo => i) end doc_count = 0 cursor = @@test.find({}, :batch_size => batch_size, :limit => n_docs + 5) cursor.next assert_equal batch_size, cursor.instance_variable_get(:@returned) doc_count += batch_size batch_size.times { cursor.next } assert_equal doc_count + batch_size, cursor.instance_variable_get(:@returned) doc_count += 
batch_size assert_equal n_docs, doc_count end def test_batch_size_with_negative_limit n_docs = 6 batch_size = n_docs/2 n_docs.times do |i| @@test.insert(:foo => i) end cursor = @@test.find({}, :batch_size => batch_size, :limit => -7) cursor.next assert_equal n_docs, cursor.instance_variable_get(:@returned) end def test_limit_and_skip 10.times do |i| @@test.save(:foo => i) end assert_equal 5, @@test.find({}, :skip => 5).next_document()["foo"] assert_equal nil, @@test.find({}, :skip => 10).next_document() assert_equal 5, @@test.find({}, :limit => 5).to_a.length assert_equal 3, @@test.find({}, :skip => 3, :limit => 5).next_document()["foo"] assert_equal 5, @@test.find({}, :skip => 3, :limit => 5).to_a.length end def test_large_limit 2000.times do |i| @@test.insert("x" => i, "y" => "mongomongo" * 1000) end assert_equal 2000, @@test.count i = 0 y = 0 @@test.find({}, :limit => 1900).each do |doc| i += 1 y += doc["x"] end assert_equal 1900, i assert_equal 1804050, y end def test_small_limit @@test.insert("x" => "hello world") @@test.insert("x" => "goodbye world") assert_equal 2, @@test.count x = 0 @@test.find({}, :limit => 1).each do |doc| x += 1 assert_equal "hello world", doc["x"] end assert_equal 1, x end def test_find_with_transformer klass = Struct.new(:id, :a) transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) } cursor = @@test.find({}, :transformer => transformer) assert_equal(transformer, cursor.transformer) end def test_find_one_with_transformer klass = Struct.new(:id, :a) transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) } id = @@test.insert('a' => 1) doc = @@test.find_one(id, :transformer => transformer) assert_instance_of(klass, doc) end def test_ensure_index @@test.drop_indexes @@test.insert("x" => "hello world") assert_equal 1, @@test.index_information.keys.count #default index @@test.ensure_index([["x", Mongo::DESCENDING]], {}) assert_equal 2, @@test.index_information.keys.count assert @@test.index_information.keys.include?("x_-1") 
@@test.ensure_index([["x", Mongo::ASCENDING]]) assert @@test.index_information.keys.include?("x_1") @@test.ensure_index([["type", 1], ["date", -1]]) assert @@test.index_information.keys.include?("type_1_date_-1") @@test.drop_index("x_1") assert_equal 3, @@test.index_information.keys.count @@test.drop_index("x_-1") assert_equal 2, @@test.index_information.keys.count @@test.ensure_index([["x", Mongo::DESCENDING]], {}) assert_equal 3, @@test.index_information.keys.count assert @@test.index_information.keys.include?("x_-1") # Make sure that drop_index expires cache properly @@test.ensure_index([['a', 1]]) assert @@test.index_information.keys.include?("a_1") @@test.drop_index("a_1") assert !@@test.index_information.keys.include?("a_1") @@test.ensure_index([['a', 1]]) assert @@test.index_information.keys.include?("a_1") @@test.drop_index("a_1") @@test.drop_indexes end def test_ensure_index_timeout @@db.cache_time = 1 coll = @@db['ensure_test'] coll.expects(:generate_indexes).twice coll.ensure_index([['a', 1]]) # These will be cached coll.ensure_index([['a', 1]]) coll.ensure_index([['a', 1]]) coll.ensure_index([['a', 1]]) coll.ensure_index([['a', 1]]) sleep(1) # This won't be, so generate_indexes will be called twice coll.ensure_index([['a', 1]]) coll.drop end if @@version > '2.0.0' def test_show_disk_loc @@test.save({:a => 1}) @@test.save({:a => 2}) assert @@test.find({:a => 1}, :show_disk_loc => true).show_disk_loc assert @@test.find({:a => 1}, :show_disk_loc => true).next['$diskLoc'] @@test.remove end def test_max_scan @@test.drop n = 100 n.times do |i| @@test.save({:_id => i, :x => i % 10}) end assert_equal(n, @@test.find.to_a.size) assert_equal(50, @@test.find({}, :max_scan => 50).to_a.size) assert_equal(10, @@test.find({:x => 2}).to_a.size) assert_equal(5, @@test.find({:x => 2}, :max_scan => 50).to_a.size) @@test.ensure_index([[:x, 1]]) assert_equal(10, @@test.find({:x => 2}, :max_scan => n).to_a.size) @@test.drop end end context "Grouping" do setup do @@test.remove 
@@test.save("a" => 1) @@test.save("b" => 1) @initial = {"count" => 0} @reduce_function = "function (obj, prev) { prev.count += inc_value; }" end should "fail if missing required options" do assert_raise MongoArgumentError do @@test.group(:initial => {}) end assert_raise MongoArgumentError do @@test.group(:reduce => "foo") end end should "group results using eval form" do assert_equal 1, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 0.5}))[0]["count"] assert_equal 2, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 1}))[0]["count"] assert_equal 4, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 2}))[0]["count"] end should "finalize grouped results" do @finalize = "function(doc) {doc.f = doc.count + 200; }" assert_equal 202, @@test.group(:initial => @initial, :reduce => Code.new(@reduce_function, {"inc_value" => 1}), :finalize => @finalize)[0]["f"] end end context "Grouping with key" do setup do @@test.remove @@test.save("a" => 1, "pop" => 100) @@test.save("a" => 1, "pop" => 100) @@test.save("a" => 2, "pop" => 100) @@test.save("a" => 2, "pop" => 100) @initial = {"count" => 0, "foo" => 1} @reduce_function = "function (obj, prev) { prev.count += obj.pop; }" end should "group" do result = @@test.group(:key => :a, :initial => @initial, :reduce => @reduce_function) assert result.all? 
{ |r| r['count'] == 200 } end end context "Grouping with a key function" do setup do @@test.remove @@test.save("a" => 1) @@test.save("a" => 2) @@test.save("a" => 3) @@test.save("a" => 4) @@test.save("a" => 5) @initial = {"count" => 0} @keyf = "function (doc) { if(doc.a % 2 == 0) { return {even: true}; } else {return {odd: true}} };" @reduce = "function (obj, prev) { prev.count += 1; }" end should "group results" do results = @@test.group(:keyf => @keyf, :initial => @initial, :reduce => @reduce).sort {|a, b| a['count'] <=> b['count']} assert results[0]['even'] && results[0]['count'] == 2.0 assert results[1]['odd'] && results[1]['count'] == 3.0 end should "group filtered results" do results = @@test.group(:keyf => @keyf, :cond => {:a => {'$ne' => 2}}, :initial => @initial, :reduce => @reduce).sort {|a, b| a['count'] <=> b['count']} assert results[0]['even'] && results[0]['count'] == 1.0 assert results[1]['odd'] && results[1]['count'] == 3.0 end end context "A collection with two records" do setup do @collection = @@db.collection('test-collection') @collection.remove @collection.insert({:name => "Jones"}) @collection.insert({:name => "Smith"}) end should "have two records" do assert_equal 2, @collection.size end should "remove the two records" do @collection.remove() assert_equal 0, @collection.size end should "remove all records if an empty document is specified" do @collection.remove({}) assert_equal 0, @collection.find.count end should "remove only matching records" do @collection.remove({:name => "Jones"}) assert_equal 1, @collection.size end end context "Drop index " do setup do @@db.drop_collection('test-collection') @collection = @@db.collection('test-collection') end should "drop an index" do @collection.create_index([['a', Mongo::ASCENDING]]) assert @collection.index_information['a_1'] @collection.drop_index([['a', Mongo::ASCENDING]]) assert_nil @collection.index_information['a_1'] end should "drop an index which was given a specific name" do 
@collection.create_index([['a', Mongo::DESCENDING]], {:name => 'i_will_not_fear'}) assert @collection.index_information['i_will_not_fear'] @collection.drop_index([['a', Mongo::DESCENDING]]) assert_nil @collection.index_information['i_will_not_fear'] end should "drop a composite index" do @collection.create_index([['a', Mongo::DESCENDING], ['b', Mongo::ASCENDING]]) assert @collection.index_information['a_-1_b_1'] @collection.drop_index([['a', Mongo::DESCENDING], ['b', Mongo::ASCENDING]]) assert_nil @collection.index_information['a_-1_b_1'] end should "drop an index with symbols" do @collection.create_index([['a', Mongo::DESCENDING], [:b, Mongo::ASCENDING]]) assert @collection.index_information['a_-1_b_1'] @collection.drop_index([['a', Mongo::DESCENDING], [:b, Mongo::ASCENDING]]) assert_nil @collection.index_information['a_-1_b_1'] end end context "Creating indexes " do setup do @@db.drop_collection('geo') @@db.drop_collection('test-collection') @collection = @@db.collection('test-collection') @geo = @@db.collection('geo') end should "create index using symbols" do @collection.create_index :foo, :name => :bar @geo.create_index :goo, :name => :baz assert @collection.index_information['bar'] @collection.drop_index :bar assert_nil @collection.index_information['bar'] assert @geo.index_information['baz'] @geo.drop_index(:baz) assert_nil @geo.index_information['baz'] end #should "create a text index" do # @geo.save({'title' => "some text"}) # @geo.create_index([['title', Mongo::TEXT]]) # assert @geo.index_information['title_text'] #end should "create a hashed index" do @geo.save({'a' => 1}) @geo.create_index([['a', Mongo::HASHED]]) assert @geo.index_information['a_hashed'] end should "create a geospatial index" do @geo.save({'loc' => [-100, 100]}) @geo.create_index([['loc', Mongo::GEO2D]]) assert @geo.index_information['loc_2d'] end should "create a geoHaystack index" do @geo.save({ "_id" => 100, "pos" => { "long" => 126.9, "lat" => 35.2 }, "type" => "restaurant"}) 
@geo.create_index([['pos', Mongo::GEOHAYSTACK], ['type', Mongo::ASCENDING]], :bucket_size => 1) assert @geo.index_information['pos_geoHaystack_type_1'] end should "create a geo 2dsphere index" do @collection.insert({"coordinates" => [ 5 , 5 ], "type" => "Point"}) @geo.create_index([['coordinates', Mongo::GEO2DSPHERE]]) assert @geo.index_information['coordinates_2dsphere'] end should "create a unique index" do @collection.create_index([['a', Mongo::ASCENDING]], :unique => true) assert @collection.index_information['a_1']['unique'] == true end should "drop duplicates" do @collection.insert({:a => 1}) @collection.insert({:a => 1}) assert_equal 2, @collection.find({:a => 1}).count @collection.create_index([['a', Mongo::ASCENDING]], :unique => true, :dropDups => true) assert_equal 1, @collection.find({:a => 1}).count end should "drop duplicates with ruby-like drop_dups key" do @collection.insert({:a => 1}) @collection.insert({:a => 1}) assert_equal 2, @collection.find({:a => 1}).count @collection.create_index([['a', Mongo::ASCENDING]], :unique => true, :drop_dups => true) assert_equal 1, @collection.find({:a => 1}).count end should "drop duplicates with ensure_index and drop_dups key" do @collection.insert({:a => 1}) @collection.insert({:a => 1}) assert_equal 2, @collection.find({:a => 1}).count @collection.ensure_index([['a', Mongo::ASCENDING]], :unique => true, :drop_dups => true) assert_equal 1, @collection.find({:a => 1}).count end should "create an index in the background" do if @@version > '1.3.1' @collection.create_index([['b', Mongo::ASCENDING]], :background => true) assert @collection.index_information['b_1']['background'] == true else assert true end end should "require an array of arrays" do assert_raise MongoArgumentError do @collection.create_index(['c', Mongo::ASCENDING]) end end should "enforce proper index types" do assert_raise MongoArgumentError do @collection.create_index([['c', 'blah']]) end end should "raise an error if index name is greater than 
128" do assert_raise Mongo::OperationFailure do @collection.create_index([['a' * 25, 1], ['b' * 25, 1], ['c' * 25, 1], ['d' * 25, 1], ['e' * 25, 1]]) end end should "allow for an alternate name to be specified" do @collection.create_index([['a' * 25, 1], ['b' * 25, 1], ['c' * 25, 1], ['d' * 25, 1], ['e' * 25, 1]], :name => 'foo_index') assert @collection.index_information['foo_index'] end should "generate indexes in the proper order" do key = BSON::OrderedHash['b', 1, 'a', 1] if @@version < '2.5.5' @collection.expects(:send_write) do |type, selector, documents, check_keys, opts, collection_name| assert_equal key, selector[:key] end else @collection.db.expects(:command) do |selector| assert_equal key, selector[:indexes].first[:key] end end @collection.create_index([['b', 1], ['a', 1]]) end should "allow creation of multiple indexes" do assert @collection.create_index([['a', 1]]) assert @collection.create_index([['a', 1]]) end context "with an index created" do setup do @collection.create_index([['b', 1], ['a', 1]]) end should "return properly ordered index information" do assert @collection.index_information['b_1_a_1'] end end end context "Capped collections" do setup do @@db.drop_collection('log') @capped = @@db.create_collection('log', :capped => true, :size => LIMITED_MAX_BSON_SIZE) 10.times { |n| @capped.insert({:n => n}) } end should "find using a standard cursor" do cursor = @capped.find 10.times do assert cursor.next_document end assert_nil cursor.next_document @capped.insert({:n => 100}) assert_nil cursor.next_document end should "fail tailable cursor on a non-capped collection" do col = @@db['regular-collection'] col.insert({:a => 1000}) tail = Cursor.new(col, :tailable => true, :order => [['$natural', 1]]) assert_raise OperationFailure do tail.next_document end end should "find using a tailable cursor" do tail = Cursor.new(@capped, :tailable => true, :order => [['$natural', 1]]) 10.times do assert tail.next_document end assert_nil tail.next_document 
@capped.insert({:n => 100}) assert tail.next_document end end end

ruby-mongo-1.10.0/test/functional/collection_writer_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' module Mongo class Collection public :batch_write end class CollectionWriter public :sort_by_first_sym, :ordered_group_by_first end end class CollectionWriterTest < Test::Unit::TestCase DATABASE_NAME = 'ruby_test_collection_writer' COLLECTION_NAME = 'test' def default_setup @client = MongoClient.new @db = @client[DATABASE_NAME] @collection = @db[COLLECTION_NAME] @collection.drop end context "Bulk API Execute" do setup do default_setup end should "sort_by_first_sym for grouping unordered ops" do pairs = [ [:insert, {:n => 0}], [:update, {:n => 1}], [:update, {:n => 2}], [:delete, {:n => 3}], [:insert, {:n => 5}], [:insert, {:n => 6}], [:insert, {:n => 7}], [:update, {:n => 8}], [:delete, {:n => 9}], [:delete, {:n => 10}] ] result = @collection.command_writer.sort_by_first_sym(pairs) expected = [ :delete, :delete, :delete, :insert, :insert, :insert, :insert, :update, :update, :update ] assert_equal expected, result.collect{|first, rest| first} end should "calculate ordered_group_by_first" do pairs = [ [:insert, {:n => 0}], [:update, {:n => 1}], [:update, {:n => 2}], [:delete, {:n => 3}], [:insert, {:n => 5}], [:insert, {:n => 6}], [:insert, {:n => 7}], [:update, {:n => 8}], 
[:delete, {:n => 9}], [:delete, {:n => 10}] ] result = @collection.command_writer.ordered_group_by_first(pairs) expected = [ [:insert, [{:n => 0}]], [:update, [{:n => 1}, {:n => 2}]], [:delete, [{:n => 3}]], [:insert, [{:n => 5}, {:n => 6}, {:n => 7}]], [:update, [{:n => 8}]], [:delete, [{:n => 9}, {:n => 10}]] ] assert_equal expected, result end end end

ruby-mongo-1.10.0/test/functional/conversions_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
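The `ordered_group_by_first` test above exercises the batching rule the bulk write API relies on: consecutive operations of the same type are collapsed into one batch, while overall order is preserved. A minimal pure-Ruby sketch of that grouping (the method name `ordered_group_by_first_sketch` is hypothetical, not the driver's `CollectionWriter` implementation):

```ruby
# Group consecutive [op_type, document] pairs into [op_type, [documents]]
# batches, preserving the original order of operations.
def ordered_group_by_first_sketch(pairs)
  pairs.inject([]) do |groups, (op, doc)|
    if groups.last && groups.last.first == op
      groups.last.last << doc   # same op type as the previous pair: extend the batch
    else
      groups << [op, [doc]]     # op type changed: start a new batch
    end
    groups
  end
end

pairs = [[:insert, {:n => 0}], [:update, {:n => 1}], [:update, {:n => 2}],
         [:delete, {:n => 3}], [:insert, {:n => 5}]]
ordered_group_by_first_sketch(pairs)
# => [[:insert, [{:n => 0}]], [:update, [{:n => 1}, {:n => 2}]],
#     [:delete, [{:n => 3}]], [:insert, [{:n => 5}]]]
```

This is why an ordered bulk run of insert/update/update/delete issues three wire commands rather than four, matching the `expected` value asserted in the test.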
require 'test_helper' class ConversionsTest < Test::Unit::TestCase include Mongo::Conversions def test_array_as_sort_parameters_with_array_of_key_and_value params = array_as_sort_parameters(["field1", "asc"]) assert_equal({"field1" => 1}, params) end def test_array_as_sort_parameters_with_array_of_string_and_values params = array_as_sort_parameters([["field1", :asc], ["field2", :desc]]) assert_equal({ "field1" => 1, "field2" => -1 }, params) end def test_array_as_sort_parameters_with_array_of_key_and_hash params = array_as_sort_parameters(["score", {"$meta" => "textScore"}]) assert_equal({"score" => {"$meta" => "textScore"}}, params) end def test_array_as_sort_parameters_with_array_of_key_and_hashes params = array_as_sort_parameters([["field1", :asc],["score", {"$meta" => "textScore"}]]) assert_equal({"field1" => 1, "score" => {"$meta" => "textScore"}}, params) end def test_hash_as_sort_parameters_with_string sort = BSON::OrderedHash["field", "asc"] params = hash_as_sort_parameters(sort) assert_equal({"field" => 1}, params) end def test_hash_as_sort_parameters_with_hash sort = BSON::OrderedHash["score", {"$meta" => "textScore"}] params = hash_as_sort_parameters(sort) assert_equal({"score" => {"$meta" => "textScore"}}, params) end def test_hash_as_sort_parameters_with_hash_and_string sort = BSON::OrderedHash["score", {"$meta" => "textScore"}, "field", "asc"] params = hash_as_sort_parameters(sort) assert_equal({ "score" => {"$meta" => "textScore"}, "field" => 1 }, params) end def test_string_as_sort_parameters_with_string params = string_as_sort_parameters("field") assert_equal({ "field" => 1 }, params) end def test_string_as_sort_parameters_with_empty_string params = string_as_sort_parameters("") assert_equal({}, params) end def test_symbol_as_sort_parameters params = string_as_sort_parameters(:field) assert_equal({ "field" => 1 }, params) end def test_sort_value_when_value_is_one assert_equal 1, sort_value(1) end def test_sort_value_when_value_is_one_as_a_string 
assert_equal 1, sort_value("1") end def test_sort_value_when_value_is_negative_one assert_equal(-1, sort_value(-1)) end def test_sort_value_when_value_is_negative_one_as_a_string assert_equal(-1, sort_value("-1")) end def test_sort_value_when_value_is_ascending assert_equal 1, sort_value("ascending") end def test_sort_value_when_value_is_asc assert_equal 1, sort_value("asc") end def test_sort_value_when_value_is_uppercase_ascending assert_equal 1, sort_value("ASCENDING") end def test_sort_value_when_value_is_uppercase_asc assert_equal 1, sort_value("ASC") end def test_sort_value_when_value_is_symbol_ascending assert_equal 1, sort_value(:ascending) end def test_sort_value_when_value_is_symbol_asc assert_equal 1, sort_value(:asc) end def test_sort_value_when_value_is_symbol_uppercase_ascending assert_equal 1, sort_value(:ASCENDING) end def test_sort_value_when_value_is_symbol_uppercase_asc assert_equal 1, sort_value(:ASC) end def test_sort_value_when_value_is_descending assert_equal(-1, sort_value("descending")) end def test_sort_value_when_value_is_desc assert_equal(-1, sort_value("desc")) end def test_sort_value_when_value_is_uppercase_descending assert_equal(-1, sort_value("DESCENDING")) end def test_sort_value_when_value_is_uppercase_desc assert_equal(-1, sort_value("DESC")) end def test_sort_value_when_value_is_symbol_descending assert_equal(-1, sort_value(:descending)) end def test_sort_value_when_value_is_symbol_desc assert_equal(-1, sort_value(:desc)) end def test_sort_value_when_value_is_uppercase_symbol_descending assert_equal(-1, sort_value(:DESCENDING)) end def test_sort_value_when_value_is_uppercase_symbol_desc assert_equal(-1, sort_value(:DESC)) end def test_sort_value_when_value_is_hash assert_equal({"$meta" => "textScore"}, sort_value("$meta" => "textScore")) end def test_sort_value_when_value_is_invalid assert_raise Mongo::InvalidSortValueError do sort_value(2) end end end 
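The `sort_value` cases above all reduce to one normalization rule: integers, their string forms, and any casing of asc/desc map to `1` or `-1`, `$meta` hashes pass through untouched, and anything else is rejected. A self-contained sketch of that rule (`sort_value_sketch` is a hypothetical stand-in for `Mongo::Conversions#sort_value`, and it raises `ArgumentError` where the driver raises `Mongo::InvalidSortValueError`):

```ruby
# Normalize a user-supplied sort direction to the BSON form MongoDB expects.
def sort_value_sketch(value)
  return value if value.is_a?(Hash)        # e.g. {"$meta" => "textScore"} passes through
  case value.to_s.downcase                 # handles 1, "1", :ASC, "Descending", etc.
  when '1', 'ascending', 'asc'    then 1
  when '-1', 'descending', 'desc' then -1
  else raise ArgumentError, "invalid sort value: #{value.inspect}"
  end
end
```

Calling `to_s.downcase` first is what lets one `case` cover every spelling the tests enumerate, instead of branching separately on Integer, String, and Symbol inputs.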
ruby-mongo-1.10.0/test/functional/cursor_fail_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' require 'logger' class CursorFailTest < Test::Unit::TestCase include Mongo @@connection = standard_connection @@db = @@connection.db(TEST_DB) @@coll = @@db.collection('test') @@version = @@connection.server_version def setup @@coll.remove({}) @@coll.insert({'a' => 1}) # collection not created until it's used @@coll_full_name = "#{TEST_DB}.test" end def test_refill_via_get_more_alt_coll coll = @@db.collection('test-alt-coll') coll.remove coll.insert('a' => 1) # collection not created until it's used assert_equal 1, coll.count 1000.times { |i| assert_equal 1 + i, coll.count coll.insert('a' => i) } assert_equal 1001, coll.count count = 0 coll.find.each { |obj| count += obj['a'] } assert_equal 1001, coll.count # do the same thing again for debugging assert_equal 1001, coll.count count2 = 0 coll.find.each { |obj| count2 += obj['a'] } assert_equal 1001, coll.count assert_equal count, count2 assert_equal 499501, count end end

ruby-mongo-1.10.0/test/functional/cursor_message_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' require 'logger' class CursorMessageTest < Test::Unit::TestCase include Mongo @@connection = standard_connection @@db = @@connection.db(TEST_DB) @@coll = @@db.collection('test') @@version = @@connection.server_version def setup @@coll.remove @@coll.insert('a' => 1) # collection not created until it's used @@coll_full_name = "#{TEST_DB}.test" end def test_valid_batch_sizes assert_raise ArgumentError do @@coll.find({}, :batch_size => 1, :limit => 5) end assert_raise ArgumentError do @@coll.find({}, :batch_size => -1, :limit => 5) end assert @@coll.find({}, :batch_size => 0, :limit => 5) end def test_batch_size @@coll.remove 200.times do |n| @@coll.insert({:a => n}) end list = @@coll.find({}, :batch_size => 2, :limit => 6).to_a assert_equal 6, list.length list = @@coll.find({}, :batch_size => 100, :limit => 101).to_a assert_equal 101, list.length end end

ruby-mongo-1.10.0/test/functional/cursor_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'
require 'logger'

class CursorTest < Test::Unit::TestCase
  include Mongo
  include Mongo::Constants

  @@connection = standard_connection
  @@db = @@connection.db(TEST_DB)
  @@coll = @@db.collection('test')
  @@version = @@connection.server_version

  def setup
    @@coll.remove
    @@coll.insert('a' => 1) # collection not created until it's used
    @@coll_full_name = "#{TEST_DB}.test"
  end

  def test_alive
    batch = []
    5000.times do |n|
      batch << {:a => n}
    end
    @@coll.insert(batch)

    cursor = @@coll.find
    assert !cursor.alive?
    cursor.next
    assert cursor.alive?
    cursor.close
    assert !cursor.alive?
    @@coll.remove
  end

  def test_add_and_remove_options
    c = @@coll.find
    assert_equal 0, c.options & OP_QUERY_EXHAUST
    c.add_option(OP_QUERY_EXHAUST)
    assert_equal OP_QUERY_EXHAUST, c.options & OP_QUERY_EXHAUST
    c.remove_option(OP_QUERY_EXHAUST)
    assert_equal 0, c.options & OP_QUERY_EXHAUST

    c.next
    assert_raise Mongo::InvalidOperation do
      c.add_option(OP_QUERY_EXHAUST)
    end
    assert_raise Mongo::InvalidOperation do
      c.remove_option(OP_QUERY_EXHAUST)
    end
  end

  def test_exhaust
    if @@version >= "2.0"
      @@coll.remove
      data = "1" * 10_000
      5000.times do |n|
        @@coll.insert({:n => n, :data => data})
      end

      c = Cursor.new(@@coll)
      c.add_option(OP_QUERY_EXHAUST)
      assert_equal @@coll.count, c.to_a.size
      assert c.closed?

      c = Cursor.new(@@coll)
      c.add_option(OP_QUERY_EXHAUST)
      4999.times do
        c.next
      end
      assert c.has_next?
      assert c.next
      assert !c.has_next?
      assert c.closed?

      @@coll.remove
    end
  end

  def test_compile_regex_get_more
    return unless defined?(BSON::BSON_RUBY) && BSON::BSON_CODER == BSON::BSON_RUBY
    @@coll.remove
    n_docs = 3
    n_docs.times { |n| @@coll.insert({ 'n' => /.*/ }) }
    cursor = @@coll.find({}, :batch_size => (n_docs - 1), :compile_regex => false)
    cursor.expects(:send_get_more)
    cursor.to_a.each do |doc|
      assert_kind_of BSON::Regex, doc['n']
    end
  end

  def test_max_time_ms_error
    cursor = @@coll.find
    cursor.stubs(:send_initial_query).returns(true)
    cursor.instance_variable_set(:@cache, [{ '$err' => 'operation exceeded time limit', 'code' => 50 }])
    assert_raise ExecutionTimeout do
      cursor.to_a
    end
  end

  def test_max_time_ms
    with_forced_timeout(@@connection) do
      assert_raise ExecutionTimeout do
        cursor = @@coll.find.max_time_ms(100)
        cursor.to_a
      end
    end
  end

  def test_exhaust_after_limit_error
    c = Cursor.new(@@coll, :limit => 17)
    assert_raise MongoArgumentError do
      c.add_option(OP_QUERY_EXHAUST)
    end
    assert_raise MongoArgumentError do
      c.add_option(OP_QUERY_EXHAUST + OP_QUERY_SLAVE_OK)
    end
  end

  def test_limit_after_exhaust_error
    c = Cursor.new(@@coll)
    c.add_option(OP_QUERY_EXHAUST)
    assert_raise MongoArgumentError do
      c.limit(17)
    end
  end

  def test_exhaust_with_mongos
    @@connection.expects(:mongos?).returns(:true)
    c = Cursor.new(@@coll)
    assert_raise MongoArgumentError do
      c.add_option(OP_QUERY_EXHAUST)
    end
  end

  def test_inspect
    selector = {:a => 1}
    cursor = @@coll.find(selector)
    assert_equal "", cursor.inspect
  end

  def test_explain
    cursor = @@coll.find('a' => 1)
    explanation = cursor.explain
    assert_not_nil explanation['cursor']
    assert_kind_of Numeric, explanation['n']
    assert_kind_of Numeric, explanation['millis']
    assert_kind_of Numeric, explanation['nscanned']
  end

  def test_each_with_no_block
    assert_kind_of(Enumerator, @@coll.find().each) if defined? Enumerator
  end

  def test_count
    @@coll.remove

    assert_equal 0, @@coll.find().count()

    10.times do |i|
      @@coll.save("x" => i)
    end

    assert_equal 10, @@coll.find().count()
    assert_kind_of Integer, @@coll.find().count()
    assert_equal 10, @@coll.find({}, :limit => 5).count()
    assert_equal 10, @@coll.find({}, :skip => 5).count()

    assert_equal 5, @@coll.find({}, :limit => 5).count(true)
    assert_equal 5, @@coll.find({}, :skip => 5).count(true)
    assert_equal 2, @@coll.find({}, :skip => 5, :limit => 2).count(true)

    assert_equal 1, @@coll.find({"x" => 1}).count()
    assert_equal 5, @@coll.find({"x" => {"$lt" => 5}}).count()

    a = @@coll.find()
    b = a.count()
    a.each do |doc|
      break
    end
    assert_equal b, a.count()

    assert_equal 0, @@db['acollectionthatdoesn'].count()
  end

  def test_sort
    @@coll.remove
    5.times { |x| @@coll.insert({"age" => x}) }

    assert_kind_of Cursor, @@coll.find().sort(:age, 1)

    assert_equal 0, @@coll.find().sort(:age, 1).next_document["age"]
    assert_equal 4, @@coll.find().sort(:age, -1).next_document["age"]
    assert_equal 0, @@coll.find().sort([["age", :asc]]).next_document["age"]

    assert_kind_of Cursor, @@coll.find().sort([[:age, -1], [:b, 1]])

    assert_equal 4, @@coll.find().sort(:age, 1).sort(:age, -1).next_document["age"]
    assert_equal 0, @@coll.find().sort(:age, -1).sort(:age, 1).next_document["age"]

    assert_equal 4, @@coll.find().sort([:age, :asc]).sort(:age, -1).next_document["age"]
    assert_equal 0, @@coll.find().sort([:age, :desc]).sort(:age, 1).next_document["age"]

    cursor = @@coll.find()
    cursor.next_document
    assert_raise InvalidOperation do
      cursor.sort(["age"])
    end

    assert_raise InvalidSortValueError do
      @@coll.find().sort(:age, 25).next_document
    end

    assert_raise InvalidSortValueError do
      @@coll.find().sort(25).next_document
    end
  end

  def test_sort_date
    @@coll.remove
    5.times { |x| @@coll.insert({"created_at" => Time.utc(2000 + x)}) }

    assert_equal 2000, @@coll.find().sort(:created_at, :asc).next_document["created_at"].year
    assert_equal 2004, @@coll.find().sort(:created_at, :desc).next_document["created_at"].year

    assert_equal 2000, @@coll.find().sort([:created_at, :asc]).next_document["created_at"].year
    assert_equal 2004, @@coll.find().sort([:created_at, :desc]).next_document["created_at"].year

    assert_equal 2000, @@coll.find().sort([[:created_at, :asc]]).next_document["created_at"].year
    assert_equal 2004, @@coll.find().sort([[:created_at, :desc]]).next_document["created_at"].year
  end

  def test_sort_min_max_keys
    @@coll.remove
    @@coll.insert({"n" => 1000000})
    @@coll.insert({"n" => -1000000})
    @@coll.insert({"n" => MaxKey.new})
    @@coll.insert({"n" => MinKey.new})

    results = @@coll.find.sort([:n, :asc]).to_a

    assert_equal MinKey.new, results[0]['n']
    assert_equal(-1000000, results[1]['n'])
    assert_equal 1000000, results[2]['n']
    assert_equal MaxKey.new, results[3]['n']
  end

  def test_id_range_queries
    @@coll.remove

    t1 = Time.now
    t1_id = ObjectId.from_time(t1)
    @@coll.save({:t => 't1'})
    @@coll.save({:t => 't1'})
    @@coll.save({:t => 't1'})

    sleep(1)

    t2 = Time.now
    t2_id = ObjectId.from_time(t2)
    @@coll.save({:t => 't2'})
    @@coll.save({:t => 't2'})
    @@coll.save({:t => 't2'})

    assert_equal 3, @@coll.find({'_id' => {'$gt' => t1_id, '$lt' => t2_id}}).count
    @@coll.find({'_id' => {'$gt' => t2_id}}).each do |doc|
      assert_equal 't2', doc['t']
    end
  end

  def test_limit
    @@coll.remove
    10.times do |i|
      @@coll.save("x" => i)
    end
    assert_equal 10, @@coll.find().count()

    results = @@coll.find().limit(5).to_a
    assert_equal 5, results.length
  end

  def test_timeout_options
    cursor = Cursor.new(@@coll)
    assert_equal true, cursor.timeout

    cursor = @@coll.find
    assert_equal true, cursor.timeout

    cursor = @@coll.find({}, :timeout => nil)
    assert_equal true, cursor.timeout

    cursor = Cursor.new(@@coll, :timeout => false)
    assert_equal false, cursor.timeout

    @@coll.find({}, :timeout => false) do |c|
      assert_equal false, c.timeout
    end
  end

  def test_timeout
    opts = Cursor.new(@@coll).options
    assert_equal 0, opts & Mongo::Constants::OP_QUERY_NO_CURSOR_TIMEOUT

    opts = Cursor.new(@@coll, :timeout => false).options
    assert_equal Mongo::Constants::OP_QUERY_NO_CURSOR_TIMEOUT,
                 opts & Mongo::Constants::OP_QUERY_NO_CURSOR_TIMEOUT
  end

  def test_limit_exceptions
    cursor = @@coll.find()
    cursor.next_document
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.limit(1)
    end

    cursor = @@coll.find()
    cursor.close
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.limit(1)
    end
  end

  def test_skip
    @@coll.remove
    10.times do |i|
      @@coll.save("x" => i)
    end
    assert_equal 10, @@coll.find().count()

    all_results = @@coll.find().to_a
    skip_results = @@coll.find().skip(2).to_a
    assert_equal 10, all_results.length
    assert_equal 8, skip_results.length

    assert_equal all_results.slice(2...10), skip_results
  end

  def test_skip_exceptions
    cursor = @@coll.find()
    cursor.next_document
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.skip(1)
    end

    cursor = @@coll.find()
    cursor.close
    assert_raise InvalidOperation, "Cannot modify the query once it has been run or closed." do
      cursor.skip(1)
    end
  end

  def test_limit_skip_chaining
    @@coll.remove
    10.times do |i|
      @@coll.save("x" => i)
    end

    all_results = @@coll.find().to_a
    limited_skip_results = @@coll.find().limit(5).skip(3).to_a

    assert_equal all_results.slice(3...8), limited_skip_results
  end

  def test_close_no_query_sent
    begin
      cursor = @@coll.find('a' => 1)
      cursor.close
      assert cursor.closed?
    rescue => ex
      fail ex.to_s
    end
  end

  def test_refill_via_get_more
    assert_equal 1, @@coll.count
    1000.times { |i|
      assert_equal 1 + i, @@coll.count
      @@coll.insert('a' => i)
    }

    assert_equal 1001, @@coll.count

    count = 0
    @@coll.find.each { |obj|
      count += obj['a']
    }
    assert_equal 1001, @@coll.count

    # do the same thing again for debugging
    assert_equal 1001, @@coll.count
    count2 = 0
    @@coll.find.each { |obj|
      count2 += obj['a']
    }
    assert_equal 1001, @@coll.count

    assert_equal count, count2
    assert_equal 499501, count
  end

  def test_refill_via_get_more_alt_coll
    coll = @@db.collection('test-alt-coll')
    coll.remove
    coll.insert('a' => 1) # collection not created until it's used
    assert_equal 1, coll.count

    1000.times { |i|
      assert_equal 1 + i, coll.count
      coll.insert('a' => i)
    }

    assert_equal 1001, coll.count

    count = 0
    coll.find.each { |obj|
      count += obj['a']
    }
    assert_equal 1001, coll.count

    # do the same thing again for debugging
    assert_equal 1001, coll.count
    count2 = 0
    coll.find.each { |obj|
      count2 += obj['a']
    }
    assert_equal 1001, coll.count

    assert_equal count, count2
    assert_equal 499501, count
  end

  def test_close_after_query_sent
    begin
      cursor = @@coll.find('a' => 1)
      cursor.next_document
      cursor.close
      assert cursor.closed?
    rescue => ex
      fail ex.to_s
    end
  end

  def test_kill_cursors
    @@coll.drop

    client_cursors = @@db.command("cursorInfo" => 1)["clientCursors_size"]

    10000.times do |i|
      @@coll.insert("i" => i)
    end

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    10.times do |i|
      @@coll.find_one()
    end

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    10.times do |i|
      a = @@coll.find()
      a.next_document
      a.close()
    end

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    a = @@coll.find()
    a.next_document

    assert_not_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    a.close()

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    a = @@coll.find({}, :limit => 10).next_document

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    @@coll.find() do |cursor|
      cursor.next_document
    end

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])

    @@coll.find() { |cursor| cursor.next_document }

    assert_equal(client_cursors, @@db.command("cursorInfo" => 1)["clientCursors_size"])
  end

  def test_count_with_fields
    @@coll.remove
    @@coll.save("x" => 1)

    if @@version < "1.1.3"
      assert_equal(0, @@coll.find({}, :fields => ["a"]).count())
    else
      assert_equal(1, @@coll.find({}, :fields => ["a"]).count())
    end
  end

  def test_has_next
    @@coll.remove
    200.times do |n|
      @@coll.save("x" => n)
    end

    cursor = @@coll.find
    n = 0
    while cursor.has_next?
      assert cursor.next
      n += 1
    end

    assert_equal n, 200
    assert_equal false, cursor.has_next?
  end

  def test_cursor_invalid
    @@coll.remove
    10000.times do |n|
      @@coll.insert({:a => n})
    end

    cursor = @@coll.find({})

    assert_raise_error Mongo::OperationFailure, "CURSOR_NOT_FOUND" do
      9999.times do
        cursor.next_document
        cursor.instance_variable_set(:@cursor_id, 1234567890)
      end
    end
  end

  def test_enumerables
    @@coll.remove
    100.times do |n|
      @@coll.insert({:a => n})
    end

    assert_equal 100, @@coll.find.to_a.length
    assert_equal 100, @@coll.find.to_set.length

    cursor = @@coll.find
    50.times { |n| cursor.next_document }
    assert_equal 50, cursor.to_a.length
  end

  def test_rewind
    @@coll.remove
    100.times do |n|
      @@coll.insert({:a => n})
    end

    cursor = @@coll.find
    cursor.to_a
    assert_equal [], cursor.map { |doc| doc }

    cursor.rewind!
    assert_equal 100, cursor.map { |doc| doc }.length

    cursor.rewind!
    5.times { cursor.next_document }
    cursor.rewind!
    assert_equal 100, cursor.map { |doc| doc }.length
  end

  def test_transformer
    transformer = Proc.new { |doc| doc }
    cursor = Cursor.new(@@coll, :transformer => transformer)
    assert_equal(transformer, cursor.transformer)
  end

  def test_instance_transformation_with_next
    klass = Struct.new(:id, :a)
    transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) }
    cursor = Cursor.new(@@coll, :transformer => transformer)
    instance = cursor.next

    assert_instance_of(klass, instance)
    assert_instance_of(BSON::ObjectId, instance.id)
    assert_equal(1, instance.a)
  end

  def test_instance_transformation_with_each
    klass = Struct.new(:id, :a)
    transformer = Proc.new { |doc| klass.new(doc['_id'], doc['a']) }
    cursor = Cursor.new(@@coll, :transformer => transformer)

    cursor.each do |instance|
      assert_instance_of(klass, instance)
    end
  end
end

# ---- ruby-mongo-1.10.0/test/functional/db_api_test.rb ----

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class DBAPITest < Test::Unit::TestCase
  include Mongo
  include BSON

  @@client = standard_connection
  @@db = @@client.db(TEST_DB)
  @@coll = @@db.collection('test')
  @@version = @@client.server_version

  def setup
    @@coll.remove
    @r1 = {'a' => 1}
    @@coll.insert(@r1) # collection not created until it's used
    @@coll_full_name = "#{TEST_DB}.test"
  end

  def teardown
    @@coll.remove
    @@db.get_last_error
  end

  def test_clear
    assert_equal 1, @@coll.count
    @@coll.remove
    assert_equal 0, @@coll.count
  end

  def test_insert
    assert_kind_of BSON::ObjectId, @@coll.insert('a' => 2)
    assert_kind_of BSON::ObjectId, @@coll.insert('b' => 3)

    assert_equal 3, @@coll.count
    docs = @@coll.find().to_a
    assert_equal 3, docs.length
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }
    assert docs.detect { |row| row['b'] == 3 }

    @@coll << {'b' => 4}
    docs = @@coll.find().to_a
    assert_equal 4, docs.length
    assert docs.detect { |row| row['b'] == 4 }
  end

  def test_save_ordered_hash
    oh = BSON::OrderedHash.new
    oh['a'] = -1
    oh['b'] = 'foo'

    oid = @@coll.save(oh)
    assert_equal 'foo', @@coll.find_one(oid)['b']

    oh = BSON::OrderedHash['a' => 1, 'b' => 'foo']
    oid = @@coll.save(oh)
    assert_equal 'foo', @@coll.find_one(oid)['b']
  end

  def test_insert_multiple
    ids = @@coll.insert([{'a' => 2}, {'b' => 3}])

    ids.each do |i|
      assert_kind_of BSON::ObjectId, i
    end

    assert_equal 3, @@coll.count
    docs = @@coll.find().to_a
    assert_equal 3, docs.length
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }
    assert docs.detect { |row| row['b'] == 3 }
  end

  def test_count_on_nonexisting
    @@db.drop_collection('foo')
    assert_equal 0, @@db.collection('foo').count()
  end

  def test_find_simple
    @r2 = @@coll.insert('a' => 2)
    @r3 = @@coll.insert('b' => 3)

    # Check sizes
    docs = @@coll.find().to_a
    assert_equal 3, docs.size
    assert_equal 3, @@coll.count

    # Find by other value
    docs = @@coll.find('a' => @r1['a']).to_a
    assert_equal 1, docs.size
    doc = docs.first

    # Can't compare _id values because at insert, an _id was added to @r1 by
    # the database but we don't know what it is without re-reading the record
    # (which is what we are doing right now).
    # assert_equal doc['_id'], @r1['_id']
    assert_equal doc['a'], @r1['a']
  end

  def test_find_advanced
    @@coll.insert('a' => 2)
    @@coll.insert('b' => 3)

    # Find by advanced query (less than)
    docs = @@coll.find('a' => { '$lt' => 10 }).to_a
    assert_equal 2, docs.size
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (greater than)
    docs = @@coll.find('a' => { '$gt' => 1 }).to_a
    assert_equal 1, docs.size
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (less than or equal to)
    docs = @@coll.find('a' => { '$lte' => 1 }).to_a
    assert_equal 1, docs.size
    assert docs.detect { |row| row['a'] == 1 }

    # Find by advanced query (greater than or equal to)
    docs = @@coll.find('a' => { '$gte' => 1 }).to_a
    assert_equal 2, docs.size
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (between)
    docs = @@coll.find('a' => { '$gt' => 1, '$lt' => 3 }).to_a
    assert_equal 1, docs.size
    assert docs.detect { |row| row['a'] == 2 }

    # Find by advanced query (in clause)
    docs = @@coll.find('a' => {'$in' => [1, 2]}).to_a
    assert_equal 2, docs.size
    assert docs.detect { |row| row['a'] == 1 }
    assert docs.detect { |row| row['a'] == 2 }
  end

  def test_find_sorting
    @@coll.remove
    @@coll.insert('a' => 1, 'b' => 2)
    @@coll.insert('a' => 2, 'b' => 1)
    @@coll.insert('a' => 3, 'b' => 2)
    @@coll.insert('a' => 4, 'b' => 1)

    # Sorting (ascending)
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => [['a', 1]]).to_a
    assert_equal 4, docs.size
    assert_equal 1, docs[0]['a']
    assert_equal 2, docs[1]['a']
    assert_equal 3, docs[2]['a']
    assert_equal 4, docs[3]['a']

    # Sorting (descending)
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => [['a', -1]]).to_a
    assert_equal 4, docs.size
    assert_equal 4, docs[0]['a']
    assert_equal 3, docs[1]['a']
    assert_equal 2, docs[2]['a']
    assert_equal 1, docs[3]['a']

    # Sorting using array of names; assumes ascending order.
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => ['a']).to_a
    assert_equal 4, docs.size
    assert_equal 1, docs[0]['a']
    assert_equal 2, docs[1]['a']
    assert_equal 3, docs[2]['a']
    assert_equal 4, docs[3]['a']

    # Sorting using single name; assumes ascending order.
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => 'a').to_a
    assert_equal 4, docs.size
    assert_equal 1, docs[0]['a']
    assert_equal 2, docs[1]['a']
    assert_equal 3, docs[2]['a']
    assert_equal 4, docs[3]['a']

    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => [['b', 'asc'], ['a', 'asc']]).to_a
    assert_equal 4, docs.size
    assert_equal 2, docs[0]['a']
    assert_equal 4, docs[1]['a']
    assert_equal 1, docs[2]['a']
    assert_equal 3, docs[3]['a']

    # Sorting using empty array; no order guarantee should not blow up.
    docs = @@coll.find({'a' => { '$lt' => 10 }}, :sort => []).to_a
    assert_equal 4, docs.size
  end

  def test_find_sorting_with_hash
    # Sorting using ordered hash. You can use an unordered one, but then the
    # order of the keys won't be guaranteed thus your sort won't make sense.
    @@coll.remove
    @@coll.insert('a' => 1, 'b' => 2)
    @@coll.insert('a' => 2, 'b' => 1)
    @@coll.insert('a' => 3, 'b' => 2)
    @@coll.insert('a' => 4, 'b' => 1)

    oh = BSON::OrderedHash.new
    oh['a'] = -1

    # Sort as a method
    docs = @@coll.find.sort(oh).to_a
    assert_equal 4, docs.size
    assert_equal 4, docs[0]['a']
    assert_equal 3, docs[1]['a']
    assert_equal 2, docs[2]['a']
    assert_equal 1, docs[3]['a']

    # Sort as an option
    docs = @@coll.find({}, :sort => oh).to_a
    assert_equal 4, docs.size
    assert_equal 4, docs[0]['a']
    assert_equal 3, docs[1]['a']
    assert_equal 2, docs[2]['a']
    assert_equal 1, docs[3]['a']

    if RUBY_VERSION > '1.9'
      docs = @@coll.find({}, :sort => {:a => -1}).to_a
      assert_equal 4, docs.size
      assert_equal 4, docs[0]['a']
      assert_equal 3, docs[1]['a']
      assert_equal 2, docs[2]['a']
      assert_equal 1, docs[3]['a']

      docs = @@coll.find.sort(:a => -1).to_a
      assert_equal 4, docs.size
      assert_equal 4, docs[0]['a']
      assert_equal 3, docs[1]['a']
      assert_equal 2, docs[2]['a']
      assert_equal 1, docs[3]['a']

      docs = @@coll.find.sort(:b => -1, :a => 1).to_a
      assert_equal 4, docs.size
      assert_equal 1, docs[0]['a']
      assert_equal 3, docs[1]['a']
      assert_equal 2, docs[2]['a']
      assert_equal 4, docs[3]['a']
    else
      # Sort as an option
      assert_raise InvalidSortValueError do
        @@coll.find({}, :sort => {:a => -1}).to_a
      end

      # Sort as a method
      assert_raise InvalidSortValueError do
        @@coll.find.sort(:a => -1).to_a
      end
    end
  end

  def test_find_limits
    @@coll.insert('b' => 2)
    @@coll.insert('c' => 3)
    @@coll.insert('d' => 4)

    docs = @@coll.find({}, :limit => 1).to_a
    assert_equal 1, docs.size
    docs = @@coll.find({}, :limit => 2).to_a
    assert_equal 2, docs.size
    docs = @@coll.find({}, :limit => 3).to_a
    assert_equal 3, docs.size
    docs = @@coll.find({}, :limit => 4).to_a
    assert_equal 4, docs.size
    docs = @@coll.find({}).to_a
    assert_equal 4, docs.size
    docs = @@coll.find({}, :limit => 99).to_a
    assert_equal 4, docs.size
  end

  def test_find_one_no_records
    @@coll.remove
    x = @@coll.find_one('a' => 1)
    assert_nil x
  end

  def test_drop_collection
    assert @@db.drop_collection(@@coll.name), "drop of collection #{@@coll.name} failed"
    assert !@@db.collection_names.include?(@@coll.name)
  end

  def test_other_drop
    assert @@db.collection_names.include?(@@coll.name)
    @@coll.drop
    assert !@@db.collection_names.include?(@@coll.name)
  end

  def test_collection_names
    names = @@db.collection_names
    assert names.length >= 1
    assert names.include?(@@coll.name)

    coll2 = @@db.collection('test2')
    coll2.insert('a' => 1) # collection not created until it's used
    names = @@db.collection_names
    assert names.length >= 2
    assert names.include?(@@coll.name)
    assert names.include?('test2')
  ensure
    @@db.drop_collection('test2')
  end

  def test_collections_info
    cursor = @@db.collections_info
    rows = cursor.to_a
    assert rows.length >= 1
    row = rows.detect { |r| r['name'] == @@coll_full_name }
    assert_not_nil row
  end

  def test_collection_options
    @@db.drop_collection('foobar')
    @@db.strict = true

    begin
      coll = @@db.create_collection('foobar', :capped => true, :size => 4096)
      options = coll.options
      assert_equal 'foobar', options['create'] if @@client.server_version < '2.5.5'
      assert_equal true, options['capped']
      assert_equal 4096, options['size']
    rescue => ex
      @@db.drop_collection('foobar')
      fail "did not expect exception \"#{ex.inspect}\""
    ensure
      @@db.strict = false
    end
  end

  def test_collection_options_are_passed_to_the_existing_ones
    @@db.drop_collection('foobar')

    @@db.create_collection('foobar')

    coll = @@db.create_collection('foobar')
    assert_equal true, Mongo::WriteConcern.gle?(coll.write_concern)
  end

  def test_index_information
    assert_equal @@coll.index_information.length, 1

    name = @@coll.create_index('a')
    info = @@db.index_information(@@coll.name)
    assert_equal name, "a_1"
    assert_equal @@coll.index_information, info
    assert_equal 2, info.length

    assert info.has_key?(name)
    assert_equal info[name]["key"], {"a" => 1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_index_create_with_symbol
    assert_equal @@coll.index_information.length, 1

    name = @@coll.create_index([['a', 1]])
    info = @@db.index_information(@@coll.name)
    assert_equal name, "a_1"
    assert_equal @@coll.index_information, info
    assert_equal 2, info.length

    assert info.has_key?(name)
    assert_equal info[name]['key'], {"a" => 1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_multiple_index_cols
    name = @@coll.create_index([['a', DESCENDING], ['b', ASCENDING], ['c', DESCENDING]])
    info = @@db.index_information(@@coll.name)
    assert_equal 2, info.length

    assert_equal name, 'a_-1_b_1_c_-1'
    assert info.has_key?(name)
    assert_equal info[name]['key'], {"a" => -1, "b" => 1, "c" => -1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_multiple_index_cols_with_symbols
    name = @@coll.create_index([[:a, DESCENDING], [:b, ASCENDING], [:c, DESCENDING]])
    info = @@db.index_information(@@coll.name)
    assert_equal 2, info.length

    assert_equal name, 'a_-1_b_1_c_-1'
    assert info.has_key?(name)
    assert_equal info[name]['key'], {"a" => -1, "b" => 1, "c" => -1}
  ensure
    @@db.drop_index(@@coll.name, name)
  end

  def test_unique_index
    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.create_index("hello")
    test.insert("hello" => "world")
    test.insert("hello" => "mike")
    test.insert("hello" => "world")
    assert !@@db.error?

    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.create_index("hello", :unique => true)
    test.insert("hello" => "world")
    test.insert("hello" => "mike")
    assert_raise OperationFailure do
      test.insert("hello" => "world")
    end
  end

  def test_index_on_subfield
    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.insert("hello" => {"a" => 4, "b" => 5})
    test.insert("hello" => {"a" => 7, "b" => 2})
    test.insert("hello" => {"a" => 4, "b" => 10})
    assert !@@db.error?

    @@db.drop_collection("blah")
    test = @@db.collection("blah")
    test.create_index("hello.a", :unique => true)
    test.insert("hello" => {"a" => 4, "b" => 5})
    test.insert("hello" => {"a" => 7, "b" => 2})
    assert_raise OperationFailure do
      test.insert("hello" => {"a" => 4, "b" => 10})
    end
  end

  def test_array
    @@coll.remove({'$atomic' => true})
    @@coll.insert({'b' => [1, 2, 3]})
    @@coll.insert({'b' => [1, 2, 3]})
    rows = @@coll.find({}, {:fields => ['b']}).to_a
    assert_equal 2, rows.length
    assert_equal [1, 2, 3], rows[1]['b']
  end

  def test_regex
    regex = /foobar/i
    @@coll << {'b' => regex}
    rows = @@coll.find({}, {:fields => ['b']}).to_a
    if @@version < "1.1.3"
      assert_equal 1, rows.length
      assert_equal regex, rows[0]['b']
    else
      assert_equal 2, rows.length
      assert_equal regex, rows[1]['b']
    end
  end

  def test_regex_multi_line
    if @@version >= "1.9.1"
      # NOTE: the original heredoc body was lost in extraction; this
      # placeholder preserves the property the assertion below relies on --
      # a multi-line string matched by /n.*x/m.
      doc = <<HERE
the brown
fox
HERE
      @@coll.save({:doc => doc})
      assert @@coll.find_one({:doc => /n.*x/m})
      @@coll.remove
    end
  end

  def test_non_oid_id
    # Note: can't use Time.new because that will include fractional seconds,
    # which Mongo does not store.
    t = Time.at(1234567890)
    @@coll << {'_id' => t}
    rows = @@coll.find({'_id' => t}).to_a
    assert_equal 1, rows.length
    assert_equal t, rows[0]['_id']
  end

  def test_strict
    assert !@@db.strict?
    @@db.strict = true
    assert @@db.strict?
  ensure
    @@db.strict = false
  end

  def test_strict_access_collection
    @@db.strict = true
    begin
      @@db.collection('does-not-exist')
      fail "expected exception"
    rescue => ex
      assert_equal Mongo::MongoDBError, ex.class
      assert_equal "Collection 'does-not-exist' doesn't exist. (strict=true)", ex.to_s
    ensure
      @@db.strict = false
      @@db.drop_collection('does-not-exist')
    end
  end

  def test_strict_create_collection
    @@db.drop_collection('foobar')
    @@db.strict = true

    begin
      assert @@db.create_collection('foobar')
    rescue => ex
      fail "did not expect exception \"#{ex}\""
    end

    # Now the collection exists. This time we should see an exception.
    assert_raise Mongo::MongoDBError do
      @@db.create_collection('foobar')
    end
    @@db.strict = false
    @@db.drop_collection('foobar')

    # Now we're not in strict mode - should succeed
    @@db.create_collection('foobar')
    @@db.create_collection('foobar')
    @@db.drop_collection('foobar')
  end

  def test_where
    @@coll.insert('a' => 2)
    @@coll.insert('a' => 3)

    assert_equal 3, @@coll.count
    assert_equal 1, @@coll.find('$where' => BSON::Code.new('this.a > 2')).count()
    assert_equal 2, @@coll.find('$where' => BSON::Code.new('this.a > i', {'i' => 1})).count()
  end

  def test_eval
    assert_equal 3, @@db.eval('function (x) {return x;}', 3)

    assert_equal nil, @@db.eval("function (x) {db.test_eval.save({y:x});}", 5)
    assert_equal 5, @@db.collection('test_eval').find_one['y']

    assert_equal 5, @@db.eval("function (x, y) {return x + y;}", 2, 3)
    assert_equal 5, @@db.eval("function () {return 5;}")
    assert_equal 5, @@db.eval("2 + 3;")

    assert_equal 5, @@db.eval(Code.new("2 + 3;"))
    assert_equal 2, @@db.eval(Code.new("return i;", {"i" => 2}))
    assert_equal 5, @@db.eval(Code.new("i + 3;", {"i" => 2}))

    assert_raise OperationFailure do
      @@db.eval("5 ++ 5;")
    end
  end

  def test_hint
    name = @@coll.create_index('a')
    begin
      assert_nil @@coll.hint
      assert_equal 1, @@coll.find({'a' => 1}, :hint => 'a').to_a.size
      assert_equal 1, @@coll.find({'a' => 1}, :hint => ['a']).to_a.size
      assert_equal 1, @@coll.find({'a' => 1}, :hint => {'a' => 1}).to_a.size

      @@coll.hint = 'a'
      assert_equal({'a' => 1}, @@coll.hint)
      assert_equal 1, @@coll.find('a' => 1).to_a.size

      @@coll.hint = ['a']
      assert_equal({'a' => 1}, @@coll.hint)
      assert_equal 1, @@coll.find('a' => 1).to_a.size

      @@coll.hint = {'a' => 1}
      assert_equal({'a' => 1}, @@coll.hint)
      assert_equal 1, @@coll.find('a' => 1).to_a.size

      @@coll.hint = nil
      assert_nil @@coll.hint
      assert_equal 1, @@coll.find('a' => 1).to_a.size
    ensure
      @@coll.drop_index(name)
    end
  end

  def test_named_hint
    name = @@coll.create_index('a', :name => 'named_index')
    begin
      assert_nil @@coll.hint
      assert_equal 1, @@coll.find({'a' => 1}, :named_hint => 'named_index').to_a.size
      assert_equal 1, @@coll.find({'a' => 1}, :hint => 'a', :named_hint => "bad_hint").to_a.size
    ensure
      @@coll.drop_index('named_index')
    end
  end

  def test_hash_default_value_id
    val = Hash.new(0)
    val["x"] = 5
    @@coll.insert val
    id = @@coll.find_one("x" => 5)["_id"]
    assert id != 0
  end

  def test_group
    @@db.drop_collection("test")
    test = @@db.collection("test")

    assert_equal [], test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")
    assert_equal [], test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")

    test.insert("a" => 2)
    test.insert("b" => 5)
    test.insert("a" => 1)

    assert_equal 3, test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]
    assert_equal 3, test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]
    assert_equal 1, test.group(:cond => {"a" => {"$gt" => 1}}, :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]
    assert_equal 1, test.group(:cond => {"a" => {"$gt" => 1}}, :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")[0]["count"]

    finalize = "function (obj) { obj.f = obj.count - 1; }"
    assert_equal 2, test.group(:initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }", :finalize => finalize)[0]["f"]

    test.insert("a" => 2, "b" => 3)
    expected = [{"a" => 2, "count" => 2},
                {"a" => nil, "count" => 1},
                {"a" => 1, "count" => 1}]
    assert_equal expected, test.group(:key => ["a"], :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")
    assert_equal expected, test.group(:key => :a, :initial => {"count" => 0}, :reduce => "function (obj, prev) { prev.count++; }")

    assert_raise OperationFailure do
      test.group(:initial => {}, :reduce => "5 ++ 5")
    end
  end

  def test_deref
    @@coll.remove
    assert_equal nil, @@db.dereference(DBRef.new("test", ObjectId.new))

    @@coll.insert({"x" => "hello"})
    key = @@coll.find_one()["_id"]
    assert_equal "hello", @@db.dereference(DBRef.new("test", key))["x"]

    assert_equal nil, @@db.dereference(DBRef.new("test", 4))
    obj = {"_id" => 4}
    @@coll.insert(obj)
    assert_equal obj, @@db.dereference(DBRef.new("test", 4))

    @@coll.remove
    @@coll.insert({"x" => "hello"})
    assert_equal nil, @@db.dereference(DBRef.new("test", nil))
  end

  def test_save
    @@coll.remove

    a = {"hello" => "world"}

    id = @@coll.save(a)
    assert_kind_of ObjectId, id
    assert_equal 1, @@coll.count

    assert_equal id, @@coll.save(a)
    assert_equal 1, @@coll.count

    assert_equal "world", @@coll.find_one()["hello"]

    a["hello"] = "mike"
    @@coll.save(a)
    assert_equal 1, @@coll.count

    assert_equal "mike", @@coll.find_one()["hello"]

    @@coll.save({"hello" => "world"})
    assert_equal 2, @@coll.count
  end

  def test_save_long
    @@coll.remove
    @@coll.insert("x" => 9223372036854775807)
    assert_equal 9223372036854775807, @@coll.find_one()["x"]
  end

  def test_find_by_oid
    @@coll.remove

    @@coll.save("hello" => "mike")
    id = @@coll.save("hello" => "world")
    assert_kind_of ObjectId, id

    assert_equal "world", @@coll.find_one(:_id => id)["hello"]
    @@coll.find(:_id => id).to_a.each do |doc|
      assert_equal "world", doc["hello"]
    end

    id = ObjectId.from_string(id.to_s)
    assert_equal "world", @@coll.find_one(:_id => id)["hello"]
  end

  def test_save_with_object_that_has_id_but_does_not_actually_exist_in_collection
    @@coll.remove

    a = {'_id' => '1', 'hello' => 'world'}
    @@coll.save(a)
    assert_equal(1, @@coll.count)
    assert_equal("world", @@coll.find_one()["hello"])

    a["hello"] = "mike"
    @@coll.save(a)
    assert_equal(1, @@coll.count)
    assert_equal("mike", @@coll.find_one()["hello"])
  end

  def test_collection_names_errors
    assert_raise TypeError do
      @@db.collection(5)
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("te$t")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection(".test")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("test.")
    end
    assert_raise Mongo::InvalidNSName do
      @@db.collection("tes..t")
    end
  end

  def test_rename_collection
    @@db.drop_collection("foo")
    @@db.drop_collection("bar")
    a = @@db.collection("foo")
    b = @@db.collection("bar")

    assert_raise TypeError do
      a.rename(5)
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("te$t")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename(".test")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("test.")
    end
    assert_raise Mongo::InvalidNSName do
      a.rename("tes..t")
    end

    assert_equal 0, a.count()
    assert_equal 0, b.count()

    a.insert("x" => 1)
    a.insert("x" => 2)
    assert_equal 2, a.count()

    a.rename("bar")
    assert_equal 2, a.count()
  end

  # doesn't really test functionality, just that the option is set correctly
  def test_snapshot
    @@db.collection("test").find({}, :snapshot => true).to_a
    assert_raise OperationFailure do
      @@db.collection("test").find({}, :snapshot => true, :sort => 'a').to_a
    end
  end

  def test_encodings
    if RUBY_VERSION >= '1.9'
      default = "hello world"
      utf8 = "hello world".encode("UTF-8")
      iso8859 = "hello world".encode("ISO-8859-1")

      if RUBY_PLATFORM =~ /jruby/
        assert_equal "ASCII-8BIT", default.encoding.name
      elsif RUBY_VERSION >= '2.0'
        assert_equal "UTF-8", default.encoding.name
      else
        assert_equal "US-ASCII", default.encoding.name
      end
      assert_equal "UTF-8", utf8.encoding.name
      assert_equal "ISO-8859-1", iso8859.encoding.name

      @@coll.remove
      @@coll.save("default" => default, "utf8" => utf8, "iso8859" => iso8859)
      doc = @@coll.find_one()

      assert_equal "UTF-8", doc["default"].encoding.name
      assert_equal "UTF-8", doc["utf8"].encoding.name
      assert_equal "UTF-8", doc["iso8859"].encoding.name
    end
  end
end

# ---- ruby-mongo-1.10.0/test/functional/db_connection_test.rb ----

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class DBConnectionTest < Test::Unit::TestCase def test_no_exceptions host = ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost' port = ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT db = MongoClient.new(host, port).db(TEST_DB) coll = db.collection('test') coll.remove db.get_last_error end end ruby-mongo-1.10.0/test/functional/db_test.rb000066400000000000000000000223451233461006100207630ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' require 'digest/md5' require 'stringio' require 'logger' class TestPKFactory def create_pk(row) row['_id'] ||= BSON::ObjectId.new row end end class DBTest < Test::Unit::TestCase include Mongo @@client = standard_connection @@db = @@client.db(TEST_DB) @@version = @@client.server_version def test_close @@client.close assert !@@client.connected? 
begin @@db.collection('test').insert('a' => 1) fail "expected 'NilClass' exception" rescue => ex assert_match(/NilClass/, ex.to_s) ensure @@db = standard_connection.db(TEST_DB) end end def test_create_collection col = @@db.create_collection('foo') assert_equal @@db['foo'].name, col.name col = @@db.create_collection(:foo) assert_equal @@db['foo'].name, col.name @@db.drop_collection('foo') end def test_get_and_drop_collection db = @@client.db(TEST_DB, :strict => true) db.create_collection('foo') assert db.collection('foo') assert db.drop_collection('foo') db.create_collection(:foo) assert db.collection(:foo) assert db.drop_collection(:foo) end def test_logger output = StringIO.new logger = Logger.new(output) logger.level = Logger::DEBUG conn = standard_connection(:logger => logger) assert_equal logger, conn.logger conn.logger.debug 'testing' assert output.string.include?('testing') end def test_full_coll_name coll = @@db.collection('test') assert_equal "#{TEST_DB}.test", @@db.full_collection_name(coll.name) end def test_collection_names @@db.collection("test").insert("foo" => 5) @@db.collection("test.mike").insert("bar" => 0) colls = @@db.collection_names() assert colls.include?("test") assert colls.include?("test.mike") colls.each { |name| assert !name.include?("$") } end def test_collections @@db.collection("test.durran").insert("foo" => 5) @@db.collection("test.les").insert("bar" => 0) colls = @@db.collections() assert_not_nil colls.select { |coll| coll.name == "test.durran" } assert_not_nil colls.select { |coll| coll.name == "test.les" } assert_equal [], colls.select { |coll| coll.name == "does_not_exist" } assert_kind_of Collection, colls[0] end def test_pk_factory db = standard_connection.db(TEST_DB, :pk => TestPKFactory.new) coll = db.collection('test') coll.remove insert_id = coll.insert('name' => 'Fred', 'age' => 42) # new id gets added to returned object row = coll.find_one({'name' => 'Fred'}) oid = row['_id'] assert_not_nil oid assert_equal insert_id, oid 
oid = BSON::ObjectId.new data = {'_id' => oid, 'name' => 'Barney', 'age' => 41} coll.insert(data) row = coll.find_one({'name' => data['name']}) db_oid = row['_id'] assert_equal oid, db_oid assert_equal data, row coll.remove end def test_pk_factory_reset conn = standard_connection db = conn.db(TEST_DB) db.pk_factory = Object.new # first time begin db.pk_factory = Object.new fail "error: expected exception" rescue => ex assert_match(/Cannot change/, ex.to_s) ensure conn.close end end def test_command assert_raise OperationFailure do @@db.command({:non_command => 1}, :check_response => true) end result = @@db.command({:non_command => 1}, :check_response => false) assert !Mongo::Support.ok?(result) end def test_error @@db.reset_error_history assert_nil @@db.get_last_error['err'] assert !@@db.error? assert_nil @@db.previous_error @@db.command({:forceerror => 1}, :check_response => false) assert @@db.error? assert_not_nil @@db.get_last_error['err'] assert_not_nil @@db.previous_error @@db.command({:forceerror => 1}, :check_response => false) assert @@db.error? assert @@db.get_last_error['err'] prev_error = @@db.previous_error assert_equal 1, prev_error['nPrev'] assert_equal prev_error["err"], @@db.get_last_error['err'] @@db.collection('test').find_one assert_nil @@db.get_last_error['err'] assert !@@db.error? assert @@db.previous_error assert_equal 2, @@db.previous_error['nPrev'] @@db.reset_error_history assert_nil @@db.get_last_error['err'] assert !@@db.error? assert_nil @@db.previous_error end def test_check_command_response command = {:forceerror => 1} raised = false begin @@db.command(command) rescue => ex raised = true assert ex.message.include?("forced error"), "error message does not contain 'forced error'" assert_equal 10038, ex.error_code if @@version >= "2.1.0" assert_equal 10038, ex.result['code'] else assert_equal 10038, ex.result['assertionCode'] end ensure assert raised, "No assertion raised!" 
end end def test_arbitrary_command_opts with_forced_timeout(@@client) do assert_raise ExecutionTimeout do cmd = OrderedHash.new cmd[:ping] = 1 cmd[:maxTimeMS] = 100 @@db.command(cmd) end end end def test_command_with_bson normal_response = @@db.command({:buildInfo => 1}) bson = BSON::BSON_CODER.serialize({:buildInfo => 1}, false, false) bson_response = @@db.command({:bson => bson}) assert_equal normal_response, bson_response end def test_last_status @@db['test'].remove @@db['test'].save("i" => 1) @@db['test'].update({"i" => 1}, {"$set" => {"i" => 2}}) assert @@db.get_last_error()["updatedExisting"] @@db['test'].update({"i" => 1}, {"$set" => {"i" => 500}}) assert !@@db.get_last_error()["updatedExisting"] end def test_text_port_number_raises_no_errors client = standard_connection db = client[TEST_DB] db.collection('users').remove end def test_stored_function_management @@db.add_stored_function("sum", "function (x, y) { return x + y; }") assert_equal @@db.eval("return sum(2,3);"), 5 assert @@db.remove_stored_function("sum") assert_raise OperationFailure do @@db.eval("return sum(2,3);") end end def test_eval @@db.eval("db.system.save({_id:'hello', value: function() { print('hello'); } })") assert_equal 'hello', @@db['system'].find_one['_id'] end def test_eval_nolock function = "db.system.save({_id:'hello', value: function(string) { print(string); } })" @@db.expects(:command).with do |selector, opts| selector[:nolock] == true end.returns({ 'ok' => 1, 'retval' => 1 }) @@db.eval(function, 'hello', :nolock => true) end if @@version >= '2.5.3' def test_default_admin_roles # admin user db = Mongo::MongoClient.new()['admin'] db.logout silently { db.add_user('admin', 'pass') } db.authenticate('admin', 'pass') info = db.command(:usersInfo => 'admin')['users'].first assert_equal 'root', info['roles'].first['role'] # read-only admin user silently { db.add_user('ro-admin', 'pass', true) } db.logout db.authenticate('ro-admin', 'pass') info = db.command(:usersInfo => 
'ro-admin')['users'].first assert_equal 'readAnyDatabase', info['roles'].first['role'] db.logout db.authenticate('admin', 'pass') db.command(:dropAllUsersFromDatabase => 1) db.logout end end if @@version >= "1.3.5" def test_db_stats stats = @@db.stats assert stats.has_key?('collections') assert stats.has_key?('dataSize') end end context "database profiling" do setup do @db = @@client[TEST_DB] @coll = @db['test'] @coll.remove @r1 = @coll.insert('a' => 1) # collection not created until it's used end should "set default profiling level" do assert_equal :off, @db.profiling_level end should "change profiling level" do @db.profiling_level = :slow_only assert_equal :slow_only, @db.profiling_level @db.profiling_level = :off assert_equal :off, @db.profiling_level @db.profiling_level = :all assert_equal :all, @db.profiling_level begin @db.profiling_level = :medium fail "shouldn't be able to do this" rescue end end should "return profiling info" do @db.profiling_level = :all @coll.find() @db.profiling_level = :off info = @db.profiling_info assert_kind_of Array, info assert info.length >= 1 first = info.first assert_kind_of Time, first['ts'] assert_kind_of Numeric, first['millis'] end should "validate collection" do doc = @db.validate_collection(@coll.name) if @@version >= "1.9.1" assert doc['valid'] else assert doc['result'] end end end end ruby-mongo-1.10.0/test/functional/grid_file_system_test.rb000066400000000000000000000207751233461006100237330ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class GridFileSystemTest < Test::Unit::TestCase context "GridFileSystem:" do setup do @con = standard_connection @db = @con.db(TEST_DB) end teardown do @db.drop_collection('fs.files') @db.drop_collection('fs.chunks') end context "Initialization" do setup do @chunks_data = "CHUNKS" * 50000 @grid = GridFileSystem.new(@db) @opts = {:w => 1} @original_opts = @opts.dup @grid.open('sample.file', 'w', @opts) do |f| f.write @chunks_data end end should "not modify original opts" do assert_equal @original_opts, @opts end end context "When reading:" do setup do @chunks_data = "CHUNKS" * 50000 @grid = GridFileSystem.new(@db) @grid.open('sample.file', 'w') do |f| f.write @chunks_data end @grid = GridFileSystem.new(@db) end should "return existence of the file" do file = @grid.exist?(:filename => 'sample.file') assert_equal 'sample.file', file['filename'] end should "return nil if the file doesn't exist" do assert_nil @grid.exist?(:filename => 'foo.file') end should "read sample data" do data = @grid.open('sample.file', 'r') { |f| f.read } assert_equal data.length, @chunks_data.length end should "have a unique index on chunks" do assert @db['fs.chunks'].index_information['files_id_1_n_1']['unique'] end should "have an index on filename" do assert @db['fs.files'].index_information['filename_1_uploadDate_-1'] end should "return an empty string if length is zero" do data = @grid.open('sample.file', 'r') { |f| f.read(0) } assert_equal '', data end should "return the first n bytes" do data = @grid.open('sample.file', 'r') {|f| f.read(288888) } assert_equal 288888, data.length assert_equal @chunks_data[0...288888], data end should "return the first n bytes even with an offset" do data = @grid.open('sample.file', 'r') do |f| f.seek(1000) f.read(288888) end assert_equal 288888, data.length assert_equal @chunks_data[1000...289888], data end end context "When 
writing:" do setup do @data = "BYTES" * 50 @grid = GridFileSystem.new(@db) @grid.open('sample', 'w') do |f| f.write @data end end should "read sample data" do data = @grid.open('sample', 'r') { |f| f.read } assert_equal data.length, @data.length end should "return the total number of bytes written" do data = 'a' * 300000 assert_equal 300000, @grid.open('sample', 'w') {|f| f.write(data) } end should "more read sample data" do data = @grid.open('sample', 'r') { |f| f.read } assert_equal data.length, @data.length end should "raise exception if file not found" do assert_raise GridFileNotFound do @grid.open('io', 'r') { |f| f.write('hello') } end end should "raise exception if not opened for write" do assert_raise GridError do @grid.open('sample', 'r') { |f| f.write('hello') } end end context "and when overwriting the file" do setup do @old = @grid.open('sample', 'r') @new_data = "DATA" * 10 @grid.open('sample', 'w') do |f| f.write @new_data end @new = @grid.open('sample', 'r') end should "have a newer upload date" do assert @new.upload_date > @old.upload_date, "New data is not greater than old date." 
end should "have a different files_id" do assert_not_equal @new.files_id, @old.files_id end should "contain the new data" do assert_equal @new_data, @new.read, "Expected DATA" end context "and on a second overwrite" do setup do @new_data = "NEW" * 1000 @grid.open('sample', 'w') do |f| f.write @new_data end @ids = @db['fs.files'].find({'filename' => 'sample'}).map {|file| file['_id']} end should "write a third version of the file" do assert_equal 3, @db['fs.files'].find({'filename' => 'sample'}).count assert_equal 3, @db['fs.chunks'].find({'files_id' => {'$in' => @ids}}).count end should "remove all versions and their data on delete" do @grid.delete('sample') assert_equal 0, @db['fs.files'].find({'filename' => 'sample'}).count assert_equal 0, @db['fs.chunks'].find({'files_id' => {'$in' => @ids}}).count end should "delete all versions which exceed the number of versions to keep specified by the option :versions" do @versions = 1 + rand(4-1) @grid.open('sample', 'w', :versions => @versions) do |f| f.write @new_data end @new_ids = @db['fs.files'].find({'filename' => 'sample'}).map {|file| file['_id']} assert_equal @versions, @new_ids.length id = @new_ids.first assert !@ids.include?(id) assert_equal @versions, @db['fs.files'].find({'filename' => 'sample'}).count end should "delete old versions on write with :delete_old is passed in" do @grid.open('sample', 'w', :delete_old => true) do |f| f.write @new_data end @new_ids = @db['fs.files'].find({'filename' => 'sample'}).map {|file| file['_id']} assert_equal 1, @new_ids.length id = @new_ids.first assert !@ids.include?(id) assert_equal 1, @db['fs.files'].find({'filename' => 'sample'}).count assert_equal 1, @db['fs.chunks'].find({'files_id' => id}).count end end end end context "When writing chunks:" do setup do data = "B" * 50000 @grid = GridFileSystem.new(@db) @grid.open('sample', 'w', :chunk_size => 1000) do |f| f.write data end end should "write the correct number of chunks" do file = @db['fs.files'].find_one({:filename 
=> 'sample'}) chunks = @db['fs.chunks'].find({'files_id' => file['_id']}).to_a assert_equal 50, chunks.length end end context "Positioning:" do setup do data = 'hello, world' + '1' * 5000 + 'goodbye!' + '2' * 1000 + '!' @grid = GridFileSystem.new(@db) @grid.open('hello', 'w', :chunk_size => 1000) do |f| f.write data end end should "seek within chunks" do @grid.open('hello', 'r') do |f| f.seek(0) assert_equal 'h', f.read(1) f.seek(7) assert_equal 'w', f.read(1) f.seek(4) assert_equal 'o', f.read(1) f.seek(0) f.seek(7, IO::SEEK_CUR) assert_equal 'w', f.read(1) f.seek(-2, IO::SEEK_CUR) assert_equal ' ', f.read(1) f.seek(-4, IO::SEEK_CUR) assert_equal 'l', f.read(1) f.seek(3, IO::SEEK_CUR) assert_equal 'w', f.read(1) end end should "seek between chunks" do @grid.open('hello', 'r') do |f| f.seek(1000) assert_equal '11111', f.read(5) f.seek(5009) assert_equal '111goodbye!222', f.read(14) f.seek(-1, IO::SEEK_END) assert_equal '!', f.read(1) f.seek(-6, IO::SEEK_END) assert_equal '2', f.read(1) end end should "tell the current position" do @grid.open('hello', 'r') do |f| assert_equal 0, f.tell f.seek(999) assert_equal 999, f.tell end end should "seek only in read mode" do assert_raise GridError do silently do @grid.open('hello', 'w') { |f| f.seek(0) } end end end end end end ruby-mongo-1.10.0/test/functional/grid_io_test.rb000066400000000000000000000173621233461006100220150ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
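The GridIO tests that follow exercise fixed-size chunking (255 KB per chunk by default). The core splitting idea can be sketched in plain Ruby; this is a hypothetical illustration of the chunking behavior asserted in these tests, not the driver's actual GridIO implementation:

```ruby
# Split a byte string into fixed-size chunks, the way GridFS stores file
# data. The 1_000-byte chunk size here is illustrative; the driver's
# default is 255 * 1024.
def split_into_chunks(data, chunk_size)
  chunks = []
  offset = 0
  while offset < data.bytesize
    chunks << data.byteslice(offset, chunk_size)
    offset += chunk_size
  end
  chunks
end

chunks = split_into_chunks("B" * 50_000, 1_000)
# 50,000 bytes at 1,000 bytes per chunk yields 50 chunks, matching the
# "write the correct number of chunks" expectation in the tests above.
puts chunks.length # => 50
```

Reassembling the chunks in order (`chunks.join`) recovers the original byte string, which is what the read paths in these tests verify end to end.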
require 'test_helper' class GridIOTest < Test::Unit::TestCase context "GridIO" do setup do @db = standard_connection.db(TEST_DB) @files = @db.collection('fs.files') @chunks = @db.collection('fs.chunks') @chunks.create_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]]) end teardown do @files.remove @chunks.remove end context "Options" do setup do @filename = 'test' @mode = 'w' end should "set default 255k chunk size" do file = GridIO.new(@files, @chunks, @filename, @mode) assert_equal 255 * 1024, file.chunk_size end should "set chunk size" do file = GridIO.new(@files, @chunks, @filename, @mode, :chunk_size => 1000) assert_equal 1000, file.chunk_size end end context "StringIO methods" do setup do @filename = 'test' @mode = 'w' @data = "012345678\n" * 100000 @file = GridIO.new(@files, @chunks, @filename, @mode) @file.write(@data) @file.close end should "read data character by character using getc" do bytes = 0 file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) while file.getc bytes += 1 end assert_equal bytes, 1_000_000 end should "read length when a length is given" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets(1000) assert_equal string.length, 1000 bytes = 0 bytes += string.length while string = file.gets(1000) bytes += string.length end assert_equal bytes, 1_000_000 end should "read to the end of the line by default and assign to $_" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets assert_equal 10, string.length end should "read to the end of the file one line at a time" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) bytes = 0 while string = file.gets bytes += string.length end assert_equal 1_000_000, bytes end should "read to the end of the file one multi-character separator at a time" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) bytes = 0
while string = file.gets("45") bytes += string.length end assert_equal 1_000_000, bytes end should "read to a given separator" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets("5") assert_equal 6, string.length end should "read a multi-character separator" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets("45") assert_equal 6, string.length string = file.gets("45") assert_equal "678\n012345", string string = file.gets("\n01") assert_equal "678\n01", string end should "read a multi-character separator with a length" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) string = file.gets("45", 3) assert_equal 3, string.length end should "tell position, eof, and rewind" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) file.read(1000) assert_equal 1000, file.pos assert !file.eof? file.read assert file.eof? file.rewind assert_equal 0, file.pos assert_equal 1_000_000, file.read.length end end context "Writing" do setup do @filename = 'test' @length = 50000 @times = 10 end should "correctly write multiple chunks from multiple writes" do file = GridIO.new(@files, @chunks, @filename, 'w') @times.times do file.write("1" * @length) end file.close file = GridIO.new(@files, @chunks, @filename, 'r') total_size = 0 while !file.eof?
total_size += file.read(@length).length end file.close assert_equal total_size, @times * @length end end context "Seeking" do setup do @filename = 'test' @mode = 'w' @data = "1" * 1024 * 1024 @file = GridIO.new(@files, @chunks, @filename, @mode) @file.write(@data) @file.close end should "read all data using read_length and then be able to seek" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) assert_equal @data, file.read(1024 * 1024) file.seek(0) assert_equal @data, file.read end should "read all data using read_all and then be able to seek" do file = GridIO.new(@files, @chunks, nil, "r", :query => {:_id => @file.files_id}) assert_equal @data, file.read file.seek(0) assert_equal @data, file.read file.seek(1024 * 512) assert_equal 524288, file.file_position assert_equal @data.length / 2, file.read.length assert_equal 1048576, file.file_position assert_nil file.read file.seek(1024 * 512) assert_equal 524288, file.file_position end end context "Grid MD5 check" do should "run in safe mode" do file = GridIO.new(@files, @chunks, 'smallfile', 'w') file.write("DATA" * 100) assert file.close assert_equal file.server_md5, file.client_md5 end should "validate with a large file" do io = File.open(File.join(TEST_DATA, 'sample_file.pdf'), 'r') file = GridIO.new(@files, @chunks, 'bigfile', 'w') file.write(io) assert file.close assert_equal file.server_md5, file.client_md5 end should "raise an exception when check fails" do io = File.open(File.join(TEST_DATA, 'sample_file.pdf'), 'r') @db.stubs(:command).returns({'md5' => '12345'}) file = GridIO.new(@files, @chunks, 'bigfile', 'w') file.write(io) assert_raise GridMD5Failure do assert file.close end assert_not_equal file.server_md5, file.client_md5 end end context "Content types" do if defined?(MIME) should "determine common content types from the extension" do file = GridIO.new(@files, @chunks, 'sample.pdf', 'w') assert_equal 'application/pdf', file.content_type file = GridIO.new(@files, @chunks, 
'sample.txt', 'w') assert_equal 'text/plain', file.content_type end end should "default to binary/octet-stream when type is unknown" do file = GridIO.new(@files, @chunks, 'sample.l33t', 'w') assert_equal 'binary/octet-stream', file.content_type end should "use any provided content type by default" do file = GridIO.new(@files, @chunks, 'sample.l33t', 'w', :content_type => 'image/jpg') assert_equal 'image/jpg', file.content_type end end end end ruby-mongo-1.10.0/test/functional/grid_test.rb000066400000000000000000000171661233461006100213300ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' include Mongo def read_and_write_stream(filename, read_length, opts={}) io = File.open(File.join(TEST_DATA, filename), 'r+b') id = @grid.put(io, opts.merge!(:filename => filename + read_length.to_s)) file = @grid.get(id) io.rewind data = io.read if data.respond_to?(:force_encoding) data.force_encoding("binary") end read_data = "" while(chunk = file.read(read_length)) read_data << chunk break if chunk.empty? 
end assert_equal data.length, read_data.length end class GridTest < Test::Unit::TestCase context "Tests:" do setup do @db = standard_connection.db(TEST_DB) @files = @db.collection('test-fs.files') @chunks = @db.collection('test-fs.chunks') end teardown do @files.remove @chunks.remove end context "A one-chunk grid-stored file" do setup do @data = "GRIDDATA" * 5 @grid = Grid.new(@db, 'test-fs') @id = @grid.put(@data, :filename => 'sample', :metadata => {'app' => 'photos'}) end should "retrieve the file" do data = @grid.get(@id).data assert_equal @data, data end end context "A basic grid-stored file" do setup do @data = "GRIDDATA" * 50000 @grid = Grid.new(@db, 'test-fs') @id = @grid.put(@data, :filename => 'sample', :metadata => {'app' => 'photos'}) end should "check existence" do file = @grid.exist?(:filename => 'sample') assert_equal 'sample', file['filename'] end should "return nil if it doesn't exist" do assert_nil @grid.exist?(:metadata => 'foo') end should "retrieve the stored data" do data = @grid.get(@id).data assert_equal @data.length, data.length end should "have a unique index on chunks" do assert @chunks.index_information['files_id_1_n_1']['unique'] end should "store the filename" do file = @grid.get(@id) assert_equal 'sample', file.filename end should "store any relevant metadata" do file = @grid.get(@id) assert_equal 'photos', file.metadata['app'] end should "delete the file and any chunks" do @grid.delete(@id) assert_raise GridFileNotFound do @grid.get(@id) end assert_equal nil, @db['test-fs']['chunks'].find_one({:files_id => @id}) end end context "Filename not required" do setup do @data = "GRIDDATA" * 50000 @grid = Grid.new(@db, 'test-fs') @metadata = {'app' => 'photos'} end should "store the file with the old filename api" do id = @grid.put(@data, :filename => 'sample', :metadata => @metadata) file = @grid.get(id) assert_equal 'sample', file.filename assert_equal @metadata, file.metadata end should "store without a filename" do id = @grid.put(@data, 
:metadata => @metadata) file = @grid.get(id) assert_nil file.filename file_doc = @files.find_one({'_id' => id}) assert !file_doc.has_key?('filename') assert_equal @metadata, file.metadata end should "store with filename and metadata with the new api" do id = @grid.put(@data, :filename => 'sample', :metadata => @metadata) file = @grid.get(id) assert_equal 'sample', file.filename assert_equal @metadata, file.metadata end end context "Writing arbitrary data fields" do setup do @data = "GRIDDATA" * 50000 @grid = Grid.new(@db, 'test-fs') end should "write random keys to the files collection" do id = @grid.put(@data, :phrases => ["blimey", "ahoy!"]) file = @grid.get(id) assert_equal ["blimey", "ahoy!"], file['phrases'] end should "ignore special keys" do id = silently do @grid.put(@data, :file_length => 100, :phrase => "blimey") end file = @grid.get(id) assert_equal "blimey", file['phrase'] assert_equal 400_000, file.file_length end end context "Storing data with a length of zero" do setup do @grid = Grid.new(@db, 'test-fs') @id = silently do @grid.put('', :filename => 'sample', :metadata => {'app' => 'photos'}) end end should "return the zero length" do data = @grid.get(@id) assert_equal 0, data.read.length end end context "Grid streaming: " do setup do @grid = Grid.new(@db, 'test-fs') filename = 'sample_data' @io = File.open(File.join(TEST_DATA, filename), 'r') id = @grid.put(@io, :filename => filename) @file = @grid.get(id) @io.rewind @data = @io.read if @data.respond_to?(:force_encoding) @data.force_encoding("binary") end end should "be equal in length" do @io.rewind assert_equal @io.read.length, @file.read.length end should "read the file" do read_data = "" @file.each do |chunk| read_data << chunk end assert_equal @data.length, read_data.length end should "read the file if no block is given" do read_data = @file.each assert_equal @data.length, read_data.length end end context "Grid streaming an empty file: " do setup do @grid = Grid.new(@db, 'test-fs') filename = 
'empty_data' @io = File.open(File.join(TEST_DATA, filename), 'r') id = silently do @grid.put(@io, :filename => filename) end @file = @grid.get(id) @io.rewind @data = @io.read if @data.respond_to?(:force_encoding) @data.force_encoding("binary") end end should "be equal in length" do @io.rewind assert_equal @io.read.length, @file.read.length end should "read the file" do read_data = "" @file.each do |chunk| read_data << chunk end assert_equal @data.length, read_data.length end should "read the file if no block is given" do read_data = @file.each assert_equal @data.length, read_data.length end end context "Streaming: " do setup do @grid = Grid.new(@db, 'test-fs') end should "put and get a small io object with a small chunk size" do read_and_write_stream('small_data.txt', 1, :chunk_size => 2) end should "put and get an empty io object" do silently do read_and_write_stream('empty_data', 1) end end should "put and get a small io object" do read_and_write_stream('small_data.txt', 1) end should "put and get a large io object if reading less than the chunk size" do read_and_write_stream('sample_data', 255 * 1024) end should "put and get a large io object if reading more than the chunk size" do read_and_write_stream('sample_data', 300 * 1024) end end end end ruby-mongo-1.10.0/test/functional/pool_test.rb # Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
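The PoolTest cases that follow repeatedly check a socket out of and back into a fixed-size pool. The core checkout/checkin pattern can be sketched in plain Ruby; this `TinyPool` is a simplified, hypothetical illustration, not the driver's `Mongo::Pool` class (which also handles sockets, timeouts, and per-thread affinity):

```ruby
require 'thread'

# A tiny fixed-size object pool: checkout blocks until an object is free,
# checkin returns it and wakes one waiter.
class TinyPool
  def initialize(size, &factory)
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @available = Array.new(size) { factory.call }
  end

  def checkout
    @mutex.synchronize do
      @cond.wait(@mutex) while @available.empty?
      @available.pop
    end
  end

  def checkin(obj)
    @mutex.synchronize do
      @available.push(obj)
      @cond.signal
    end
  end
end

pool = TinyPool.new(5) { Object.new }
obj = pool.checkout
pool.checkin(obj)
```

Because `checkout` pops the most recently checked-in object, a single thread that immediately checks its object back in tends to get the same one again, which is the affinity behavior the test above asserts.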
require 'test_helper' class PoolTest < Test::Unit::TestCase include Mongo def setup @client ||= standard_connection({:pool_size => 15, :pool_timeout => 5}) @db = @client.db(TEST_DB) @collection = @db.collection("pool_test") end def test_pool_affinity pool = Pool.new(@client, TEST_HOST, TEST_PORT, :size => 5) threads = [] 10.times do threads << Thread.new do original_socket = pool.checkout pool.checkin(original_socket) 500.times do socket = pool.checkout assert_equal original_socket, socket pool.checkin(socket) end end end threads.each { |t| t.join } end def test_pool_affinity_max_size docs = [] 8000.times {|x| docs << {:value => x}} @collection.insert(docs) threads = [] threads << Thread.new do @collection.find({"value" => {"$lt" => 100}}).each {|e| e} Thread.pass sleep(0.125) @collection.find({"value" => {"$gt" => 100}}).each {|e| e} end threads << Thread.new do @collection.find({'$where' => "function() {for(i=0;i<1000;i++) {this.value};}"}).each {|e| e} end threads.each(&:join) end end ruby-mongo-1.10.0/test/functional/safe_test.rb # Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
require 'test_helper' include Mongo class SafeTest < Test::Unit::TestCase context "Safe mode propagation: " do setup do @connection = standard_connection({:safe => true}, true) # Legacy @db = @connection[TEST_DB] @collection = @db['test-safe'] @collection.create_index([[:a, 1]], :unique => true) @collection.remove end should "propagate safe option on insert" do @collection.insert({:a => 1}) assert_raise_error(OperationFailure, "duplicate key") do @collection.insert({:a => 1}) end end should "allow safe override on insert" do @collection.insert({:a => 1}) @collection.insert({:a => 1}, :safe => false) end should "allow safe override on save" do @collection.insert({:a => 1}) id = @collection.insert({:a => 2}) assert_nothing_raised do @collection.save({:_id => id.to_s, :a => 1}, :safe => false) end end should "propagate safe option on save" do @collection.insert({:a => 1}) id = @collection.insert({:a => 2}) assert_raise(OperationFailure) do @collection.save({:_id => id.to_s, :a => 1}) end end should "propagate safe option on update" do @collection.insert({:a => 1}) @collection.insert({:a => 2}) assert_raise_error(OperationFailure, "duplicate key") do @collection.update({:a => 2}, {:a => 1}) end end should "allow safe override on update" do @collection.insert({:a => 1}) @collection.insert({:a => 2}) @collection.update({:a => 2}, {:a => 1}, :safe => false) end end context "Safe error objects" do setup do @connection = standard_connection({:safe => true}, true) # Legacy @db = @connection[TEST_DB] @collection = @db['test'] @collection.remove @collection.insert({:a => 1}) @collection.insert({:a => 1}) @collection.insert({:a => 1}) end should "return object on update" do response = @collection.update({:a => 1}, {"$set" => {:a => 2}}, :multi => true) assert(response['updatedExisting'] || @db.connection.wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS)) # TODO - review new write command return values assert(response['n'] == 3 ||
@db.connection.wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS)) # TODO - update command top pending end should "return object on remove" do response = @collection.remove({}) assert_equal 3, response['n'] end end endruby-mongo-1.10.0/test/functional/ssl_test.rb000066400000000000000000000015701233461006100211740ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' require 'shared/ssl_shared' class SSLTest < Test::Unit::TestCase include Mongo include SSLTests def setup @client_class = MongoClient @uri_info = 'server' @connect_info = ['server', 27017] @bad_connect_info = ['localhost', 27017] end end ruby-mongo-1.10.0/test/functional/support_test.rb000066400000000000000000000037661233461006100221200ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
require 'test_helper'

class SupportTest < Test::Unit::TestCase
  def test_command_response_succeeds
    assert Support.ok?('ok' => 1)
    assert Support.ok?('ok' => 1.0)
    assert Support.ok?('ok' => true)
  end

  def test_command_response_fails
    assert !Support.ok?('ok' => 0)
    assert !Support.ok?('ok' => 0.0)
    assert !Support.ok?('ok' => 0.0)
    assert !Support.ok?('ok' => 'str')
    assert !Support.ok?('ok' => false)
  end

  def test_array_of_pairs
    hps = [["localhost", 27017], ["localhost", 27018], ["localhost", 27019]]
    assert_equal [["localhost", 27017], ["localhost", 27018], ["localhost", 27019]], Support.normalize_seeds(hps)
  end

  def test_array_of_strings
    hps = ["localhost:27017", "localhost:27018", "localhost:27019"]
    assert_equal [["localhost", 27017], ["localhost", 27018], ["localhost", 27019]], Support.normalize_seeds(hps)
  end

  def test_single_string_with_host_port
    hps = "localhost:27017"
    assert_equal ["localhost", 27017], Support.normalize_seeds(hps)
  end

  def test_single_string_missing_port
    hps = "localhost"
    assert_equal ["localhost", 27017], Support.normalize_seeds(hps)
  end

  def test_single_element_array_missing_port
    hps = ["localhost"]
    assert_equal ["localhost", 27017], Support.normalize_seeds(hps)
  end

  def test_pair_doesnt_get_converted
    hps = ["localhost", 27017]
    assert_equal ["localhost", 27017], Support.normalize_seeds(hps)
  end
end

# ruby-mongo-1.10.0/test/functional/timeout_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class TimeoutTest < Test::Unit::TestCase
  def test_op_timeout
    connection = standard_connection(:op_timeout => 0.5)
    admin = connection.db('admin')

    command = {:eval => "sleep(100)"}
    # Should not timeout
    assert admin.command(command)

    # Should timeout
    command = {:eval => "sleep(1000)"}
    assert_raise Mongo::OperationTimeout do
      admin.command(command)
    end
  end

  def test_external_timeout_does_not_leave_socket_in_bad_state
    client = standard_connection
    db = client[TEST_DB]
    coll = db['timeout-tests']

    # prepare the database
    coll.drop
    coll.insert({:a => 1})

    # use external timeout to mangle socket
    begin
      Timeout::timeout(0.5) do
        db.command({:eval => "sleep(1000)"})
      end
    rescue Timeout::Error
      #puts "Thread timed out and has now mangled the socket"
    end

    assert_nothing_raised do
      coll.find_one
    end
  end
end

# ruby-mongo-1.10.0/test/functional/uri_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
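The external-timeout scenario exercised by `test_external_timeout_does_not_leave_socket_in_bad_state` hinges on Ruby's stdlib `Timeout` interrupting a block mid-flight. A minimal, driver-free sketch of that pattern (the `slow_call` lambda is a stand-in for the slow server command, not driver API):

```ruby
require 'timeout'

# Stand-in for a slow driver call such as db.command(:eval => "sleep(1000)").
slow_call = lambda { sleep(1) }

result = nil
begin
  # Interrupt the block if it runs longer than 0.2 seconds.
  Timeout.timeout(0.2) { slow_call.call }
  result = :completed
rescue Timeout::Error
  # In the test above, this is the point where the driver's socket may be
  # left mid-read, which is why the test then re-checks the connection.
  result = :timed_out
end
```

Because `Timeout` raises asynchronously inside the block, any resource the block was using (here, hypothetically, a socket) can be abandoned in an inconsistent state — the property the test guards against.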
require 'test_helper' class URITest < Test::Unit::TestCase include Mongo def test_uri_without_port parser = Mongo::URIParser.new('mongodb://localhost') assert_equal 1, parser.nodes.length assert_equal 'localhost', parser.nodes[0][0] assert_equal 27017, parser.nodes[0][1] end def test_basic_uri parser = Mongo::URIParser.new('mongodb://localhost:27018') assert_equal 1, parser.nodes.length assert_equal 'localhost', parser.nodes[0][0] assert_equal 27018, parser.nodes[0][1] end def test_ipv6_format parser = Mongo::URIParser.new('mongodb://[::1]:27018') assert_equal 1, parser.nodes.length assert_equal '::1', parser.nodes[0][0] assert_equal 27018, parser.nodes[0][1] parser = Mongo::URIParser.new('mongodb://[::1]') assert_equal 1, parser.nodes.length assert_equal '::1', parser.nodes[0][0] end def test_ipv6_format_multi parser = Mongo::URIParser.new('mongodb://[::1]:27017,[::1]:27018') assert_equal 2, parser.nodes.length assert_equal '::1', parser.nodes[0][0] assert_equal 27017, parser.nodes[0][1] assert_equal '::1', parser.nodes[1][0] assert_equal 27018, parser.nodes[1][1] parser = Mongo::URIParser.new('mongodb://[::1]:27017,localhost:27018') assert_equal 2, parser.nodes.length assert_equal '::1', parser.nodes[0][0] assert_equal 27017, parser.nodes[0][1] assert_equal 'localhost', parser.nodes[1][0] assert_equal 27018, parser.nodes[1][1] parser = Mongo::URIParser.new('mongodb://localhost:27017,[::1]:27018') assert_equal 2, parser.nodes.length assert_equal 'localhost', parser.nodes[0][0] assert_equal 27017, parser.nodes[0][1] assert_equal '::1', parser.nodes[1][0] assert_equal 27018, parser.nodes[1][1] end def test_multiple_uris parser = Mongo::URIParser.new('mongodb://a.example.com:27018,b.example.com') assert_equal 2, parser.nodes.length assert_equal ['a.example.com', 27018], parser.nodes[0] assert_equal ['b.example.com', 27017], parser.nodes[1] end def test_username_without_password parser = Mongo::URIParser.new('mongodb://bob:@localhost?authMechanism=GSSAPI') 
assert_equal "bob", parser.auths.first[:username] assert_equal nil, parser.auths.first[:password] parser = Mongo::URIParser.new('mongodb://bob@localhost?authMechanism=GSSAPI') assert_equal nil, parser.auths.first[:password] assert_raise_error MongoArgumentError do Mongo::URIParser.new('mongodb://bob:@localhost') end assert_raise_error MongoArgumentError do Mongo::URIParser.new('mongodb://bob@localhost') end end def test_complex_passwords parser = Mongo::URIParser.new('mongodb://bob:secret.word@a.example.com:27018/test') assert_equal "bob", parser.auths.first[:username] assert_equal "secret.word", parser.auths.first[:password] parser = Mongo::URIParser.new('mongodb://bob:s-_3#%R.t@a.example.com:27018/test') assert_equal "bob", parser.auths.first[:username] assert_equal "s-_3#%R.t", parser.auths.first[:password] assert_raise_error MongoArgumentError do Mongo::URIParser.new('mongodb://doctor:bad:wolf@gallifrey.com:27018/test') end assert_raise_error MongoArgumentError do Mongo::URIParser.new('mongodb://doctor:bow@tie@gallifrey.com:27018/test') end end def test_complex_usernames parser = Mongo::URIParser.new('mongodb://s-_3#%R.t:secret.word@a.example.com:27018/test') assert_equal "s-_3#%R.t", parser.auths.first[:username] assert_raise_error MongoArgumentError do Mongo::URIParser.new('mongodb://doc:tor:badwolf@gallifrey.com:27018/test') end assert_raise_error MongoArgumentError do Mongo::URIParser.new('mongodb://d@ctor:bowtie@gallifrey.com:27018/test') end end def test_username_with_encoded_symbol parser = Mongo::URIParser.new('mongodb://f%40o:bar@localhost/admin') username = parser.auths.first[:username] assert_equal 'f@o', username parser = Mongo::URIParser.new('mongodb://f%3Ao:bar@localhost/admin') username = parser.auths.first[:username] assert_equal 'f:o', username end def test_password_with_encoded_symbol parser = Mongo::URIParser.new('mongodb://foo:b%40r@localhost/admin') password = parser.auths.first[:password] assert_equal 'b@r', password parser = 
Mongo::URIParser.new('mongodb://foo:b%3Ar@localhost/admin')
    password = parser.auths.first[:password]
    assert_equal 'b:r', password
  end

  def test_opts_with_semicolon_separator
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=direct;slaveok=true;safe=true')
    assert_equal 'direct', parser.connect
    assert parser.direct?
    assert parser.slaveok
    assert parser.safe
  end

  def test_opts_with_amp_separator
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=direct&slaveok=true&safe=true')
    assert_equal 'direct', parser.connect
    assert parser.direct?
    assert parser.slaveok
    assert parser.safe
  end

  def test_opts_with_uri_encoded_stuff
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=%64%69%72%65%63%74&slaveok=%74%72%75%65&safe=true')
    assert_equal 'direct', parser.connect
    assert parser.direct?
    assert parser.slaveok
    assert parser.safe
  end

  def test_opts_made_invalid_by_mixed_separators
    assert_raise_error MongoArgumentError, "must not mix URL separators ; and &" do
      Mongo::URIParser.new('mongodb://localhost:27018?replicaset=foo;bar&slaveok=true&safe=true')
    end
  end

  def test_opts_safe
    parser = Mongo::URIParser.new('mongodb://localhost:27018?safe=true;w=2;journal=true;fsync=true;wtimeoutMS=200')
    assert parser.safe
    assert_equal 2, parser.w
    assert parser.fsync
    assert parser.journal
    assert_equal 200, parser.wtimeoutms
  end

  def test_opts_ssl
    parser = Mongo::URIParser.new('mongodb://localhost:27018?ssl=true;w=2;journal=true;fsync=true;wtimeoutMS=200')
    assert parser.ssl
  end

  def test_opts_nonsafe_timeout
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connectTimeoutMS=5500&socketTimeoutMS=500')
    assert_equal 5.5, parser.connecttimeoutms
    assert_equal 0.5, parser.sockettimeoutms
  end

  def test_opts_replica_set
    parser = Mongo::URIParser.new('mongodb://localhost:27018?connect=replicaset;replicaset=foo')
    assert_equal 'foo', parser.replicaset
    assert_equal 'replicaset', parser.connect
    assert parser.replicaset?
end def test_opts_conflicting_replica_set assert_raise_error MongoArgumentError, "connect=direct conflicts with setting a replicaset name" do Mongo::URIParser.new('mongodb://localhost:27018?connect=direct;replicaset=foo') end end def test_case_insensitivity parser = Mongo::URIParser.new('mongodb://localhost:27018?wtimeoutms=500&JOURNAL=true&SaFe=true') assert_equal 500, parser.wtimeoutms assert_equal true, parser.journal assert_equal true, parser.safe end def test_read_preference_option_primary parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=primary") assert_equal :primary, parser.readpreference end def test_read_preference_option_primary_preferred parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=primaryPreferred") assert_equal :primary_preferred, parser.readpreference end def test_read_preference_option_secondary parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=secondary") assert_equal :secondary, parser.readpreference end def test_read_preference_option_secondary_preferred parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=secondaryPreferred") assert_equal :secondary_preferred, parser.readpreference end def test_read_preference_option_nearest parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=nearest") assert_equal :nearest, parser.readpreference end def test_read_preference_option_with_invalid assert_raise_error MongoArgumentError do Mongo::URIParser.new("mongodb://localhost:27018?readPreference=invalid") end end def test_read_preference_connection_options parser = Mongo::URIParser.new("mongodb://localhost:27018?replicaset=test&readPreference=nearest") assert_equal :nearest, parser.connection_options[:read] end def test_read_preference_connection_options_with_no_replica_set parser = Mongo::URIParser.new("mongodb://localhost:27018?readPreference=nearest") assert_equal :nearest, parser.connection_options[:read] end def 
test_read_preference_connection_options_prefers_preference_over_slaveok parser = Mongo::URIParser.new("mongodb://localhost:27018?replicaset=test&readPreference=nearest&slaveok=true") assert_equal :nearest, parser.connection_options[:read] end def test_connection_when_sharded_with_no_options parser = Mongo::URIParser.new("mongodb://localhost:27017,localhost:27018") client = parser.connection({}, false, true) assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds assert_true client.mongos? end def test_connection_when_sharded_with_options parser = Mongo::URIParser.new("mongodb://localhost:27017,localhost:27018") client = parser.connection({ :refresh_interval => 10 }, false, true) assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds assert_equal 10, client.refresh_interval assert_true client.mongos? end def test_connection_when_sharded_with_uri_options parser = Mongo::URIParser.new("mongodb://localhost:27017,localhost:27018?readPreference=nearest") client = parser.connection({}, false, true) assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds assert_equal :nearest, client.read assert_true client.mongos? 
end def test_auth_source parser = Mongo::URIParser.new("mongodb://user:pass@localhost?authSource=foobar") assert_equal 'foobar', parser.authsource end def test_auth_mechanism parser = Mongo::URIParser.new("mongodb://user@localhost?authMechanism=MONGODB-X509") assert_equal 'MONGODB-X509', parser.authmechanism assert_raise_error MongoArgumentError do Mongo::URIParser.new("mongodb://user@localhost?authMechanism=INVALID") end end def test_sasl_plain parser = Mongo::URIParser.new("mongodb://user:pass@localhost?authMechanism=PLAIN") assert_equal 'PLAIN', parser.auths.first[:mechanism] assert_equal 'user', parser.auths.first[:username] assert_equal 'pass', parser.auths.first[:password] assert_equal 'admin', parser.auths.first[:source] parser = Mongo::URIParser.new("mongodb://foo%2Fbar%40example.net:pass@localhost/some_db?authMechanism=PLAIN") assert_equal 'PLAIN', parser.auths.first[:mechanism] assert_equal 'foo/bar@example.net', parser.auths.first[:username] assert_equal 'pass', parser.auths.first[:password] assert_equal 'some_db', parser.auths.first[:source] assert_raise_error MongoArgumentError do Mongo::URIParser.new("mongodb://user@localhost/some_db?authMechanism=PLAIN") end end def test_gssapi uri = "mongodb://foo%2Fbar%40example.net@localhost?authMechanism=GSSAPI;" parser = Mongo::URIParser.new(uri) assert_equal 'GSSAPI', parser.auths.first[:mechanism] assert_equal 'foo/bar@example.net', parser.auths.first[:username] uri = "mongodb://foo%2Fbar%40example.net@localhost?authMechanism=GSSAPI;" + "gssapiServiceName=mongodb;canonicalizeHostName=true" parser = Mongo::URIParser.new(uri) assert_equal 'GSSAPI', parser.auths.first[:mechanism] assert_equal 'foo/bar@example.net', parser.auths.first[:username] assert_equal 'mongodb', parser.auths.first[:extra][:gssapi_service_name] assert_equal true, parser.auths.first[:extra][:canonicalize_host_name] end end 
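The percent-decoding behavior covered by `test_username_with_encoded_symbol` and `test_password_with_encoded_symbol` comes down to un-escaping the userinfo portion of the URI. This sketch uses stdlib `CGI.unescape` with a deliberately simplified, hypothetical regex — the driver's real `URIParser` handles many more cases (multiple hosts, IPv6 brackets, options):

```ruby
require 'cgi'

# Hypothetical simplified extraction of userinfo and host from a mongodb:// URI.
uri = 'mongodb://f%40o:b%3Ar@localhost:27017/admin'
match = uri.match(%r{\Amongodb://(?:([^:@/]+)(?::([^@/]*))?@)?([^/?]+)})

username  = match[1] && CGI.unescape(match[1])  # percent-decoded username
password  = match[2] && CGI.unescape(match[2])  # percent-decoded password
host_port = match[3]                            # raw host:port, no decoding needed
```

Decoding happens only after splitting on `:` and `@`, which is why literal `@` or `:` inside credentials must be percent-encoded in the first place — the tests `test_complex_passwords` and `test_complex_usernames` assert exactly that restriction.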
# ruby-mongo-1.10.0/test/functional/write_concern_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

include Mongo

class WriteConcernTest < Test::Unit::TestCase
  context "Write concern propagation: " do
    setup do
      @con = standard_connection
      @db = @con[TEST_DB]
      @col = @db['test-safe']
      @col.create_index([[:a, 1]], :unique => true)
      @col.remove
    end

    # TODO: add write concern tests for remove

    should "propagate write concern options on insert" do
      @col.insert({:a => 1})

      assert_raise_error(OperationFailure, "duplicate key") do
        @col.insert({:a => 1})
      end
    end

    should "allow write concern override on insert" do
      @col.insert({:a => 1})
      @col.insert({:a => 1}, :w => 0)
    end

    should "propagate write concern option on update" do
      @col.insert({:a => 1})
      @col.insert({:a => 2})

      assert_raise_error(OperationFailure, "duplicate key") do
        @col.update({:a => 2}, {:a => 1})
      end
    end

    should "allow write concern override on update" do
      @col.insert({:a => 1})
      @col.insert({:a => 2})
      @col.update({:a => 2}, {:a => 1}, :w => 0)
    end
  end

  context "Write concern error objects" do
    setup do
      @con = standard_connection
      @db = @con[TEST_DB]
      @col = @db['test']
      @col.remove
      @col.insert({:a => 1})
      @col.insert({:a => 1})
      @col.insert({:a => 1})
    end

    should "return object on update" do
      response = @col.update({:a => 1}, {"$set" => {:a => 2}}, :multi => true)
      assert(response['updatedExisting'] ||
             @db.connection.wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS)) # TODO - review new write command return values
      assert(response['n'] == 3 ||
             @db.connection.wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS)) # TODO - update command top pending
    end

    should "return object on remove" do
      response = @col.remove({})
      assert_equal 3, response['n']
    end
  end

  context "Write concern in gridfs" do
    setup do
      @db = standard_connection.db(TEST_DB)
      @grid = Mongo::GridFileSystem.new(@db)
      @filename = 'sample'
    end

    teardown do
      @grid.delete(@filename)
    end

    should "acknowledge writes by default using md5" do
      file = @grid.open(@filename, 'w')
      file.write "Hello world!"
      file.close
      assert_equal file.client_md5, file.server_md5
    end

    should "allow for unacknowledged writes" do
      file = @grid.open(@filename, 'w', {:w => 0})
      file.write "Hello world!"
      file.close
      assert_nil file.client_md5, file.server_md5
    end

    should "support legacy write concern api" do
      file = @grid.open(@filename, 'w', {:safe => false})
      file.write "Hello world!"
      file.close
      assert_nil file.client_md5, file.server_md5
    end
  end
end

# ruby-mongo-1.10.0/test/helpers/general.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Redirects output while yielding a given block of code.
# # @return [Object] The result of the block. def silently warn_level = $VERBOSE $VERBOSE = nil begin result = yield ensure $VERBOSE = warn_level end result end class Hash def stringify_keys dup.stringify_keys! end def stringify_keys! keys.each do |key| self[key.to_s] = delete(key) end self end def except(*keys) dup.except!(*keys) end # Replaces the hash without the given keys. def except!(*keys) keys.each { |key| delete(key) } self end end ruby-mongo-1.10.0/test/helpers/test_unit.rb000066400000000000000000000234001233461006100206460ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. TEST_DB = 'ruby_test' unless defined? TEST_DB TEST_HOST = ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost' unless defined? TEST_HOST TEST_DATA = File.join(File.dirname(__FILE__), 'fixtures/data') TEST_BASE = Test::Unit::TestCase unless defined? TEST_PORT TEST_PORT = if ENV['MONGO_RUBY_DRIVER_PORT'] ENV['MONGO_RUBY_DRIVER_PORT'].to_i else Mongo::MongoClient::DEFAULT_PORT end end class Test::Unit::TestCase include Mongo include BSON # Handles creating a pre-defined MongoDB cluster for integration testing. # # @param kind=nil [Symbol] Type of cluster (:rs or :sc). # @param opts={} [Hash] Options to be passed through to the cluster manager. # # @return [ClusterManager] The cluster manager instance being used. 
def ensure_cluster(kind=nil, opts={}) cluster_instance = nil class_vars = TEST_BASE.class_eval { class_variables } if class_vars.include?("@@cluster_#{kind}") || class_vars.include?("@@cluster_#{kind}".to_sym) cluster_instance = TEST_BASE.class_eval { class_variable_get("@@cluster_#{kind}") } end unless cluster_instance if kind == :rs cluster_opts = Config::DEFAULT_REPLICA_SET.dup else cluster_opts = Config::DEFAULT_SHARDED_SIMPLE.dup end cluster_opts.merge!(opts) cluster_opts.merge!(:dbpath => ENV['MONGO_DBPATH'] || 'data') config = Config.cluster(cluster_opts) cluster_instance = Config::ClusterManager.new(config) TEST_BASE.class_eval { class_variable_set("@@cluster_#{kind}", cluster_instance) } end cluster_instance.start instance_variable_set("@#{kind}", cluster_instance) end # Generic helper to rescue and retry from a connection failure. # # @param max_retries=30 [Integer] The number of times to attempt a retry. # # @return [Object] The block result. def rescue_connection_failure(max_retries=30) retries = 0 begin yield rescue Mongo::ConnectionFailure => ex retries += 1 raise ex if retries > max_retries sleep(2) retry end end # Creates and connects a standard, pre-defined MongoClient instance. # # @param options={} [Hash] Options to be passed to the client instance. # @param legacy=false [Boolean] When true, uses deprecated Mongo::Connection. # # @return [MongoClient] The client instance. def self.standard_connection(options={}, legacy=false) if legacy Connection.new(TEST_HOST, TEST_PORT, options) else MongoClient.new(TEST_HOST, TEST_PORT, options) end end # Creates and connects a standard, pre-defined MongoClient instance. # # @param options={} [Hash] Options to be passed to the client instance. # @param legacy=false [Boolean] When true, uses deprecated Mongo::Connection. # # @return [MongoClient] The client instance. 
def standard_connection(options={}, legacy=false) self.class.standard_connection(options, legacy) end def self.host_port "#{mongo_host}:#{mongo_port}" end def self.mongo_host TEST_HOST end def self.mongo_port TEST_PORT end def host_port self.class.host_port end def mongo_host self.class.mongo_host end def mongo_port self.class.mongo_port end def method_name caller[0]=~/`(.*?)'/ $1 end def perform_step_down(member) start = Time.now timeout = 20 # seconds begin step_down_command = BSON::OrderedHash.new step_down_command[:replSetStepDown] = 30 member['admin'].command(step_down_command) rescue Mongo::OperationFailure => e retry unless (Time.now - start) > timeout raise e end end def new_mock_socket(host='localhost', port=27017) socket = Object.new socket.stubs(:setsockopt).with(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1) socket.stubs(:close) socket.stubs(:closed?) socket.stubs(:checkin) socket.stubs(:pool) socket end def new_mock_unix_socket(sockfile='/tmp/mongod.sock') socket = Object.new socket.stubs(:setsockopt).with(Socket::IPPROTO_TCP) socket.stubs(:close) socket.stubs(:closed?) socket end def new_mock_db Object.new end def assert_raise_error(klass, message=nil) begin yield rescue => e if klass.to_s != e.class.to_s flunk "Expected exception class #{klass} but got #{e.class}.\n #{e.backtrace}" end if message && !e.message.include?(message) p e.backtrace flunk "#{e.message} does not include #{message}.\n#{e.backtrace}" end else flunk "Expected assertion #{klass} but none was raised." 
end end def match_document(key, expected, actual) # special cases for Regexp match, BSON::ObjectId, Range if expected.is_a?(Hash) && actual.is_a?(Hash) expected_keys = expected.keys.sort actual_keys = actual.keys.sort #currently allow extra fields in actual as the following check for equality of keys is commented out #raise "field:#{key.inspect} - Hash keys expected:#{expected_keys.inspect} actual:#{actual_keys.inspect}" if expected_keys != actual_keys expected_keys.each{|k| match_document(k, expected[k], actual[k])} elsif expected.is_a?(Array) && actual.is_a?(Array) raise "field:#{key.inspect} - Array size expected:#{expected.size} actual:#{actual.size}" if expected.size != actual.size (0...expected.size).each{|i| match_document(i, expected[i], actual[i])} elsif expected.is_a?(Regexp) && actual.is_a?(String) raise "field:#{key.inspect} - Regexp expected:#{expected.inspect} actual:#{actual.inspect}" if expected !~ actual elsif expected.is_a?(BSON::ObjectId) && actual.is_a?(BSON::ObjectId) # match type but not value elsif expected.is_a?(Range) raise "field:#{key.inspect} - Range expected:#{expected.inspect} actual:#{actual.inspect}" if !expected.include?(actual) elsif expected.is_a?(Set) raise "field:#{key.inspect} - Set expected:#{expected.inspect} actual:#{actual.inspect}" if !expected.include?(actual) else raise "field:#{key.inspect} - expected:#{expected.inspect} actual:#{actual.inspect}" if expected != actual end true end def assert_match_document(expected, actual, message = '') match = begin match_document('', expected, actual) rescue => ex message = ex.message + ' - ' + message false end assert(match, message) end def with_forced_timeout(client, &block) cmd_line_args = client['admin'].command({ :getCmdLineOpts => 1 })['argv'] if cmd_line_args.include?('enableTestCommands=1') && client.server_version >= "2.5.3" begin #Force any query or command with valid non-zero max time to fail (SERVER-10650) fail_point_cmd = OrderedHash.new 
fail_point_cmd[:configureFailPoint] = 'maxTimeAlwaysTimeOut' fail_point_cmd[:mode] = 'alwaysOn' client['admin'].command(fail_point_cmd) yield fail_point_cmd[:mode] = 'off' client['admin'].command(fail_point_cmd) end end end def with_auth(client, &block) cmd_line_args = client['admin'].command({ :getCmdLineOpts => 1 })['parsed'] yield if cmd_line_args.include?('auth') end def with_default_journaling(client, &block) cmd_line_args = client['admin'].command({ :getCmdLineOpts => 1 })['parsed'] unless client.server_version < "2.0" || cmd_line_args.include?('nojournal') yield end end def with_no_replication(client, &block) if client.class == MongoClient yield end end def with_no_journaling(client, &block) cmd_line_args = client['admin'].command({ :getCmdLineOpts => 1 })['parsed'] unless client.server_version < "2.0" || !cmd_line_args.include?('nojournal') yield end end def with_ipv6_enabled(client, &block) cmd_line_args = client['admin'].command({ :getCmdLineOpts => 1 })['parsed'] if cmd_line_args.include?('ipv6') yield end end def with_write_commands(client, &block) wire_version = Mongo::MongoClient::BATCH_COMMANDS if client.primary_wire_version_feature?(wire_version) yield wire_version end end def with_preserved_env_uri(new_uri=nil, &block) old_mongodb_uri = ENV['MONGODB_URI'] begin ENV['MONGODB_URI'] = new_uri yield ensure ENV['MONGODB_URI'] = old_mongodb_uri end end def with_write_operations(client, &block) wire_version = Mongo::MongoClient::RELEASE_2_4_AND_BEFORE if client.primary_wire_version_feature?(wire_version) client.class.class_eval(%Q{ alias :old_use_write_command? :use_write_command? def use_write_command?(write_concern) false end }) yield wire_version client.class.class_eval(%Q{ alias :use_write_command? :old_use_write_command? 
}) end end def with_write_commands_and_operations(client, &block) with_write_commands(client, &block) with_write_operations(client, &block) end def batch_commands?(wire_version) wire_version >= Mongo::MongoClient::BATCH_COMMANDS end end # Before and after hooks for the entire test run # handles mop up after the cluster manager is done. Test::Unit.at_exit do TEST_BASE.class_eval { class_variables }.select { |v| v =~ /@@cluster_/ }.each do |cluster| TEST_BASE.class_eval { class_variable_get(cluster) }.stop end end ruby-mongo-1.10.0/test/replica_set/000077500000000000000000000000001233461006100171345ustar00rootroot00000000000000ruby-mongo-1.10.0/test/replica_set/authentication_test.rb000066400000000000000000000022721233461006100235420ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
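The recursive `match_document` assertion helper defined in `test_unit.rb` above is a type-directed comparison: regexps match strings, ranges and sets test membership, hashes and arrays recurse, and everything else falls back to `==`. A condensed, hypothetical predicate version of those rules (`doc_match?` is an illustrative name, not driver API, and it deliberately skips the helper's error messages and the BSON::ObjectId type-only case):

```ruby
require 'set'

# Condensed sketch of match_document's matching rules as a boolean predicate.
def doc_match?(expected, actual)
  case expected
  when Hash
    # Extra fields in actual are tolerated, mirroring the helper's behavior.
    actual.is_a?(Hash) && expected.keys.all? { |k| doc_match?(expected[k], actual[k]) }
  when Array
    actual.is_a?(Array) && expected.size == actual.size &&
      expected.each_index.all? { |i| doc_match?(expected[i], actual[i]) }
  when Regexp
    actual.is_a?(String) && !!(expected =~ actual)
  when Range, Set
    expected.include?(actual)
  else
    expected == actual
  end
end
```

Driving expectations with regexps and ranges like this keeps tests stable against values that vary per run (timestamps, generated ids) while still pinning down document shape.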
require 'test_helper' require 'shared/authentication/basic_auth_shared' require 'shared/authentication/sasl_plain_shared' require 'shared/authentication/bulk_api_auth_shared' require 'shared/authentication/gssapi_shared' class ReplicaSetAuthenticationTest < Test::Unit::TestCase include Mongo include BasicAuthTests include SASLPlainTests include BulkAPIAuthTests include GSSAPITests def setup ensure_cluster(:rs) @client = MongoReplicaSetClient.new(@rs.repl_set_seeds) @version = @client.server_version @db = @client[TEST_DB] @host_info = @rs.repl_set_seeds.join(',') end end ruby-mongo-1.10.0/test/replica_set/basic_test.rb000066400000000000000000000136351233461006100216110ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class ReplicaSetBasicTest < Test::Unit::TestCase def setup ensure_cluster(:rs) end def test_connect client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name) assert client.connected? assert_equal @rs.primary_name, client.primary.join(':') assert_equal @rs.secondary_names.sort, client.secondaries.collect{|s| s.join(':')}.sort assert_equal @rs.arbiter_names.sort, client.arbiters.collect{|s| s.join(':')}.sort client.close silently do client = MongoReplicaSetClient.new(@rs.repl_set_seeds_old, :name => @rs.repl_set_name) end assert client.connected? 
client.close end def test_safe_option client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name) assert client.connected? assert client.write_concern[:w] > 0 client.close client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name, :w => 0) assert client.connected? assert client.write_concern[:w] < 1 client.close client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name, :w => 2) assert client.connected? assert client.write_concern[:w] > 0 client.close end def test_multiple_concurrent_replica_set_connection client1 = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name) client2 = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name) assert client1.connected? assert client2.connected? assert client1.manager != client2.manager assert client1.local_manager != client2.local_manager client1.close client2.close end def test_cache_original_seed_nodes host = @rs.servers.first.host seeds = @rs.repl_set_seeds << "#{host}:19356" client = MongoReplicaSetClient.new(seeds, :name => @rs.repl_set_name) assert client.connected? assert client.seeds.include?([host, 19356]), "Original seed nodes not cached!" assert_equal [host, 19356], client.seeds.last, "Original seed nodes not cached!" 
client.close end def test_accessors seeds = @rs.repl_set_seeds args = {:name => @rs.repl_set_name} client = MongoReplicaSetClient.new(seeds, args) assert_equal @rs.primary_name, [client.host, client.port].join(':') assert_equal client.host, client.primary_pool.host assert_equal client.port, client.primary_pool.port assert_equal 2, client.secondaries.length assert_equal 2, client.secondary_pools.length assert_equal @rs.repl_set_name, client.replica_set_name assert client.secondary_pools.include?(client.read_pool({:mode => :secondary})) assert_equal 90, client.refresh_interval assert_equal client.refresh_mode, false client.close end def test_write_commands_and_operations seeds = @rs.repl_set_seeds args = {:name => @rs.repl_set_name} @client = MongoReplicaSetClient.new(seeds, args) @coll = @client[TEST_DB]['test-write-commands-and-operations'] with_write_commands_and_operations(@client) do @coll.remove @coll.insert({:foo => "bar"}) assert_equal(1, @coll.count) end end context "Socket pools" do context "checking out writers" do setup do seeds = @rs.repl_set_seeds args = {:name => @rs.repl_set_name} @client = MongoReplicaSetClient.new(seeds, args) @coll = @client[TEST_DB]['test-connection-exceptions'] end should "close the connection on send_message for major exceptions" do with_write_operations(@client) do # explicit even if w 0 maps to write operations @client.expects(:checkout_writer).raises(SystemStackError) @client.expects(:close) begin @coll.insert({:foo => "bar"}, :w => 0) rescue SystemStackError end end end should "close the connection on send_write_command for major exceptions" do with_write_commands(@client) do @client.expects(:checkout_reader).raises(SystemStackError) @client.expects(:close) begin @coll.insert({:foo => "bar"}) rescue SystemStackError end end end should "close the connection on send_message_with_gle for major exceptions" do with_write_operations(@client) do @client.expects(:checkout_writer).raises(SystemStackError) @client.expects(:close) 
begin @coll.insert({:foo => "bar"}) rescue SystemStackError end end end should "close the connection on receive_message for major exceptions" do @client.expects(:checkout_reader).raises(SystemStackError) @client.expects(:close) begin @coll.find({}, :read => :primary).next rescue SystemStackError end end end context "checking out readers" do setup do seeds = @rs.repl_set_seeds args = {:name => @rs.repl_set_name} @client = MongoReplicaSetClient.new(seeds, args) @coll = @client[TEST_DB]['test-connection-exceptions'] end should "close the connection on receive_message for major exceptions" do @client.expects(:checkout_reader).raises(SystemStackError) @client.expects(:close) begin @coll.find({}, :read => :secondary).next rescue SystemStackError end end end end end ruby-mongo-1.10.0/test/replica_set/client_test.rb000066400000000000000000000252541233461006100220060ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class ReplicaSetClientTest < Test::Unit::TestCase def setup ensure_cluster(:rs) @client = nil end def teardown @client.close if @client end def test_reconnection @client = MongoReplicaSetClient.new @rs.repl_set_seeds assert @client.connected? manager = @client.local_manager @client.close assert !@client.connected? assert !@client.local_manager @client.connect assert @client.connected? assert_equal @client.local_manager, manager end # TODO: test connect timeout. 
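The failover tests below lean heavily on a `rescue_connection_failure` helper from `test_helper`, which retries a block while the replica set elects a new primary. A self-contained sketch of that retry idiom — the retry cap and sleep interval are assumptions, and `ConnectionFailure` stands in for `Mongo::ConnectionFailure`:

```ruby
# Retry a block until it stops raising ConnectionFailure, up to a cap.
# This is the idiom the tests use to ride out a replica-set election.
class ConnectionFailure < StandardError; end

def rescue_connection_failure(max_retries = 30)
  retries = 0
  begin
    yield
  rescue ConnectionFailure
    retries += 1
    raise if retries > max_retries
    sleep(0.01) # real helper sleeps longer between attempts
    retry
  end
end

attempts = 0
result = rescue_connection_failure do
  attempts += 1
  raise ConnectionFailure, "no primary" if attempts < 3
  :reconnected
end
# result is :reconnected after two simulated failures
```

Wrapping only the first post-failover operation is enough; once a node answers, the client has refreshed its view of the set and subsequent operations go through unwrapped.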
def test_connect_with_deprecated_multi silently do # guaranteed to have one data-holding member @client = MongoClient.multi(@rs.repl_set_seeds_old, :name => @rs.repl_set_name) end assert !@client.nil? assert @client.connected? end def test_connect_bad_name assert_raise_error(ReplicaSetConnectionError, "-wrong") do @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name + "-wrong") end end def test_connect_with_first_secondary_node_terminated @rs.secondaries.first.stop rescue_connection_failure do @client = MongoReplicaSetClient.new @rs.repl_set_seeds end assert @client.connected? end def test_connect_with_last_secondary_node_terminated @rs.secondaries.last.stop rescue_connection_failure do @client = MongoReplicaSetClient.new @rs.repl_set_seeds end assert @client.connected? end def test_connect_with_primary_stepped_down @client = MongoReplicaSetClient.new @rs.repl_set_seeds @client[TEST_DB]['bar'].save({:a => 1}, {:w => 3}) assert @client[TEST_DB]['bar'].find_one primary = Mongo::MongoClient.new(*@client.primary) assert_raise Mongo::ConnectionFailure do perform_step_down(primary) end assert @client.connected? rescue_connection_failure do @client[TEST_DB]['bar'].find_one end @client[TEST_DB]['bar'].find_one end def test_connect_with_primary_killed @client = MongoReplicaSetClient.new @rs.repl_set_seeds assert @client.connected? @client[TEST_DB]['bar'].save({:a => 1}, {:w => 3}) assert @client[TEST_DB]['bar'].find_one @rs.primary.kill(Signal.list['KILL']) sleep(3) rescue_connection_failure do @client[TEST_DB]['bar'].find_one end @client[TEST_DB]['bar'].find_one end def test_save_with_primary_stepped_down @client = MongoReplicaSetClient.new @rs.repl_set_seeds assert @client.connected? 
primary = Mongo::MongoClient.new(*@client.primary) assert_raise Mongo::ConnectionFailure do perform_step_down(primary) end rescue_connection_failure do @client[TEST_DB]['bar'].save({:a => 1}, {:w => 2}) end @client[TEST_DB]['bar'].find_one end # def test_connect_with_first_node_removed # @client = MongoReplicaSetClient.new @rs.repl_set_seeds # @client[TEST_DB]['bar'].save({:a => 1}, {:w => 3}) # # Make sure everyone's views of optimes are caught up # loop do # break if @rs.repl_set_get_status.all? do |status| # members = status['members'] # primary_optime = members.find{|m| m['state'] == 1}['optime'].seconds # members.any?{|m| m['state'] == 2 && primary_optime - m['optime'].seconds < 5} # end # sleep 1 # end # old_primary = [@client.primary_pool.host, @client.primary_pool.port] # old_primary_conn = Mongo::MongoClient.new(*old_primary) # assert_raise Mongo::ConnectionFailure do # perform_step_down(old_primary_conn) # end # # Wait for new primary # rescue_connection_failure do # sleep 1 until @rs.primary # end # new_primary = [@rs.primary.host, @rs.primary.port] # new_primary_conn = Mongo::MongoClient.new(*new_primary) # assert new_primary != old_primary # config = nil # # Remove old primary from replset # rescue_connection_failure do # config = @client['local']['system.replset'].find_one # end # old_member = config['members'].select {|m| m['host'] == old_primary.join(':')}.first # config['members'].reject! 
{|m| m['host'] == old_primary.join(':')} # config['version'] += 1 # begin # new_primary_conn['admin'].command({'replSetReconfig' => config}) # rescue Mongo::ConnectionFailure # end # # Wait for the dust to settle # rescue_connection_failure do # assert @client[TEST_DB]['bar'].find_one # end # begin # # Make sure a new connection skips the old primary # @new_conn = MongoReplicaSetClient.new @rs.repl_set_seeds # @new_conn.connect # new_nodes = @new_conn.secondaries + [@new_conn.primary] # assert !new_nodes.include?(old_primary) # ensure # # Add the old primary back # config['members'] << old_member # config['version'] += 1 # begin # new_primary_conn['admin'].command({'replSetReconfig' => config}) # rescue Mongo::ConnectionFailure # end # end # end def test_connect_with_hung_first_node hung_node = nil begin hung_node = IO.popen('nc -lk 127.0.0.1 29999 >/dev/null 2>&1') Timeout.timeout(3) do @client = MongoReplicaSetClient.new(['localhost:29999'] + @rs.repl_set_seeds, :connect_timeout => 2) @client.connect end assert ['localhost:29999'] != @client.primary assert !@client.secondaries.include?('localhost:29999') ensure Process.kill("KILL", hung_node.pid) if hung_node end end def test_connect_with_connection_string @client = MongoClient.from_uri("mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name}") assert !@client.nil? assert @client.connected? end def test_connect_with_connection_string_in_env_var uri = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name}" with_preserved_env_uri(uri) do @client = MongoReplicaSetClient.new assert !@client.nil? assert_equal 2, @client.seeds.length assert_equal @rs.replicas[0].host, @client.seeds[0][0] assert_equal @rs.replicas[1].host, @client.seeds[1][0] assert_equal @rs.replicas[0].port, @client.seeds[0][1] assert_equal @rs.replicas[1].port, @client.seeds[1][1] assert_equal @rs.repl_set_name, @client.replica_set_name assert @client.connected? 
end end def test_connect_with_connection_string_in_implicit_mongodb_uri uri = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name}" with_preserved_env_uri(uri) do @client = MongoClient.from_uri assert !@client.nil? assert_equal 2, @client.seeds.length assert_equal @rs.replicas[0].host, @client.seeds[0][0] assert_equal @rs.replicas[1].host, @client.seeds[1][0] assert_equal @rs.replicas[0].port, @client.seeds[0][1] assert_equal @rs.replicas[1].port, @client.seeds[1][1] assert_equal @rs.repl_set_name, @client.replica_set_name assert @client.connected? end end def test_connect_with_new_seed_format @client = MongoReplicaSetClient.new @rs.repl_set_seeds assert @client.connected? end def test_connect_with_old_seed_format silently do @client = MongoReplicaSetClient.new(@rs.repl_set_seeds_old) end assert @client.connected? end def test_connect_with_full_connection_string @client = MongoClient.from_uri("mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name};w=2;fsync=true;slaveok=true") assert !@client.nil? assert @client.connected? assert_equal 2, @client.write_concern[:w] assert @client.write_concern[:fsync] assert @client.read_pool end def test_connect_with_full_connection_string_in_env_var uri = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name};w=2;fsync=true;slaveok=true" with_preserved_env_uri(uri) do @client = MongoReplicaSetClient.new assert !@client.nil? assert @client.connected? assert_equal 2, @client.write_concern[:w] assert @client.write_concern[:fsync] assert @client.read_pool end end def test_connect_options_override_env_var uri = "mongodb://#{@rs.replicas[0].host_port},#{@rs.replicas[1].host_port}?replicaset=#{@rs.repl_set_name};w=2;fsync=true;slaveok=true" with_preserved_env_uri(uri) do @client = MongoReplicaSetClient.new({:w => 0}) assert !@client.nil? assert @client.connected? 
assert_equal 0, @client.write_concern[:w] end end def test_ipv6 @client = MongoReplicaSetClient.new(@rs.repl_set_seeds) with_ipv6_enabled(@client) do assert MongoReplicaSetClient.new(["[::1]:#{@rs.replicas[0].port}"]) end end def test_ipv6_with_uri @client = MongoReplicaSetClient.new(@rs.repl_set_seeds) with_ipv6_enabled(@client) do uri = "mongodb://[::1]:#{@rs.replicas[0].port},[::1]:#{@rs.replicas[1].port}" with_preserved_env_uri(uri) do assert MongoReplicaSetClient.new end end end def test_ipv6_with_uri_opts @client = MongoReplicaSetClient.new(@rs.repl_set_seeds) with_ipv6_enabled(@client) do uri = "mongodb://[::1]:#{@rs.replicas[0].port},[::1]:#{@rs.replicas[1].port}/?safe=true;" with_preserved_env_uri(uri) do assert MongoReplicaSetClient.new end end end def test_ipv6_with_different_formats @client = MongoReplicaSetClient.new(@rs.repl_set_seeds) with_ipv6_enabled(@client) do uri = "mongodb://[::1]:#{@rs.replicas[0].port},localhost:#{@rs.replicas[1].port}" with_preserved_env_uri(uri) do assert MongoReplicaSetClient.new end end end def test_find_and_modify_with_secondary_read_preference @client = MongoReplicaSetClient.new @rs.repl_set_seeds collection = @client[TEST_DB].collection('test', :read => :secondary) id = BSON::ObjectId.new collection << { :a => id, :processed => false } collection.find_and_modify( :query => { 'a' => id }, :update => { "$set" => { :processed => true }} ) assert_equal true, collection.find_one({ 'a' => id }, :read => :primary)['processed'] end end ruby-mongo-1.10.0/test/replica_set/complex_connect_test.rb000066400000000000000000000042721233461006100237050ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class ComplexConnectTest < Test::Unit::TestCase def setup ensure_cluster(:rs) end def teardown @client.close if defined?(@conn) && @conn end def test_complex_connect host = @rs.servers.first.host primary = MongoClient.new(host, @rs.primary.port) @client = MongoReplicaSetClient.new([ @rs.servers[2].host_port, @rs.servers[1].host_port, @rs.servers[0].host_port ]) version = @client.server_version @client[TEST_DB]['complext-connect-test'].insert({:a => 1}) assert @client[TEST_DB]['complext-connect-test'].find_one config = primary['local']['system.replset'].find_one old_config = config.dup config['version'] += 1 # eliminate exception: can't find self in new replset config port_to_delete = @rs.servers.collect(&:port).find{|port| port != primary.port}.to_s config['members'].delete_if do |member| member['host'].include?(port_to_delete) end assert_raise ConnectionFailure do primary['admin'].command({:replSetReconfig => config}) end @rs.start assert_raise ConnectionFailure do perform_step_down(primary) end # isMaster is currently broken in 2.1+ when called on removed nodes puts version if version < "2.1" rescue_connection_failure do assert @client[TEST_DB]['complext-connect-test'].find_one end assert @client[TEST_DB]['complext-connect-test'].find_one end primary = MongoClient.new(host, @rs.primary.port) assert_raise ConnectionFailure do primary['admin'].command({:replSetReconfig => old_config}) end end end ruby-mongo-1.10.0/test/replica_set/connection_test.rb000066400000000000000000000116171233461006100226650ustar00rootroot00000000000000# Copyright (C) 
2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class ReplicaSetConnectionTest < Test::Unit::TestCase def setup ensure_cluster(:rs) end def test_connect_with_deprecated_multi silently do @connection = Connection.multi(@rs.repl_set_seeds_old, :name => @rs.repl_set_name) end assert !@connection.nil? assert @connection.connected? end def test_connect_bad_name assert_raise_error(ReplicaSetConnectionError, "-wrong") do @connection = ReplSetConnection.new(@rs.repl_set_seeds, :safe => true, :name => @rs.repl_set_name + "-wrong") end end def test_connect_with_first_secondary_node_terminated @rs.secondaries.first.stop rescue_connection_failure do @connection = ReplSetConnection.new @rs.repl_set_seeds end assert @connection.connected? end def test_connect_with_last_secondary_node_terminated @rs.secondaries.last.stop rescue_connection_failure do @connection = ReplSetConnection.new @rs.repl_set_seeds end assert @connection.connected? end def test_connect_with_connection_string @connection = Connection.from_uri("mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name}") assert !@connection.nil? assert @connection.connected? end def test_connect_with_connection_string_in_env_var uri = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name}" with_preserved_env_uri(uri) do @connection = ReplSetConnection.new assert !@connection.nil? 
assert_equal 3, @connection.seeds.length assert_equal @rs.replicas[0].host, @connection.seeds[0][0] assert_equal @rs.replicas[1].host, @connection.seeds[1][0] assert_equal @rs.replicas[2].host, @connection.seeds[2][0] assert_equal @rs.replicas[0].port, @connection.seeds[0][1] assert_equal @rs.replicas[1].port, @connection.seeds[1][1] assert_equal @rs.replicas[2].port, @connection.seeds[2][1] assert_equal @rs.repl_set_name, @connection.replica_set_name assert @connection.connected? end end def test_connect_with_connection_string_in_implicit_mongodb_uri uri = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name}" with_preserved_env_uri(uri) do @connection = Connection.from_uri assert !@connection.nil? assert_equal 3, @connection.seeds.length assert_equal @rs.replicas[0].host, @connection.seeds[0][0] assert_equal @rs.replicas[1].host, @connection.seeds[1][0] assert_equal @rs.replicas[2].host, @connection.seeds[2][0] assert_equal @rs.replicas[0].port, @connection.seeds[0][1] assert_equal @rs.replicas[1].port, @connection.seeds[1][1] assert_equal @rs.replicas[2].port, @connection.seeds[2][1] assert_equal @rs.repl_set_name, @connection.replica_set_name assert @connection.connected? end end def test_connect_with_new_seed_format @connection = ReplSetConnection.new @rs.repl_set_seeds assert @connection.connected? end def test_connect_with_old_seed_format silently do @connection = ReplSetConnection.new(@rs.repl_set_seeds_old) end assert @connection.connected? end def test_connect_with_full_connection_string @connection = Connection.from_uri("mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name};safe=true;w=2;fsync=true;slaveok=true") assert !@connection.nil? assert @connection.connected? 
assert_equal 2, @connection.write_concern[:w] assert @connection.write_concern[:fsync] assert @connection.read_pool end def test_connect_with_full_connection_string_in_env_var uri = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name};safe=true;w=2;fsync=true;slaveok=true" with_preserved_env_uri(uri) do @connection = ReplSetConnection.new assert !@connection.nil? assert @connection.connected? assert_equal 2, @connection.write_concern[:w] assert @connection.write_concern[:fsync] assert @connection.read_pool end end def test_connect_options_override_env_var uri = "mongodb://#{@rs.repl_set_seeds_uri}?replicaset=#{@rs.repl_set_name};safe=true;w=2;fsync=true;slaveok=true" with_preserved_env_uri(uri) do @connection = ReplSetConnection.new({:safe => {:w => 1}}) assert !@connection.nil? assert @connection.connected? assert_equal 1, @connection.write_concern[:w] end end end ruby-mongo-1.10.0/test/replica_set/count_test.rb000066400000000000000000000042741233461006100216570ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
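The URI tests above assert that a seed list like `mongodb://host1:port1,host2:port2?replicaset=name` is split into `[host, port]` pairs plus option flags. An illustrative toy parser — not the driver's actual URI parser, and it ignores credentials and database paths:

```ruby
# Toy parser for the seed-list URIs exercised above. Options may be
# separated by ';' or '&', matching the strings used in these tests.
def parse_seed_uri(uri)
  body = uri.sub(%r{\Amongodb://}, '')
  hosts_part, query = body.split('?', 2)
  seeds = hosts_part.split(',').map do |host_port|
    host, port = host_port.split(':')
    [host, port.to_i]
  end
  opts = {}
  (query || '').split(/[;&]/).each do |pair|
    key, value = pair.split('=', 2)
    opts[key.downcase] = value
  end
  [seeds, opts]
end

seeds, opts = parse_seed_uri(
  "mongodb://localhost:27017,localhost:27018?replicaset=rs0;w=2")
# seeds => [["localhost", 27017], ["localhost", 27018]]
# opts  => {"replicaset" => "rs0", "w" => "2"}
```

This mirrors the shape the assertions check: `client.seeds[i][0]` is the host string and `client.seeds[i][1]` the integer port, with `replica_set_name` drawn from the `replicaset` option.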
require 'test_helper' class ReplicaSetCountTest < Test::Unit::TestCase def setup ensure_cluster(:rs) @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :read => :primary_preferred) assert @client.primary_pool @primary = MongoClient.new(@client.primary_pool.host, @client.primary_pool.port) @db = @client.db(TEST_DB) @db.drop_collection("test-sets") @coll = @db.collection("test-sets") end def teardown @client.close if @conn end def test_correct_count_after_insertion_reconnect @coll.insert({:a => 20}, :w => 3, :wtimeout => 10000) assert_equal 1, @coll.count # Kill the current master node @rs.primary.stop rescue_connection_failure do @coll.insert({:a => 30}) end @coll.insert({:a => 40}) assert_equal 3, @coll.count, "Second count failed" end def test_count_command_sent_to_primary @coll.insert({:a => 20}, :w => 3, :wtimeout => 10000) count_before = @primary['admin'].command({:serverStatus => 1})['opcounters']['command'] assert_equal 1, @coll.count count_after = @primary['admin'].command({:serverStatus => 1})['opcounters']['command'] assert_equal 2, count_after - count_before end def test_count_with_read @coll.insert({:a => 20}, :w => 3, :wtimeout => 10000) count_before = @primary['admin'].command({:serverStatus => 1})['opcounters']['command'] assert_equal 1, @coll.count(:read => :secondary) assert_equal 1, @coll.find({}, :read => :secondary).count() count_after = @primary['admin'].command({:serverStatus => 1})['opcounters']['command'] assert_equal 1, count_after - count_before end end ruby-mongo-1.10.0/test/replica_set/cursor_test.rb000066400000000000000000000142741233461006100220450ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class ReplicaSetCursorTest < Test::Unit::TestCase def setup ensure_cluster(:rs) end def test_get_more_primary setup_client(:primary) cursor_get_more_test(:primary) end def test_get_more_secondary setup_client(:secondary) cursor_get_more_test(:secondary) end def test_close_primary setup_client(:primary) kill_cursor_test(:primary) end def test_close_secondary setup_client(:secondary) kill_cursor_test(:secondary) end def test_cursors_get_closed setup_client assert_cursors_on_members end def test_cursors_get_closed_secondary setup_client(:secondary) assert_cursors_on_members(:secondary) end def test_cursors_get_closed_secondary_query setup_client(:primary) assert_cursors_on_members(:secondary) end def test_intervening_query_secondary setup_client(:primary) refresh_while_iterating(:secondary) end private def setup_client(read=:primary) route_read ||= read # Setup ReplicaSet Connection @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :read => read) @db = @client.db(TEST_DB) @db.drop_collection("cursor_tests") @coll = @db.collection("cursor_tests") insert_docs # Setup Direct Connections @primary = Mongo::MongoClient.new(*@client.manager.primary) end def insert_docs @n_docs = 102 # batch size is 101 @n_docs.times do |i| @coll.insert({ "x" => i }, :w => 3) end end def set_read_client_and_tag(read) read_opts = {:read => read} @tag = (0...3).map{|i|i.to_s}.detect do |tag| begin read_opts[:tag_sets] = [{:node => tag}] unless read == :primary cursor = @coll.find({}, read_opts) cursor.next pool = cursor.instance_variable_get(:@pool) cursor.close @read 
= Mongo::MongoClient.new(pool.host, pool.port, :slave_ok => true) tag rescue Mongo::ConnectionFailure false end end end def route_query(read) read_opts = {:read => read} read_opts[:tag_sets] = [{:node => @tag}] unless read == :primary object_id = BSON::ObjectId.new read_opts[:comment] = object_id # set profiling level to 2 on client and member to which the query will be routed @client.db(TEST_DB).profiling_level = :all @client.secondaries.each do |node| node = Mongo::MongoClient.new(node[0], node[1], :slave_ok => true) node.db(TEST_DB).profiling_level = :all end @cursor = @coll.find({}, read_opts) @cursor.next # on client and other members set profiling level to 0 @client.db(TEST_DB).profiling_level = :off @client.secondaries.each do |node| node = Mongo::MongoClient.new(node[0], node[1], :slave_ok => true) node.db(TEST_DB).profiling_level = :off end # do a query on system.profile of the reader to see if it was used for the query profiled_queries = @read.db(TEST_DB).collection('system.profile').find({ 'ns' => "#{TEST_DB}.cursor_tests", "query.$comment" => object_id }) assert_equal 1, profiled_queries.count end # batch from send_initial_query is 101 documents # check that you get n_docs back from the query, with the same port def cursor_get_more_test(read=:primary) set_read_client_and_tag(read) 10.times do # assert that the query went to the correct member route_query(read) docs_count = 1 port = @cursor.instance_variable_get(:@pool).port assert @cursor.alive? while @cursor.has_next? docs_count += 1 @cursor.next assert_equal port, @cursor.instance_variable_get(:@pool).port end assert !@cursor.alive? 
assert_equal @n_docs, docs_count @cursor.close #cursor is already closed end end # batch from get_more can be huge, so close after send_initial_query def kill_cursor_test(read=:primary) set_read_client_and_tag(read) 10.times do # assert that the query went to the correct member route_query(read) cursor_id = @cursor.cursor_id cursor_clone = @cursor.clone assert_equal cursor_id, cursor_clone.cursor_id assert @cursor.instance_variable_get(:@pool) # .next was called once already and leave one for get more (@n_docs-2).times { @cursor.next } @cursor.close # an exception confirms the cursor has indeed been closed assert_raise Mongo::OperationFailure do cursor_clone.next end end end def assert_cursors_on_members(read=:primary) set_read_client_and_tag(read) # assert that the query went to the correct member route_query(read) cursor_id = @cursor.cursor_id cursor_clone = @cursor.clone assert_equal cursor_id, cursor_clone.cursor_id assert @cursor.instance_variable_get(:@pool) port = @cursor.instance_variable_get(:@pool).port while @cursor.has_next? @cursor.next assert_equal port, @cursor.instance_variable_get(:@pool).port end # an exception confirms the cursor has indeed been closed after query assert_raise Mongo::OperationFailure do cursor_clone.next end end def refresh_while_iterating(read) set_read_client_and_tag(read) read_opts = {:read => read} read_opts[:tag_sets] = [{:node => @tag}] read_opts[:batch_size] = 2 cursor = @coll.find({}, read_opts) 2.times { cursor.next } port = cursor.instance_variable_get(:@pool).port host = cursor.instance_variable_get(:@pool).host # Refresh connection @client.refresh assert_nothing_raised do cursor.next end assert_equal port, cursor.instance_variable_get(:@pool).port assert_equal host, cursor.instance_variable_get(:@pool).host end endruby-mongo-1.10.0/test/replica_set/insert_test.rb000066400000000000000000000116031233461006100220250ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. 
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReplicaSetInsertTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    @version = @client.server_version
    @db = @client.db(TEST_DB)
    @db.drop_collection("test-sets")
    @coll = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_insert
    @coll.save({:a => 20}, :w => 3)
    @rs.primary.stop

    rescue_connection_failure do
      @coll.save({:a => 30}, :w => 1)
    end

    @coll.save({:a => 40}, :w => 1)
    @coll.save({:a => 50}, :w => 1)
    @coll.save({:a => 60}, :w => 1)
    @coll.save({:a => 70}, :w => 1)

    # Restart the old master and wait for sync
    @rs.start
    sleep(5)
    results = []

    rescue_connection_failure do
      @coll.find.each {|r| results << r}
      [20, 30, 40, 50, 60, 70].each do |a|
        assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a}"
      end
    end

    @coll.save({:a => 80}, :w => 3)
    @coll.find.each {|r| results << r}
    [20, 30, 40, 50, 60, 70, 80].each do |a|
      assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a} on second find"
    end
  end

  context "Bulk API CollectionView" do
    setup do
      setup
    end

    should "handle error with deferred write concern error - spec Merging Results" do
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @coll.remove
        @coll.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true})
        bulk = @coll.initialize_ordered_bulk_op
        bulk.insert({:a => 1})
        bulk.find({:a => 2}).upsert.update({'$set' => {:a => 2}})
        bulk.insert({:a => 1})
        ex = assert_raise BulkWriteError do
          bulk.execute({:w => 5, :wtimeout => 1})
        end
        result = ex.result
        assert_match_document(
          {
            "ok" => 1,
            "n" => 2,
            "writeErrors" => [
              {
                "index" => 2,
                "code" => 11000,
                "errmsg" => /duplicate key error/,
              }
            ],
            "writeConcernError" => [
              {
                "errmsg" => /waiting for replication timed out|timed out waiting for slaves|timeout/,
                "code" => 64,
                "errInfo" => {"wtimeout" => true},
                "index" => 0
              },
              {
                "errmsg" => /waiting for replication timed out|timed out waiting for slaves|timeout/,
                "code" => 64,
                "errInfo" => {"wtimeout" => true},
                "index" => 1
              }
            ],
            "code" => 65,
            "errmsg" => "batch item errors occurred",
            "nInserted" => 1
          }, result, "wire_version:#{wire_version}")
      end
      assert_equal 2, @coll.find.to_a.size
    end

    should "handle unordered errors with deferred write concern error - spec Merging Results" do
      # TODO - spec review
      with_write_commands_and_operations(@db.connection) do |wire_version|
        @coll.remove
        @coll.ensure_index(BSON::OrderedHash[:a, Mongo::ASCENDING], {:unique => true})
        bulk = @coll.initialize_unordered_bulk_op
        bulk.insert({:a => 1})
        bulk.find({:a => 2}).upsert.update({'$set' => {:a => 1}})
        bulk.insert({:a => 3})
        ex = assert_raise BulkWriteError do
          bulk.execute({:w => 5, :wtimeout => 1})
        end
        result = ex.result
        # unordered varies, don't use assert_bulk_exception
        assert_equal(1, result["ok"], "wire_version:#{wire_version}")
        assert_equal(2, result["n"], "wire_version:#{wire_version}")
        assert(result["nInserted"] >= 1, "wire_version:#{wire_version}")
        assert_equal(65, result["code"], "wire_version:#{wire_version}")
        assert_equal("batch item errors occurred", result["errmsg"], "wire_version:#{wire_version}")
        assert(result["writeErrors"].size >= 1, "wire_version:#{wire_version}")
        assert(result["writeConcernError"].size >= 1, "wire_version:#{wire_version}")
        if wire_version >= 2
          assert(@coll.size >= 1, "wire_version:#{wire_version}")
        end
      end
    end
  end
end

# ==== ruby-mongo-1.10.0/test/replica_set/max_values_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

class MaxValuesTest < Test::Unit::TestCase

  include Mongo

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    @db = new_mock_db
    @client.stubs(:[]).returns(@db)
    @ismaster = {
      'hosts'    => @client.local_manager.hosts.to_a,
      'arbiters' => @client.local_manager.arbiters
    }
  end

  def test_initial_max_and_min_values
    assert @client.max_bson_size
    assert @client.max_message_size
    assert @client.max_wire_version
    assert @client.min_wire_version
  end

  def test_updated_max_and_min_sizes_after_node_config_change
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true}),
      @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 1024 * MESSAGE_SIZE_FACTOR}),
      @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 1024}),
      @ismaster.merge({'secondary' => true, 'maxWireVersion' => 0}),
      @ismaster.merge({'secondary' => true, 'minWireVersion' => 0})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh

    assert_equal 1024, @client.max_bson_size
    assert_equal 1024 * MESSAGE_SIZE_FACTOR, @client.max_message_size
    assert_equal 0, @client.max_wire_version
    assert_equal 0, @client.min_wire_version
  end

  def test_no_values_in_config
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true}),
      @ismaster.merge({'secondary' => true}),
      @ismaster.merge({'secondary' => true})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh

    assert_equal DEFAULT_MAX_BSON_SIZE, @client.max_bson_size
    assert_equal DEFAULT_MAX_BSON_SIZE * MESSAGE_SIZE_FACTOR, @client.max_message_size
    assert_equal 0, @client.max_wire_version
    assert_equal 0, @client.min_wire_version
  end

  def test_only_bson_size_in_config
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true}),
      @ismaster.merge({'secondary' => true}),
      @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 1024})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh

    assert_equal 1024, @client.max_bson_size
    assert_equal 1024 * MESSAGE_SIZE_FACTOR, @client.max_message_size
    assert_equal 0, @client.max_wire_version
    assert_equal 0, @client.min_wire_version
  end

  def test_values_in_config
    # ismaster is called three times on the first node
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR,
                       'maxBsonObjectSize' => 1024, 'maxWireVersion' => 2, 'minWireVersion' => 1}),
      @ismaster.merge({'ismaster' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR,
                       'maxBsonObjectSize' => 1024, 'maxWireVersion' => 2, 'minWireVersion' => 1}),
      @ismaster.merge({'ismaster' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR,
                       'maxBsonObjectSize' => 1024, 'maxWireVersion' => 2, 'minWireVersion' => 1}),
      @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR,
                       'maxBsonObjectSize' => 1024, 'maxWireVersion' => 2, 'minWireVersion' => 0}),
      @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 1024 * 2 * MESSAGE_SIZE_FACTOR,
                       'maxBsonObjectSize' => 1024, 'maxWireVersion' => 1, 'minWireVersion' => 0})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    @client.refresh

    assert_equal 1024, @client.max_bson_size
    assert_equal 1024 * 2 * MESSAGE_SIZE_FACTOR, @client.max_message_size
    assert_equal 1, @client.max_wire_version # minimum of all max_wire_version
    assert_equal 1, @client.min_wire_version # maximum of all min_wire_version
  end

  def test_wire_version_not_in_range
    min_wire_version, max_wire_version =
      [Mongo::MongoClient::MIN_WIRE_VERSION-1, Mongo::MongoClient::MIN_WIRE_VERSION-1]
    # ismaster is called three times on the first node
    @db.stubs(:command).returns(
      @ismaster.merge({'ismaster' => true,
                       'maxWireVersion' => max_wire_version, 'minWireVersion' => min_wire_version}),
      @ismaster.merge({'ismaster' => true,
                       'maxWireVersion' => max_wire_version, 'minWireVersion' => min_wire_version}),
      @ismaster.merge({'ismaster' => true,
                       'maxWireVersion' => max_wire_version, 'minWireVersion' => min_wire_version}),
      @ismaster.merge({'secondary' => true,
                       'maxWireVersion' => max_wire_version, 'minWireVersion' => min_wire_version}),
      @ismaster.merge({'secondary' => true,
                       'maxWireVersion' => max_wire_version, 'minWireVersion' => min_wire_version})
    )
    @client.local_manager.stubs(:refresh_required?).returns(true)
    assert_raises Mongo::ConnectionFailure do
      @client.refresh
    end
  end

  def test_use_write_command
    with_write_commands(@client) do
      assert_true @client.use_write_command?({:w => 1})
      assert_false @client.use_write_command?({:w => 0})
    end
    with_write_operations(@client) do
      assert_false @client.use_write_command?({:w => 1})
      assert_false @client.use_write_command?({:w => 0})
    end
    @client.local_manager.primary_pool.node.expects(:wire_version_feature?).at_least_once.returns(true)
    assert_true @client.use_write_command?({:w => 1})
    assert_false @client.use_write_command?({:w => 0})
  end

  def test_max_write_batch_size
    assert_equal Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE, @client.max_write_batch_size
    @client.local_manager.primary_pool.node.stubs(:max_write_batch_size).returns(999)
    assert_equal 999, @client.max_write_batch_size
  end
end

# ==== ruby-mongo-1.10.0/test/replica_set/pinning_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
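The aggregation rule asserted in test_values_in_config above — the cluster's usable wire-version range is the intersection of every member's range, so the cluster max is the minimum of the members' maxWireVersion values and the cluster min is the maximum of their minWireVersion values — can be sketched in plain Ruby. `Node` and `cluster_wire_version_bounds` are illustrative names for this sketch, not the driver's API:

```ruby
# A member advertises the wire-version range it supports.
Node = Struct.new(:min_wire_version, :max_wire_version)

# Intersect all members' ranges: the cluster can only speak versions
# that every member understands.
def cluster_wire_version_bounds(nodes)
  max = nodes.map(&:max_wire_version).min # smallest max any member supports
  min = nodes.map(&:min_wire_version).max # largest min any member requires
  raise "no common wire version" if max < min
  [min, max]
end

# Mirrors the fixture above: maxes [2, 2, 1] and mins [0, 1, 0]
# collapse to the range [1, 1].
nodes = [Node.new(0, 2), Node.new(1, 2), Node.new(0, 1)]
min, max = cluster_wire_version_bounds(nodes)
```

When the ranges are disjoint (as in test_wire_version_not_in_range), the intersection is empty and the only sane move is to refuse the connection, which the sketch models by raising.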
require 'test_helper'

class ReplicaSetPinningTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :name => @rs.repl_set_name)
    @db = @client.db(TEST_DB)
    @coll = @db.collection("test-sets")
    @coll.insert({:a => 1})
  end

  def test_unpinning
    # pin primary
    @coll.find_one
    assert_equal @client.pinned_pool[:pool], @client.primary_pool

    # pin secondary
    @coll.find_one({}, :read => :secondary_preferred)
    assert @client.secondary_pools.include? @client.pinned_pool[:pool]

    # repin primary
    @coll.find_one({}, :read => :primary_preferred)
    assert_equal @client.pinned_pool[:pool], @client.primary_pool
  end

  def test_pinned_pool_is_local_to_thread
    threads = []
    30.times do |i|
      threads << Thread.new do
        if i % 2 == 0
          @coll.find_one({}, :read => :secondary_preferred)
          assert @client.secondary_pools.include? @client.pinned_pool[:pool]
        else
          @coll.find_one({}, :read => :primary_preferred)
          assert_equal @client.pinned_pool[:pool], @client.primary_pool
        end
      end
    end
    threads.each(&:join)
  end
end

# ==== ruby-mongo-1.10.0/test/replica_set/query_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
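The thread-locality that test_pinned_pool_is_local_to_thread above relies on can be sketched with a map keyed by the current thread: each thread reads and writes only its own slot, so pinning in one thread never disturbs another. The class and method names here are illustrative, not the driver's internals:

```ruby
# Minimal sketch of a per-thread pinned pool: a shared Hash keyed by
# Thread.current, guarded by a Mutex so concurrent pin/lookup is safe.
class PinnedPools
  def initialize
    @pools = {}
    @lock  = Mutex.new
  end

  # Remember which pool the current thread is pinned to.
  def pin(pool)
    @lock.synchronize { @pools[Thread.current] = pool }
  end

  # Look up the current thread's pin (nil if it never pinned).
  def pinned
    @lock.synchronize { @pools[Thread.current] }
  end
end

pins = PinnedPools.new
pins.pin(:primary_pool)
# A spawned thread pins a different pool and reads back its own slot;
# the main thread's pin is untouched.
other = Thread.new { pins.pin(:secondary_pool); pins.pinned }.value
```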
require 'test_helper'

class ReplicaSetQueryTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new @rs.repl_set_seeds
    @db = @client.db(TEST_DB)
    @db.drop_collection("test-sets")
    @coll = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_query
    @coll.save({:a => 20}, :w => 3)
    @coll.save({:a => 30}, :w => 3)
    @coll.save({:a => 40}, :w => 3)

    results = []
    @coll.find.each {|r| results << r}
    [20, 30, 40].each do |a|
      assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a}"
    end

    @rs.primary.stop

    results = []
    rescue_connection_failure do
      @coll.find.each {|r| results << r}
      [20, 30, 40].each do |a|
        assert results.any? {|r| r['a'] == a}, "Could not find record for a => #{a}"
      end
    end
  end

  # Create a large collection and do a secondary query that returns
  # enough records to require sending a GETMORE. In between opening
  # the cursor and sending the GETMORE, do a :primary query. Confirm
  # that the cursor reading from the secondary continues to talk to
  # the secondary, rather than trying to read the cursor from the
  # primary, where it does not exist.
  # def test_secondary_getmore
  #   200.times do |i|
  #     @coll.save({:a => i}, :w => 3)
  #   end
  #   as = []
  #   # Set an explicit batch size, in case the default ever changes.
  #   @coll.find({}, { :batch_size => 100, :read => :secondary }) do |c|
  #     c.each do |result|
  #       as << result['a']
  #       @coll.find({:a => result['a']}, :read => :primary).map
  #     end
  #   end
  #   assert_equal(as.sort, 0.upto(199).to_a)
  # end
end

# ==== ruby-mongo-1.10.0/test/replica_set/read_preference_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReadPreferenceTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs, :replicas => 2, :arbiters => 0)
    # Insert data
    primary = @rs.primary
    conn = Connection.new(primary.host, primary.port)
    db = conn.db(TEST_DB)
    coll = db.collection("test-sets")
    coll.save({:a => 20}, {:w => 2})
  end

  def test_read_primary
    conn = make_connection
    rescue_connection_failure do
      assert conn.read_primary?
      assert conn.primary?
    end

    conn = make_connection(:primary_preferred)
    rescue_connection_failure do
      assert conn.read_primary?
      assert conn.primary?
    end

    conn = make_connection(:secondary)
    rescue_connection_failure do
      assert !conn.read_primary?
      assert !conn.primary?
    end

    conn = make_connection(:secondary_preferred)
    rescue_connection_failure do
      assert !conn.read_primary?
      assert !conn.primary?
    end
  end

  def test_connection_pools
    conn = make_connection
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port == conn.read_pool.port,
      "Primary port and read port are not the same!"

    conn = make_connection(:primary_preferred)
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port == conn.read_pool.port,
      "Primary port and read port are not the same!"

    conn = make_connection(:secondary)
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port != conn.read_pool.port,
      "Primary port and read port are the same!"

    conn = make_connection(:secondary_preferred)
    assert conn.primary_pool, "No primary pool!"
    assert conn.read_pool, "No read pool!"
    assert conn.primary_pool.port != conn.read_pool.port,
      "Primary port and read port are the same!"
  end

  def test_read_routing
    prepare_routing_test

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)
  end

  def test_read_routing_with_primary_down
    prepare_routing_test

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)

    # Kill the primary so only a single secondary exists
    @rs.primary.kill

    # Test that reads are going to the right members
    assert_raise_error ConnectionFailure do
      @primary[TEST_DB]['test-sets'].find_one
    end
    assert_query_route(@primary_preferred, @secondary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)

    # Restore set
    @rs.restart
    sleep(1)
    @repl_cons.each { |con| con.refresh }
    sleep(1)
    @primary_direct = Connection.new(
      @rs.config['host'],
      @primary.read_pool.port
    )

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)
  end

  def test_read_routing_with_secondary_down
    prepare_routing_test

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)

    # Kill the secondary so that only primary exists
    @rs.secondaries.first.kill

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_raise_error ConnectionFailure do
      @secondary[TEST_DB]['test-sets'].find_one
    end
    assert_query_route(@secondary_preferred, @primary_direct)

    # Restore set
    @rs.restart
    sleep(1)
    @repl_cons.each { |con| con.refresh }
    sleep(1)
    @secondary_direct = Connection.new(
      @rs.config['host'],
      @secondary.read_pool.port,
      :slave_ok => true
    )

    # Test that reads are going to the right members
    assert_query_route(@primary, @primary_direct)
    assert_query_route(@primary_preferred, @primary_direct)
    assert_query_route(@secondary, @secondary_direct)
    assert_query_route(@secondary_preferred, @secondary_direct)
  end

  def test_write_lots_of_data
    @conn = make_connection(:secondary_preferred)
    @db = @conn[TEST_DB]
    @coll = @db.collection("test-sets", {:w => 2})

    6000.times do |n|
      @coll.save({:a => n})
    end

    cursor = @coll.find()
    cursor.next
    cursor.close
  end

  private

  def prepare_routing_test
    # Setup replica set connections
    @primary = make_connection(:primary)
    @primary_preferred = make_connection(:primary_preferred)
    @secondary = make_connection(:secondary)
    @secondary_preferred = make_connection(:secondary_preferred)
    @repl_cons = [@primary, @primary_preferred, @secondary, @secondary_preferred]

    # Setup direct connections
    @primary_direct = Connection.new(@rs.config['host'], @primary.read_pool.port)
    @secondary_direct = Connection.new(@rs.config['host'], @secondary.read_pool.port, :slave_ok => true)
  end

  def make_connection(mode = :primary, opts = {})
    opts.merge!({:read => mode})
    MongoReplicaSetClient.new(@rs.repl_set_seeds, opts)
  end

  def query_count(connection)
    connection['admin'].command({:serverStatus => 1})['opcounters']['query']
  end

  def assert_query_route(test_connection, expected_target)
    #puts "#{test_connection.read_pool.port} #{expected_target.read_pool.port}"
    queries_before = query_count(expected_target)
    assert_nothing_raised do
      test_connection[TEST_DB]['test-sets'].find_one
    end
    queries_after = query_count(expected_target)
    assert_equal 1, queries_after - queries_before
  end
end

# ==== ruby-mongo-1.10.0/test/replica_set/refresh_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReplicaSetRefreshTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
  end

  def test_connect_and_manual_refresh_with_secondary_down
    num_secondaries = @rs.secondaries.size
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds, :refresh_mode => false)

    assert_equal num_secondaries, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    old_refresh_version = client.refresh_version

    @rs.stop_secondary
    client.refresh

    assert_equal num_secondaries - 1, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    assert client.refresh_version > old_refresh_version
    old_refresh_version = client.refresh_version

    # Test no changes after restart until manual refresh
    @rs.restart
    assert_equal num_secondaries - 1, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    assert_equal client.refresh_version, old_refresh_version

    # Refresh and ensure state
    client.refresh
    assert_equal num_secondaries, client.secondaries.size
    assert client.connected?
    assert_equal client.read_pool, client.primary_pool
    assert client.refresh_version > old_refresh_version
  end

  def test_automated_refresh_with_secondary_down
    num_secondaries = @rs.secondaries.size
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds,
      :refresh_interval => 1, :refresh_mode => :sync, :read => :secondary_preferred)

    # Ensure secondaries are all recognized by client and client is connected
    assert_equal num_secondaries, client.secondaries.size
    assert client.connected?
    assert client.secondary_pools.include?(client.read_pool)
    pool = client.read_pool

    @rs.member_by_name(pool.host_string).stop
    sleep(2)
    old_refresh_version = client.refresh_version

    # Trigger synchronous refresh
    client[TEST_DB]['rs-refresh-test'].find_one

    assert client.connected?
    assert client.refresh_version > old_refresh_version
    assert_equal num_secondaries - 1, client.secondaries.size
    assert client.secondary_pools.include?(client.read_pool)
    assert_not_equal pool, client.read_pool

    # Restart nodes and ensure refresh interval has passed
    @rs.restart
    sleep(2)
    old_refresh_version = client.refresh_version

    # Trigger synchronous refresh
    client[TEST_DB]['rs-refresh-test'].find_one

    assert client.connected?
    assert client.refresh_version > old_refresh_version,
      "Refresh version hasn't changed."
    assert_equal num_secondaries, client.secondaries.size,
      "No secondaries have been added."
    assert_equal num_secondaries, client.secondary_pools.size
  end

  def test_concurrent_refreshes
    factor = 5
    nthreads = factor * 10
    threads = []
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds,
      :refresh_mode => :sync, :refresh_interval => 1)

    nthreads.times do |i|
      threads << Thread.new do
        # force a connection failure every couple of threads that causes a refresh
        if i % factor == 0
          cursor = client[TEST_DB]['rs-refresh-test'].find
          cursor.stubs(:checkout_socket_from_connection).raises(ConnectionFailure)
          begin
            cursor.next
          rescue => ex
            raise ex unless ex.class == ConnectionFailure
            next
          end
        else
          # synchronous refreshes will happen every couple of find_ones
          cursor = client[TEST_DB]['rs-refresh-test'].find_one
        end
      end
    end

    threads.each do |t|
      t.join
    end
  end

=begin
  def test_automated_refresh_with_removed_node
    client = MongoReplicaSetClient.new(@rs.repl_set_seeds,
      :refresh_interval => 1, :refresh_mode => :sync)
    num_secondaries = client.secondary_pools.length
    old_refresh_version = client.refresh_version

    n = @rs.repl_set_remove_node(2)
    sleep(2)

    rescue_connection_failure do
      client[TEST_DB]['rs-refresh-test'].find_one
    end

    assert client.refresh_version > old_refresh_version,
      "Refresh version hasn't changed."
    assert_equal num_secondaries - 1, client.secondaries.length
    assert_equal num_secondaries - 1, client.secondary_pools.length

    #@rs.add_node(n)
  end

  def test_adding_and_removing_nodes
    client = MongoReplicaSetClient.new(build_seeds(3),
      :refresh_interval => 2, :refresh_mode => :sync)

    @rs.add_node
    sleep(4)
    client[TEST_DB]['rs-refresh-test'].find_one

    @conn2 = MongoReplicaSetClient.new(build_seeds(3),
      :refresh_interval => 2, :refresh_mode => :sync)

    assert @conn2.secondaries.sort == client.secondaries.sort,
      "Second connection secondaries not equal to first."
    assert_equal 3, client.secondary_pools.length
    assert_equal 3, client.secondaries.length

    config = client['admin'].command({:ismaster => 1})

    @rs.remove_secondary_node
    sleep(4)
    config = client['admin'].command({:ismaster => 1})

    assert_equal 2, client.secondary_pools.length
    assert_equal 2, client.secondaries.length
  end
=end
end

# ==== ruby-mongo-1.10.0/test/replica_set/replication_ack_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReplicaSetAckTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:rs)
    @client = MongoReplicaSetClient.new(@rs.repl_set_seeds)

    @slave1 = MongoClient.new(
      @client.secondary_pools.first.host,
      @client.secondary_pools.first.port, :slave_ok => true)

    assert !@slave1.read_primary?
    @db = @client.db(TEST_DB)
    @db.drop_collection("test-sets")
    @col = @db.collection("test-sets")
  end

  def teardown
    @client.close if @client
  end

  def test_safe_mode_with_w_failure
    assert_raise_error WriteConcernError, "time" do
      @col.insert({:foo => 1}, :w => 4, :wtimeout => 1, :fsync => true)
    end
    assert_raise_error WriteConcernError, "time" do
      @col.update({:foo => 1}, {:foo => 2}, :w => 4, :wtimeout => 1, :fsync => true)
    end
    assert_raise_error WriteConcernError, "time" do
      @col.remove({:foo => 2}, :w => 4, :wtimeout => 1, :fsync => true)
    end
    if @client.server_version >= '2.5.4'
      assert_raise_error WriteConcernError do
        @col.insert({:foo => 3}, :w => "test-tag")
      end
    else
      # indistinguishable "errmsg"=>"exception: unrecognized getLastError mode: test-tag"
      assert_raise_error OperationFailure do
        @col.insert({:foo => 3}, :w => "test-tag")
      end
    end
  end

  def test_safe_mode_replication_ack
    @col.insert({:baz => "bar"}, :w => 3, :wtimeout => 5000)

    assert @col.insert({:foo => "0" * 5000}, :w => 3, :wtimeout => 5000)
    assert_equal 2, @slave1[TEST_DB]["test-sets"].count

    assert @col.update({:baz => "bar"}, {:baz => "foo"}, :w => 3, :wtimeout => 5000)
    assert @slave1[TEST_DB]["test-sets"].find_one({:baz => "foo"})

    assert @col.insert({:foo => "bar"}, :w => "majority")
    assert @col.insert({:bar => "baz"}, :w => :majority)

    assert @col.remove({}, :w => 3, :wtimeout => 5000)
    assert_equal 0, @slave1[TEST_DB]["test-sets"].count
  end

  def test_last_error_responses
    20.times { @col.insert({:baz => "bar"}) }

    response = @db.get_last_error(:w => 3, :wtimeout => 5000)
    assert response['ok'] == 1
    assert response['lastOp']

    @col.update({}, {:baz => "foo"})
    response = @db.get_last_error(:w => 3, :wtimeout => 5000)
    assert response['ok'] == 1
    assert response['lastOp']

    @col.remove({})
    response = @db.get_last_error(:w => 3, :wtimeout => 5000)
    assert response['ok'] == 1
    assert response['n'] == 20
    assert response['lastOp']
  end
end

# ==== ruby-mongo-1.10.0/test/replica_set/ssl_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'
require 'shared/ssl_shared'

class ReplicaSetSSLTest < Test::Unit::TestCase
  include Mongo
  include SSLTests

  SEEDS = ['server:3000','server:3001','server:3002']
  BAD_SEEDS = ['localhost:3000','localhost:3001','localhost:3002']

  def setup
    @client_class = MongoReplicaSetClient
    @uri_info = SEEDS.join(',')
    @connect_info = SEEDS
    @bad_connect_info = BAD_SEEDS
  end
end

# ==== ruby-mongo-1.10.0/test/sharded_cluster/basic_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
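test_mongos_failover and test_cycle in the sharded-cluster test that follows exercise the client's habit of moving to the next available mongos when the one it pinned dies. The retry-next-seed loop behind that behavior can be sketched in plain Ruby; the function name and error class here are illustrative, not the driver's implementation:

```ruby
ConnectionFailure = Class.new(StandardError)

# Try each mongos seed in order; treat a seed in the `down` set as a
# connection failure and move on. Only when every seed fails does the
# failure surface to the caller.
def run_with_failover(seeds, down)
  seeds.each do |seed|
    begin
      raise ConnectionFailure, seed if down.include?(seed)
      return seed # the operation succeeded against this mongos
    rescue ConnectionFailure
      next # fail over to the next seed
    end
  end
  raise ConnectionFailure, "no mongos available"
end

seeds = ['m1:27017', 'm2:27017', 'm3:27017']
# With m1 down, the client quietly lands on m2.
survivor = run_with_failover(seeds, ['m1:27017'])
```

test_all_down below checks the terminal case this sketch models with its final raise: once every router is stopped, the operation fails with ConnectionFailure instead of landing anywhere.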
require 'test_helper'

include Mongo

class Cursor
  public :construct_query_spec
end

class ShardedClusterBasicTest < Test::Unit::TestCase

  def setup
    ensure_cluster(:sc)
    @document = { "name" => "test_user" }
    @seeds = @sc.mongos_seeds
  end

  # TODO member.primary? ==> true
  def test_connect
    @client = MongoShardedClient.new(@seeds)
    assert @client.connected?
    assert_equal(@seeds.size, @client.seeds.size)
    probe(@seeds.size)
    @client.close
  end

  def test_connect_from_standard_client
    mongos = @seeds.first
    @client = MongoClient.new(*mongos.split(':'))
    assert @client.connected?
    assert @client.mongos?
    @client.close
  end

  def test_read_from_client
    host, port = @seeds.first.split(':')
    tags = [{:dc => "mongolia"}]
    @client = MongoClient.new(host, port, {:read => :secondary, :tag_sets => tags})
    assert @client.connected?
    cursor = Cursor.new(@client[TEST_DB]['whatever'], {})
    assert_equal cursor.construct_query_spec['$readPreference'], {:mode => 'secondary', :tags => tags}
  end

  def test_find_one_with_read_secondary
    @client = MongoShardedClient.new(@seeds, { :read => :secondary })
    @client[TEST_DB]["users"].insert([ @document ])
    assert_equal @client[TEST_DB]['users'].find_one["name"], "test_user"
  end

  def test_find_one_with_read_secondary_preferred
    @client = MongoShardedClient.new(@seeds, { :read => :secondary_preferred })
    @client[TEST_DB]["users"].insert([ @document ])
    assert_equal @client[TEST_DB]['users'].find_one["name"], "test_user"
  end

  def test_find_one_with_read_primary
    @client = MongoShardedClient.new(@seeds, { :read => :primary })
    @client[TEST_DB]["users"].insert([ @document ])
    assert_equal @client[TEST_DB]['users'].find_one["name"], "test_user"
  end

  def test_find_one_with_read_primary_preferred
    @client = MongoShardedClient.new(@seeds, { :read => :primary_preferred })
    @client[TEST_DB]["users"].insert([ @document ])
    assert_equal @client[TEST_DB]['users'].find_one["name"], "test_user"
  end

  def test_read_from_sharded_client
    tags = [{:dc => "mongolia"}]
    @client = MongoShardedClient.new(@seeds, {:read => :secondary, :tag_sets => tags})
    assert @client.connected?
    cursor = Cursor.new(@client[TEST_DB]['whatever'], {})
    assert_equal cursor.construct_query_spec['$readPreference'], {:mode => 'secondary', :tags => tags}
  end

  def test_hard_refresh
    @client = MongoShardedClient.new(@seeds)
    assert @client.connected?
    @client.hard_refresh!
    assert @client.connected?
    @client.close
  end

  def test_reconnect
    @client = MongoShardedClient.new(@seeds)
    assert @client.connected?
    router = @sc.servers(:routers).first
    router.stop
    probe(@seeds.size)
    assert @client.connected?
    @client.close
  end

  def test_mongos_failover
    @client = MongoShardedClient.new(@seeds, :refresh_interval => 5, :refresh_mode => :sync)
    assert @client.connected?
    # do a find to pin a pool
    @client[TEST_DB]['test'].find_one
    original_primary = @client.manager.primary
    # stop the pinned member
    @sc.member_by_name("#{original_primary[0]}:#{original_primary[1]}").stop
    # assert that the client fails over to the next available mongos
    assert_nothing_raised do
      @client[TEST_DB]['test'].find_one
    end
    assert_not_equal original_primary, @client.manager.primary
    assert @client.connected?
    @client.close
  end

  def test_all_down
    @client = MongoShardedClient.new(@seeds)
    assert @client.connected?
    @sc.servers(:routers).each{|router| router.stop}
    assert_raises Mongo::ConnectionFailure do
      probe(@seeds.size)
    end
    assert_false @client.connected?
    @client.close
  end

  def test_cycle
    @client = MongoShardedClient.new(@seeds)
    assert @client.connected?
    routers = @sc.servers(:routers)
    while routers.size > 0 do
      rescue_connection_failure do
        probe(@seeds.size)
      end
      probe(@seeds.size)
      router = routers.detect{|r| r.port == @client.manager.primary.last}
      routers.delete(router)
      router.stop
    end
    assert_raises Mongo::ConnectionFailure do
      probe(@seeds.size)
    end
    assert_false @client.connected?
    routers = @sc.servers(:routers).reverse
    routers.each do |r|
      r.start
      @client.hard_refresh!
      rescue_connection_failure do
        probe(@seeds.size)
      end
      probe(@seeds.size)
    end
    @client.close
  end

  def test_wire_version_not_in_range
    [
      [Mongo::MongoClient::MAX_WIRE_VERSION+1, Mongo::MongoClient::MAX_WIRE_VERSION+1],
      [Mongo::MongoClient::MIN_WIRE_VERSION-1, Mongo::MongoClient::MIN_WIRE_VERSION-1]
    ].each do |min_wire_version_value, max_wire_version_value|
      Mongo.module_eval <<-EVAL
        class ShardingPoolManager
          def max_wire_version
            return #{max_wire_version_value}
          end
          def min_wire_version
            return #{min_wire_version_value}
          end
        end
      EVAL
      @client = MongoShardedClient.new(@seeds, :connect => false)
      assert !@client.connected?
      assert_raises Mongo::ConnectionFailure do
        @client.connect
      end
    end
    Mongo.module_eval <<-EVAL
      class ShardingPoolManager
        attr_reader :max_wire_version, :min_wire_version
      end
    EVAL
  end

  private

  def probe(size)
    assert_equal(size, @client['config']['mongos'].find.to_a.size)
  end
end

# ==== ruby-mongo-1.10.0/test/shared/authentication/basic_auth_shared.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
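The `rescue_connection_failure` helper that these tests lean on comes from the test harness. A plausible minimal sketch of its retry loop, under the assumption that it simply re-runs the block until the cluster's failover settles (the parameter and backoff values are illustrative):

```ruby
ConnectionFailure = Class.new(StandardError)

# Retry the block while the cluster is failing over, up to max_retries
# attempts, backing off briefly between tries. Any other exception, or
# exhaustion of the retry budget, propagates to the caller.
def rescue_connection_failure(max_retries = 30)
  retries = 0
  begin
    yield
  rescue ConnectionFailure
    retries += 1
    raise if retries > max_retries
    sleep(0.1)
    retry
  end
end

attempts = 0
result = rescue_connection_failure do
  attempts += 1
  raise ConnectionFailure if attempts < 3 # fails twice, then succeeds
  :ok
end
```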
module BasicAuthTests
  def init_auth_basic
    # enable authentication by creating and logging in as admin user
    @admin = @client['admin']
    @admin.add_user('admin', 'password', nil, :roles => ['readAnyDatabase',
                                                         'readWriteAnyDatabase',
                                                         'userAdminAnyDatabase',
                                                         'dbAdminAnyDatabase',
                                                         'clusterAdmin'])
    @admin.authenticate('admin', 'password')

    # db user for cleanup (for pre-2.4)
    @db.add_user('admin', 'cleanup', nil, :roles => [])
  end

  def teardown_basic
    remove_all_users(@db, 'admin', 'cleanup')
    remove_all_users(@admin, 'admin', 'password') if has_auth?(@admin.name)
  end

  def remove_all_users(database, username, password)
    database.authenticate(username, password) unless has_auth?(database.name)
    if @client.server_version < '2.5'
      database['system.users'].remove
    else
      database.command(:dropAllUsersFromDatabase => 1)
    end
    database.logout
  end

  def has_auth?(db_name)
    @client.auths.any? { |a| a[:source] == db_name }
  end

  def test_add_remove_user
    init_auth_basic

    # add user
    silently { @db.add_user('bob', 'user') }
    assert @db.authenticate('bob', 'user')

    # remove user
    assert @db.remove_user('bob')
    teardown_basic
  end

  def test_update_user
    init_auth_basic

    # add user
    silently { @db.add_user('bob', 'user') }
    assert @db.authenticate('bob', 'user')
    @db.logout

    # update user
    silently { @db.add_user('bob', 'updated') }
    assert_raise Mongo::AuthenticationError do
      @db.authenticate('bob', 'user')
    end
    assert @db.authenticate('bob', 'updated')
    teardown_basic
  end

  def test_remove_non_existent_user
    init_auth_basic
    if @client.server_version < '2.5'
      assert_equal false, @db.remove_user('joe')
    else
      assert_raise Mongo::OperationFailure do
        assert @db.remove_user('joe')
      end
    end
    teardown_basic
  end

  def test_authenticate
    init_auth_basic
    silently { @db.add_user('peggy', 'user') }
    assert @db.authenticate('peggy', 'user')
    @db.remove_user('peggy')
    teardown_basic
  end

  def test_authenticate_non_existent_user
    init_auth_basic
    assert_raise Mongo::AuthenticationError do
      @db.authenticate('frank', 'thetank')
    end
    teardown_basic
  end

  def test_logout
    init_auth_basic
    silently { @db.add_user('peggy', 'user') }
    assert @db.authenticate('peggy', 'user')
    assert @db.logout
    teardown_basic
  end

  def test_authenticate_with_special_characters
    init_auth_basic
    silently { assert @db.add_user('foo:bar', '@foo') }
    assert @db.authenticate('foo:bar', '@foo')
    teardown_basic
  end

  def test_authenticate_read_only
    init_auth_basic
    silently { @db.add_user('randy', 'readonly', true) }
    assert @db.authenticate('randy', 'readonly')
    teardown_basic
  end

  def test_authenticate_with_connection_uri
    init_auth_basic
    silently { @db.add_user('eunice', 'uritest') }

    uri = "mongodb://eunice:uritest@#{@host_info}/#{@db.name}"
    client = Mongo::URIParser.new(uri).connection
    assert client
    assert_equal client.auths.size, 1
    assert client[TEST_DB]['auth_test'].count

    auth = client.auths.first
    assert_equal @db.name, auth[:db_name]
    assert_equal 'eunice', auth[:username]
    assert_equal 'uritest', auth[:password]
    teardown_basic
  end

  def test_socket_auths
    init_auth_basic
    # setup
    db_a = @client[TEST_DB + '_a']
    silently { db_a.add_user('user_a', 'password') }
    assert db_a.authenticate('user_a', 'password')

    db_b = @client[TEST_DB + '_b']
    silently { db_b.add_user('user_b', 'password') }
    assert db_b.authenticate('user_b', 'password')

    db_c = @client[TEST_DB + '_c']
    silently { db_c.add_user('user_c', 'password') }
    assert db_c.authenticate('user_c', 'password')

    # client auths should be applied to socket on checkout
    socket = @client.checkout_reader(:mode => :primary)
    assert_equal 4, socket.auths.size
    assert_equal @client.auths, socket.auths
    @client.checkin(socket)

    # logout should remove saved auth on socket and client
    assert db_b.logout
    socket = @client.checkout_reader(:mode => :primary)
    assert_equal 3, socket.auths.size
    assert_equal @client.auths, socket.auths
    @client.checkin(socket)

    # clean-up
    db_b.authenticate('user_b', 'password')
    remove_all_users(db_a, 'user_a', 'password')
    remove_all_users(db_b, 'user_b', 'password')
    remove_all_users(db_c, 'user_c', 'password')
    teardown_basic
  end

  def test_default_roles_non_admin
    return unless @client.server_version >= '2.5.3'
    init_auth_basic
    silently { @db.add_user('user', 'pass') }
    silently { @db.authenticate('user', 'pass') }
    info = @db.command(:usersInfo => 'user')['users'].first
    assert_equal 'dbOwner', info['roles'].first['role']

    # read-only
    silently { @db.add_user('ro-user', 'pass', true) }
    @db.logout
    @db.authenticate('ro-user', 'pass')
    info = @db.command(:usersInfo => 'ro-user')['users'].first
    assert_equal 'read', info['roles'].first['role']
    @db.logout
    teardown_basic
  end

  def test_delegated_authentication
    return unless @client.server_version >= '2.4' && @client.server_version < '2.5'
    with_auth(@client) do
      init_auth_basic
      # create user in test databases
      accounts = @client[TEST_DB + '_accounts']
      silently do
        accounts.add_user('debbie', 'delegate')
        @db.add_user('debbie', nil, nil, :roles => ['read'],
                                         :userSource => accounts.name)
      end
      @admin.logout

      # validate that direct authentication is not allowed
      assert_raise Mongo::AuthenticationError do
        @db.authenticate('debbie', 'delegate')
      end

      # validate delegated authentication
      assert accounts.authenticate('debbie', 'delegate')
      assert @db.collection_names
      accounts.logout
      assert_raise Mongo::OperationFailure do
        @db.collection_names
      end

      # validate auth using source database
      @db.authenticate('debbie', 'delegate', nil, accounts.name)
      assert @db.collection_names
      accounts.logout
      assert_raise Mongo::OperationFailure do
        @db.collection_names
      end

      # clean-up
      @admin.authenticate('admin', 'password')
      remove_all_users(accounts, 'debbie', 'delegate')
      teardown_basic
    end
  end

  def test_non_admin_default_roles
    return if @client.server_version < '2.5'
    init_auth_basic

    # add read-only user and verify that role is 'read'
    @db.add_user('randy', 'password', nil, :roles => ['read'])
    @db.authenticate('randy', 'password')
    users = @db.command(:usersInfo => 'randy')['users']
    assert_equal 'read', users.first['roles'].first['role']
    @db.logout

    # add dbOwner (default) user and verify role
    silently { @db.add_user('emily', 'password') }
    @db.authenticate('emily', 'password')
    users = @db.command(:usersInfo => 'emily')['users']
    assert_equal 'dbOwner', users.first['roles'].first['role']
    teardown_basic
  end

  def test_update_user_to_read_only
    with_auth(@client) do
      init_auth_basic
      silently { @db.add_user('emily', 'password') }
      @admin.logout
      @db.authenticate('emily', 'password')
      @db['test'].insert({})
      @db.logout

      @admin.authenticate('admin', 'password')
      silently { @db.add_user('emily', 'password', true) }
      @admin.logout

      silently { @db.authenticate('emily', 'password') }
      assert_raise Mongo::OperationFailure do
        @db['test'].insert({})
      end
      @db.logout

      @admin.authenticate('admin', 'password')
      teardown_basic
    end
  end
end

# ==== File: test/shared/authentication/bulk_api_auth_shared.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module BulkAPIAuthTests
  include Mongo

  def init_auth_bulk
    # enable authentication
    @admin = @client["admin"]
    @admin.add_user('admin', 'password', nil, :roles => ['readWriteAnyDatabase',
                                                         'userAdminAnyDatabase',
                                                         'dbAdminAnyDatabase'])
    @admin.authenticate('admin', 'password')

    # Set up the test db
    @collection = @db["bulk-api-auth-tests"]

    # db user can insert but not remove
    res = BSON::OrderedHash.new
    res[:db] = TEST_DB
    res[:collection] = ""
    cmd = BSON::OrderedHash.new
    cmd[:createRole] = "insertOnly"
    cmd[:privileges] = [{:resource => res, :actions => [ "insert", "find" ]}]
    cmd[:roles] = []
    @db.command(cmd)
    @db.add_user('insertOnly', 'password', nil, :roles => ['insertOnly'])

    # db user can insert and remove
    cmd = BSON::OrderedHash.new
    cmd[:createRole] = "insertAndRemove"
    cmd[:privileges] = [{:resource => res, :actions => [ "insert", "remove", "find" ]}]
    cmd[:roles] = []
    @db.command(cmd)
    @db.add_user('insertAndRemove', 'password', nil, :roles => ['insertAndRemove'])

    # for 2.4 cleanup etc.
    @db.add_user('admin', 'password', nil, :roles => ['readWrite', 'userAdmin', 'dbAdmin'])
    @admin.logout
  end

  def teardown_bulk
    remove_all_users_and_roles(@db, 'admin', 'password')
    remove_all_users_and_roles(@admin, 'admin', 'password')
  end

  def clear_collection(collection)
    @admin.authenticate('admin', 'password')
    collection.remove
    @admin.logout
  end

  def remove_all_users_and_roles(database, username, password)
    @admin.authenticate('admin', 'password')
    if @version < '2.5.3'
      database['system.users'].remove
    else
      database.command({:dropAllRolesFromDatabase => 1})
      database.command({:dropAllUsersFromDatabase => 1})
    end
    @admin.logout
  end

  def test_auth_no_error
    return unless @version >= '2.5.3'
    init_auth_bulk
    with_write_commands_and_operations(@db.connection) do |wire_version|
      clear_collection(@collection)
      @db.authenticate('insertAndRemove', 'password')
      bulk = @collection.initialize_ordered_bulk_op
      bulk.insert({:a => 1})
      bulk.find({:a => 1}).remove_one
      result = bulk.execute
      assert_match_document(
        {
          "ok" => 1,
          "nInserted" => 1,
          "nRemoved" => 1
        }, result, "wire_version:#{wire_version}")
      assert_equal 0, @collection.count
      @db.logout
    end
    teardown_bulk
  end

  def test_auth_error
    return unless @version >= '2.5.3'
    init_auth_bulk
    with_write_commands_and_operations(@db.connection) do |wire_version|
      clear_collection(@collection)
      @db.authenticate('insertOnly', 'password')
      bulk = @collection.initialize_ordered_bulk_op
      bulk.insert({:a => 1})
      bulk.find({:a => 1}).remove
      bulk.insert({:a => 2})
      ex = assert_raise Mongo::BulkWriteError do
        bulk.execute
      end
      result = ex.result
      assert_match_document(
        {
          "ok" => 1,
          "n" => 1,
          "writeErrors" =>
            [{
              "index" => 1,
              "code" => 13,
              "errmsg" => /not authorized/
            }],
          "code" => 65,
          "errmsg" => "batch item errors occurred",
          "nInserted" => 1
        }, result, "wire_version:#{wire_version}")
      assert_equal 1, @collection.count
      @db.logout
    end
    teardown_bulk
  end

  def test_auth_error_unordered
    return unless @version >= '2.5.3'
    init_auth_bulk
    with_write_commands_and_operations(@db.connection) do |wire_version|
      clear_collection(@collection)
      @db.authenticate('insertOnly', 'password')
      bulk = @collection.initialize_unordered_bulk_op
      bulk.insert({:a => 1})
      bulk.find({:a => 1}).remove_one
      bulk.insert({:a => 2})
      ex = assert_raise Mongo::BulkWriteError do
        bulk.execute
      end
      result = ex.result
      assert_equal 1, result["writeErrors"].length
      assert_equal 2, result["n"]
      assert_equal 2, result["nInserted"]
      assert_equal 2, @collection.count
      @db.logout
    end
    teardown_bulk
  end

  def test_duplicate_key_with_auth_error
    return unless @version >= '2.5.3'
    init_auth_bulk
    with_write_commands_and_operations(@db.connection) do |wire_version|
      clear_collection(@collection)
      @db.authenticate('insertOnly', 'password')
      bulk = @collection.initialize_ordered_bulk_op
      bulk.insert({:_id => 1, :a => 1})
      bulk.insert({:_id => 1, :a => 2})
      bulk.find({:a => 1}).remove_one
      ex = assert_raise Mongo::BulkWriteError do
        bulk.execute
      end
      result = ex.result
      assert_match_document(
        {
          "ok" => 1,
          "n" => 1,
          "writeErrors" =>
            [{
              "index" => 1,
              "code" => 11000,
              "errmsg" => /duplicate key error/
            }],
          "code" => 65,
          "errmsg" => "batch item errors occurred",
          "nInserted" => 1
        }, result, "wire_version:#{wire_version}")
      assert_equal 1, @collection.count
      @db.logout
    end
    teardown_bulk
  end

  def test_duplicate_key_with_auth_error_unordered
    return unless @version >= '2.5.3'
    init_auth_bulk
    with_write_commands_and_operations(@db.connection) do |wire_version|
      clear_collection(@collection)
      @db.authenticate('insertOnly', 'password')
      bulk = @collection.initialize_unordered_bulk_op
      bulk.insert({:_id => 1, :a => 1})
      bulk.insert({:_id => 1, :a => 1})
      bulk.find({:a => 1}).remove_one
      ex = assert_raise Mongo::BulkWriteError do
        bulk.execute
      end
      result = ex.result
      assert_equal 2, result["writeErrors"].length
      assert_equal 1, result["n"]
      assert_equal 1, result["nInserted"]
      assert_equal 1, @collection.count
      @db.logout
    end
    teardown_bulk
  end

  def test_write_concern_error_with_auth_error
    with_no_replication(@db.connection) do
      return unless @version >= '2.5.3'
      init_auth_bulk
      with_write_commands_and_operations(@db.connection) do |wire_version|
        clear_collection(@collection)
        @db.authenticate('insertOnly', 'password')
        bulk = @collection.initialize_ordered_bulk_op
        bulk.insert({:_id => 1, :a => 1})
        bulk.insert({:_id => 2, :a => 1})
        bulk.find({:a => 1}).remove_one
        ex = assert_raise Mongo::BulkWriteError do
          bulk.execute({:w => 2})
        end
        result = ex.result
        assert_match_document(
          {
            "ok" => 0,
            "n" => 0,
            "nInserted" => 0,
            "writeErrors" =>
              [{
                "index" => 0,
                "code" => 2,
                "errmsg" => /'w' > 1/
              }],
            "code" => 65,
            "errmsg" => "batch item errors occurred"
          }, result, "wire_version#{wire_version}")
        # Re-visit this when RUBY-731 is resolved:
        assert (@collection.count == batch_commands?(wire_version) ? 0 : 1)
        @db.logout
      end
      teardown_bulk
    end
  end
end

# ==== File: test/shared/authentication/gssapi_shared.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module GSSAPITests

  # Tests for the GSSAPI Authentication Mechanism.
  #
  # Note: These tests will be skipped automatically unless the test environment
  # has been configured.
  #
  # In order to run these tests, you must be using JRuby and must set the following
  # environment variables. The realm and KDC are required so that the corresponding
  # system properties can be set:
  #
  #   export MONGODB_GSSAPI_HOST='server.domain.com'
  #   export MONGODB_GSSAPI_USER='applicationuser@example.com'
  #   export MONGODB_GSSAPI_REALM='applicationuser@example.com'
  #   export MONGODB_GSSAPI_KDC='SERVER.DOMAIN.COM'
  #
  # You must either use kinit or provide a config file that references a keytab file:
  #
  #   export JAAS_LOGIN_CONFIG_FILE='file:///path/to/config/file'
  #
  MONGODB_GSSAPI_HOST = ENV['MONGODB_GSSAPI_HOST']
  MONGODB_GSSAPI_USER = ENV['MONGODB_GSSAPI_USER']
  MONGODB_GSSAPI_REALM = ENV['MONGODB_GSSAPI_REALM']
  MONGODB_GSSAPI_KDC = ENV['MONGODB_GSSAPI_KDC']
  MONGODB_GSSAPI_PORT = ENV['MONGODB_GSSAPI_PORT'] || '27017'
  JAAS_LOGIN_CONFIG_FILE = ENV['JAAS_LOGIN_CONFIG_FILE']

  if ENV.key?('MONGODB_GSSAPI_HOST') && ENV.key?('MONGODB_GSSAPI_USER') &&
     ENV.key?('MONGODB_GSSAPI_REALM') && ENV.key?('MONGODB_GSSAPI_KDC') &&
     RUBY_PLATFORM =~ /java/

    def test_gssapi_authenticate
      client = Mongo::MongoClient.new(MONGODB_GSSAPI_HOST, MONGODB_GSSAPI_PORT)
      if client['admin'].command(:isMaster => 1)['setName']
        client = Mongo::MongoReplicaSetClient.new(["#{MONGODB_GSSAPI_HOST}:#{MONGODB_GSSAPI_PORT}"])
      end
      set_system_properties
      db = client['kerberos']
      db.authenticate(MONGODB_GSSAPI_USER, nil, nil, nil, 'GSSAPI')
      assert db.command(:dbstats => 1)

      threads = []
      4.times do
        threads << Thread.new do
          assert db.command(:dbstats => 1)
        end
      end
      threads.each(&:join)
    end

    def test_gssapi_authenticate_uri
      require 'cgi'
      set_system_properties
      username = CGI::escape(ENV['MONGODB_GSSAPI_USER'])
      uri = "mongodb://#{username}@#{ENV['MONGODB_GSSAPI_HOST']}:#{ENV['MONGODB_GSSAPI_PORT']}/?" +
            "authMechanism=GSSAPI"
      client = @client.class.from_uri(uri)
      assert client['kerberos'].command(:dbstats => 1)
    end

    def test_wrong_service_name_fails
      extra_opts = { :gssapi_service_name => 'example' }
      client = Mongo::MongoClient.new(MONGODB_GSSAPI_HOST, MONGODB_GSSAPI_PORT)
      if client['admin'].command(:isMaster => 1)['setName']
        client = Mongo::MongoReplicaSetClient.new(["#{MONGODB_GSSAPI_HOST}:#{MONGODB_GSSAPI_PORT}"])
      end
      set_system_properties
      assert_raise_error Java::OrgMongodbSasl::MongoSecurityException do
        client['kerberos'].authenticate(MONGODB_GSSAPI_USER, nil, nil, nil, 'GSSAPI', extra_opts)
      end
    end

    def test_wrong_service_name_fails_uri
      set_system_properties
      require 'cgi'
      username = CGI::escape(ENV['MONGODB_GSSAPI_USER'])
      uri = "mongodb://#{username}@#{ENV['MONGODB_GSSAPI_HOST']}:#{ENV['MONGODB_GSSAPI_PORT']}/?" +
            "authMechanism=GSSAPI&gssapiServiceName=example"
      client = @client.class.from_uri(uri)
      assert_raise_error Java::OrgMongodbSasl::MongoSecurityException do
        client['kerberos'].command(:dbstats => 1)
      end
    end

    def test_extra_opts
      extra_opts = { :gssapi_service_name => 'example', :canonicalize_host_name => true }
      client = Mongo::MongoClient.new(MONGODB_GSSAPI_HOST, MONGODB_GSSAPI_PORT)
      set_system_properties
      Mongo::Sasl::GSSAPI.expects(:authenticate).with do |username, client, socket, opts|
        opts[:gssapi_service_name] == extra_opts[:gssapi_service_name]
        opts[:canonicalize_host_name] == extra_opts[:canonicalize_host_name]
      end.returns('ok' => true)
      client['kerberos'].authenticate(MONGODB_GSSAPI_USER, nil, nil, nil, 'GSSAPI', extra_opts)
    end

    def test_extra_opts_uri
      extra_opts = { :gssapi_service_name => 'example', :canonicalize_host_name => true }
      set_system_properties
      Mongo::Sasl::GSSAPI.expects(:authenticate).with do |username, client, socket, opts|
        opts[:gssapi_service_name] == extra_opts[:gssapi_service_name]
        opts[:canonicalize_host_name] == extra_opts[:canonicalize_host_name]
      end.returns('ok' => true)

      require 'cgi'
      username = CGI::escape(ENV['MONGODB_GSSAPI_USER'])
      uri = "mongodb://#{username}@#{ENV['MONGODB_GSSAPI_HOST']}:#{ENV['MONGODB_GSSAPI_PORT']}/?" +
            "authMechanism=GSSAPI&gssapiServiceName=example&canonicalizeHostName=true"
      client = @client.class.from_uri(uri)
      client.expects(:receive_message).returns([[{ 'ok' => 1 }], 1, 1])
      client['kerberos'].command(:dbstats => 1)
    end

    # In order to run this test, you must set the following environment variable:
    #
    #   export MONGODB_GSSAPI_HOST_IP='---.---.---.---'
    #
    if ENV.key?('MONGODB_GSSAPI_HOST_IP')
      def test_canonicalize_host_name
        extra_opts = { :canonicalize_host_name => true }
        set_system_properties
        client = Mongo::MongoClient.new(ENV['MONGODB_GSSAPI_HOST_IP'], MONGODB_GSSAPI_PORT)
        db = client['kerberos']
        db.authenticate(MONGODB_GSSAPI_USER, nil, nil, nil, 'GSSAPI', extra_opts)
        assert db.command(:dbstats => 1)
      end
    end

    def test_invalid_extra_options
      extra_opts = { :invalid => true, :option => true }
      client = Mongo::MongoClient.new(MONGODB_GSSAPI_HOST)
      assert_raise Mongo::MongoArgumentError do
        client['kerberos'].authenticate(MONGODB_GSSAPI_USER, nil, nil, nil, 'GSSAPI', extra_opts)
      end
    end

    private

    def set_system_properties
      java.lang.System.set_property 'javax.security.auth.useSubjectCredsOnly', 'false'
      java.lang.System.set_property "java.security.krb5.realm", MONGODB_GSSAPI_REALM
      java.lang.System.set_property "java.security.krb5.kdc", MONGODB_GSSAPI_KDC
      java.lang.System.set_property "java.security.auth.login.config", JAAS_LOGIN_CONFIG_FILE if JAAS_LOGIN_CONFIG_FILE
    end
  end
end

# ==== File: test/shared/authentication/sasl_plain_shared.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module SASLPlainTests

  # Tests for the PLAIN (LDAP) Authentication Mechanism.
  #
  # Note: These tests will be skipped automatically unless the test environment
  # has been configured.
  #
  # In order to run these tests, set the following environment variables:
  #
  #   export MONGODB_SASL_HOST='server.domain.com'
  #   export MONGODB_SASL_USER='application%2Fuser%40example.com'
  #   export MONGODB_SASL_PASS='password'
  #
  #   # optional (defaults to '$external')
  #   export MONGODB_SASL_SOURCE='source_database'
  #
  if ENV.key?('MONGODB_SASL_HOST') && ENV.key?('MONGODB_SASL_USER') && ENV.key?('MONGODB_SASL_PASS')

    def test_plain_authenticate
      replica_set = @client.class.name == 'Mongo::MongoReplicaSetClient'

      # TODO: Remove this once we have a replica set configured for SASL in CI
      return if ENV.key?('CI') && replica_set

      host = replica_set ? [ENV['MONGODB_SASL_HOST']] : ENV['MONGODB_SASL_HOST']
      client = @client.class.new(host)
      source = ENV['MONGODB_SASL_SOURCE'] || '$external'
      db = client['test']

      # should successfully authenticate
      assert db.authenticate(ENV['MONGODB_SASL_USER'], ENV['MONGODB_SASL_PASS'], true, source, 'PLAIN')
      assert client[source].logout

      # should raise on missing password
      ex = assert_raise Mongo::MongoArgumentError do
        db.authenticate(ENV['MONGODB_SASL_USER'], nil, true, source, 'PLAIN')
      end
      assert_match /username and password are required/, ex.message

      # should raise on invalid password
      assert_raise Mongo::AuthenticationError do
        db.authenticate(ENV['MONGODB_SASL_USER'], 'foo', true, source, 'PLAIN')
      end
    end

    def test_plain_authenticate_from_uri
      source = ENV['MONGODB_SASL_SOURCE'] || '$external'

      uri = "mongodb://#{ENV['MONGODB_SASL_USER']}:#{ENV['MONGODB_SASL_PASS']}@" +
            "#{ENV['MONGODB_SASL_HOST']}/some_db?authSource=#{source}" +
            "&authMechanism=PLAIN"
      client = @client.class.from_uri(uri)
      db = client['test']

      # should be able to checkout a socket (authentication gets applied)
      assert socket = client.checkout_reader(:mode => :primary)
      client[source].logout(:socket => socket)
      client.checkin(socket)

      uri = "mongodb://#{ENV['MONGODB_SASL_USER']}@#{ENV['MONGODB_SASL_HOST']}/" +
            "some_db?authSource=#{source}&authMechanism=PLAIN"

      # should raise for missing password
      ex = assert_raise Mongo::MongoArgumentError do
        client = @client.class.from_uri(uri)
      end
      assert_match /username and password are required/, ex.message

      uri = "mongodb://#{ENV['MONGODB_SASL_USER']}:foo@#{ENV['MONGODB_SASL_HOST']}/" +
            "some_db?authSource=#{source}&authMechanism=PLAIN"

      # should raise for invalid password
      client = @client.class.from_uri(uri)
      assert_raise Mongo::AuthenticationError do
        client.checkout_reader(:mode => :primary)
      end
    end
  end
end

# ==== File: test/shared/ssl_shared.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module SSLTests
  include Mongo

  MONGODB_X509_USERNAME = 'CN=client,OU=kerneluser,O=10Gen,L=New York City,ST=New York,C=US'
  CERT_PATH = "#{Dir.pwd}/test/fixtures/certificates/"
  CLIENT_CERT = "#{CERT_PATH}client.pem"
  CLIENT_CERT_PASS = "#{CERT_PATH}password_protected.pem"
  CA_CERT = "#{CERT_PATH}ca.pem"
  PASS_PHRASE = ENV['SSL_KEY_PASS_PHRASE']

  def create_client(*args)
    if @client_class == MongoClient
      @client_class.new(*args[0], args[1])
    else
      @client_class.new(args[0], args[1])
    end
  end

  # Requires MongoDB not built with SSL
  #
  def test_ssl_not_configured
    assert_raise Mongo::ConnectionTimeoutError do
      create_client(['localhost', 27017], :connect_timeout => 2, :ssl => true)
    end
  end

  # This test doesn't connect, no server config required
  def test_ssl_configuration
    # raises when ssl=false and ssl opts specified
    assert_raise MongoArgumentError do
      create_client(@connect_info, :connect => false,
                                   :ssl => false,
                                   :ssl_cert => CLIENT_CERT)
    end

    # raises when ssl=nil and ssl opts specified
    assert_raise MongoArgumentError do
      create_client(@connect_info, :connect => false,
                                   :ssl_key => CLIENT_CERT)
    end

    # raises when verify=true and no ca_cert
    assert_raise MongoArgumentError do
      create_client(@connect_info, :connect => false,
                                   :ssl => true,
                                   :ssl_key => CLIENT_CERT,
                                   :ssl_cert => CLIENT_CERT,
                                   :ssl_verify => true)
    end

    # raises when key passphrase is given without key file
    assert_raise MongoArgumentError do
      create_client(@connect_info, :connect => false,
                                   :ssl => true,
                                   :ssl_key_pass_phrase => PASS_PHRASE)
    end
  end

  # Requires MongoDB built with SSL and the following options:
  #
  #   mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #     --sslPEMKeyFile /path/to/server.pem \
  #     --sslCAFile /path/to/ca.pem \
  #     --sslCRLFile /path/to/crl.pem \
  #     --sslWeakCertificateValidation
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts
  #
  def test_ssl_basic
    client = create_client(@connect_info, :connect => false, :ssl => true)
    assert client.connect
  end

  # Requires MongoDB built with SSL and the following options:
  #
  #   mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #     --sslPEMKeyFile /path/to/server.pem \
  #     --sslCAFile /path/to/ca.pem \
  #     --sslCRLFile /path/to/crl.pem
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts
  #
  def test_ssl_with_cert
    client = create_client(@connect_info, :connect => false,
                                          :ssl => true,
                                          :ssl_cert => CLIENT_CERT,
                                          :ssl_key => CLIENT_CERT)
    assert client.connect
  end

  def test_ssl_with_peer_cert_validation
    client = create_client(@connect_info, :connect => false,
                                          :ssl => true,
                                          :ssl_key => CLIENT_CERT,
                                          :ssl_cert => CLIENT_CERT,
                                          :ssl_verify => true,
                                          :ssl_ca_cert => CA_CERT)
    assert client.connect
  end

  def test_ssl_peer_cert_validation_hostname_fail
    client = create_client(@bad_connect_info, :connect => false,
                                              :ssl => true,
                                              :ssl_key => CLIENT_CERT,
                                              :ssl_cert => CLIENT_CERT,
                                              :ssl_verify => true,
                                              :ssl_ca_cert => CA_CERT)
    assert_raise ConnectionFailure do
      client.connect
    end
  end

  # Requires MongoDB built with SSL and the following options:
  #
  #   mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #     --sslPEMKeyFile /path/to/password_protected.pem \
  #     --sslCAFile /path/to/ca.pem \
  #     --sslCRLFile /path/to/crl.pem
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts.
  # If SSL_KEY_PASS_PHRASE is not set as an environment variable,
  # you will be prompted to enter a passphrase at runtime.
  #
  def test_ssl_with_key_pass_phrase
    client = create_client(@connect_info, :connect => false,
                                          :ssl => true,
                                          :ssl_cert => CLIENT_CERT_PASS,
                                          :ssl_key => CLIENT_CERT_PASS,
                                          :ssl_key_pass_phrase => PASS_PHRASE)
    assert client.connect
  end

  def test_ssl_with_key_pass_phrase_fail
    client = create_client(@connect_info, :connect => false,
                                          :ssl => true,
                                          :ssl_cert => CLIENT_CERT_PASS,
                                          :ssl_key => CLIENT_CERT_PASS,
                                          :ssl_key_pass_phrase => "secret")
    assert_raise OpenSSL::PKey::RSAError do
      client.connect
    end
  end

  # Requires mongod built with SSL and the following options:
  #
  #   mongod --dbpath /path/to/data/directory --sslOnNormalPorts \
  #     --sslPEMKeyFile /path/to/server.pem \
  #     --sslCAFile /path/to/ca.pem \
  #     --sslCRLFile /path/to/crl_client_revoked.pem
  #
  # Make sure you have 'server' as an alias for localhost in /etc/hosts
  #
  def test_ssl_with_invalid_cert
    assert_raise ConnectionFailure do
      create_client(@connect_info, :ssl => true,
                                   :ssl_key => CLIENT_CERT,
                                   :ssl_cert => CLIENT_CERT,
                                   :ssl_verify => true,
                                   :ssl_ca_cert => CA_CERT)
    end
  end

  # X509 Authentication Tests
  #
  # Requires MongoDB built with SSL and the following options:
  #
  #   mongod --auth --dbpath /path/to/data/directory --sslOnNormalPorts \
  #     --sslPEMKeyFile /path/to/server.pem \
  #     --sslCAFile /path/to/ca.pem \
  #     --sslCRLFile /path/to/crl.pem
  #
  # Note that the cert requires username:
  #   'CN=client,OU=kerneluser,O=10Gen,L=New York City,ST=New York,C=US'
  #
  def test_x509_authentication
    mechanism = 'MONGODB-X509'
    client = create_client(@connect_info, :ssl => true,
                                          :ssl_cert => CLIENT_CERT,
                                          :ssl_key => CLIENT_CERT)
    return unless client.server_version > '2.5.2'
    db = client.db('$external')

    # add user for test (enable auth)
    roles = [{:role => 'readWriteAnyDatabase', :db => 'admin'},
             {:role => 'userAdminAnyDatabase', :db => 'admin'}]
    db.add_user(MONGODB_X509_USERNAME, nil, false, :roles => roles)

    assert db.authenticate(MONGODB_X509_USERNAME, nil, nil, nil, mechanism)
    assert db.collection_names

    assert db.logout
    assert_raise Mongo::OperationFailure do
      db.collection_names
    end

    # username and valid certificate don't match
    assert_raise Mongo::AuthenticationError do
      db.authenticate('test', nil, nil, nil, mechanism)
    end

    # username required
    assert_raise Mongo::AuthenticationError do
      db.authenticate(nil, nil, nil, nil, mechanism)
    end

    assert MongoClient.from_uri(
      "mongodb://#{MONGODB_X509_USERNAME}@#{@uri_info}/?ssl=true;authMechanism=#{mechanism}",
      :ssl_cert => CLIENT_CERT, :ssl_key => CLIENT_CERT)
    assert db.authenticate(MONGODB_X509_USERNAME, nil, nil, nil, mechanism)
    assert db.collection_names

    # clean up and remove all users
    db.command(:dropAllUsersFromDatabase => 1)
    db.logout
  end
end

# ==== File: test/test_helper.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# NOTE: on ruby <1.9 you need to run individual tests with 'bundle exec'
unless RUBY_VERSION < '1.9' || ENV.key?('JENKINS_CI')
  require 'simplecov'
  require 'coveralls'

  SimpleCov.formatter = SimpleCov::Formatter::MultiFormatter[
    SimpleCov::Formatter::HTMLFormatter,
    Coveralls::SimpleCov::Formatter
  ]

  SimpleCov.start do
    add_group 'Driver', 'lib/mongo'
    add_group 'BSON', 'lib/bson'
    add_filter 'tasks'
    add_filter 'test'
    add_filter 'bin'
  end
end

# required for at_exit, at_start hooks
require 'test-unit'
require 'test/unit'

require 'shoulda'
require 'mocha/setup'

# cluster manager
require 'tools/mongo_config'

# test helpers
require 'helpers/general'
require 'helpers/test_unit'

# optional development and debug utilities
begin
  require 'pry-rescue'
  require 'pry-nav'
rescue LoadError
  # failed to load, skipping pry
end

# ==== File: test/threading/basic_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

class ThreadingTest < Test::Unit::TestCase
  include Mongo

  def setup
    @client = standard_connection(:pool_size => 10, :pool_timeout => 30)
    @db = @client.db(TEST_DB)
    @coll = @db.collection('thread-test-collection')
    @coll.drop

    collections = ['duplicate', 'unique']
    collections.each do |coll_name|
      coll = @db.collection(coll_name)
      coll.drop
      coll.insert("test" => "insert")
      coll.insert("test" => "update")
      instance_variable_set("@#{coll_name}", coll)
    end

    @unique.create_index("test", :unique => true)
  end

  def test_safe_update
    threads = []
    300.times do |i|
      threads << Thread.new do
        if i % 2 == 0
          assert_raise Mongo::OperationFailure do
            @unique.update({"test" => "insert"}, {"$set" => {"test" => "update"}})
          end
        else
          @duplicate.update({"test" => "insert"}, {"$set" => {"test" => "update"}})
          @duplicate.update({"test" => "update"}, {"$set" => {"test" => "insert"}})
        end
      end
    end
    threads.each {|thread| thread.join}
  end

  def test_safe_insert
    threads = []
    300.times do |i|
      threads << Thread.new do
        if i % 2 == 0
          assert_raise Mongo::OperationFailure do
            @unique.insert({"test" => "insert"})
          end
        else
          @duplicate.insert({"test" => "insert"})
        end
      end
    end
    threads.each {|thread| thread.join}
  end

  def test_concurrent_find
    n_threads = 50

    1000.times do |i|
      @coll.insert({ "x" => "a" })
    end

    threads = []
    n_threads.times do |i|
      threads << Thread.new do
        sum = 0
        @coll.find.to_a.size
      end
    end

    thread_values = threads.map(&:value)
    assert thread_values.all?{|v| v == 1000}
    assert_equal thread_values.size, n_threads
  end

  def test_threading
    @coll.drop
    @coll = @db.collection('thread-test-collection')

    docs = []
    1000.times {|i| docs << {:x => i}}
    @coll.insert(docs)

    threads = []

    10.times do |i|
      threads[i] = Thread.new do
        sum = 0
        @coll.find().each do |document|
          sum += document["x"]
        end
        assert_equal 499500, sum
      end
    end

    10.times do |i|
      threads[i].join
    end
  end
end
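The `test_threading` case above relies on the fact that summing `x` over the 1000 inserted documents yields 0 + 1 + … + 999 = 999 * 1000 / 2 = 499500 in every thread. A server-free sketch of the same check using plain Ruby threads and an in-memory document array (the names here are illustrative, not part of the driver):

```ruby
# Build the same documents the test inserts: {:x => 0} .. {:x => 999}.
docs = (0...1000).map { |i| { "x" => i } }

# Ten threads each independently sum the "x" values, as in test_threading.
threads = 10.times.map do
  Thread.new do
    docs.reduce(0) { |sum, doc| sum + doc["x"] }
  end
end

# Thread#value joins the thread and returns the block's result.
totals = threads.map(&:value)
p totals.uniq # => [499500]
```

`Thread#value` is used instead of `Thread#join` so each thread's sum can be collected and compared, mirroring how `test_concurrent_find` gathers `thread_values`.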
ruby-mongo-1.10.0/test/tools/mongo_config.rb #!/usr/bin/env ruby # Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'socket' require 'fileutils' require 'mongo' require 'sfl' $debug_level = 2 STDOUT.sync = true def debug(level, arg) if level <= $debug_level file_line = caller[0][/(.*:\d+):/, 1] calling_method = caller[0][/`([^']*)'/, 1] puts "#{file_line}:#{calling_method}:#{arg.class == String ? arg : arg.inspect}" end end # # Design Notes # Configuration and Cluster Management are modularized so that the Cluster Manager # can be supplied with any configuration to run. # A configuration can be edited, modified, copied into a test file, and supplied to a cluster manager # as a parameter.
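The design notes above can be made concrete with a small sketch. Everything here is illustrative only — the hash keys mirror the `DEFAULT_BASE_OPTS` and `DEFAULT_REPLICA_SET` constants defined further down in `Mongo::Config`, and the final calls are left as comments because they require a running `mongod` binary:

```ruby
# A cluster topology is just a flat options hash; tests tailor it with
# Hash#merge before handing it to the cluster manager.
base_opts   = { :host => 'localhost', :dbpath => 'data', :logpath => 'data/log' }
replica_set = base_opts.merge(:replicas => 3, :arbiters => 0)

# A test wanting a larger set simply overrides the counts:
big_set = replica_set.merge(:replicas => 5, :arbiters => 1)

puts big_set[:replicas] # => 5

# With mongod on the PATH, the hash would then be expanded and run:
#   config  = Mongo::Config.cluster(big_set)
#   manager = Mongo::Config::ClusterManager.new(config).start
```

Because the configuration is plain data until `ClusterManager#start`, a test file can copy, tweak, and version a topology without touching the manager code, which is the modularization the notes describe.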
# module Mongo class Config DEFAULT_BASE_OPTS = { :host => 'localhost', :dbpath => 'data', :logpath => 'data/log' } DEFAULT_REPLICA_SET = DEFAULT_BASE_OPTS.merge( :replicas => 3, :arbiters => 0 ) DEFAULT_SHARDED_SIMPLE = DEFAULT_BASE_OPTS.merge( :shards => 2, :configs => 1, :routers => 2 ) DEFAULT_SHARDED_REPLICA = DEFAULT_SHARDED_SIMPLE.merge( :replicas => 3, :arbiters => 0) IGNORE_KEYS = [:host, :command, :_id] SHARDING_OPT_KEYS = [:shards, :configs, :routers] REPLICA_OPT_KEYS = [:replicas, :arbiters] MONGODS_OPT_KEYS = [:mongods] CLUSTER_OPT_KEYS = SHARDING_OPT_KEYS + REPLICA_OPT_KEYS + MONGODS_OPT_KEYS FLAGS = [:noprealloc, :smallfiles, :logappend, :configsvr, :shardsvr, :quiet, :fastsync, :auth, :ipv6] DEFAULT_VERIFIES = 60 BASE_PORT = 3000 @@port = BASE_PORT def self.configdb(config) config[:configs].collect{|c|"#{c[:host]}:#{c[:port]}"}.join(' ') end def self.cluster(opts = DEFAULT_SHARDED_SIMPLE) raise "missing required option" if [:host, :dbpath].any?{|k| !opts[k]} config = opts.reject {|k,v| CLUSTER_OPT_KEYS.include?(k)} kinds = CLUSTER_OPT_KEYS.select{|key| opts.has_key?(key)} # order is significant replica_count = 0 kinds.each do |kind| config[kind] = opts.fetch(kind,1).times.collect do |i| #default to 1 of whatever if kind == :shards && opts[:replicas] self.cluster(opts.reject{|k,v| SHARDING_OPT_KEYS.include?(k)}.merge(:dbpath => path)) else node = case kind when :replicas make_replica(opts, replica_count) when :arbiters make_replica(opts, replica_count) when :configs make_config(opts) when :routers make_router(config, opts) else make_mongod(kind, opts) end replica_count += 1 if [:replicas, :arbiters].member?(kind) node end end end config end def self.make_mongo(kind, opts) dbpath = opts[:dbpath] port = self.get_available_port path = "#{dbpath}/#{kind}-#{port}" logpath = "#{path}/#{kind}.log" { :host => opts[:host], :port => port, :logpath => logpath, :logappend => true } end def self.make_mongod(kind, opts) params = make_mongo('mongods', opts) mongod 
= ENV['MONGOD'] || 'mongod' path = File.dirname(params[:logpath]) noprealloc = opts[:noprealloc] || true smallfiles = opts[:smallfiles] || true quiet = opts[:quiet] || true fast_sync = opts[:fastsync] || false auth = opts[:auth] || true ipv6 = opts[:ipv6].nil? ? true : opts[:ipv6] params.merge(:command => mongod, :dbpath => path, :smallfiles => smallfiles, :noprealloc => noprealloc, :quiet => quiet, :fastsync => fast_sync, :auth => auth, :ipv6 => ipv6) end def self.make_replica(opts, id) params = make_mongod('replicas', opts) replSet = opts[:replSet] || 'ruby-driver-test' oplogSize = opts[:oplog_size] || 5 keyFile = opts[:key_file] || '/test/fixtures/auth/keyfile' keyFile = Dir.pwd << keyFile system "chmod 600 #{keyFile}" params.merge(:_id => id, :replSet => replSet, :oplogSize => oplogSize, :keyFile => keyFile) end def self.make_config(opts) params = make_mongod('configs', opts) params.merge(:configsvr => nil) end def self.make_router(config, opts) params = make_mongo('routers', opts) mongos = ENV['MONGOS'] || 'mongos' params.merge( :command => mongos, :configdb => self.configdb(config) ) end def self.port_available?(port) ret = false socket = Socket.new(Socket::Constants::AF_INET, Socket::Constants::SOCK_STREAM, 0) socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1) sockaddr = Socket.sockaddr_in(port, '0.0.0.0') begin socket.bind(sockaddr) ret = true rescue Exception end socket.close ret end def self.get_available_port while true port = @@port @@port += 1 break if port_available?(port) end port end class SysProc attr_reader :pid, :cmd def initialize(cmd = nil) @pid = nil @cmd = cmd end def clear_zombie if @pid begin pid = Process.waitpid(@pid, Process::WNOHANG) rescue Errno::ECHILD # JVM might have already reaped the exit status end @pid = nil if pid && pid > 0 end end def start(verifies = 0) clear_zombie return @pid if running? 
begin # redirection not supported in jruby if defined?(RUBY_ENGINE) && RUBY_ENGINE == 'jruby' @pid = Process.spawn(*@cmd) else cmd_and_opts = [@cmd, {:out => '/dev/null'}].flatten @pid = Process.spawn(*cmd_and_opts) end verify(verifies) if verifies > 0 @pid end end def stop kill wait end def kill(signal_no = 2) begin @pid && Process.kill(signal_no, @pid) && true rescue Errno::ESRCH false end # cleanup lock if unclean shutdown begin File.delete(File.join(@config[:dbpath], 'mongod.lock')) if @config[:dbpath] rescue Errno::ENOENT end end def wait begin Process.waitpid(@pid) if @pid rescue Errno::ECHILD # JVM might have already reaped the exit status end @pid = nil end def running? begin @pid && Process.kill(0, @pid) && true rescue Errno::ESRCH false end end def verify(verifies = DEFAULT_VERIFIES) verifies.times do |i| return @pid if running? sleep 1 end nil end end class Server < SysProc attr_reader :host, :port def initialize(cmd = nil, host = nil, port = nil) super(cmd) @host = host @port = port end def host_port [@host, @port].join(':') end def host_port_a # for old format [@host, @port] end end class DbServer < Server attr_accessor :config def initialize(config) @config = config dbpath = @config[:dbpath] [dbpath, File.dirname(@config[:logpath])].compact.each{|dir| FileUtils.mkdir_p(dir) unless File.directory?(dir) } command = @config[:command] || 'mongod' params = @config.reject{|k,v| IGNORE_KEYS.include?(k)} arguments = params.sort{|a, b| a[0].to_s <=> b[0].to_s}.collect do |arg, value| # sort block is needed for 1.8.7 which lacks Symbol#<=> argument = '--' + arg.to_s if FLAGS.member?(arg) && value == true [argument] elsif !FLAGS.member?(arg) [argument, value.to_s] end end cmd = [command, arguments].flatten.compact super(cmd, @config[:host], @config[:port]) end def start(verifies = DEFAULT_VERIFIES) super(verifies) verify(verifies) end def verify(verifies = 600) verifies.times do |i| #puts "DbServer.verify via connection probe - port:#{@port.inspect} 
iteration:#{i} @pid:#{@pid.inspect} kill:#{Process.kill(0, @pid).inspect} running?:#{running?.inspect} cmd:#{cmd.inspect}" begin raise Mongo::ConnectionFailure unless running? Mongo::MongoClient.new(@host, @port).close #puts "DbServer.verified via connection - port: #{@port} iteration: #{i}" return @pid rescue Mongo::ConnectionFailure sleep 1 end end system "ps -fp #{@pid}; cat #{@config[:logpath]}" raise Mongo::ConnectionFailure, "DbServer.start verify via connection probe failed - port:#{@port.inspect} @pid:#{@pid.inspect} kill:#{Process.kill(0, @pid).inspect} running?:#{running?.inspect} cmd:#{cmd.inspect}" end end class ClusterManager attr_reader :config def initialize(config) @config = config @servers = {} Mongo::Config::CLUSTER_OPT_KEYS.each do |key| @servers[key] = @config[key].collect{|conf| DbServer.new(conf)} if @config[key] end end def servers(key = nil) @servers.collect{|k,v| (!key || key == k) ? v : nil}.flatten.compact end def command( cmd_servers, db_name, cmd, opts = {} ) ret = [] cmd = cmd.class == Array ? cmd : [ cmd ] debug 3, "ClusterManager.command cmd:#{cmd.inspect}" cmd_servers = cmd_servers.class == Array ? cmd_servers : [cmd_servers] cmd_servers.each do |cmd_server| debug 3, cmd_server.inspect cmd_server = cmd_server.config if cmd_server.is_a?(DbServer) client = Mongo::MongoClient.new(cmd_server[:host], cmd_server[:port]) cmd.each do |c| debug 3, "ClusterManager.command c:#{c.inspect}" response = client[db_name].command( c, opts ) debug 3, "ClusterManager.command response:#{response.inspect}" raise Mongo::OperationFailure, "c:#{c.inspect} opts:#{opts.inspect} failed" unless response["ok"] == 1.0 || opts.fetch(:check_response, true) == false ret << response end client.close end debug 3, "command ret:#{ret.inspect}" ret.size == 1 ? 
ret.first : ret end def repl_set_get_status command( @config[:replicas], 'admin', { :replSetGetStatus => 1 }, {:check_response => false } ) end def repl_set_get_config host, port = primary_name.split(":") client = Mongo::MongoClient.new(host, port) client['local']['system.replset'].find_one end def repl_set_config members = [] @config[:replicas].each{|s| members << { :_id => s[:_id], :host => "#{s[:host]}:#{s[:port]}", :tags => { :node => s[:_id].to_s } } } @config[:arbiters].each{|s| members << { :_id => s[:_id], :host => "#{s[:host]}:#{s[:port]}", :arbiterOnly => true } } { :_id => @config[:replicas].first[:replSet], :members => members } end def repl_set_initiate( cfg = nil ) command( @config[:replicas].first, 'admin', { :replSetInitiate => cfg || repl_set_config } ) end def repl_set_startup states = nil healthy = false 60.times do # enter the thunderdome... states = repl_set_get_status.zip(repl_set_is_master) healthy = states.all? do |status, is_master| # check replica set status for member list next unless status['ok'] == 1.0 && (members = status['members']) # ensure all replica set members are in a valid state next unless members.all? { |m| [1,2,7].include?(m['state']) } # check for primary replica set member next unless (primary = members.find { |m| m['state'] == 1 }) # check replica set member optimes primary_optime = primary['optime'].seconds next unless primary_optime && members.all? 
do |m| m['state'] == 7 || primary_optime - m['optime'].seconds < 5 end # check replica set state case status['myState'] when 1 is_master['ismaster'] == true && is_master['secondary'] == false when 2 is_master['ismaster'] == false && is_master['secondary'] == true when 7 is_master['ismaster'] == false && is_master['secondary'] == false end end return healthy if healthy sleep(1) end raise Mongo::OperationFailure, "replSet startup failed - status: #{states.inspect}" end def repl_set_seeds @config[:replicas].collect{|node| "#{node[:host]}:#{node[:port]}"} end def repl_set_seeds_old @config[:replicas].collect{|node| [node[:host], node[:port]]} end def repl_set_seeds_uri repl_set_seeds.join(',') end def repl_set_name @config[:replicas].first[:replSet] end def member_names_by_state(state) states = Array(state) # Any status with a REMOVED node won't have the full cluster state status = repl_set_get_status.find {|status| status['members'].find {|m| m['state'] == 'REMOVED'}.nil?} status['members'].find_all{|member| states.index(member['state']) }.collect{|member| member['name']} end def primary_name member_names_by_state(1).first end def secondary_names member_names_by_state(2) end def replica_names member_names_by_state([1,2]) end def arbiter_names member_names_by_state(7) end def members_by_name(names) names.collect do |name| member_by_name(name) end.compact end def member_by_name(name) servers.find{|server| server.host_port == name} end def primary members_by_name([primary_name]).first end def secondaries members_by_name(secondary_names) end def stop_primary primary.stop end def stop_secondary secondaries[rand(secondaries.length)].stop end def replicas members_by_name(replica_names) end def arbiters members_by_name(arbiter_names) end def config_names_by_kind(kind) @config[kind].collect{|conf| "#{conf[:host]}:#{conf[:port]}"} end def shards members_by_name(config_names_by_kind(:shards)) end def repl_set_reconfig(new_config) new_config['version'] = 
repl_set_get_config['version'] + 1 command( primary, 'admin', { :replSetReconfig => new_config } ) repl_set_startup end def repl_set_remove_node(state = [1,2]) names = member_names_by_state(state) name = names[rand(names.length)] @config[:replicas].delete_if{|node| "#{node[:host]}:#{node[:port]}" == name} repl_set_reconfig(repl_set_config) end def repl_set_add_node end def configs members_by_name(config_names_by_kind(:configs)) end def routers members_by_name(config_names_by_kind(:routers)) end def mongos_seeds config_names_by_kind(:routers) end def ismaster(servers) command( servers, 'admin', { :ismaster => 1 } ) end def sharded_cluster_is_master ismaster(@config[:routers]) end def repl_set_is_master ismaster(@config[:replicas]) end def addshards(shards = @config[:shards]) command( @config[:routers].first, 'admin', Array(shards).collect{|s| { :addshard => "#{s[:host]}:#{s[:port]}" } } ) end def listshards command( @config[:routers].first, 'admin', { :listshards => 1 } ) end def enablesharding( dbname ) command( @config[:routers].first, 'admin', { :enablesharding => dbname } ) end def shardcollection( namespace, key, unique = false ) command( @config[:routers].first, 'admin', { :shardcollection => namespace, :key => key, :unique => unique } ) end def mongos_discover # can also do @config[:routers] find but only want mongos for connections (@config[:configs]).collect do |cmd_server| client = Mongo::MongoClient.new(cmd_server[:host], cmd_server[:port]) result = client['config']['mongos'].find.to_a client.close result end end def start # Must start configs before mongos -- hash order not guaranteed on 1.8.X servers(:configs).each{|server| server.start} servers.each{|server| server.start} # TODO - sharded replica sets - pending if @config[:replicas] repl_set_initiate if repl_set_get_status.first['startupStatus'] == 3 repl_set_startup end if @config[:routers] addshards if listshards['shards'].size == 0 end self end alias :restart :start def stop servers.each{|server| 
server.stop} self end def clobber FileUtils.rm_rf @config[:dbpath] self end end end end ruby-mongo-1.10.0/test/tools/mongo_config_test.rb000066400000000000000000000116121233461006100220330ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'test_helper' class MongoConfig < Test::Unit::TestCase def startup @sys_proc = nil end def shutdown @sys_proc.stop if @sys_proc && @sys_proc.running? end test "config defaults" do [ Mongo::Config::DEFAULT_BASE_OPTS, Mongo::Config::DEFAULT_REPLICA_SET, Mongo::Config::DEFAULT_SHARDED_SIMPLE, Mongo::Config::DEFAULT_SHARDED_REPLICA ].each do |params| config = Mongo::Config.cluster(params) assert(config.size > 0) end end test "get available port" do assert_not_nil(Mongo::Config.get_available_port) end test "SysProc start" do cmd = "true" @sys_proc = Mongo::Config::SysProc.new(cmd) assert_equal(cmd, @sys_proc.cmd) assert_nil(@sys_proc.pid) start_and_assert_running?(@sys_proc) end test "SysProc wait" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) assert(@sys_proc.running?) @sys_proc.wait assert(!@sys_proc.running?) end test "SysProc kill" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) @sys_proc.kill @sys_proc.wait assert(!@sys_proc.running?) end test "SysProc stop" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) @sys_proc.stop assert(!@sys_proc.running?) 
end test "SysProc zombie respawn" do @sys_proc = Mongo::Config::SysProc.new("true") start_and_assert_running?(@sys_proc) prev_pid = @sys_proc.pid @sys_proc.kill # don't wait, leaving a zombie assert(@sys_proc.running?) start_and_assert_running?(@sys_proc) assert(prev_pid && @sys_proc.pid && prev_pid != @sys_proc.pid, 'SysProc#start should spawn a new process after a zombie') @sys_proc.stop assert(!@sys_proc.running?) end test "Server" do server = Mongo::Config::Server.new('a cmd', 'host', 1234) assert_equal('a cmd', server.cmd) assert_equal('host', server.host) assert_equal(1234, server.port) end test "DbServer" do config = Mongo::Config::DEFAULT_BASE_OPTS server = Mongo::Config::DbServer.new(config) assert_equal(config, server.config) assert_equal("mongod --dbpath data --logpath data/log", server.cmd) assert_equal(config[:host], server.host) assert_equal(config[:port], server.port) end def cluster_test(opts) #debug 1, opts.inspect config = Mongo::Config.cluster(opts) #debug 1, config.inspect manager = Mongo::Config::ClusterManager.new(config) assert_equal(config, manager.config) manager.start yield manager manager.stop manager.servers.each{|s| assert(!s.running?)} manager.clobber end test "cluster manager base" do cluster_test(Mongo::Config::DEFAULT_BASE_OPTS) do |manager| end end test "cluster manager replica set" do cluster_test(Mongo::Config::DEFAULT_REPLICA_SET) do |manager| servers = manager.servers servers.each do |server| assert_not_nil(Mongo::MongoClient.new(server.host, server.port)) assert_match(/oplogSize/, server.cmd, '--oplogSize option should be specified') assert_match(/smallfiles/, server.cmd, '--smallfiles option should be specified') assert_no_match(/nojournal/, server.cmd, '--nojournal option should not be specified') assert_match(/noprealloc/, server.cmd, '--noprealloc option should be specified') end end end test "cluster manager sharded simple" do cluster_test(Mongo::Config::DEFAULT_SHARDED_SIMPLE) do |manager| servers = manager.shards + 
manager.configs servers.each do |server| assert_not_nil(Mongo::MongoClient.new(server.host, server.port)) assert_match(/oplogSize/, server.cmd, '--oplogSize option should be specified') assert_match(/smallfiles/, server.cmd, '--smallfiles option should be specified') assert_no_match(/nojournal/, server.cmd, '--nojournal option should not be specified') assert_match(/noprealloc/, server.cmd, '--noprealloc option should be specified') end end end test "cluster manager sharded replica" do #cluster_test(Mongo::Config::DEFAULT_SHARDED_REPLICA) # not yet supported by ClusterManager end private def start_and_assert_running?(sys_proc) assert_not_nil(sys_proc.start(0)) assert_not_nil(sys_proc.pid) assert(sys_proc.running?) end end ruby-mongo-1.10.0/test/unit/000077500000000000000000000000001233461006100156215ustar00rootroot00000000000000ruby-mongo-1.10.0/test/unit/client_test.rb000066400000000000000000000277221233461006100204750ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
require 'test_helper' include Mongo class ClientUnitTest < Test::Unit::TestCase context "Mongo::MongoClient initialization " do context "given a single node" do setup do @client = MongoClient.new('localhost', 27017, :connect => false) TCPSocket.stubs(:new).returns(new_mock_socket) admin_db = new_mock_db admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) @client.expects(:[]).with('admin').returns(admin_db) @client.connect end should "gle writes by default" do assert_equal 1, @client.write_concern[:w] end should "set localhost and port to master" do assert_equal 'localhost', @client.primary_pool.host assert_equal 27017, @client.primary_pool.port end should "set connection pool to 1" do assert_equal 1, @client.primary_pool.size end should "default slave_ok to false" do assert !@client.slave_ok? end should "not raise error if no host or port is supplied" do assert_nothing_raised do MongoClient.new(:w => 1, :connect => false) end assert_nothing_raised do MongoClient.new('localhost', :w => 1, :connect=> false) end end should "warn if invalid options are specified" do client = MongoClient.allocate opts = {:connect => false} MongoReplicaSetClient::REPL_SET_OPTS.each do |opt| client.expects(:warn).with("#{opt} is not a valid option for #{client.class}") opts[opt] = true end args = ['localhost', 27017, opts] client.send(:initialize, *args) end context "given a replica set" do should "warn if invalid options are specified" do client = MongoReplicaSetClient.allocate opts = {:connect => false} MongoClient::CLIENT_ONLY_OPTS.each do |opt| client.expects(:warn).with("#{opt} is not a valid option for #{client.class}") opts[opt] = true end args = [['localhost:27017'], opts] client.send(:initialize, *args) end should "throw error if superflous arguments are specified" do assert_raise MongoArgumentError do MongoReplicaSetClient.new(['localhost:27017'], ['localhost:27018'], {:connect => false}) end end end end context "initializing with a unix socket" do setup do 
@connection = Mongo::Connection.new('/tmp/mongod.sock', :connect => false) UNIXSocket.stubs(:new).returns(new_mock_unix_socket) end should "parse a unix socket" do assert_equal "/tmp/mongod.sock", @connection.host_port.first end end context "initializing with a mongodb uri" do should "parse a simple uri" do @client = MongoClient.from_uri("mongodb://localhost", :connect => false) assert_equal ['localhost', 27017], @client.host_port end should "set auth source" do @client = MongoClient.from_uri("mongodb://user:pass@localhost?authSource=foo", :connect => false) assert_equal 'foo', @client.auths.first[:source] end should "set auth mechanism" do @client = MongoClient.from_uri("mongodb://user@localhost?authMechanism=MONGODB-X509", :connect => false) assert_equal 'MONGODB-X509', @client.auths.first[:mechanism] assert_raise MongoArgumentError do MongoClient.from_uri("mongodb://user@localhost?authMechanism=INVALID", :connect => false) end end should "allow a complex host names" do host_name = "foo.bar-12345.org" @client = MongoClient.from_uri("mongodb://#{host_name}", :connect => false) assert_equal [host_name, 27017], @client.host_port end should "allow db without username and password" do host_name = "foo.bar-12345.org" @client = MongoClient.from_uri("mongodb://#{host_name}/foo", :connect => false) assert_equal [host_name, 27017], @client.host_port end should "set write concern options on connection" do host_name = "localhost" opts = "w=2&wtimeoutMS=1000&fsync=true&journal=true" @client = MongoClient.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false) assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @client.write_concern) end should "set timeout options on connection" do host_name = "localhost" opts = "connectTimeoutMS=1000&socketTimeoutMS=5000" @client = MongoClient.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false) assert_equal 1, @client.connect_timeout assert_equal 5, @client.op_timeout end should "parse a uri with a 
hyphen & underscore in the username or password" do @client = MongoClient.from_uri("mongodb://hyphen-user_name:p-s_s@localhost:27017/db", :connect => false) assert_equal ['localhost', 27017], @client.host_port auth_hash = { :db_name => 'db', :username => 'hyphen-user_name', :password => 'p-s_s', :source => 'db', :mechanism => Authentication::DEFAULT_MECHANISM, :extra => {} } assert_equal auth_hash, @client.auths.first end should "attempt to connect" do TCPSocket.stubs(:new).returns(new_mock_socket) @client = MongoClient.from_uri("mongodb://localhost", :connect => false) admin_db = new_mock_db admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) @client.expects(:[]).with('admin').returns(admin_db) @client.connect end should "raise an error on invalid uris" do assert_raise MongoArgumentError do MongoClient.from_uri("mongo://localhost", :connect => false) end assert_raise MongoArgumentError do MongoClient.from_uri("mongodb://localhost:abc", :connect => false) end end should "require password if using legacy auth and username present" do assert MongoClient.from_uri("mongodb://kyle:jones@localhost/db", :connect => false) assert_raise MongoArgumentError do MongoClient.from_uri("mongodb://kyle:@localhost", :connect => false) end assert_raise MongoArgumentError do MongoClient.from_uri("mongodb://kyle@localhost", :connect => false) end end end context "initializing with ENV['MONGODB_URI']" do should "parse a simple uri" do uri = "mongodb://localhost?connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal ['localhost', 27017], @client.host_port end end should "set auth source" do uri = "mongodb://user:pass@localhost?authSource=foo&connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal 'foo', @client.auths.first[:source] end end should "set auth mechanism" do uri = "mongodb://user@localhost?authMechanism=MONGODB-X509&connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal 
'MONGODB-X509', @client.auths.first[:mechanism] ENV['MONGODB_URI'] = "mongodb://user@localhost?authMechanism=INVALID&connect=false" assert_raise MongoArgumentError do MongoClient.new end end end should "allow a complex host names" do host_name = "foo.bar-12345.org" uri = "mongodb://#{host_name}?connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal [host_name, 27017], @client.host_port end end should "allow db without username and password" do host_name = "foo.bar-12345.org" uri = "mongodb://#{host_name}/foo?connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal [host_name, 27017], @client.host_port end end should "set write concern options on connection" do host_name = "localhost" opts = "w=2&wtimeoutMS=1000&fsync=true&journal=true&connect=false" uri = "mongodb://#{host_name}/foo?#{opts}" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @client.write_concern) end end should "set timeout options on connection" do host_name = "localhost" opts = "connectTimeoutMS=1000&socketTimeoutMS=5000&connect=false" uri = "mongodb://#{host_name}/foo?#{opts}" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal 1, @client.connect_timeout assert_equal 5, @client.op_timeout end end should "parse a uri with a hyphen & underscore in the username or password" do uri = "mongodb://hyphen-user_name:p-s_s@localhost:27017/db?connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new assert_equal ['localhost', 27017], @client.host_port auth_hash = { :db_name => 'db', :username => 'hyphen-user_name', :password => 'p-s_s', :source => 'db', :mechanism => Authentication::DEFAULT_MECHANISM, :extra => {} } assert_equal auth_hash, @client.auths.first end end should "attempt to connect" do TCPSocket.stubs(:new).returns(new_mock_socket) uri = "mongodb://localhost?connect=false" with_preserved_env_uri(uri) do @client = MongoClient.new 
admin_db = new_mock_db admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1}) @client.expects(:[]).with('admin').returns(admin_db) @client.connect end end should "raise an error on invalid uris" do uri = "mongo://localhost" with_preserved_env_uri(uri) do assert_raise MongoArgumentError do MongoClient.new end ENV['MONGODB_URI'] = "mongodb://localhost:abc?connect=false" assert_raise MongoArgumentError do MongoClient.new end end end should "require password if using legacy auth and username present" do uri = "mongodb://kyle:jones@localhost?connect=false" with_preserved_env_uri(uri) do assert MongoClient.new ENV['MONGODB_URI'] = "mongodb://kyle:@localhost?connect=false" assert_raise MongoArgumentError do MongoClient.new end ENV['MONGODB_URI'] = "mongodb://kyle@localhost?connect=false" assert_raise MongoArgumentError do MongoClient.new end end end should "require password if using PLAIN auth and username present" do uri = "mongodb://kyle:jones@localhost?connect=false&authMechanism=PLAIN" with_preserved_env_uri(uri) do assert MongoClient.new ENV['MONGODB_URI'] = "mongodb://kyle:@localhost?connect=false&authMechanism=PLAIN" assert_raise MongoArgumentError do MongoClient.new end ENV['MONGODB_URI'] = "mongodb://kyle@localhost?connect=false&authMechanism=PLAIN" assert_raise MongoArgumentError do MongoClient.new end end end end end end ruby-mongo-1.10.0/test/unit/collection_test.rb000066400000000000000000000130141233461006100213370ustar00rootroot00000000000000# Copyright (C) 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

module Mongo
  class Collection
    attr_reader :operation_writer, :command_writer
  end
end

class CollectionUnitTest < Test::Unit::TestCase

  context "Basic operations: " do
    setup do
      @logger = mock()
      @logger.stubs(:level => 0)
      @logger.expects(:debug)
      @client = MongoClient.new('localhost', 27017, :logger => @logger, :connect => false)
      @db = @client[TEST_DB]
      @coll = @db.collection('collection-unit-test')
    end

    should "send update message" do
      @client.expects(:send_message_with_gle).with do |op, msg, log|
        op == 2001
      end
      @coll.operation_writer.stubs(:log_operation)
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "send insert message" do
      @client.expects(:send_message_with_gle).with do |op, msg, log|
        op == 2002
      end
      @coll.operation_writer.expects(:log_operation).with do |name, payload|
        (name == :insert) && payload[:documents][:title].include?('Moby')
      end
      @coll.insert({:title => 'Moby Dick'})
    end

    should "send sort data" do
      @client.expects(:checkout_reader).returns(new_mock_socket)
      @client.expects(:receive_message).with do |op, msg, log, sock|
        op == 2004
      end.returns([[], 0, 0])
      @logger.expects(:debug)
      @coll.find({:title => 'Moby Dick'}).sort([['title', 1], ['author', 1]]).next_document
    end

    should "not log binary data" do
      data = BSON::Binary.new(("BINARY " * 1000).unpack("c*"))
      @client.expects(:send_message_with_gle).with do |op, msg, log|
        op == 2002
      end
      @coll.operation_writer.expects(:log_operation).with do |name, payload|
        (name == :insert) && payload[:documents][:data].inspect.include?('Binary')
      end
      @coll.insert({:data => data})
    end

    should "send safe update message" do
      @client.expects(:send_message_with_gle).with do |op, msg, db_name, log|
        op == 2001
      end
      @coll.operation_writer.expects(:log_operation).with do |name, payload|
        (name == :update) && payload[:documents][:title].include?('Moby')
      end
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "send safe update message with legacy" do
      connection = Connection.new('localhost', 27017, :safe => true, :connect => false)
      db = connection[TEST_DB]
      coll = db.collection('collection-unit-test')
      connection.expects(:send_message_with_gle).with do |op, msg, db_name, log|
        op == 2001
      end
      coll.operation_writer.expects(:log_operation).with do |name, payload|
        (name == :update) && payload[:documents][:title].include?('Moby')
      end
      coll.update({}, {:title => 'Moby Dick'})
    end

    should "send safe insert message" do
      @client.expects(:send_message_with_gle).with do |op, msg, db_name, log|
        op == 2001
      end
      @coll.operation_writer.stubs(:log_operation)
      @coll.update({}, {:title => 'Moby Dick'})
    end

    should "not call insert for each ensure_index call" do
      @coll.expects(:generate_indexes).once
      @coll.ensure_index [["x", Mongo::DESCENDING]]
      @coll.ensure_index [["x", Mongo::DESCENDING]]
    end

    should "call generate_indexes for a new type on the same field for ensure_index" do
      @coll.expects(:generate_indexes).twice
      @coll.ensure_index [["x", Mongo::DESCENDING]]
      @coll.ensure_index [["x", Mongo::ASCENDING]]
    end

    should "call generate_indexes twice because the cache time is 0 seconds" do
      @db.cache_time = 0
      @coll = @db.collection('collection-unit-test')
      @coll.expects(:generate_indexes).twice
      @coll.ensure_index [["x", Mongo::DESCENDING]]
      @coll.ensure_index [["x", Mongo::DESCENDING]]
    end

    should "call generate_indexes for each key when calling ensure_indexes" do
      @db.cache_time = 300
      @coll = @db.collection('collection-unit-test')
      @coll.expects(:generate_indexes).once.with do |a, b, c|
        a == {"x"=>-1, "y"=>-1}
      end
      @coll.ensure_index [["x", Mongo::DESCENDING], ["y", Mongo::DESCENDING]]
    end

    should "call generate_indexes for each key when calling ensure_indexes with a hash" do
      @db.cache_time = 300
      @coll = @db.collection('collection-unit-test')
      oh = BSON::OrderedHash.new
      oh['x'] = Mongo::DESCENDING
      oh['y'] = Mongo::DESCENDING
      @coll.expects(:generate_indexes).once.with do |a, b, c|
        a == oh
      end
      if RUBY_VERSION > '1.9'
        @coll.ensure_index({"x" => Mongo::DESCENDING, "y" => Mongo::DESCENDING})
      else
        ordered_hash = BSON::OrderedHash.new
        ordered_hash['x'] = Mongo::DESCENDING
        ordered_hash['y'] = Mongo::DESCENDING
        @coll.ensure_index(ordered_hash)
      end
    end

    should "use the connection's logger" do
      @logger.expects(:warn).with do |msg|
        msg == "MONGODB [WARNING] test warning"
      end
      @coll.log(:warn, "test warning")
    end
  end
end

ruby-mongo-1.10.0/test/unit/connection_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ConnectionUnitTest < Test::Unit::TestCase

  context "Mongo::MongoClient initialization " do

    context "given a single node" do
      setup do
        @connection = Mongo::Connection.new('localhost', 27017, :safe => true, :connect => false)
        TCPSocket.stubs(:new).returns(new_mock_socket)
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @connection.expects(:[]).with('admin').returns(admin_db)
        @connection.connect
      end

      should "set safe mode true" do
        assert_equal true, @connection.safe
      end

      should "set localhost and port to master" do
        assert_equal 'localhost', @connection.primary_pool.host
        assert_equal 27017, @connection.primary_pool.port
      end

      should "set connection pool to 1" do
        assert_equal 1, @connection.primary_pool.size
      end

      should "default slave_ok to false" do
        assert !@connection.slave_ok?
      end

      should "not raise error if no host or port is supplied" do
        assert_nothing_raised do
          Mongo::Connection.new(:safe => true, :connect => false)
        end
        assert_nothing_raised do
          Mongo::Connection.new('localhost', :safe => true, :connect => false)
        end
      end

      should "warn if invalid options are specified" do
        connection = Mongo::Connection.allocate
        opts = {:connect => false}
        Mongo::ReplSetConnection::REPL_SET_OPTS.each do |opt|
          connection.expects(:warn).with("#{opt} is not a valid option for #{connection.class}")
          opts[opt] = true
        end
        args = ['localhost', 27017, opts]
        connection.send(:initialize, *args)
      end
    end

    context "initializing with a unix socket" do
      setup do
        @connection = Mongo::Connection.new('/tmp/mongod.sock', :safe => true, :connect => false)
        UNIXSocket.stubs(:new).returns(new_mock_unix_socket)
      end

      should "parse a unix socket" do
        assert_equal "/tmp/mongod.sock", @connection.host_port.first
      end
    end

    context "initializing with a mongodb uri" do
      should "parse a simple uri" do
        @connection = Mongo::Connection.from_uri("mongodb://localhost", :connect => false)
        assert_equal ['localhost', 27017], @connection.host_port
      end

      should "set auth source" do
        @connection = Mongo::Connection.from_uri("mongodb://user:pass@localhost?authSource=foo", :connect => false)
        assert_equal 'foo', @connection.auths.first[:source]
      end

      should "set auth mechanism" do
        @connection = Mongo::Connection.from_uri("mongodb://user@localhost?authMechanism=MONGODB-X509", :connect => false)
        assert_equal 'MONGODB-X509', @connection.auths.first[:mechanism]
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongodb://localhost?authMechanism=INVALID", :connect => false)
        end
      end

      should "allow complex host names" do
        host_name = "foo.bar-12345.org"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}", :connect => false)
        assert_equal [host_name, 27017], @connection.host_port
      end

      should "allow db without username and password" do
        host_name = "foo.bar-12345.org"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}/foo", :connect => false)
        assert_equal [host_name, 27017], @connection.host_port
      end

      should "set safe options on connection" do
        host_name = "localhost"
        opts = "safe=true&w=2&wtimeoutMS=1000&fsync=true&journal=true"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false)
        assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @connection.write_concern)
      end

      should "set timeout options on connection" do
        host_name = "localhost"
        opts = "connectTimeoutMS=1000&socketTimeoutMS=5000"
        @connection = Mongo::Connection.from_uri("mongodb://#{host_name}/foo?#{opts}", :connect => false)
        assert_equal 1, @connection.connect_timeout
        assert_equal 5, @connection.op_timeout
      end

      should "parse a uri with a hyphen & underscore in the username or password" do
        @connection = Mongo::Connection.from_uri("mongodb://hyphen-user_name:p-s_s@localhost:27017/db", :connect => false)
        assert_equal ['localhost', 27017], @connection.host_port
        auth_hash = {
          :db_name   => 'db',
          :username  => 'hyphen-user_name',
          :password  => 'p-s_s',
          :source    => 'db',
          :mechanism => Authentication::DEFAULT_MECHANISM,
          :extra     => {}
        }
        assert_equal auth_hash, @connection.auths.first
      end

      should "attempt to connect" do
        TCPSocket.stubs(:new).returns(new_mock_socket)
        @connection = Mongo::Connection.from_uri("mongodb://localhost", :connect => false)
        admin_db = new_mock_db
        admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
        @connection.expects(:[]).with('admin').returns(admin_db)
        @connection.connect
      end

      should "raise an error on invalid uris" do
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongo://localhost", :connect => false)
        end
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongodb://localhost:abc", :connect => false)
        end
      end

      should "require password if using legacy auth and username present" do
        assert Mongo::Connection.from_uri("mongodb://kyle:jones@localhost/db", :connect => false)
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongodb://kyle:@localhost", :connect => false)
        end
        assert_raise MongoArgumentError do
          Mongo::Connection.from_uri("mongodb://kyle@localhost", :connect => false)
        end
      end
    end

    context "initializing with ENV['MONGODB_URI']" do
      should "parse a simple uri" do
        uri = "mongodb://localhost?connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal ['localhost', 27017], @connection.host_port
        end
      end

      should "set auth source" do
        uri = "mongodb://user:pass@localhost?authSource=foo&connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal 'foo', @connection.auths.first[:source]
        end
      end

      should "set auth mechanism" do
        uri = "mongodb://user@localhost?authMechanism=MONGODB-X509&connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal 'MONGODB-X509', @connection.auths.first[:mechanism]
          ENV['MONGODB_URI'] = "mongodb://user@localhost?authMechanism=INVALID&connect=false"
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
        end
      end

      should "allow complex host names" do
        host_name = "foo.bar-12345.org"
        uri = "mongodb://#{host_name}?connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal [host_name, 27017], @connection.host_port
        end
      end

      should "allow db without username and password" do
        host_name = "foo.bar-12345.org"
        uri = "mongodb://#{host_name}/foo?connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal [host_name, 27017], @connection.host_port
        end
      end

      should "set safe options on connection" do
        host_name = "localhost"
        opts = "safe=true&w=2&wtimeoutMS=1000&fsync=true&journal=true&connect=false"
        uri = "mongodb://#{host_name}/foo?#{opts}"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal({:w => 2, :wtimeout => 1000, :fsync => true, :j => true}, @connection.safe)
        end
      end

      should "set timeout options on connection" do
        host_name = "localhost"
        opts = "connectTimeoutMS=1000&socketTimeoutMS=5000&connect=false"
        uri = "mongodb://#{host_name}/foo?#{opts}"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal 1, @connection.connect_timeout
          assert_equal 5, @connection.op_timeout
        end
      end

      should "parse a uri with a hyphen & underscore in the username or password" do
        uri = "mongodb://hyphen-user_name:p-s_s@localhost:27017/db?connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          assert_equal ['localhost', 27017], @connection.host_port
          auth_hash = {
            :db_name   => 'db',
            :username  => 'hyphen-user_name',
            :password  => 'p-s_s',
            :source    => 'db',
            :mechanism => Authentication::DEFAULT_MECHANISM,
            :extra     => {}
          }
          assert_equal auth_hash, @connection.auths.first
        end
      end

      should "attempt to connect" do
        TCPSocket.stubs(:new).returns(new_mock_socket)
        uri = "mongodb://localhost?connect=false"
        with_preserved_env_uri(uri) do
          @connection = Mongo::Connection.new
          admin_db = new_mock_db
          admin_db.expects(:command).returns({'ok' => 1, 'ismaster' => 1})
          @connection.expects(:[]).with('admin').returns(admin_db)
          @connection.connect
        end
      end

      should "raise an error on invalid uris" do
        uri = "mongo://localhost"
        with_preserved_env_uri(uri) do
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
          ENV['MONGODB_URI'] = "mongodb://localhost:abc"
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
        end
      end

      should "require password if using legacy auth and username present" do
        uri = "mongodb://kyle:jones@localhost/db?connect=false"
        with_preserved_env_uri(uri) do
          assert Mongo::Connection.new
          ENV['MONGODB_URI'] = "mongodb://kyle:@localhost?connect=false"
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
          ENV['MONGODB_URI'] = "mongodb://kyle@localhost?connect=false"
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
        end
      end

      should "require password if using PLAIN auth and username present" do
        uri = "mongodb://kyle:jones@localhost/db?connect=false&authMechanism=PLAIN"
        with_preserved_env_uri(uri) do
          assert Mongo::Connection.new
          ENV['MONGODB_URI'] = "mongodb://kyle:@localhost?connect=false&authMechanism=PLAIN"
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
          ENV['MONGODB_URI'] = "mongodb://kyle@localhost?connect=false&authMechanism=PLAIN"
          assert_raise MongoArgumentError do
            Mongo::Connection.new
          end
        end
      end
    end
  end
end

ruby-mongo-1.10.0/test/unit/cursor_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class CursorUnitTest < Test::Unit::TestCase

  class Mongo::Cursor
    public :construct_query_spec
  end

  context "Cursor options" do
    setup do
      @logger = mock()
      @logger.stubs(:debug)
      @connection = stub(:class => MongoClient, :logger => @logger, :slave_ok? => false, :read => :primary, :log_duration => false, :tag_sets => [], :acceptable_latency => 10)
      @db = stub(:name => "testing", :slave_ok?
 => false, :connection => @connection, :read => :primary, :tag_sets => [], :acceptable_latency => 10)
      @collection = stub(:db => @db, :name => "items", :read => :primary, :tag_sets => [], :acceptable_latency => 10)
      @cursor = Cursor.new(@collection)
    end

    should "set timeout" do
      assert @cursor.timeout
      assert @cursor.query_options_hash[:timeout]
    end

    should "set selector" do
      assert_equal({}, @cursor.selector)
      @cursor = Cursor.new(@collection, :selector => {:name => "Jones"})
      assert_equal({:name => "Jones"}, @cursor.selector)
      assert_equal({:name => "Jones"}, @cursor.query_options_hash[:selector])
    end

    should "set fields" do
      assert_nil @cursor.fields
      @cursor = Cursor.new(@collection, :fields => [:name, :date])
      assert_equal({:name => 1, :date => 1}, @cursor.fields)
      assert_equal({:name => 1, :date => 1}, @cursor.query_options_hash[:fields])
    end

    should "allow $meta projection operator" do
      assert_nil @cursor.fields
      @cursor = Cursor.new(@collection, :fields => [{ :score => { :$meta => 'textScore' } }])
      assert_equal({ :score => { :$meta => 'textScore' } }, @cursor.fields)
      assert_equal({ :score => { :$meta => 'textScore' } }, @cursor.query_options_hash[:fields])
      @cursor = Cursor.new(@collection, :fields => [:name, { :score => { :$meta => 'textScore' } }])
      assert_equal({ :name => 1, :score => { :$meta => 'textScore' } }, @cursor.fields)
      assert_equal({ :name => 1, :score => { :$meta => 'textScore' } }, @cursor.query_options_hash[:fields])
    end

    should "set mix fields 0 and 1" do
      assert_nil @cursor.fields
      @cursor = Cursor.new(@collection, :fields => {:name => 1, :date => 0})
      assert_equal({:name => 1, :date => 0}, @cursor.fields)
      assert_equal({:name => 1, :date => 0}, @cursor.query_options_hash[:fields])
    end

    should "set limit" do
      assert_equal 0, @cursor.limit
      @cursor = Cursor.new(@collection, :limit => 10)
      assert_equal 10, @cursor.limit
      assert_equal 10, @cursor.query_options_hash[:limit]
    end

    should "set skip" do
      assert_equal 0, @cursor.skip
      @cursor = Cursor.new(@collection, :skip => 5)
      assert_equal 5, @cursor.skip
      assert_equal 5, @cursor.query_options_hash[:skip]
    end

    should "set sort order" do
      assert_nil @cursor.order
      @cursor = Cursor.new(@collection, :order => "last_name")
      assert_equal "last_name", @cursor.order
      assert_equal "last_name", @cursor.query_options_hash[:order]
    end

    should "set hint" do
      assert_nil @cursor.hint
      @cursor = Cursor.new(@collection, :hint => "name")
      assert_equal "name", @cursor.hint
      assert_equal "name", @cursor.query_options_hash[:hint]
    end

    should "set comment" do
      assert_nil @cursor.comment
      @cursor = Cursor.new(@collection, :comment => "comment")
      assert_equal "comment", @cursor.comment
      assert_equal "comment", @cursor.query_options_hash[:comment]
    end

    should "cache full collection name" do
      assert_equal "testing.items", @cursor.full_collection_name
    end

    should "raise error when batch_size is 1" do
      e = assert_raise ArgumentError do
        @cursor.batch_size(1)
      end
      assert_equal "Invalid value for batch_size 1; must be 0 or > 1.", e.message
    end

    should "use the limit for batch size when it's smaller than the specified batch_size" do
      @cursor.limit(99)
      @cursor.batch_size(100)
      assert_equal 99, @cursor.batch_size
    end

    should "use the specified batch_size" do
      @cursor.batch_size(100)
      assert_equal 100, @cursor.batch_size
    end

    context "connected to mongos" do
      setup do
        @connection.stubs(:mongos?).returns(true)
        @tag_sets = [{:dc => "ny"}]
      end

      should "set $readPreference" do
        # secondary
        cursor = Cursor.new(@collection, { :read => :secondary })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'secondary', spec['$readPreference'][:mode]
        assert !spec['$readPreference'].has_key?(:tags)

        # secondary preferred with tags
        cursor = Cursor.new(@collection, { :read => :secondary_preferred, :tag_sets => @tag_sets })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'secondaryPreferred', spec['$readPreference'][:mode]
        assert_equal @tag_sets, spec['$readPreference'][:tags]

        # primary preferred
        cursor = Cursor.new(@collection, { :read => :primary_preferred })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'primaryPreferred', spec['$readPreference'][:mode]
        assert !spec['$readPreference'].has_key?(:tags)

        # primary preferred with tags
        cursor = Cursor.new(@collection, { :read => :primary_preferred, :tag_sets => @tag_sets })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'primaryPreferred', spec['$readPreference'][:mode]
        assert_equal @tag_sets, spec['$readPreference'][:tags]

        # nearest
        cursor = Cursor.new(@collection, { :read => :nearest })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'nearest', spec['$readPreference'][:mode]
        assert !spec['$readPreference'].has_key?(:tags)

        # nearest with tags
        cursor = Cursor.new(@collection, { :read => :nearest, :tag_sets => @tag_sets })
        spec = cursor.construct_query_spec
        assert spec.has_key?('$readPreference')
        assert_equal 'nearest', spec['$readPreference'][:mode]
        assert_equal @tag_sets, spec['$readPreference'][:tags]
      end

      should "not set $readPreference" do
        # for primary
        cursor = Cursor.new(@collection, { :read => :primary, :tag_sets => @tag_sets })
        assert !cursor.construct_query_spec.has_key?('$readPreference')

        # for secondary_preferred with no tags
        cursor = Cursor.new(@collection, { :read => :secondary_preferred })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :secondary_preferred, :tag_sets => [] })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :secondary_preferred, :tag_sets => nil })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
      end
    end

    context "not connected to mongos" do
      setup do
        @connection.stubs(:mongos?).returns(false)
      end

      should "not set $readPreference" do
        cursor = Cursor.new(@collection, { :read => :primary })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :primary_preferred })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :secondary })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :secondary_preferred })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :nearest })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
        cursor = Cursor.new(@collection, { :read => :secondary, :tag_sets => @tag_sets })
        assert !cursor.construct_query_spec.has_key?('$readPreference')
      end
    end
  end

  context "Query fields" do
    setup do
      @logger = mock()
      @logger.stubs(:debug)
      @connection = stub(:class => MongoClient, :logger => @logger, :slave_ok? => false, :log_duration => false, :tag_sets => {}, :acceptable_latency => 10)
      @db = stub(:slave_ok? => true, :name => "testing", :connection => @connection, :tag_sets => {}, :acceptable_latency => 10)
      @collection = stub(:db => @db, :name => "items", :read => :primary, :tag_sets => {}, :acceptable_latency => 10)
    end

    should "when an array, return a hash with each key" do
      @cursor = Cursor.new(@collection, :fields => [:name, :age])
      result = @cursor.fields
      assert_equal result.keys.sort{|a,b| a.to_s <=> b.to_s}, [:age, :name].sort{|a,b| a.to_s <=> b.to_s}
      assert result.values.all? {|v| v == 1}
    end

    should "when a string, return a hash with just the key" do
      @cursor = Cursor.new(@collection, :fields => "name")
      result = @cursor.fields
      assert_equal result.keys.sort, ["name"]
      assert result.values.all? {|v| v == 1}
    end

    should "return nil when neither hash nor string nor symbol" do
      @cursor = Cursor.new(@collection, :fields => 1234567)
      assert_nil @cursor.fields
    end
  end

  context "counts" do
    setup do
      @logger = mock()
      @logger.stubs(:debug)
      @connection = stub(:class => Connection, :logger => @logger, :slave_ok? => false, :read => :primary, :log_duration => false, :tag_sets => {}, :acceptable_latency => 10)
      @db = stub(:name => "testing", :slave_ok? => false, :connection => @connection, :read => :primary, :tag_sets => {}, :acceptable_latency => 10)
      @collection = stub(:db => @db, :name => "items", :read => :primary, :tag_sets => {}, :acceptable_latency => 10)
      @cursor = Cursor.new(@collection)
    end

    should "pass the comment parameter" do
      query = {:field => 7}
      @db.expects(:command).with({ 'count' => "items", 'query' => query, 'fields' => nil},
                                 { :read => :primary, :comment => "my comment"}).
        returns({'ok' => 1, 'n' => 1})
      assert_equal(1, Cursor.new(@collection, :selector => query, :comment => 'my comment').count())
    end
  end
end

ruby-mongo-1.10.0/test/unit/db_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

def insert_message(db, documents)
  documents = [documents] unless documents.is_a?(Array)
  message = ByteBuffer.new
  message.put_int(0)
  Mongo::BSON_CODER.serialize_cstr(message, "#{db.name}.test")
  documents.each { |doc| message.put_array(Mongo::BSON_CODER.new.serialize(doc, true).to_a) }
  message = db.add_message_headers(Mongo::Constants::OP_INSERT, message)
end

class DBUnitTest < Test::Unit::TestCase
  context "DBTest: " do
    context "DB commands" do
      setup do
        @client = stub()
        @client.stubs(:write_concern).returns({})
        @client.stubs(:read).returns(:primary)
        @client.stubs(:tag_sets)
        @client.stubs(:acceptable_latency)
        @client.stubs(:add_auth).returns({})
        @db = DB.new("testing", @client)
        @db.stubs(:safe)
        @db.stubs(:read)
        @db.stubs(:tag_sets)
        @db.stubs(:acceptable_latency)
        @collection = mock()
        @db.stubs(:system_command_collection).returns(@collection)
      end

      should "raise an error if given a hash with more than one key" do
        if RUBY_VERSION < '1.9'
          assert_raise MongoArgumentError do
            @db.command(:buildinfo => 1, :somekey => 1)
          end
        end
      end

      should "raise an error if the selector is omitted" do
        assert_raise MongoArgumentError do
          @db.command({}, :check_response => true)
        end
      end

      should "not include named nil opts in selector" do
        @cursor = mock(:next_document => {"ok" => 1})
        Cursor.expects(:new).with(@collection, :limit => -1, :selector => {:ping => 1}, :socket => nil).returns(@cursor)
        command = {:ping => 1}
        @db.command(command, :socket => nil)
      end

      should "create the proper cursor" do
        @cursor = mock(:next_document => {"ok" => 1})
        Cursor.expects(:new).with(@collection, :limit => -1, :selector => {:buildinfo => 1}).returns(@cursor)
        command = {:buildinfo => 1}
        @db.command(command, :check_response => true)
      end

      should "raise an error when the command fails" do
        @cursor = mock(:next_document => {"ok" => 0})
        Cursor.expects(:new).with(@collection, :limit => -1, :selector => {:buildinfo => 1}).returns(@cursor)
        assert_raise OperationFailure do
          command = {:buildinfo => 1}
          @db.command(command, :check_response => true)
        end
      end

      should "pass on the comment" do
        @cursor = mock(:next_document => {"ok" => 0})
        Cursor.expects(:new).with(@collection, :limit => -1, :selector => {:buildinfo => 1}, :comment => "my comment").returns(@cursor)
        assert_raise OperationFailure do
          command = {:buildinfo => 1}
          @db.command(command, :check_response => true, :comment => 'my comment')
        end
      end

      should "raise an error if collection creation fails" do
        @db.expects(:command).returns({'ok' => 0})
        assert_raise Mongo::MongoDBError do
          @db.create_collection("foo")
        end
      end

      should "raise an error if getlasterror fails" do
        @db.expects(:command).returns({})
        assert_raise Mongo::MongoDBError do
          @db.get_last_error
        end
      end

      should "raise an error if drop_index fails" do
        @db.expects(:command).returns({})
        assert_raise Mongo::MongoDBError do
          @db.drop_index("foo", "bar")
        end
      end

      should "raise an error if set_profiling_level fails" do
        @db.expects(:command).returns({})
        assert_raise Mongo::MongoDBError do
          @db.profiling_level = :slow_only
        end
      end

      should "warn when save_auth is not nil" do
        assert @db.expects(:warn).with(regexp_matches(/\[DEPRECATED\] Disabling the 'save_auth' option/))
        @db.authenticate('foo', 'bar', false)
      end

      should "allow extra authentication options" do
        extra_opts = { :gssapiservicename => 'example', :canonicalizehostname => true }
        assert @client.expects(:add_auth).with(@db.name, 'emily', nil, nil, 'GSSAPI', extra_opts)
        @db.authenticate('emily', nil, nil, nil, 'GSSAPI', extra_opts)
      end
    end
  end
end

ruby-mongo-1.10.0/test/unit/grid_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class GridUnitTest < Test::Unit::TestCase

  context "GridFS: " do
    setup do
      @client = stub()
      @client.stubs(:write_concern).returns({})
      @client.stubs(:read).returns(:primary)
      @client.stubs(:tag_sets)
      @client.stubs(:acceptable_latency)
      @db = DB.new("testing", @client)
      @files = mock()
      @chunks = mock()
      @db.stubs(:[]).with('fs.files').returns(@files)
      @db.stubs(:[]).with('fs.chunks').returns(@chunks)
      @db.stubs(:safe)
      @db.stubs(:read).returns(:primary)
    end

    context "Grid classes with standard connections" do
      setup do
        @chunks.expects(:ensure_index)
      end

      should "create indexes for Grid" do
        Grid.new(@db)
      end

      should "create indexes for GridFileSystem" do
        @files.expects(:ensure_index)
        GridFileSystem.new(@db)
      end
    end

    context "Grid classes with slave connection" do
      setup do
        @chunks.stubs(:ensure_index).raises(Mongo::ConnectionFailure)
        @files.stubs(:ensure_index).raises(Mongo::ConnectionFailure)
      end

      should "not create indexes for Grid" do
        grid = Grid.new(@db)
        data = "hello world!"
        assert_raise Mongo::ConnectionFailure do
          grid.put(data)
        end
      end

      should "not create indexes for GridFileSystem" do
        gridfs = GridFileSystem.new(@db)
        data = "hello world!"
        assert_raise Mongo::ConnectionFailure do
          gridfs.open('image.jpg', 'w') do |f|
            f.write data
          end
        end
      end
    end
  end
end

ruby-mongo-1.10.0/test/unit/mongo_sharded_client_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require "test_helper"

class MongoShardedClientUnitTest < Test::Unit::TestCase
  include Mongo

  def test_initialize_with_single_mongos_uri
    uri = "mongodb://localhost:27017"
    with_preserved_env_uri(uri) do
      client = MongoShardedClient.new(:connect => false)
      assert_equal [[ "localhost", 27017 ]], client.seeds
    end
  end

  def test_initialize_with_multiple_mongos_uris
    uri = "mongodb://localhost:27017,localhost:27018"
    with_preserved_env_uri(uri) do
      client = MongoShardedClient.new(:connect => false)
      assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds
    end
  end

  def test_from_uri_with_string
    client = MongoShardedClient.from_uri("mongodb://localhost:27017,localhost:27018", :connect => false)
    assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds
  end

  def test_from_uri_with_env_variable
    uri = "mongodb://localhost:27017,localhost:27018"
    with_preserved_env_uri(uri) do
      client = MongoShardedClient.from_uri(nil, :connect => false)
      assert_equal [[ "localhost", 27017 ], [ "localhost", 27018 ]], client.seeds
    end
  end
end

ruby-mongo-1.10.0/test/unit/node_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class NodeUnitTest < Test::Unit::TestCase

  def setup
    @client = stub()
    manager = mock('pool_manager')
    manager.stubs(:update_max_sizes)
    @client.stubs(:local_manager).returns(manager)
  end

  should "refuse to connect to node without 'hosts' key" do
    tcp = mock()
    node = Node.new(@client, ['localhost', 27017])
    tcp.stubs(:new).returns(new_mock_socket)
    @client.stubs(:socket_class).returns(tcp)
    admin_db = new_mock_db
    admin_db.stubs(:command).returns({'ok' => 1, 'ismaster' => 1})
    @client.stubs(:[]).with('admin').returns(admin_db)
    @client.stubs(:op_timeout).returns(nil)
    @client.stubs(:connect_timeout).returns(nil)
    @client.expects(:log)
    @client.expects(:mongos?).returns(false)
    @client.stubs(:socket_opts)
    assert node.connect
    node.config
  end

  should "load a node from an array" do
    node = Node.new(@client, ['power.level.com', 9001])
    assert_equal 'power.level.com', node.host
    assert_equal 9001, node.port
    assert_equal 'power.level.com:9001', node.address
  end

  should "default the port for an array" do
    node = Node.new(@client, ['power.level.com'])
    assert_equal 'power.level.com', node.host
    assert_equal MongoClient::DEFAULT_PORT, node.port
    assert_equal "power.level.com:#{MongoClient::DEFAULT_PORT}", node.address
  end

  should "load a node from a string" do
    node = Node.new(@client, 'localhost:1234')
    assert_equal 'localhost', node.host
    assert_equal 1234, node.port
    assert_equal 'localhost:1234', node.address
  end

  should "default the port for a string" do
    node = Node.new(@client, '192.168.0.1')
    assert_equal '192.168.0.1', node.host
    assert_equal MongoClient::DEFAULT_PORT, node.port
    assert_equal "192.168.0.1:#{MongoClient::DEFAULT_PORT}", node.address
  end

  should "two nodes with the same address should be equal" do
    assert_equal Node.new(@client, '192.168.0.1'),
                 Node.new(@client, ['192.168.0.1', MongoClient::DEFAULT_PORT])
  end

  should "two nodes with the same address should have the same hash" do
    assert_equal Node.new(@client, '192.168.0.1').hash,
                 Node.new(@client, ['192.168.0.1', MongoClient::DEFAULT_PORT]).hash
  end

  should "two nodes with different addresses should not be equal" do
    assert_not_equal Node.new(@client, '192.168.0.2'),
                     Node.new(@client, ['192.168.0.1', MongoClient::DEFAULT_PORT])
  end

  should "two nodes with different addresses should not have the same hash" do
    assert_not_equal Node.new(@client, '192.168.0.1').hash,
                     Node.new(@client, '1239.33.4.2393:29949').hash
  end
end

ruby-mongo-1.10.0/test/unit/pool_manager_test.rb

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper' include Mongo class PoolManagerUnitTest < Test::Unit::TestCase context "Initialization: " do setup do TCPSocket.stubs(:new).returns(new_mock_socket) @db = new_mock_db @client = stub("MongoClient") @client.stubs(:connect_timeout).returns(5) @client.stubs(:op_timeout).returns(5) @client.stubs(:pool_size).returns(2) @client.stubs(:pool_timeout).returns(100) @client.stubs(:seeds).returns(['localhost:30000']) @client.stubs(:socket_class).returns(TCPSocket) @client.stubs(:mongos?).returns(false) @client.stubs(:[]).returns(@db) @client.stubs(:socket_opts) @client.stubs(:replica_set_name).returns(nil) @client.stubs(:log) @arbiters = ['localhost:27020'] @hosts = [ 'localhost:27017', 'localhost:27018', 'localhost:27019', 'localhost:27020' ] @ismaster = { 'hosts' => @hosts, 'arbiters' => @arbiters, 'maxBsonObjectSize' => 1024, 'maxMessageSizeBytes' => 1024 * 2.5, 'maxWireVersion' => 1, 'minWireVersion' => 0 } end should "populate pools correctly" do @db.stubs(:command).returns( # First call to get a socket. @ismaster.merge({'ismaster' => true}), # Subsequent calls to configure pools. 
        @ismaster.merge({'ismaster' => true}),
        @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 500}),
        @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 700}),
        @ismaster.merge({'arbiterOnly' => true})
      )

      seeds = [['localhost', 27017]]
      manager = Mongo::PoolManager.new(@client, seeds)
      @client.stubs(:local_manager).returns(manager)
      manager.connect

      assert_equal ['localhost', 27017], manager.primary
      assert_equal 27017, manager.primary_pool.port
      assert_equal 2, manager.secondaries.length
      assert_equal [27018, 27019], manager.secondary_pools.map(&:port).sort
      assert_equal [['localhost', 27020]], manager.arbiters
      assert_equal 500, manager.max_bson_size
      assert_equal 700, manager.max_message_size
    end

    should "populate pools with single unqueryable seed" do
      @db.stubs(:command).returns(
        # First call to recovering node
        @ismaster.merge({'ismaster' => false, 'secondary' => false}),
        # Subsequent calls to configure pools.
        @ismaster.merge({'ismaster' => false, 'secondary' => false}),
        @ismaster.merge({'ismaster' => true}),
        @ismaster.merge({'secondary' => true}),
        @ismaster.merge({'arbiterOnly' => true})
      )

      seeds = [['localhost', 27017]]
      manager = PoolManager.new(@client, seeds)
      @client.stubs(:local_manager).returns(manager)
      manager.connect

      assert_equal ['localhost', 27018], manager.primary
      assert_equal 27018, manager.primary_pool.port
      assert_equal 1, manager.secondaries.length
      assert_equal 27019, manager.secondary_pools[0].port
      assert_equal [['localhost', 27020]], manager.arbiters
    end

    should "return clones of pool lists" do
      @db.stubs(:command).returns(
        # First call to get a socket.
        @ismaster.merge({'ismaster' => true}),
        # Subsequent calls to configure pools.
        @ismaster.merge({'ismaster' => true}),
        @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 500}),
        @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 700}),
        @ismaster.merge({'arbiterOnly' => true})
      )

      seeds = [['localhost', 27017], ['localhost', 27018]]
      manager = Mongo::PoolManager.new(@client, seeds)
      @client.stubs(:local_manager).returns(manager)
      manager.connect

      assert_not_equal manager.instance_variable_get(:@arbiters).object_id,
                       manager.arbiters.object_id
      assert_not_equal manager.instance_variable_get(:@secondaries).object_id,
                       manager.secondaries.object_id
      assert_not_equal manager.instance_variable_get(:@secondary_pools).object_id,
                       manager.secondary_pools.object_id
      assert_not_equal manager.instance_variable_get(:@hosts).object_id,
                       manager.hosts.object_id
      assert_not_equal manager.instance_variable_get(:@pools).object_id,
                       manager.pools.object_id

      assert_not_equal manager.instance_variable_get(:@arbiters).object_id,
                       manager.state_snapshot[:arbiters].object_id
      assert_not_equal manager.instance_variable_get(:@secondaries).object_id,
                       manager.state_snapshot[:secondaries].object_id
      assert_not_equal manager.instance_variable_get(:@secondary_pools).object_id,
                       manager.state_snapshot[:secondary_pools].object_id
      assert_not_equal manager.instance_variable_get(:@hosts).object_id,
                       manager.state_snapshot[:hosts].object_id
      assert_not_equal manager.instance_variable_get(:@pools).object_id,
                       manager.state_snapshot[:pools].object_id
    end
  end
end

# ==== ruby-mongo-1.10.0/test/unit/read_pref_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReadPreferenceUnitTest < Test::Unit::TestCase

  include ReadPreference

  def setup
    mock_pool = mock()
    mock_pool.stubs(:ping_time).returns(Pool::MAX_PING_TIME)
    stubs(:primary_pool).returns(mock_pool)
    stubs(:secondary_pools).returns([mock_pool])
    stubs(:pools).returns([mock_pool])
  end

  def test_select_pool
    ReadPreference::READ_PREFERENCES.map do |rp|
      assert select_pool({:mode => rp, :tags => [], :latency => 15})
    end
  end

  def test_sok_mapreduce_out_string_returns_false
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', 'new-test-collection']
    assert_equal false, ReadPreference::secondary_ok?(command)
  end

  def test_sok_mapreduce_replace_collection_returns_false
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', BSON::OrderedHash['replace', 'new-test-collection']]
    assert_equal false, ReadPreference::secondary_ok?(command)
  end

  def test_sok_mapreduce_inline_collection_returns_false
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', 'inline']
    assert_equal false, ReadPreference::secondary_ok?(command)
  end

  def test_sok_inline_symbol_mapreduce_returns_true
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', BSON::OrderedHash[:inline, 'true']]
    assert_equal true, ReadPreference::secondary_ok?(command)
  end

  def test_sok_inline_string_mapreduce_returns_true
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', BSON::OrderedHash['inline', 'true']]
    assert_equal true, ReadPreference::secondary_ok?(command)
  end

  def test_sok_count_true
    command = BSON::OrderedHash['count', 'test-collection',
                                'query', BSON::OrderedHash['a', 'b']]
    assert_equal true, ReadPreference::secondary_ok?(command)
  end

  def test_sok_server_status_returns_false
    command = BSON::OrderedHash['serverStatus', 1]
    assert_equal false, ReadPreference::secondary_ok?(command)
  end

  def test_cmd_reroute_with_secondary
    ReadPreference::expects(:warn).with(regexp_matches(/rerouted to primary/))
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', 'new-test-collection']
    assert_equal :primary, ReadPreference::cmd_read_pref(:secondary, command)
  end

  def test_find_and_modify_reroute_with_secondary
    ReadPreference::expects(:warn).with(regexp_matches(/rerouted to primary/))
    command = BSON::OrderedHash['findAndModify', 'test-collection',
                                'query', {}]
    assert_equal :primary, ReadPreference::cmd_read_pref(:secondary, command)
  end

  def test_cmd_no_reroute_with_secondary
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', BSON::OrderedHash['inline', 'true']]
    assert_equal :secondary, ReadPreference::cmd_read_pref(:secondary, command)
  end

  def test_cmd_no_reroute_with_primary
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', 'new-test-collection']
    assert_equal :primary, ReadPreference::cmd_read_pref(:primary, command)
  end

  def test_cmd_no_reroute_with_primary_secondary_ok
    command = BSON::OrderedHash['mapreduce', 'test-collection',
                                'out', BSON::OrderedHash['inline', 'true']]
    assert_equal :primary, ReadPreference::cmd_read_pref(:primary, command)
  end

  def test_parallel_scan_secondary_ok
    command = BSON::OrderedHash['parallelCollectionScan', 'test-collection',
                                'numCursors', 3]
    assert_equal true, ReadPreference::secondary_ok?(command)
  end
end

# ==== ruby-mongo-1.10.0/test/unit/read_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
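The `secondary_ok?` tests above encode a routing rule: a mapReduce command may run on a secondary only when its `out` option is an inline document (`{'inline' => ...}`); mapReduce writing to a collection (including a plain string `'inline'`, which names a collection), `findAndModify`, and `serverStatus` must be routed to the primary. A standalone sketch of that decision logic follows; it is a simplified assumption using plain hashes, not the driver's actual `ReadPreference.secondary_ok?` code, and the `SECONDARY_OK` list is restricted to commands seen in the tests.

```ruby
# Hedged sketch of the routing rule exercised by the read_pref tests.
# Assumptions: plain Ruby hashes (ordered since 1.9) stand in for
# BSON::OrderedHash, and only commands seen in the tests are listed.
SECONDARY_OK = %w[count parallelCollectionScan].freeze

def secondary_ok?(command)
  name = command.keys.first.to_s
  if name.downcase == 'mapreduce'
    # Only an inline output document may be read from a secondary; a
    # string out (even 'inline') names a target collection and writes.
    out = command['out'] || command[:out]
    out.is_a?(Hash) && (out.key?('inline') || out.key?(:inline))
  else
    SECONDARY_OK.include?(name)
  end
end

puts secondary_ok?({'mapreduce' => 'c', 'out' => {'inline' => 'true'}}) # true
puts secondary_ok?({'mapreduce' => 'c', 'out' => 'other_collection'})   # false
puts secondary_ok?({'count' => 'c', 'query' => {}})                     # true
puts secondary_ok?({'serverStatus' => 1})                               # false
```

The `cmd_read_pref` tests then build on this predicate: a command that fails `secondary_ok?` is rerouted from `:secondary` to `:primary` with a warning.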
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'

class ReadUnitTest < Test::Unit::TestCase

  context "Read mode on standard connection: " do
    setup do
      @read = :secondary
      @client = MongoClient.new('localhost', 27017, :read => @read, :connect => false)
    end
  end

  context "Read preferences on replica set connection: " do
    setup do
      @read = :secondary_preferred
      @acceptable_latency = 100
      @tags = {"dc" => "Tyler", "rack" => "Brock"}
      @bad_tags = {"wow" => "cool"}
      @client = MongoReplicaSetClient.new(
        ['localhost:27017'],
        :read => @read,
        :tag_sets => @tags,
        :secondary_acceptable_latency_ms => @acceptable_latency,
        :connect => false
      )
    end

    should "store read preference on MongoClient" do
      assert_equal @read, @client.read
      assert_equal @tags, @client.tag_sets
      assert_equal @acceptable_latency, @client.acceptable_latency
    end

    should "propagate to DB" do
      db = @client[TEST_DB]
      assert_equal @read, db.read
      assert_equal @tags, db.tag_sets
      assert_equal @acceptable_latency, db.acceptable_latency

      db = @client.db(TEST_DB)
      assert_equal @read, db.read
      assert_equal @tags, db.tag_sets
      assert_equal @acceptable_latency, db.acceptable_latency

      db = DB.new(TEST_DB, @client)
      assert_equal @read, db.read
      assert_equal @tags, db.tag_sets
      assert_equal @acceptable_latency, db.acceptable_latency
    end

    should "allow db override" do
      db = DB.new(TEST_DB, @client, :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25)
      assert_equal :primary, db.read
      assert_equal @bad_tags, db.tag_sets
      assert_equal 25, db.acceptable_latency

      db = @client.db(TEST_DB, :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25)
      assert_equal \
        :primary, db.read
      assert_equal @bad_tags, db.tag_sets
      assert_equal 25, db.acceptable_latency
    end

    context "on DB: " do
      setup do
        @db = @client[TEST_DB]
      end

      should "propagate to collection" do
        col = @db.collection('read-unit-test')
        assert_equal @read, col.read
        assert_equal @tags, col.tag_sets
        assert_equal @acceptable_latency, col.acceptable_latency

        col = @db['read-unit-test']
        assert_equal @read, col.read
        assert_equal @tags, col.tag_sets
        assert_equal @acceptable_latency, col.acceptable_latency

        col = Collection.new('read-unit-test', @db)
        assert_equal @read, col.read
        assert_equal @tags, col.tag_sets
        assert_equal @acceptable_latency, col.acceptable_latency
      end

      should "allow override on collection" do
        col = @db.collection('read-unit-test', :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25)
        assert_equal :primary, col.read
        assert_equal @bad_tags, col.tag_sets
        assert_equal 25, col.acceptable_latency

        col = Collection.new('read-unit-test', @db, :read => :primary, :tag_sets => @bad_tags, :acceptable_latency => 25)
        assert_equal :primary, col.read
        assert_equal @bad_tags, col.tag_sets
        assert_equal 25, col.acceptable_latency
      end
    end

    context "on read mode ops" do
      setup do
        @col = @client[TEST_DB]['read-unit-test']
        @mock_socket = new_mock_socket
      end

      should "use default value on query" do
        @cursor = @col.find({:a => 1})
        sock = new_mock_socket
        read_pool = stub(:checkin => true)
        @client.stubs(:read_pool).returns(read_pool)
        local_manager = PoolManager.new(@client, @client.seeds)
        @client.stubs(:local_manager).returns(local_manager)
        primary_pool = stub(:checkin => true)
        sock.stubs(:pool).returns(primary_pool)
        @client.stubs(:primary_pool).returns(primary_pool)
        @client.expects(:checkout_reader).returns(sock)
        @client.expects(:receive_message).with do |o, m, l, s, c, r|
          r == nil
        end.returns([[], 0, 0])
        @cursor.next
      end

      should "allow override default value on query" do
        @cursor = @col.find({:a => 1}, :read => :primary)
        sock = new_mock_socket
        local_manager =
          PoolManager.new(@client, @client.seeds)
        @client.stubs(:local_manager).returns(local_manager)
        primary_pool = stub(:checkin => true)
        sock.stubs(:pool).returns(primary_pool)
        @client.stubs(:primary_pool).returns(primary_pool)
        @client.expects(:checkout_reader).returns(sock)
        @client.expects(:receive_message).with do |o, m, l, s, c, r|
          r == nil
        end.returns([[], 0, 0])
        @cursor.next
      end

      should "allow override alternate value on query" do
        assert_raise MongoArgumentError do
          @col.find_one({:a => 1}, :read => {:dc => "ny"})
        end
      end
    end
  end
end

# ==== ruby-mongo-1.10.0/test/unit/safe_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
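The read tests above verify a common settings-inheritance pattern: each level of the object hierarchy (client, then DB, then collection) inherits read options from its parent unless an explicit override is passed at construction. A minimal sketch of that pattern follows; the `Level` class is hypothetical, illustrating the inheritance mechanics rather than the driver's actual DB/Collection code.

```ruby
# Hypothetical sketch of client -> db -> collection option inheritance,
# as exercised by the read tests above. Not driver code.
class Level
  attr_reader :read, :tag_sets, :acceptable_latency

  def initialize(parent_opts, opts = {})
    # Child options win; anything not overridden falls through from the parent.
    merged = parent_opts.merge(opts)
    @read               = merged[:read]
    @tag_sets           = merged[:tag_sets]
    @acceptable_latency = merged[:acceptable_latency]
  end

  def options
    { :read => @read, :tag_sets => @tag_sets,
      :acceptable_latency => @acceptable_latency }
  end
end

client = Level.new({}, :read => :secondary_preferred,
                       :tag_sets => {'dc' => 'east'},
                       :acceptable_latency => 100)
db  = Level.new(client.options)                 # inherits everything
col = Level.new(db.options, :read => :primary)  # overrides read only

puts db.read                # secondary_preferred
puts col.read               # primary
puts col.acceptable_latency # 100
```

A single `Hash#merge` at each level is enough to give the "propagate by default, override per level" behavior the assertions check.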
require 'test_helper'

class SafeUnitTest < Test::Unit::TestCase

  context "Write-Concern modes on Mongo::Connection " do
    setup do
      @safe_value = {:w => 7, :j => false, :fsync => false, :wtimeout => nil}
      @connection = Mongo::Connection.new('localhost', 27017, :safe => @safe_value, :connect => false)
    end

    should "propagate to DB" do
      db = @connection[TEST_DB]
      assert_equal @safe_value[:w], db.write_concern[:w]

      db = @connection.db(TEST_DB)
      assert_equal @safe_value[:w], db.write_concern[:w]

      db = DB.new(TEST_DB, @connection)
      assert_equal @safe_value[:w], db.write_concern[:w]
    end

    should "allow db override" do
      db = DB.new(TEST_DB, @connection, :safe => false)
      assert_equal 0, db.write_concern[:w]

      db = @connection.db(TEST_DB, :safe => false)
      assert_equal 0, db.write_concern[:w]
    end

    context "on DB: " do
      setup do
        @db = @connection[TEST_DB]
      end

      should "propagate to collection" do
        col = @db.collection('bar')
        assert_equal @safe_value, col.write_concern

        col = @db['bar']
        assert_equal @safe_value, col.write_concern

        col = Collection.new('bar', @db)
        assert_equal @safe_value, col.write_concern
      end

      should "allow override on collection" do
        col = @db.collection('bar', :safe => false)
        assert_equal 0, col.write_concern[:w]

        col = Collection.new('bar', @db, :safe => false)
        assert_equal 0, col.write_concern[:w]
      end
    end

    context "on operations supporting safe mode" do
      setup do
        @col = @connection[TEST_DB]['bar']
      end

      should "use default value on insert" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == @safe_value
        end
        @col.insert({:a => 1})
      end

      should "allow override alternate value on insert" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == {:w => 100, :j => false, :fsync => false, :wtimeout => nil}
        end
        @col.insert({:a => 1}, :safe => {:w => 100})
      end

      should "allow override to disable on insert" do
        @connection.expects(:send_message)
        @col.insert({:a => 1}, :safe => false)
      end

      should "use default value on update" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == @safe_value
        end
        @col.update({:a => 1}, {:a => 2})
      end

      should "allow override alternate value on update" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == {:w => 100, :j => false, :fsync => false, :wtimeout => nil}
        end
        @col.update({:a => 1}, {:a => 2}, :safe => {:w => 100})
      end

      should "allow override to disable on update" do
        @connection.expects(:send_message)
        @col.update({:a => 1}, {:a => 2}, :safe => false)
      end

      should "use default value on save" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == @safe_value
        end
        @col.save({:a => 1})
      end

      should "allow override alternate value on save" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == @safe_value.merge(:w => 1)
        end
        @col.save({:a => 1}, :safe => true)
      end

      should "allow override to disable on save" do
        @connection.expects(:send_message)
        @col.save({:a => 1}, :safe => false)
      end

      should "use default value on remove" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == @safe_value
        end
        @col.remove
      end

      should "allow override alternate value on remove" do
        @connection.expects(:send_message_with_gle).with do |op, msg, log, n, safe|
          safe == {:w => 100, :j => false, :fsync => false, :wtimeout => nil}
        end
        @col.remove({}, :safe => {:w => 100})
      end

      should "allow override to disable on remove" do
        @connection.expects(:send_message)
        @col.remove({}, :safe => false)
      end
    end
  end
end

# ==== ruby-mongo-1.10.0/test/unit/sharding_pool_manager_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'test_helper'
include Mongo

class ShardingPoolManagerUnitTest < Test::Unit::TestCase

  context "Initialization: " do
    setup do
      TCPSocket.stubs(:new).returns(new_mock_socket)
      @db = new_mock_db

      @client = stub("MongoShardedClient")
      @client.stubs(:connect_timeout).returns(5)
      @client.stubs(:op_timeout).returns(5)
      @client.stubs(:pool_size).returns(2)
      @client.stubs(:pool_timeout).returns(100)
      @client.stubs(:socket_class).returns(TCPSocket)
      @client.stubs(:mongos?).returns(true)
      @client.stubs(:[]).returns(@db)
      @client.stubs(:socket_opts)
      @client.stubs(:replica_set_name).returns(nil)
      @client.stubs(:log)

      @arbiters = ['localhost:27020']
      @hosts = [
        'localhost:27017',
        'localhost:27018',
        'localhost:27019'
      ]

      @ismaster = {
        'hosts'               => @hosts,
        'arbiters'            => @arbiters,
        'maxBsonObjectSize'   => 1024,
        'maxMessageSizeBytes' => 1024 * 2.5,
        'maxWireVersion'      => 1,
        'minWireVersion'      => 0
      }
    end

    should "populate pools correctly" do
      @db.stubs(:command).returns(
        # First call to get a socket.
        @ismaster.merge({'ismaster' => true}),
        # Subsequent calls to configure pools.
        @ismaster.merge({'ismaster' => true}),
        @ismaster.merge({'secondary' => true, 'maxBsonObjectSize' => 500}),
        @ismaster.merge({'secondary' => true, 'maxMessageSizeBytes' => 700}),
        @ismaster.merge({'secondary' => true, 'maxWireVersion' => 0}),
        @ismaster.merge({'secondary' => true, 'minWireVersion' => 0}),
        @ismaster.merge({'arbiterOnly' => true})
      )

      seed = ['localhost:27017']
      manager = Mongo::ShardingPoolManager.new(@client, seed)
      @client.stubs(:local_manager).returns(manager)
      manager.connect

      formatted_seed = ['localhost', 27017]
      assert manager.seeds.include? formatted_seed
      assert_equal 500, manager.max_bson_size
      assert_equal 700, manager.max_message_size
      assert_equal 0, manager.max_wire_version
      assert_equal 0, manager.min_wire_version
    end
  end
end

# ==== ruby-mongo-1.10.0/test/unit/write_concern_test.rb ====

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'test_helper'

class WriteConcernUnitTest < Test::Unit::TestCase

  context "Write-Concern modes on Mongo::MongoClient " do
    setup do
      @write_concern = {
        :w        => 7,
        :j        => false,
        :fsync    => false,
        :wtimeout => nil
      }
      class Mongo::MongoClient
        public :build_get_last_error_message, :build_command_message
      end
      @client = MongoClient.new('localhost', 27017, @write_concern.merge({:connect => false}))
    end

    should "propagate to DB" do
      db = @client[TEST_DB]
      assert_equal @write_concern, db.write_concern

      db = @client.db(TEST_DB)
      assert_equal @write_concern, db.write_concern

      db = DB.new(TEST_DB, @client)
      assert_equal @write_concern, db.write_concern
    end

    should "allow db override" do
      db = DB.new(TEST_DB, @client, :w => 0)
      assert_equal 0, db.write_concern[:w]

      db = @client.db(TEST_DB, :w => 0)
      assert_equal 0, db.write_concern[:w]
    end

    context "on DB: " do
      setup do
        @db = @client[TEST_DB]
      end

      should "propagate to collection" do
        collection = @db.collection('bar')
        assert_equal @write_concern, collection.write_concern

        collection = @db['bar']
        assert_equal @write_concern, collection.write_concern

        collection = Collection.new('bar', @db)
        assert_equal @write_concern, collection.write_concern
      end

      should "allow override on collection" do
        collection = @db.collection('bar', :w => 0)
        assert_equal 0, collection.write_concern[:w]

        collection = Collection.new('bar', @db, :w => 0)
        assert_equal 0, collection.write_concern[:w]
      end
    end

    context "on operations supporting 'gle' mode" do
      setup do
        @collection = @client[TEST_DB]['bar']
      end

      should "not send w = 1 to the server" do
        gle = @client.build_get_last_error_message("fake", {:w => 1})
        assert_equal gle, @client.build_command_message("fake", {:getlasterror => 1})
      end

      should "use default value on insert" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == @write_concern
        end
        @collection.insert({:a => 1})
      end

      should "allow override alternate value on insert" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc ==
            {:w => 100, :j => false, :fsync => false, :wtimeout => nil}
        end
        @collection.insert({:a => 1}, {:w => 100})
      end

      should "allow override to disable on insert" do
        @client.expects(:send_message)
        @collection.insert({:a => 1}, :w => 0)
      end

      should "use default value on update" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == @write_concern
        end
        @collection.update({:a => 1}, {:a => 2})
      end

      should "allow override alternate value on update" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == {:w => 100, :j => false, :fsync => false, :wtimeout => nil}
        end
        @collection.update({:a => 1}, {:a => 2}, {:w => 100})
      end

      should "allow override to disable on update" do
        @client.expects(:send_message)
        @collection.update({:a => 1}, {:a => 2}, :w => 0)
      end

      should "use default value on save" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == @write_concern
        end
        @collection.save({:a => 1})
      end

      should "allow override alternate value on save" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == @write_concern.merge(:w => 1)
        end
        @collection.save({:a => 1}, :w => 1)
      end

      should "allow override to disable on save" do
        @client.expects(:send_message)
        @collection.save({:a => 1}, :w => 0)
      end

      should "use default value on remove" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == @write_concern
        end
        @collection.remove
      end

      should "allow override alternate value on remove" do
        @client.expects(:send_message_with_gle).with do |op, msg, log, n, wc|
          wc == {:w => 100, :j => false, :fsync => false, :wtimeout => nil}
        end
        @collection.remove({}, {:w => 100})
      end

      should "allow override to disable on remove" do
        @client.expects(:send_message)
        @collection.remove({}, :w => 0)
      end
    end
  end
end
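The write-concern tests above all exercise one resolution rule: the collection's default write concern is merged with any per-operation options, and `:w => 0` switches from the acknowledged `send_message_with_gle` path to the fire-and-forget `send_message` path. A minimal sketch of that rule follows; the helper names are hypothetical, and the merge with `Hash#merge` is an assumption simplified from the driver's behavior.

```ruby
# Hedged sketch of write-concern resolution as the tests above exercise it.
# Hypothetical helpers, not driver code.
DEFAULT_WRITE_CONCERN = { :w => 7, :j => false, :fsync => false, :wtimeout => nil }

# Per-operation options override the defaults key by key.
def resolve_write_concern(defaults, op_opts = {})
  defaults.merge(op_opts)
end

# :w => 0 means unacknowledged: no getlasterror round trip is issued.
def acknowledged?(write_concern)
  write_concern[:w] != 0
end

wc = resolve_write_concern(DEFAULT_WRITE_CONCERN, :w => 100)
puts wc[:w]  # 100
puts wc[:j]  # false
puts acknowledged?(resolve_write_concern(DEFAULT_WRITE_CONCERN, :w => 0)) # false
```

This is why the "override alternate value" tests expect the full hash `{:w => 100, :j => false, :fsync => false, :wtimeout => nil}`: only `:w` was overridden, and the remaining keys fall through from the defaults.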