==> ruby-mongo-1.10.0/LICENSE <==

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Copyright (C) 2008-2013 MongoDB, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==> ruby-mongo-1.10.0/README.md <==

MongoDB Ruby Driver [![Build Status][travis-img]][travis-url] [![Code Climate][codeclimate-img]][codeclimate-url] [![Coverage Status][coveralls-img]][coveralls-url] [![Gem Version][rubygems-img]][rubygems-url]
-----
The officially supported Ruby driver for [MongoDB](http://www.mongodb.org).
Installation
-----
**Gem Installation**
The Ruby driver is released and distributed through RubyGems and can be installed with the following command:
```bash
gem install mongo
```
For a significant performance boost, you'll want to install the C extension:
```bash
gem install bson_ext
```
**GitHub Installation**
For development and test environments (not recommended for production), you can also install the Ruby driver directly from source:
```bash
# clone the repository
git clone https://github.com/mongodb/mongo-ruby-driver.git
cd mongo-ruby-driver
# checkout a specific version by tag (optional)
git checkout 1.x.x
# install all development dependencies
gem install bundler
bundle install
# install the ruby driver
rake install
```
Usage
-----
Here is a quick example of basic usage for the Ruby driver:
```ruby
require 'mongo'
include Mongo
# connecting to the database
client = MongoClient.new # defaults to localhost:27017
db = client['example-db']
coll = db['example-collection']
# inserting documents
10.times { |i| coll.insert({ :count => i+1 }) }
# finding documents
puts "There are #{coll.count} total documents. Here they are:"
coll.find.each { |doc| puts doc.inspect }
# updating documents
coll.update({ :count => 5 }, { :count => 'foobar' })
# removing documents
coll.remove({ :count => 8 })
coll.remove
```
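
The driver also provides a bulk write API via `Mongo::BulkWriteCollectionView`. A minimal sketch, reusing the `coll` handle from the example above:

```ruby
# queue several writes and send them as one ordered batch
bulk = coll.initialize_ordered_bulk_op
bulk.insert({ :count => 11 })
bulk.find({ :count => 1 }).update({ '$set' => { :flagged => true } })
bulk.find({ :count => 2 }).remove_one
result = bulk.execute # returns a summary document; raises BulkWriteError on failures
puts result.inspect
```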
Wiki - Tutorials & Examples
-----
For many more usage examples and a full tutorial, please visit our [wiki](https://github.com/mongodb/mongo-ruby-driver/wiki).
API Reference Documentation
-----
For API reference documentation, please visit the [API docs](http://api.mongodb.org/ruby).
Compatibility
-----
The MongoDB Ruby driver requires Ruby 1.8.7 or greater and is regularly tested against the platforms and environments listed below.
Ruby Platforms | Operating Systems | Architectures
-------------- | ----------------- | -------------
MRI 1.8.7, 1.9.3, 2.0.0 <br> JRuby 1.7.x | Windows <br> Linux <br> OS X | x86 <br> x64 <br> ARM
Support & Feedback
-----
For issues, questions or feedback related to the Ruby driver, please look into
our [support channels](http://www.mongodb.org/about/support). Please
do not email any of the Ruby developers directly with issues or
questions - you're more likely to get an answer quickly on the [mongodb-user list](http://groups.google.com/group/mongodb-user) on Google Groups.
Bugs & Feature Requests
-----
Do you have a bug to report or a feature request to make?
1. Visit [our issue tracker](https://jira.mongodb.org) and login (or create an account if necessary).
2. Navigate to the [RUBY](https://jira.mongodb.org/browse/RUBY) project.
3. Click 'Create Issue' and fill out all the applicable form fields.
When reporting an issue, please keep in mind that all information in JIRA for all driver projects (ex. RUBY, CSHARP, JAVA) and the Core Server (ex. SERVER) project is **PUBLICLY** visible.
**PLEASE DO**
* Provide as much information as possible about the issue.
* Provide detailed steps for reproducing the issue.
* Provide any applicable code snippets, stack traces and log data.
* Specify version information for the driver and MongoDB.
**PLEASE DO NOT**
* Provide any sensitive data or server logs.
* Report potential security issues publicly (see 'Security Issues').
Security Issues
-----
If you’ve identified a potential security-related issue in a driver or any other MongoDB project, please report it by following the [instructions here](http://docs.mongodb.org/manual/tutorial/create-a-vulnerability-report).
Release History
-----
Full release notes and release history are available [here](https://github.com/mongodb/mongo-ruby-driver/releases).
License
-----
Copyright (C) 2009-2013 MongoDB, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
[rubygems-img]: https://badge.fury.io/rb/mongo.png
[rubygems-url]: http://badge.fury.io/rb/mongo
[travis-img]: https://secure.travis-ci.org/mongodb/mongo-ruby-driver.png?branch=1.x-stable
[travis-url]: http://travis-ci.org/mongodb/mongo-ruby-driver?branch=1.x-stable
[codeclimate-img]: https://codeclimate.com/github/mongodb/mongo-ruby-driver.png?branch=1.x-stable
[codeclimate-url]: https://codeclimate.com/github/mongodb/mongo-ruby-driver?branch=1.x-stable
[coveralls-img]: https://coveralls.io/repos/mongodb/mongo-ruby-driver/badge.png?branch=1.x-stable
[coveralls-url]: https://coveralls.io/r/mongodb/mongo-ruby-driver?branch=1.x-stable
==> ruby-mongo-1.10.0/Rakefile <==

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'rubygems'
begin
require 'bundler'
rescue LoadError
raise '[FAIL] Bundler not found! Install it with `gem install bundler && bundle`.'
end
rake_tasks = Dir.glob(File.join('tasks', '**', '*.rake')).sort
if ENV.keys.any? { |k| k.end_with?('_CI') }
Bundler.require(:default, :testing)
rake_tasks.reject! { |r| r =~ /deploy/ }
else
Bundler.require(:default, :testing, :deploy, :development)
end
rake_tasks.each { |rake| load File.expand_path(rake) }
==> ruby-mongo-1.10.0/VERSION <==

1.10.0

==> ruby-mongo-1.10.0/bin/mongo_console <==

#!/usr/bin/env ruby
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
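#
# Usage (illustrative): mongo_console [host [port [db]]]
# Positional arguments fall back to the MONGO_RUBY_DRIVER_HOST,
# MONGO_RUBY_DRIVER_PORT, and MONGO_RUBY_DRIVER_DB environment variables,
# then to localhost, the default port, and 'ruby-mongo-console'.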
org_argv = ARGV.dup
ARGV.clear
$LOAD_PATH[0,0] = File.join(File.dirname(__FILE__), '..', 'lib')
require 'mongo'
include Mongo
host = org_argv[0] || ENV['MONGO_RUBY_DRIVER_HOST'] || 'localhost'
port = org_argv[1] || ENV['MONGO_RUBY_DRIVER_PORT'] || MongoClient::DEFAULT_PORT
dbnm = org_argv[2] || ENV['MONGO_RUBY_DRIVER_DB'] || 'ruby-mongo-console'
puts "Connecting to #{host}:#{port} (CLIENT) on with database #{dbnm} (DB)"
CLIENT = MongoClient.new(host, port)
DB = CLIENT.db(dbnm)
# try pry if available, fall back to irb
begin
require 'pry'
CONSOLE_CLASS = Pry
rescue LoadError
require 'irb'
CONSOLE_CLASS = IRB
end
puts "Starting #{CONSOLE_CLASS.name} session..."
CONSOLE_CLASS.start(__FILE__)
[binary packaging artifacts omitted: checksums.yaml.gz, checksums.yaml.gz.sig, data.tar.gz.sig]

==> ruby-mongo-1.10.0/lib/mongo.rb <==

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
ASCENDING = 1
DESCENDING = -1
GEO2D = '2d'
GEO2DSPHERE = '2dsphere'
GEOHAYSTACK = 'geoHaystack'
TEXT = 'text'
HASHED = 'hashed'
INDEX_TYPES = {
'ASCENDING' => ASCENDING,
'DESCENDING' => DESCENDING,
'GEO2D' => GEO2D,
'GEO2DSPHERE' => GEO2DSPHERE,
'GEOHAYSTACK' => GEOHAYSTACK,
'TEXT' => TEXT,
'HASHED' => HASHED
}
DEFAULT_MAX_BSON_SIZE = 4 * 1024 * 1024
MESSAGE_SIZE_FACTOR = 2
module Constants
OP_REPLY = 1
OP_MSG = 1000
OP_UPDATE = 2001
OP_INSERT = 2002
OP_QUERY = 2004
OP_GET_MORE = 2005
OP_DELETE = 2006
OP_KILL_CURSORS = 2007
OP_QUERY_TAILABLE = 2 ** 1
OP_QUERY_SLAVE_OK = 2 ** 2
OP_QUERY_OPLOG_REPLAY = 2 ** 3
OP_QUERY_NO_CURSOR_TIMEOUT = 2 ** 4
OP_QUERY_AWAIT_DATA = 2 ** 5
OP_QUERY_EXHAUST = 2 ** 6
OP_QUERY_PARTIAL = 2 ** 7
REPLY_CURSOR_NOT_FOUND = 2 ** 0
REPLY_QUERY_FAILURE = 2 ** 1
REPLY_SHARD_CONFIG_STALE = 2 ** 2
REPLY_AWAIT_CAPABLE = 2 ** 3
end
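# Illustrative note: the flag constants above occupy distinct bit positions,
# so they are combined with bitwise OR, e.g.:
#   flags = Constants::OP_QUERY_TAILABLE | Constants::OP_QUERY_AWAIT_DATA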
module ErrorCode # MongoDB Core Server src/mongo/base/error_codes.err
BAD_VALUE = 2
UNKNOWN_ERROR = 8
INVALID_BSON = 22
COMMAND_NOT_FOUND = 59
WRITE_CONCERN_FAILED = 64
MULTIPLE_ERRORS_OCCURRED = 65
end
end
require 'bson'
require 'set'
require 'thread'
require 'mongo/utils'
require 'mongo/exception'
require 'mongo/functional'
require 'mongo/connection'
require 'mongo/collection_writer'
require 'mongo/collection'
require 'mongo/bulk_write_collection_view'
require 'mongo/cursor'
require 'mongo/db'
require 'mongo/gridfs'
require 'mongo/networking'
require 'mongo/mongo_client'
require 'mongo/mongo_replica_set_client'
require 'mongo/mongo_sharded_client'
require 'mongo/legacy'
==> ruby-mongo-1.10.0/lib/mongo/bulk_write_collection_view.rb <==

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# A bulk write view to a collection of documents in a database.
class BulkWriteCollectionView
include Mongo::WriteConcern
DEFAULT_OP_ARGS = {:q => nil}
MULTIPLE_ERRORS_MSG = "batch item errors occurred"
EMPTY_BATCH_MSG = "batch is empty"
attr_reader :collection, :options, :ops, :op_args
# Initialize a bulk-write-view object to a collection with default query selector {}.
#
# A bulk write operation is initialized from a collection object.
# For example, for an ordered bulk write view:
#
# bulk = collection.initialize_ordered_bulk_op
#
# or for an unordered bulk write view:
#
# bulk = collection.initialize_unordered_bulk_op
#
# The bulk write view collects individual write operations together so that they can be
# executed as a batch for significant performance gains.
# The ordered bulk operation will execute each operation serially in order.
# Execution will stop at the first occurrence of an error for an ordered bulk operation.
# The unordered bulk operation will be executed and may take advantage of parallelism.
# There are no guarantees for the order of execution of the operations on the server.
# Execution will continue even if there are errors for an unordered bulk operation.
#
# A bulk operation is programmed as a sequence of individual operations.
# An individual operation is composed of a method chain of modifiers or setters terminated by a write method.
# A modify method sets a value on the current object.
# A set method returns a duplicate of the current object with a value set.
# A terminator write method appends a write operation to the bulk batch collected in the view.
#
# The API supports mixing of write operation types in a bulk operation.
# However, server support affects the implementation and performance of bulk operations.
#
# MongoDB version 2.6 servers currently support only bulk commands of the same type.
# With an ordered bulk operation,
# contiguous individual ops of the same type can be batched into the same db request,
# and the next op of a different type must be sent separately in the next request.
# Performance will improve if you can arrange your ops to reduce the number of db requests.
# With an unordered bulk operation,
# individual ops can be grouped by type and sent in at most three requests,
# one each per insert, update, or delete.
#
# MongoDB pre-version 2.6 servers do not support bulk write commands.
# The bulk operation must be sent one request per individual op.
# This also applies to inserts in order to have accurate counts and error reporting.
#
# Important note on pre-2.6 performance:
# Performance is very poor compared to version 2.6.
# We recommend bulk operation with pre-2.6 only for compatibility or
# for development in preparation for version 2.6.
# For better performance with pre-version 2.6, use bulk insertion with Collection#insert.
#
# @param [Collection] collection the parent collection object
#
# @option opts [Boolean] :ordered (true) Set bulk execution for ordered or unordered
#
# @return [BulkWriteCollectionView]
def initialize(collection, options = {})
@collection = collection
@options = options
@ops = []
@op_args = DEFAULT_OP_ARGS.dup
end
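# Illustrative batching behavior (MongoDB 2.6+, ordered bulk op): contiguous
# ops of the same type are sent in one write command; a type change starts a
# new request.
#
#   bulk = collection.initialize_ordered_bulk_op
#   bulk.insert({:x => 1})
#   bulk.insert({:x => 2})                              # batched with the insert above
#   bulk.find({:x => 1}).update({'$set' => {:y => 1}})  # new request (type change)
#   bulk.execute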
def inspect
vars = [:@options, :@ops, :@op_args]
vars_inspect = vars.collect{|var| "#{var}=#{instance_variable_get(var).inspect}"}
"#, #{vars_inspect.join(', ')}>"
end
# Modify the query selector for subsequent bulk write operations.
# The default query selector on creation of the bulk write view is {}.
# For operations that require a query selector, find() must be set
# per operation, or set once for all operations on the bulk object.
# For example, these operations:
#
# bulk.find({"a" => 2}).update({"$inc" => {"x" => 2}})
# bulk.find({"a" => 2}).update({"$set" => {"b" => 3}})
#
# may be rewritten as:
#
# bulk = find({"a" => 2})
# bulk.update({"$inc" => {"x" => 2}})
# bulk.update({"$set" => {"b" => 3}})
#
# Note that modifying the query selector in this way will not affect
# operations that do not use a query selector, like insert().
#
# @param [Hash] q the query selector
#
# @return [BulkWriteCollectionView]
def find(q)
op_args_set(:q, q)
end
# Modify the upsert option argument for subsequent bulk write operations.
#
# @param [Boolean] value (true) the upsert option value
#
# @return [BulkWriteCollectionView]
def upsert!(value = true)
op_args_set(:upsert, value)
end
# Set the upsert option argument for subsequent bulk write operations.
#
# @param [Boolean] value (true) the upsert option value
#
# @return [BulkWriteCollectionView] a duplicated object
def upsert(value = true)
dup.upsert!(value)
end
# Update one document matching the selector.
#
# bulk.find({"a" => 1}).update_one({"$inc" => {"x" => 1}})
#
# Use the upsert! or upsert method to specify an upsert. For example:
#
# bulk.find({"a" => 1}).upsert.updateOne({"$inc" => {"x" => 1}})
#
# @param [Hash] u the update document
#
# @return [BulkWriteCollectionView]
def update_one(u)
raise MongoArgumentError, "document must start with an operator" unless update_doc?(u)
op_push([:update, @op_args.merge(:u => u, :multi => false)])
end
# Update all documents matching the selector. For example:
#
# bulk.find({"a" => 2}).update({"$inc" => {"x" => 2}})
#
# Use the upsert! or upsert method to specify an upsert. For example:
#
# bulk.find({"a" => 2}).upsert.update({"$inc" => {"x" => 2}})
#
# @param [Hash] u the update document
#
# @return [BulkWriteCollectionView]
def update(u)
raise MongoArgumentError, "document must start with an operator" unless update_doc?(u)
op_push([:update, @op_args.merge(:u => u, :multi => true)])
end
# Replace entire document (update with whole doc replace). For example:
#
# bulk.find({"a" => 3}).replace_one({"x" => 3})
#
# @param [Hash] u the replacement document
#
# @return [BulkWriteCollectionView]
def replace_one(u)
raise MongoArgumentError, "document must not contain any operators" unless replace_doc?(u)
op_push([:update, @op_args.merge(:u => u, :multi => false)])
end
# Remove a single document matching the selector. For example:
#
# bulk.find({"a" => 4}).remove_one;
#
# @return [BulkWriteCollectionView]
def remove_one
op_push([:delete, @op_args.merge(:limit => 1)])
end
# Remove all documents matching the selector. For example:
#
# bulk.find({"a" => 5}).remove;
#
# @return [BulkWriteCollectionView]
def remove
op_push([:delete, @op_args.merge(:limit => 0)])
end
# Insert a document. For example:
#
# bulk.insert({"x" => 4})
#
# @return [BulkWriteCollectionView]
def insert(document)
# TODO - check keys
op_push([:insert, {:d => document}])
end
# Execute the bulk operation, with an optional write concern overriding the default w:1.
# For example:
#
# write_concern = {:w => 1, :j => 1}
# bulk.execute(write_concern)
#
# On return from execute, the bulk operation is cleared,
# but the selector and upsert settings are preserved.
#
# @return [BulkWriteCollectionView]
def execute(opts = {})
raise MongoArgumentError, EMPTY_BATCH_MSG if @ops.empty?
write_concern = get_write_concern(opts, @collection)
@ops.each_with_index{|op, index| op.last.merge!(:ord => index)} # infuse ordinal here to avoid issues with upsert
if @collection.db.connection.use_write_command?(write_concern)
errors, write_concern_errors, exchanges = @collection.command_writer.bulk_execute(@ops, @options, opts)
else
errors, write_concern_errors, exchanges = @collection.operation_writer.bulk_execute(@ops, @options, opts)
end
@ops = []
return true if errors.empty? && (exchanges.empty? || exchanges.first[:response] == true) # w 0 without GLE
result = merge_result(errors + write_concern_errors, exchanges)
raise BulkWriteError.new(MULTIPLE_ERRORS_MSG, Mongo::ErrorCode::MULTIPLE_ERRORS_OCCURRED, result) if !errors.empty? || !write_concern_errors.empty?
result
end
private
def hash_except(h, *keys)
keys.each { |key| h.delete(key) }
h
end
def hash_select(h, *keys)
Hash[*keys.zip(h.values_at(*keys)).flatten]
end
def tally(h, key, n)
h[key] = h.fetch(key, 0) + n
end
def nil_tally(h, key, n)
if !h.has_key?(key)
h[key] = n
elsif h[key]
h[key] = n ? h[key] + n : n
end
end
def append(h, key, obj)
h[key] = h.fetch(key, []) << obj
end
def concat(h, key, a)
h[key] = h.fetch(key, []) + a
end
def merge_index(h, exchange)
h.merge("index" => exchange[:batch][h.fetch("index", 0)][:ord])
end
def merge_indexes(a, exchange)
a.collect{|h| merge_index(h, exchange)}
end
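# Merge per-batch server responses and accumulated errors into a single
# bulk result document. Illustrative shape (keys appear when applicable):
#   {"ok" => 0|1, "n" => total, "nInserted" => ..., "nMatched" => ...,
#    "nModified" => ..., "nUpserted" => ..., "nRemoved" => ...,
#    "writeErrors" => [...], "writeConcernError" => [...]}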
def merge_result(errors, exchanges)
ok = 0
result = {"ok" => 0, "n" => 0}
unless errors.empty?
unless (writeErrors = errors.select { |error| error.class != Mongo::OperationFailure && error.class != WriteConcernError }).empty? # assignment
concat(result, "writeErrors",
writeErrors.collect { |error|
{"index" => error.result[:ord], "code" => error.error_code, "errmsg" => error.result[:error].message}
})
end
result.merge!("code" => Mongo::ErrorCode::MULTIPLE_ERRORS_OCCURRED, "errmsg" => MULTIPLE_ERRORS_MSG)
end
exchanges.each do |exchange|
response = exchange[:response]
next unless response
ok += response["ok"].to_i
n = response["n"] || 0
op_type = exchange[:op_type]
if op_type == :insert
n = 1 if response.key?("err") && (response["err"].nil? || response["err"] == "norepl" || response["err"] == "timeout") # OP_INSERT override n = 0 bug, n = exchange[:batch].size always 1
tally(result, "nInserted", n)
elsif op_type == :update
n_upserted = 0
if (upserted = response.fetch("upserted", nil)) # assignment
upserted = [{"_id" => upserted}] if upserted.class == BSON::ObjectId # OP_UPDATE non-array
n_upserted = upserted.size
concat(result, "upserted", merge_indexes(upserted, exchange))
end
tally(result, "nUpserted", n_upserted) if n_upserted > 0
tally(result, "nMatched", n - n_upserted)
nil_tally(result, "nModified", response["nModified"])
elsif op_type == :delete
tally(result, "nRemoved", n)
end
result["n"] += n
write_concern_error = nil
errmsg = response["errmsg"] || response["err"] # top level
if (writeErrors = response["writeErrors"] || response["errDetails"]) # assignment
concat(result, "writeErrors", merge_indexes(writeErrors, exchange))
elsif response["err"] == "timeout" # errmsg == "timed out waiting for slaves" # OP_*
write_concern_error = {"errmsg" => errmsg, "code" => Mongo::ErrorCode::WRITE_CONCERN_FAILED,
"errInfo" => {"wtimeout" => response["wtimeout"]}} # OP_* does not have "code"
elsif errmsg == "norepl" # OP_*
write_concern_error = {"errmsg" => errmsg, "code" => Mongo::ErrorCode::WRITE_CONCERN_FAILED} # OP_* does not have "code"
elsif errmsg # OP_INSERT, OP_UPDATE have "err"
append(result, "writeErrors", merge_index({"errmsg" => errmsg, "code" => response["code"]}, exchange))
end
if response["writeConcernError"]
write_concern_error = response["writeConcernError"]
elsif (wnote = response["wnote"]) # assignment - OP_*
write_concern_error = {"errmsg" => wnote, "code" => Mongo::ErrorCode::WRITE_CONCERN_FAILED} # OP_* does not have "code"
elsif (jnote = response["jnote"]) # assignment - OP_*
write_concern_error = {"errmsg" => jnote, "code" => Mongo::ErrorCode::BAD_VALUE} # OP_* does not have "code"
end
append(result, "writeConcernError", merge_index(write_concern_error, exchange)) if write_concern_error
end
result.delete("nModified") if result.has_key?("nModified") && !result["nModified"]
result.merge!("ok" => [ok + result["n"], 1].min)
end
def initialize_copy(other)
other.instance_variable_set(:@options, other.options.dup)
end
def op_args_set(op, value)
@op_args[op] = value
self
end
def op_push(op)
raise MongoArgumentError, "non-nil query must be set via find" if op.first != :insert && !op.last[:q]
@ops << op
self
end
def update_doc?(doc)
!doc.empty? && doc.keys.first.to_s =~ /^\$/
end
def replace_doc?(doc)
doc.keys.all?{|key| key !~ /^\$/}
end
end
class Collection
# Initialize an ordered bulk write view for this collection
# Execution will stop at the first occurrence of an error for an ordered bulk operation.
#
# @return [BulkWriteCollectionView]
def initialize_ordered_bulk_op
BulkWriteCollectionView.new(self, :ordered => true)
end
# Initialize an unordered bulk write view for this collection
# The unordered bulk operation will be executed and may take advantage of parallelism.
# There are no guarantees for the order of execution of the operations on the server.
# Execution will continue even if there are errors for an unordered bulk operation.
#
# @return [BulkWriteCollectionView]
def initialize_unordered_bulk_op
BulkWriteCollectionView.new(self, :ordered => false)
end
end
end
==> ruby-mongo-1.10.0/lib/mongo/collection.rb <==

# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# A named collection of documents in a database.
class Collection
include Mongo::Logging
include Mongo::WriteConcern
attr_reader :db,
:name,
:pk_factory,
:hint,
:write_concern,
:capped,
:operation_writer,
:command_writer
# Read Preference
attr_accessor :read,
:tag_sets,
:acceptable_latency
# Initialize a collection object.
#
# @param [String, Symbol] name the name of the collection.
# @param [DB] db a MongoDB database instance.
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Notes about write concern:
# These write concern options will be used for insert, update, and remove methods called on this
# Collection instance. If no value is provided, the default values set on this instance's DB will be used.
# These option values can be overridden for any invocation of insert, update, or remove.
#
# @option opts [:create_pk] :pk (BSON::ObjectId) A primary key factory to use
# other than the default BSON::ObjectId.
# @option opts [:primary, :secondary] :read The default read preference for queries
# initiated from this connection object. If +:secondary+ is chosen, reads will be sent
# to one of the closest available secondary nodes. If a secondary node cannot be located, the
# read will be sent to the primary. If this option is left unspecified, the value of the read
# preference for this collection's associated Mongo::DB object will be used.
#
# @raise [InvalidNSName]
# if collection name is empty, contains '$', or starts or ends with '.'
#
# @raise [TypeError]
# if collection name is not a string or symbol
#
# @return [Collection]
def initialize(name, db, opts={})
if db.is_a?(String) && name.is_a?(Mongo::DB)
warn "Warning: the order of parameters to initialize a collection have changed. " +
"Please specify the collection name first, followed by the db. This will be made permanent" +
"in v2.0."
db, name = name, db
end
raise TypeError,
"Collection name must be a String or Symbol." unless [String, Symbol].include?(name.class)
name = name.to_s
raise Mongo::InvalidNSName,
"Collection names cannot be empty." if name.empty? || name.include?("..")
if name.include?("$")
raise Mongo::InvalidNSName,
"Collection names must not contain '$'" unless name =~ /((^\$cmd)|(oplog\.\$main))/
end
raise Mongo::InvalidNSName,
"Collection names must not start or end with '.'" if name.match(/^\./) || name.match(/\.$/)
pk_factory = nil
if opts.respond_to?(:create_pk) || !opts.is_a?(Hash)
warn "The method for specifying a primary key factory on a Collection has changed.\n" +
"Please specify it as an option (e.g., :pk => PkFactory)."
pk_factory = opts
end
@db, @name = db, name
@connection = @db.connection
@logger = @connection.logger
@cache_time = @db.cache_time
@cache = Hash.new(0)
unless pk_factory
@write_concern = get_write_concern(opts, db)
@read = opts[:read] || @db.read
Mongo::ReadPreference::validate(@read)
@capped = opts[:capped]
@tag_sets = opts.fetch(:tag_sets, @db.tag_sets)
@acceptable_latency = opts.fetch(:acceptable_latency, @db.acceptable_latency)
end
@pk_factory = pk_factory || opts[:pk] || BSON::ObjectId
@hint = nil
@operation_writer = CollectionOperationWriter.new(self)
@command_writer = CollectionCommandWriter.new(self)
end
# Indicate whether this is a capped collection.
#
# @raise [Mongo::OperationFailure]
# if the collection doesn't exist.
#
# @return [Boolean]
def capped?
@capped ||= [1, true].include?(@db.command({:collstats => @name})['capped'])
end
# Return a sub-collection of this collection by name. If 'users' is a collection, then
# 'users.comments' is a sub-collection of users.
#
# @param [String, Symbol] name
# the collection to return
#
# @raise [Mongo::InvalidNSName]
# if passed an invalid collection name
#
# @return [Collection]
# the specified sub-collection
def [](name)
name = "#{self.name}.#{name}"
return Collection.new(name, db) if !db.strict? ||
db.collection_names.include?(name.to_s)
raise "Collection #{name} doesn't exist. Currently in strict mode."
end
# Set a hint field for query optimizer. Hint may be a single field
# name, array of field names, or a hash (preferably an [OrderedHash]).
# If using MongoDB > 1.1, you probably don't ever need to set a hint.
#
# @param [String, Array, OrderedHash] hint a single field, an array of
# fields, or a hash specifying fields
def hint=(hint=nil)
@hint = normalize_hint_fields(hint)
self
end
# Set a hint field using a named index.
# @param [String] hint index name
def named_hint=(hint=nil)
@hint = hint
self
end
# Query the database.
#
# The +selector+ argument is a prototype document that all results must
# match. For example:
#
# collection.find({"hello" => "world"})
#
# only matches documents that have a key "hello" with value "world".
# Matches can have other keys *in addition* to "hello".
#
# If given an optional block +find+ will yield a Cursor to that block,
# close the cursor, and then return nil. This guarantees that partially
# evaluated cursors will be closed. If given no block +find+ returns a
# cursor.
#
# @param [Hash] selector
# a document specifying elements which must be present for a
# document to be included in the result set. Note that in rare cases,
# (e.g., with $near queries), the order of keys will matter. To preserve
# key order on a selector, use an instance of BSON::OrderedHash (only applies
# to Ruby 1.8).
#
# @option opts [Array, Hash] :fields field names that should be returned in the result
# set ("_id" will be included unless explicitly excluded). By limiting results to a certain subset of fields,
# you can cut down on network traffic and decoding time. If using a Hash, keys should be field
# names and values should be either 1 or 0, depending on whether you want to include or exclude
# the given field.
# @option opts [:primary, :secondary] :read The default read preference for queries
# initiated from this connection object. If +:secondary+ is chosen, reads will be sent
# to one of the closest available secondary nodes. If a secondary node cannot be located, the
# read will be sent to the primary. If this option is left unspecified, the value of the read
# preference for this Collection object will be used.
# @option opts [Integer] :skip number of documents to skip from the beginning of the result set
# @option opts [Integer] :limit maximum number of documents to return
# @option opts [Array] :sort an array of [key, direction] pairs to sort by. Direction should
# be specified as Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING (or :descending / :desc)
# @option opts [String, Array, OrderedHash] :hint hint for query optimizer, usually not necessary if
# using MongoDB > 1.1
# @option opts [String] :named_hint for specifying a named index as a hint, will be overridden by :hint
# if :hint is also provided.
# @option opts [Boolean] :snapshot (false) if true, snapshot mode will be used for this query.
# Snapshot mode assures no duplicates are returned, or objects missed, which were present at both the start and
# end of the query's execution.
# For details see http://www.mongodb.org/display/DOCS/How+to+do+Snapshotting+in+the+Mongo+Database
# @option opts [Integer] :batch_size (100) the number of documents returned by the database per
# GETMORE operation. A value of 0 will let the database server decide how many results to return.
# This option can be ignored for most use cases.
# @option opts [Boolean] :timeout (true) when +true+, the returned cursor will be subject to
# the normal cursor timeout behavior of the mongod process. When +false+, the returned cursor will
# never timeout. Note that disabling timeout will only work when #find is invoked with a block.
# This is to prevent any inadvertent failure to close the cursor, as the cursor is explicitly
# closed when block code finishes.
# @option opts [Integer] :max_scan (nil) Limit the number of items to scan on both collection scans and indexed queries.
# @option opts [Boolean] :show_disk_loc (false) Return the disk location of each query result (for debugging).
# @option opts [Boolean] :return_key (false) Return the index key used to obtain the result (for debugging).
# @option opts [Block] :transformer (nil) a block for transforming returned documents.
# This is normally used by object mappers to convert each returned document to an instance of a class.
# @option opts [String] :comment (nil) a comment to include in profiling logs
# @option opts [Boolean] :compile_regex (true) whether BSON regex objects should be compiled into Ruby regexes.
# If false, a BSON::Regex object will be returned instead.
#
# @raise [ArgumentError]
# if timeout is set to false and find is not invoked in a block
#
# @raise [RuntimeError]
# if given unknown options
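#
# @example Find with a selector, projection, sort, and limit (illustrative):
#   collection.find({:active => true}, :fields => [:name], :sort => [[:name, Mongo::ASCENDING]], :limit => 10)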
def find(selector={}, opts={})
opts = opts.dup
fields = opts.delete(:fields)
fields = ["_id"] if fields && fields.empty?
skip = opts.delete(:skip) || skip || 0
limit = opts.delete(:limit) || 0
sort = opts.delete(:sort)
hint = opts.delete(:hint)
named_hint = opts.delete(:named_hint)
snapshot = opts.delete(:snapshot)
batch_size = opts.delete(:batch_size)
timeout = (opts.delete(:timeout) == false) ? false : true
max_scan = opts.delete(:max_scan)
return_key = opts.delete(:return_key)
transformer = opts.delete(:transformer)
show_disk_loc = opts.delete(:show_disk_loc)
comment = opts.delete(:comment)
read = opts.delete(:read) || @read
tag_sets = opts.delete(:tag_sets) || @tag_sets
acceptable_latency = opts.delete(:acceptable_latency) || @acceptable_latency
compile_regex = opts.key?(:compile_regex) ? opts.delete(:compile_regex) : true
if timeout == false && !block_given?
raise ArgumentError, "Collection#find must be invoked with a block when timeout is disabled."
end
if hint
hint = normalize_hint_fields(hint)
else
hint = @hint # assumed to be normalized already
end
raise RuntimeError, "Unknown options [#{opts.inspect}]" unless opts.empty?
cursor = Cursor.new(self, {
:selector => selector,
:fields => fields,
:skip => skip,
:limit => limit,
:order => sort,
:hint => hint || named_hint,
:snapshot => snapshot,
:timeout => timeout,
:batch_size => batch_size,
:transformer => transformer,
:max_scan => max_scan,
:show_disk_loc => show_disk_loc,
:return_key => return_key,
:read => read,
:tag_sets => tag_sets,
:comment => comment,
:acceptable_latency => acceptable_latency,
:compile_regex => compile_regex
})
if block_given?
begin
yield cursor
ensure
cursor.close
end
nil
else
cursor
end
end
# Return a single object from the database.
#
# @return [OrderedHash, Nil]
# a single document or nil if no result is found.
#
# @param [Hash, ObjectId, Nil] spec_or_object_id a hash specifying elements
# which must be present for a document to be included in the result set or an
# instance of ObjectId to be used as the value for an _id query.
# If nil, an empty selector, {}, will be used.
#
# @option opts [Hash]
# any valid options that can be sent to Collection#find
#
# @raise [TypeError]
# if the argument is of an improper type.
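#
# @example Find one document by a field value (illustrative):
#   @users.find_one({:name => "Ada"}, :fields => [:name, :email])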
def find_one(spec_or_object_id=nil, opts={})
spec = case spec_or_object_id
when nil
{}
when BSON::ObjectId
{:_id => spec_or_object_id}
when Hash
spec_or_object_id
else
raise TypeError, "spec_or_object_id must be an instance of ObjectId or Hash, or nil"
end
timeout = opts.delete(:max_time_ms)
cursor = find(spec, opts.merge(:limit => -1))
timeout ? cursor.max_time_ms(timeout).next_document : cursor.next_document
end
# Save a document to this collection.
#
# @param [Hash] doc
# the document to be saved. If the document already has an '_id' key,
# then an update (upsert) operation will be performed, and any existing
# document with that _id is overwritten. Otherwise an insert operation is performed.
#
# @return [ObjectId] the _id of the saved document.
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Options provided here will override any write concern options set on this collection,
# its database object, or the current connection. See the options
# for DB#get_last_error.
#
# @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
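#
# @example Insert, then upsert by _id (illustrative):
#   id = @users.save({:name => "Ada"})
#   @users.save({:_id => id, :name => "Ada Lovelace"})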
def save(doc, opts={})
if doc.has_key?(:_id) || doc.has_key?('_id')
id = doc[:_id] || doc['_id']
update({:_id => id}, doc, opts.merge!({:upsert => true}))
id
else
insert(doc, opts)
end
end
# Insert one or more documents into the collection.
#
# @param [Hash, Array] doc_or_docs
# a document (as a hash) or array of documents to be inserted.
#
# @return [ObjectId, Array]
# The _id of the inserted document or a list of _ids of all inserted documents.
# @return [[ObjectId, Array], [Hash, Array]]
# 1st, the _id of the inserted document or a list of _ids of all inserted documents.
# 2nd, a list of invalid documents.
# Return this result format only when :collect_on_error is true.
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Notes on write concern:
# Options provided here will override any write concern options set on this collection,
# its database object, or the current connection. See the options for +DB#get_last_error+.
#
# @option opts [Boolean] :continue_on_error (+false+) If true, then
# continue a bulk insert even if one of the documents inserted
# triggers a database assertion (as in a duplicate insert, for instance).
# If not acknowledging writes, the list of ids returned will
# include the object ids of all documents attempted on insert, even
# if some are rejected on error. When acknowledging writes, any error will raise an
# OperationFailure exception.
# MongoDB v2.0+.
# @option opts [Boolean] :collect_on_error (+false+) if true, then
# collects invalid documents as an array. Note that this option changes the result format.
#
# @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
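#
# @example Batch insert, collecting invalid documents instead of raising (illustrative):
#   inserted_ids, error_docs = @users.insert(docs, :collect_on_error => true)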
def insert(doc_or_docs, opts={})
if doc_or_docs.respond_to?(:collect!)
doc_or_docs.collect! { |doc| @pk_factory.create_pk(doc) }
error_docs, errors, write_concern_errors, rest_ignored = batch_write(:insert, doc_or_docs, true, opts)
errors = write_concern_errors + errors
raise errors.last if !opts[:collect_on_error] && !errors.empty?
inserted_docs = doc_or_docs - error_docs
inserted_ids = inserted_docs.collect {|o| o[:_id] || o['_id']}
opts[:collect_on_error] ? [inserted_ids, error_docs] : inserted_ids
else
@pk_factory.create_pk(doc_or_docs)
send_write(:insert, nil, doc_or_docs, true, opts)
return doc_or_docs[:_id] || doc_or_docs['_id']
end
end
alias_method :<<, :insert
# Remove all documents from this collection.
#
# @param [Hash] selector
# If specified, only matching documents will be removed.
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
# @option opts [Integer] :limit (0) Set limit option, currently only 0 for all or 1 for just one.
#
# Notes on write concern:
# Options provided here will override any write concern options set on this collection,
# its database object, or the current connection. See the options for +DB#get_last_error+.
#
# @example remove all documents from the 'users' collection:
# users.remove
# users.remove({})
#
# @example remove only documents that have expired:
# users.remove({:expire => {"$lte" => Time.now}})
#
# @return [Hash, true] Returns a Hash containing the last error object if acknowledging writes
# Otherwise, returns true.
#
# @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
def remove(selector={}, opts={})
send_write(:delete, selector, nil, nil, opts)
end
# Update one or more documents in this collection.
#
# @param [Hash] selector
# a hash specifying elements which must be present for a document to be updated. Note:
# the update command currently updates only the first document matching the
# given selector. If you want all matching documents to be updated, be sure
# to specify :multi => true.
# @param [Hash] document
# a hash specifying the fields to be changed in the selected document,
# or (in the case of an upsert) the document to be inserted
#
# @option opts [Boolean] :upsert (+false+) if true, performs an upsert (update or insert)
# @option opts [Boolean] :multi (+false+) update all documents matching the selector, as opposed to
# just the first matching document. Note: only works in MongoDB 1.1.3 or later.
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Notes on write concern:
# Options provided here will override any write concern options set on this collection,
# its database object, or the current connection. See the options for DB#get_last_error.
#
# @return [Hash, true] Returns a Hash containing the last error object if acknowledging writes.
# Otherwise, returns true.
#
# @raise [Mongo::OperationFailure] will be raised iff :w > 0 and the operation fails.
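#
# @example Update the first match, or all matches (illustrative):
#   @users.update({:name => "Ada"}, {'$set' => {:active => true}})
#   @users.update({:active => true}, {'$inc' => {:logins => 1}}, :multi => true)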
def update(selector, document, opts={})
send_write(:update, selector, document, !document.keys.first.to_s.start_with?("$"), opts)
end
# Create a new index.
#
# @param [String, Array] spec
# should be either a single field name or an array of
# [field name, type] pairs. Index types should be specified
# as Mongo::ASCENDING, Mongo::DESCENDING, Mongo::GEO2D, Mongo::GEO2DSPHERE, Mongo::GEOHAYSTACK,
# Mongo::TEXT or Mongo::HASHED.
#
# Note that geospatial indexing only works with versions of MongoDB >= 1.3.3. Keep in mind, too,
# that in order to geo-index a given field, that field must reference either an array or a sub-object
# where the first two values represent x- and y-coordinates. Examples can be seen below.
#
# Also note that it is permissible to create compound indexes that include a geospatial index as
# long as the geospatial index comes first.
#
# If your code calls create_index frequently, you can use Collection#ensure_index to cache these calls
# and thereby prevent excessive round trips to the database.
#
# @option opts [Boolean] :unique (false) if true, this index will enforce a uniqueness constraint.
# @option opts [Boolean] :background (false) indicate that the index should be built in the background. This
# feature is only available in MongoDB >= 1.3.2.
# @option opts [Boolean] :drop_dups (nil) If creating a unique index on a collection with pre-existing records,
# this option will keep the first document the database indexes and drop all subsequent with duplicate values.
# @option opts [Integer] :bucket_size (nil) For use with geoHaystack indexes. Number of documents to group
# together within a certain proximity to a given longitude and latitude.
# @option opts [Integer] :min (nil) specify the minimum longitude and latitude for a geo index.
# @option opts [Integer] :max (nil) specify the maximum longitude and latitude for a geo index.
#
# @example Creating a compound index using a hash: (Ruby 1.9+ Syntax)
# @posts.create_index({'subject' => Mongo::ASCENDING, 'created_at' => Mongo::DESCENDING})
#
# @example Creating a compound index:
# @posts.create_index([['subject', Mongo::ASCENDING], ['created_at', Mongo::DESCENDING]])
#
# @example Creating a geospatial index using a hash: (Ruby 1.9+ Syntax)
# @restaurants.create_index(:location => Mongo::GEO2D)
#
# @example Creating a geospatial index:
# @restaurants.create_index([['location', Mongo::GEO2D]])
#
# # Note that this will work only if 'location' represents x,y coordinates:
# {'location' => [0, 50]}
# {'location' => {'x' => 0, 'y' => 50}}
# {'location' => {'latitude' => 0, 'longitude' => 50}}
#
# @example A geospatial index with alternate longitude and latitude:
# @restaurants.create_index([['location', Mongo::GEO2D]], :min => -500, :max => 500)
#
# @return [String] the name of the index created.
def create_index(spec, opts={})
opts[:dropDups] = opts[:drop_dups] if opts[:drop_dups]
opts[:bucketSize] = opts[:bucket_size] if opts[:bucket_size]
field_spec = parse_index_spec(spec)
opts = opts.dup
name = opts.delete(:name) || generate_index_name(field_spec)
name = name.to_s if name
generate_indexes(field_spec, name, opts)
name
end
# Calls create_index and sets a flag not to do so again for another X minutes.
# This time can be specified as an option when initializing a Mongo::DB object as options[:cache_time].
# Any changes to an index will be propagated through regardless of cache time (e.g., a change of index direction).
#
# The parameters and options for this method are the same as those for Collection#create_index.
#
# @example Call sequence (Ruby 1.9+ Syntax):
# Time t: @posts.ensure_index(:subject => Mongo::ASCENDING) -- calls create_index and
# sets the 5 minute cache
# Time t+2min : @posts.ensure_index(:subject => Mongo::ASCENDING) -- doesn't do anything
# Time t+3min : @posts.ensure_index(:something_else => Mongo::ASCENDING) -- calls create_index
# and sets 5 minute cache
# Time t+10min : @posts.ensure_index(:subject => Mongo::ASCENDING) -- calls create_index and
# resets the 5 minute counter
#
# @return [String] the name of the index.
def ensure_index(spec, opts={})
now = Time.now.utc.to_i
opts[:dropDups] = opts[:drop_dups] if opts[:drop_dups]
opts[:bucketSize] = opts[:bucket_size] if opts[:bucket_size]
field_spec = parse_index_spec(spec)
name = opts[:name] || generate_index_name(field_spec)
name = name.to_s if name
if !@cache[name] || @cache[name] <= now
generate_indexes(field_spec, name, opts)
end
# Reset the cache here in case there are any errors inserting. Best to be safe.
@cache[name] = now + @cache_time
name
end
# Drop a specified index.
#
# @param [String, Array] name the name of the index, or the index spec array used to create it.
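#
# @example Drop an index by name or by spec (an illustrative sketch):
# @posts.drop_index('subject_1')
# @posts.drop_index([['subject', Mongo::ASCENDING]])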
def drop_index(name)
if name.is_a?(Array)
return drop_index(index_name(name))
end
@cache[name.to_s] = nil
@db.drop_index(@name, name)
end
# Drop all indexes.
def drop_indexes
@cache = {}
# Note: calling drop_indexes with no args will drop them all.
@db.drop_index(@name, '*')
end
# Drop the entire collection. USE WITH CAUTION.
def drop
@db.drop_collection(@name)
end
# Atomically update and return a document using MongoDB's findAndModify command. (MongoDB > 1.3.0)
#
# @option opts [Hash] :query ({}) a query selector document for matching
# the desired document.
# @option opts [Hash] :update (nil) the update operation to perform on the
# matched document.
# @option opts [Array, String, OrderedHash] :sort ({}) specify a sort
# option for the query using any
# of the sort options available for Cursor#sort. Sort order is important
# if the query will be matching multiple documents since only the first
# matching document will be updated and returned.
# @option opts [Boolean] :remove (false) If true, removes the returned
# document from the collection.
# @option opts [Boolean] :new (false) If true, returns the updated
# document; otherwise, returns the document prior to update.
# @option opts [Boolean] :upsert (false) If true, creates a new document
# if the query returns no document.
# @option opts [Hash] :fields (nil) A subset of fields to return.
# Specify an inclusion of a field with 1. _id is included by default and must
# be explicitly excluded.
# @option opts [Boolean] :full_response (false) If true, returns the entire
# response object from the server including 'ok' and 'lastErrorObject'.
#
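# @example Atomically increment a counter and return the updated document (an illustrative
# sketch; 'counters' is a hypothetical collection):
# counters.find_and_modify(
# :query => {:_id => 'pageviews'},
# :update => {'$inc' => {:n => 1}},
# :new => true)
#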
# @return [Hash] the matched document.
def find_and_modify(opts={})
full_response = opts.delete(:full_response)
cmd = BSON::OrderedHash.new
cmd[:findandmodify] = @name
cmd.merge!(opts)
cmd[:sort] =
Mongo::Support.format_order_clause(opts[:sort]) if opts[:sort]
full_response ? @db.command(cmd) : @db.command(cmd)['value']
end
# Perform an aggregation using the aggregation framework on the current collection.
# @note Aggregate requires server version >= 2.1.1
# @note Field References: Within an expression, field names must be quoted and prefixed by a dollar sign ($).
#
# @example Define the pipeline as an array of operator hashes:
# coll.aggregate([ {"$project" => {"last_name" => 1, "first_name" => 1 }}, {"$match" => {"last_name" => "Jones"}} ])
#
# @example With server version 2.5.1 or newer, pass a cursor option to retrieve unlimited aggregation results:
# coll.aggregate([ {"$group" => { :_id => "$_id", :count => { "$sum" => "$members" }}} ], :cursor => {} )
#
# @param [Array] pipeline Should be a single array of pipeline operator hashes.
#
# '$project' Reshapes a document stream by including fields, excluding fields, inserting computed fields,
# renaming fields, or creating/populating fields that hold sub-documents.
#
# '$match' Query-like interface for filtering documents out of the aggregation pipeline.
#
# '$limit' Restricts the number of documents that pass through the pipeline.
#
# '$skip' Skips over the specified number of documents and passes the rest along the pipeline.
#
# '$unwind' Peels off elements of an array individually, returning one document for each member.
#
# '$group' Groups documents for calculating aggregate values.
#
# '$sort' Sorts all input documents and returns them to the pipeline in sorted order.
#
# '$out' The name of a collection to which the result set will be saved.
#
# @option opts [:primary, :secondary] :read Read preference indicating which server to perform this operation
# on. If $out is specified and :read is not :primary, the aggregation will be rerouted to the primary with
# a warning. See Collection#find for more details.
# @option opts [String] :comment (nil) a comment to include in profiling logs
# @option opts [Hash] :cursor return a cursor object instead of an Array. Takes an optional batchSize parameter
# to specify the maximum size, in documents, of the first batch returned.
#
# @return [Array] An Array with the aggregate command's results.
#
# @raise [MongoArgumentError] if operators either aren't supplied or aren't in the correct format.
# @raise [Mongo::OperationFailure] if the aggregate command fails.
#
def aggregate(pipeline=nil, opts={})
raise MongoArgumentError, "pipeline must be an array of operators" unless pipeline.class == Array
raise MongoArgumentError, "pipeline operators must be hashes" unless pipeline.all? { |op| op.class == Hash }
selector = BSON::OrderedHash.new
selector['aggregate'] = self.name
selector['pipeline'] = pipeline
result = @db.command(selector, command_options(opts))
unless Mongo::Support.ok?(result)
raise Mongo::OperationFailure, "aggregate failed: #{result['errmsg']}"
end
if result.key?('cursor')
cursor_info = result['cursor']
seed = {
:cursor_id => cursor_info['id'],
:first_batch => cursor_info['firstBatch'],
:pool => @connection.pinned_pool
}
return Cursor.new(self, seed.merge!(opts))
elsif selector['pipeline'].any? { |op| op.key?('$out') || op.key?(:$out) }
return result
end
result['result'] || result
end
# Perform a map-reduce operation on the current collection.
#
# @param [String, BSON::Code] map a map function, written in JavaScript.
# @param [String, BSON::Code] reduce a reduce function, written in JavaScript.
#
# @option opts [Hash] :query ({}) a query selector document, like what's passed to #find, to limit
# the operation to a subset of the collection.
# @option opts [Array] :sort ([]) an array of [key, direction] pairs to sort by. Direction should
# be specified as Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING (or :descending / :desc)
# @option opts [Integer] :limit (nil) if passing a query, number of objects to return from the collection.
# @option opts [String, BSON::Code] :finalize (nil) a javascript function to apply to the result set after the
# map/reduce operation has finished.
# @option opts [String, Hash] :out Location of the result of the map-reduce operation. You can output to a
# collection, output to a collection with an action, or output inline. You may output to a collection
# when performing map reduce operations on the primary members of the set; on secondary members you
# may only use the inline output. See the server mapReduce documentation for available options.
# @option opts [Boolean] :keeptemp (false) if true, the generated collection will be persisted. The default
# is false. Note that this option has no effect in versions of MongoDB > v1.7.6.
# @option opts [Boolean] :verbose (false) if true, provides statistics on job execution time.
# @option opts [Boolean] :raw (false) if true, return the raw result object from the map_reduce command, and not
# the instantiated collection that's returned by default. Note if a collection name isn't returned in the
# map-reduce output (as, for example, when using :out => { :inline => 1 }), then you must specify this option
# or an ArgumentError will be raised.
# @option opts [:primary, :secondary] :read Read preference indicating which server to run this map-reduce
# on. See Collection#find for more details.
# @option opts [String] :comment (nil) a comment to include in profiling logs
#
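# @example Count tag occurrences with inline output (an illustrative sketch; 'posts' and its
# 'tags' array field are hypothetical):
# map = "function() { this.tags.forEach(function(t) { emit(t, 1); }); }"
# reduce = "function(key, values) { return Array.sum(values); }"
# posts.map_reduce(map, reduce, :out => {:inline => 1}, :raw => true)
#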
# @return [Collection, Hash] a Mongo::Collection object or a Hash with the map-reduce command's results.
#
# @raise [ArgumentError] if you specify { :out => { :inline => true }} but don't specify :raw => true.
#
# @see http://www.mongodb.org/display/DOCS/MapReduce Official MongoDB map/reduce documentation.
def map_reduce(map, reduce, opts={})
opts = opts.dup
map = BSON::Code.new(map) unless map.is_a?(BSON::Code)
reduce = BSON::Code.new(reduce) unless reduce.is_a?(BSON::Code)
raw = opts.delete(:raw)
hash = BSON::OrderedHash.new
hash['mapreduce'] = self.name
hash['map'] = map
hash['reduce'] = reduce
hash['out'] = opts.delete(:out)
hash['sort'] = Mongo::Support.format_order_clause(opts.delete(:sort)) if opts.key?(:sort)
result = @db.command(hash, command_options(opts))
unless Mongo::Support.ok?(result)
raise Mongo::OperationFailure, "map-reduce failed: #{result['errmsg']}"
end
if raw
result
elsif result['result']
if result['result'].is_a?(BSON::OrderedHash) &&
result['result'].key?('db') &&
result['result'].key?('collection')
otherdb = @db.connection[result['result']['db']]
otherdb[result['result']['collection']]
else
@db[result["result"]]
end
else
raise ArgumentError, "Could not instantiate collection from result. If you specified " +
"{:out => {:inline => true}}, then you must also specify :raw => true to get the results."
end
end
alias :mapreduce :map_reduce
# Perform a group aggregation.
#
# @param [Hash] opts the options for this group operation. The minimum required are :initial
# and :reduce.
#
# @option opts [Array, String, Symbol] :key (nil) Either the name of a field or a list of fields to group by (optional).
# @option opts [String, BSON::Code] :keyf (nil) A JavaScript function to be used to generate the grouping keys (optional).
# @option opts [String, BSON::Code] :cond ({}) A document specifying a query for filtering the documents over
# which the aggregation is run (optional).
# @option opts [Hash] :initial the initial value of the aggregation counter object (required).
# @option opts [String, BSON::Code] :reduce (nil) a JavaScript aggregation function (required).
# @option opts [String, BSON::Code] :finalize (nil) a JavaScript function that receives and modifies
# each of the resultant grouped objects. Available only when group is run with command
# set to true.
# @option opts [:primary, :secondary] :read Read preference indicating which server to perform this group
# on. See Collection#find for more details.
# @option opts [String] :comment (nil) a comment to include in profiling logs
#
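# @example Sum an amount per region (an illustrative sketch; 'sales' and its fields are hypothetical):
# sales.group(:key => :region,
# :initial => {:total => 0},
# :reduce => "function(doc, prev) { prev.total += doc.amount; }")
#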
# @return [Array] the command response consisting of grouped items.
def group(opts, condition={}, initial={}, reduce=nil, finalize=nil)
opts = opts.dup
if opts.is_a?(Hash)
return new_group(opts)
elsif opts.is_a?(Symbol)
raise MongoArgumentError, "Group takes either an array of fields to group by or a JavaScript function" +
"in the form of a String or BSON::Code."
end
warn "Collection#group no longer takes a list of parameters. This usage is deprecated and will be removed in v2.0." +
"Check out the new API at http://api.mongodb.org/ruby/current/Mongo/Collection.html#group-instance_method"
reduce = BSON::Code.new(reduce) unless reduce.is_a?(BSON::Code)
group_command = {
"group" => {
"ns" => @name,
"$reduce" => reduce,
"cond" => condition,
"initial" => initial
}
}
unless opts.nil?
if opts.is_a? Array
key_type = "key"
key_value = {}
opts.each { |k| key_value[k] = 1 }
else
key_type = "$keyf"
key_value = opts.is_a?(BSON::Code) ? opts : BSON::Code.new(opts)
end
group_command["group"][key_type] = key_value
end
finalize = BSON::Code.new(finalize) if finalize.is_a?(String)
if finalize.is_a?(BSON::Code)
group_command['group']['finalize'] = finalize
end
result = @db.command(group_command)
if Mongo::Support.ok?(result)
result["retval"]
else
raise OperationFailure, "group command failed: #{result['errmsg']}"
end
end
# Scan this entire collection in parallel.
# Returns a list of up to num_cursors cursors that can be iterated concurrently. As long as the collection
# is not modified during scanning, each document appears once in one of the cursors' result sets.
#
# @note Requires server version >= 2.5.5
#
# @param [Integer] num_cursors the number of cursors to return.
# @param [Hash] opts
#
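# @example Iterate the collection with several concurrent cursors (an illustrative sketch):
# @collection.parallel_scan(4).each do |cursor|
# cursor.each { |doc| puts doc }
# end
#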
# @return [Array] An array of up to num_cursors cursors for iterating over the collection.
def parallel_scan(num_cursors, opts={})
cmd = BSON::OrderedHash.new
cmd[:parallelCollectionScan] = self.name
cmd[:numCursors] = num_cursors
result = @db.command(cmd, command_options(opts))
result['cursors'].collect do |cursor_info|
seed = {
:cursor_id => cursor_info['cursor']['id'],
:first_batch => cursor_info['cursor']['firstBatch'],
:pool => @connection.pinned_pool
}
Cursor.new(self, seed.merge!(opts))
end
end
private
def new_group(opts={})
reduce = opts.delete(:reduce)
finalize = opts.delete(:finalize)
cond = opts.delete(:cond) || {}
initial = opts.delete(:initial)
if !(reduce && initial)
raise MongoArgumentError, "Group requires at minimum values for initial and reduce."
end
cmd = {
"group" => {
"ns" => @name,
"$reduce" => reduce.to_bson_code,
"cond" => cond,
"initial" => initial
}
}
if finalize
cmd['group']['finalize'] = finalize.to_bson_code
end
if key = opts.delete(:key)
if key.is_a?(String) || key.is_a?(Symbol)
key = [key]
end
key_value = {}
key.each { |k| key_value[k] = 1 }
cmd["group"]["key"] = key_value
elsif keyf = opts.delete(:keyf)
cmd["group"]["$keyf"] = keyf.to_bson_code
end
result = @db.command(cmd, command_options(opts))
result["retval"]
end
public
# Return a list of distinct values for +key+ across all
# documents in the collection. The key may use dot notation
# to reach into an embedded object.
#
# @param [String, Symbol] key the key to collect distinct values from; dot notation is allowed.
# @param [Hash] query a selector for limiting the result set over which to group.
# @param [Hash] opts the options for this distinct operation.
#
# @option opts [:primary, :secondary] :read Read preference indicating which server to perform this query
# on. See Collection#find for more details.
# @option opts [String] :comment (nil) a comment to include in profiling logs
#
# @example Saving zip codes and ages and returning distinct results.
# @collection.save({:zip => 10010, :name => {:age => 27}})
# @collection.save({:zip => 94108, :name => {:age => 24}})
# @collection.save({:zip => 10010, :name => {:age => 27}})
# @collection.save({:zip => 99701, :name => {:age => 24}})
# @collection.save({:zip => 94108, :name => {:age => 27}})
#
# @collection.distinct(:zip)
# [10010, 94108, 99701]
# @collection.distinct("name.age")
# [27, 24]
#
# # You may also pass a document selector as the second parameter
# # to limit the documents over which distinct is run:
# @collection.distinct("name.age", {"name.age" => {"$gt" => 24}})
# [27]
#
# @return [Array] an array of distinct values.
def distinct(key, query=nil, opts={})
raise MongoArgumentError unless [String, Symbol].include?(key.class)
command = BSON::OrderedHash.new
command[:distinct] = @name
command[:key] = key.to_s
command[:query] = query
@db.command(command, command_options(opts))["values"]
end
# Rename this collection.
#
# Note: If operating in auth mode, the client must be authorized as an admin to
# perform this operation.
#
# @param [String] new_name the new name for this collection
#
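# @example (an illustrative sketch):
# @collection.rename('archived_users')
#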
# @return [String] the name of the new collection.
#
# @raise [Mongo::InvalidNSName] if +new_name+ is an invalid collection name.
def rename(new_name)
case new_name
when Symbol, String
else
raise TypeError, "new_name must be a string or symbol"
end
new_name = new_name.to_s
if new_name.empty? or new_name.include? ".."
raise Mongo::InvalidNSName, "collection names cannot be empty"
end
if new_name.include? "$"
raise Mongo::InvalidNSName, "collection names must not contain '$'"
end
if new_name.match(/^\./) or new_name.match(/\.$/)
raise Mongo::InvalidNSName, "collection names must not start or end with '.'"
end
@db.rename_collection(@name, new_name)
@name = new_name
end
# Get information on the indexes for this collection.
#
# @return [Hash] a hash where the keys are index names.
def index_information
@db.index_information(@name)
end
# Return a hash containing options that apply to this collection.
# For all possible keys and values, see DB#create_collection.
#
# @return [Hash] options that apply to this collection.
def options
@db.collections_info(@name).next_document['options']
end
# Return stats on the collection. Uses MongoDB's collstats command.
#
# @return [Hash]
def stats
@db.command({:collstats => @name})
end
# Get the number of documents in this collection.
#
# @option opts [Hash] :query ({}) A query selector for filtering the documents counted.
# @option opts [Integer] :skip (nil) The number of documents to skip.
# @option opts [Integer] :limit (nil) The maximum number of documents to count.
# @option opts [:primary, :secondary] :read Read preference for this command. See Collection#find for
# more details.
# @option opts [String] :comment (nil) a comment to include in profiling logs
#
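# @example Count documents matching a selector (an illustrative sketch; 'users' is a hypothetical collection):
# users.count(:query => {:active => true})
#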
# @return [Integer]
def count(opts={})
find(opts[:query],
:skip => opts[:skip],
:limit => opts[:limit],
:read => opts[:read],
:comment => opts[:comment]).count(true)
end
alias :size :count
protected
# Provide required command options if they are missing in the command options hash.
#
# @return [Hash] The command options hash
def command_options(opts)
opts[:read] ? opts : opts.merge(:read => @read)
end
def normalize_hint_fields(hint)
case hint
when String
{hint => 1}
when Hash
hint
when nil
nil
else
h = BSON::OrderedHash.new
hint.to_a.each { |k| h[k] = 1 }
h
end
end
private
def send_write(op_type, selector, doc_or_docs, check_keys, opts, collection_name=@name)
write_concern = get_write_concern(opts, self)
if @db.connection.use_write_command?(write_concern)
@command_writer.send_write_command(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name)
else
@operation_writer.send_write_operation(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name)
end
end
def index_name(spec)
field_spec = parse_index_spec(spec)
index_information.each do |index|
return index[0] if index[1]['key'] == field_spec
end
nil
end
def parse_index_spec(spec)
field_spec = BSON::OrderedHash.new
if spec.is_a?(String) || spec.is_a?(Symbol)
field_spec[spec.to_s] = 1
elsif spec.is_a?(Hash)
if RUBY_VERSION < '1.9' && !spec.is_a?(BSON::OrderedHash)
raise MongoArgumentError, "Must use OrderedHash in Ruby < 1.9.0"
end
validate_index_types(spec.values)
field_spec = spec.is_a?(BSON::OrderedHash) ? spec : BSON::OrderedHash.try_convert(spec)
elsif spec.is_a?(Array) && spec.all? {|field| field.is_a?(Array) }
spec.each do |f|
validate_index_types(f[1])
field_spec[f[0].to_s] = f[1]
end
else
raise MongoArgumentError, "Invalid index specification #{spec.inspect}; " +
"should be either a hash (OrderedHash), string, symbol, or an array of arrays."
end
field_spec
end
def validate_index_types(*types)
types.flatten!
types.each do |t|
unless Mongo::INDEX_TYPES.values.include?(t)
raise MongoArgumentError, "Invalid index field #{t.inspect}; " +
"should be one of " + Mongo::INDEX_TYPES.map {|k,v| "Mongo::#{k} (#{v})"}.join(', ')
end
end
end
def generate_indexes(field_spec, name, opts)
selector = {
:name => name,
:key => field_spec
}
selector.merge!(opts)
begin
cmd = BSON::OrderedHash[:createIndexes, @name, :indexes, [selector]]
@db.command(cmd)
rescue Mongo::OperationFailure => ex
if ex.error_code == Mongo::ErrorCode::COMMAND_NOT_FOUND || ex.error_code.nil?
selector[:ns] = "#{@db.name}.#{@name}"
send_write(:insert, nil, selector, false, {:w => 1}, Mongo::DB::SYSTEM_INDEX_COLLECTION)
else
raise Mongo::OperationFailure, "Failed to create index #{selector.inspect} with the following error: " +
"#{ex.message}"
end
end
nil
end
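# Index names are derived from the field spec by joining each field with its type, e.g.
# (an illustrative sketch):
# generate_index_name('subject' => 1, 'created_at' => -1) # => "subject_1_created_at_-1"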
def generate_index_name(spec)
indexes = []
spec.each_pair do |field, type|
indexes.push("#{field}_#{type}")
end
indexes.join("_")
end
def batch_write(op_type, documents, check_keys=true, opts={})
write_concern = get_write_concern(opts, self)
if @db.connection.use_write_command?(write_concern)
return @command_writer.batch_write(op_type, documents, check_keys, opts)
else
return @operation_writer.batch_write(op_type, documents, check_keys, opts)
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/collection_writer.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
class CollectionWriter
include Mongo::Logging
include Mongo::WriteConcern
OPCODE = {
:insert => Mongo::Constants::OP_INSERT,
:update => Mongo::Constants::OP_UPDATE,
:delete => Mongo::Constants::OP_DELETE
}
WRITE_COMMAND_ARG_KEY = {
:insert => :documents,
:update => :updates,
:delete => :deletes
}
def initialize(collection)
@collection = collection
@name = @collection.name
@db = @collection.db
@connection = @db.connection
@logger = @connection.logger
@max_write_batch_size = Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE
end
# Common implementation shared by the new batch write commands (insert, update, delete) and the legacy batch insert.
def batch_write_incremental(op_type, documents, check_keys=true, opts={})
raise Mongo::OperationFailure, "Request contains no documents" if documents.empty?
write_concern = get_write_concern(opts, @collection)
max_message_size, max_append_size, max_serialize_size = batch_write_max_sizes(write_concern)
ordered = opts[:ordered]
continue_on_error = !!opts[:continue_on_error] || ordered == false
collect_on_error = !!opts[:collect_on_error] || ordered == false
error_docs = [] # docs with serialization errors
errors = []
write_concern_errors = []
exchanges = []
serialized_doc = nil
message = BSON::ByteBuffer.new("", max_message_size)
@max_write_batch_size = @collection.db.connection.max_write_batch_size
docs = documents.dup
catch(:error) do
until docs.empty? || (!errors.empty? && !collect_on_error) # process documents a batch at a time
batch_docs = []
batch_message_initialize(message, op_type, continue_on_error, write_concern)
while !docs.empty? && batch_docs.size < @max_write_batch_size
begin
doc = docs.first
doc = doc[:d] if op_type == :insert && !ordered.nil? #check_keys for :update outside of serialize
serialized_doc ||= BSON::BSON_CODER.serialize(doc, check_keys, true, max_serialize_size)
rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex
bulk_message = "Bulk write error - #{ex.message} - examine result for complete information"
ex = BulkWriteError.new(bulk_message, Mongo::ErrorCode::INVALID_BSON,
{:op_type => op_type, :serialize => doc, :ord => docs.first[:ord], :error => ex}) unless ordered.nil?
error_docs << docs.shift
errors << ex
next if collect_on_error
throw(:error) if batch_docs.empty?
break # defer exit and send batch
end
break if message.size + serialized_doc.size > max_append_size
batch_docs << docs.shift
batch_message_append(message, serialized_doc, write_concern)
serialized_doc = nil
end
begin
response = batch_message_send(message, op_type, batch_docs, write_concern, continue_on_error) if batch_docs.size > 0
exchanges << {:op_type => op_type, :batch => batch_docs, :opts => opts, :response => response}
rescue Mongo::WriteConcernError => ex
write_concern_errors << ex
exchanges << {:op_type => op_type, :batch => batch_docs, :opts => opts, :response => ex.result}
rescue Mongo::OperationFailure => ex
errors << ex
exchanges << {:op_type => op_type, :batch => batch_docs, :opts => opts, :response => ex.result}
throw(:error) unless continue_on_error
end
end
end
[error_docs, errors, write_concern_errors, exchanges]
end
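# batch_write_partition (below) sizes batches adaptively: the batch grows by roughly
# 2**(1/10) per successful send and halves on a serialization error, an AIMD-style
# strategy. A hedged illustration of the growth arithmetic:
# (1000 * 1097) >> 10 # => 1071, i.e. ~1.072 * 1000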
def batch_write_partition(op_type, documents, check_keys, opts)
raise Mongo::OperationFailure, "Request contains no documents" if documents.empty?
write_concern = get_write_concern(opts, @collection)
ordered = opts[:ordered]
continue_on_error = !!opts[:continue_on_error] || ordered == false # continue_on_error default false
collect_on_error = !!opts[:collect_on_error] # collect_on_error default false
error_docs = [] # docs with serialization errors
errors = []
write_concern_errors = []
exchanges = []
@max_write_batch_size = @collection.db.connection.max_write_batch_size
@write_batch_size = [documents.size, @max_write_batch_size].min
docs = documents.dup
until docs.empty?
batch = docs.take(@write_batch_size)
begin
batch_to_send = batch #(op_type == :insert && !ordered.nil?) ? batch.collect{|doc|doc[:d]} : batch
if @collection.db.connection.use_write_command?(write_concern) # TODO - polymorphic send_write including legacy insert
response = send_bulk_write_command(op_type, batch_to_send, check_keys, opts)
else
response = send_write_operation(op_type, nil, batch_to_send, check_keys, opts, write_concern)
end
exchanges << {:op_type => op_type, :batch => batch, :opts => opts, :response => response}
docs = docs.drop(batch.size)
@write_batch_size = [(@write_batch_size*1097) >> 10, @write_batch_size+1].max unless docs.empty? # 2**(1/10) multiplicative increase
@write_batch_size = @max_write_batch_size if @write_batch_size > @max_write_batch_size
rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex
if @write_batch_size > 1 # decrease batch size
@write_batch_size = (@write_batch_size+1) >> 1 # 2**(-1) multiplicative decrease
next
end
# error on a single document
bulk_message = "Bulk write error - #{ex.message} - examine result for complete information"
ex = BulkWriteError.new(bulk_message, Mongo::ErrorCode::INVALID_BSON,
{:op_type => op_type, :batch => batch, :ord => batch.first[:ord], :opts => opts, :error => ex}) unless ordered.nil?
error_docs << docs.shift
next if collect_on_error
errors << ex
break unless continue_on_error
rescue Mongo::WriteConcernError => ex
write_concern_errors << ex
exchanges << {:op_type => op_type, :batch => batch, :opts => opts, :response => ex.result}
docs = docs.drop(batch.size)
rescue Mongo::OperationFailure => ex
errors << ex
exchanges << {:op_type => op_type, :batch => batch, :opts => opts, :response => ex.result}
docs = docs.drop(batch.size)
break if !continue_on_error && !collect_on_error
end
end
[error_docs, errors, write_concern_errors, exchanges]
end
alias :batch_write :batch_write_incremental
def send_bulk_write_command(op_type, documents, check_keys, opts, collection_name=@name)
if op_type == :insert
documents = documents.collect{|doc| doc[:d]} if opts.key?(:ordered)
documents.each do |doc|
# TODO - @pk_factory.create_pk(doc)
if check_keys
doc.each_key do |key|
key = key.to_s
raise BSON::InvalidKeyName.new("key #{key} must not start with '$'") if key[0] == ?$
raise BSON::InvalidKeyName.new("key #{key} must not contain '.'") if key.include? ?.
end
end
end
#elsif op_type == :update # TODO - check keys
#elsif op_type == :delete
#else
# raise ArgumentError, "Write operation type must be :insert, :update or :delete"
end
request = BSON::OrderedHash[op_type, collection_name].merge!(
Mongo::CollectionWriter::WRITE_COMMAND_ARG_KEY[op_type] => documents,
:writeConcern => get_write_concern(opts, @collection),
:ordered => opts[:ordered] || !opts[:continue_on_error]
)
@db.command(request)
end
private
def sort_by_first_sym(pairs)
pairs = pairs.collect{|first, rest| [first.to_s, rest]} #stringify_first
pairs = pairs.sort{|x,y| x.first <=> y.first }
pairs.collect{|first, rest| [first.to_sym, rest]} #symbolize_first
end
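# Groups consecutive pairs by their first element, preserving order.
# An illustrative sketch:
# ordered_group_by_first([[:insert, a], [:insert, b], [:update, c]])
# # => [[:insert, [a, b]], [:update, [c]]]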
def ordered_group_by_first(pairs)
pairs.inject([[], nil]) do |memo, pair|
result, previous_value = memo
current_value = pair.first
result << [current_value, []] if previous_value != current_value
result.last.last << pair.last
[result, current_value]
end.first
end
end
class CollectionOperationWriter < CollectionWriter
def initialize(collection)
super(collection)
end
def send_write_operation(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name=@name)
message = BSON::ByteBuffer.new("", @connection.max_message_size)
message.put_int((op_type == :insert && !!opts[:continue_on_error]) ? 1 : 0)
BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{collection_name}")
if op_type == :update
update_options = 0
update_options += 1 if opts[:upsert]
update_options += 2 if opts[:multi]
message.put_int(update_options)
elsif op_type == :delete
delete_options = 0
delete_options += 1 if opts[:limit] && opts[:limit] != 0
message.put_int(delete_options)
end
message.put_binary(BSON::BSON_CODER.serialize(selector, false, true, @connection.max_bson_size).to_s) if selector
[doc_or_docs].flatten(1).compact.each do |document|
message.put_binary(BSON::BSON_CODER.serialize(document, check_keys, true, @connection.max_bson_size).to_s)
if message.size > @connection.max_message_size
raise BSON::InvalidDocument, "Message is too large. This message is limited to #{@connection.max_message_size} bytes."
end
end
instrument(op_type, :database => @db.name, :collection => collection_name, :selector => selector, :documents => doc_or_docs) do
op_code = OPCODE[op_type]
if Mongo::WriteConcern.gle?(write_concern)
@connection.send_message_with_gle(op_code, message, @db.name, nil, write_concern)
else
@connection.send_message(op_code, message)
end
end
end
def bulk_execute(ops, options, opts = {})
write_concern = get_write_concern(opts, @collection)
errors = []
write_concern_errors = []
exchanges = []
ops.each do |op_type, doc|
doc = {:d => @collection.pk_factory.create_pk(doc[:d]), :ord => doc[:ord]} if op_type == :insert
doc_opts = doc.merge(opts)
d = doc_opts.delete(:d)
q = doc_opts.delete(:q)
u = doc_opts.delete(:u)
begin # use single (NOT batch) inserts, since an error carries no index into a batch
response = @collection.operation_writer.send_write_operation(op_type, q, d || u, check_keys = false, doc_opts, write_concern)
exchanges << {:op_type => op_type, :batch => [doc], :opts => opts, :response => response}
rescue BSON::InvalidDocument, BSON::InvalidKeyName, BSON::InvalidStringEncoding => ex
bulk_message = "Bulk write error - #{ex.message} - examine result for complete information"
ex = BulkWriteError.new(bulk_message, Mongo::ErrorCode::INVALID_BSON,
{:op_type => op_type, :serialize => doc, :ord => doc[:ord], :error => ex})
errors << ex
break if options[:ordered]
rescue Mongo::WriteConcernError => ex
write_concern_errors << ex
exchanges << {:op_type => op_type, :batch => [doc], :opts => opts, :response => ex.result}
rescue Mongo::OperationFailure => ex
errors << ex
exchanges << {:op_type => op_type, :batch => [doc], :opts => opts, :response => ex.result}
break if options[:ordered] && ex.result["err"] != "norepl"
end
end
[errors, write_concern_errors, exchanges]
end
private
def batch_message_initialize(message, op_type, continue_on_error, write_concern)
message.clear!.clear
message.put_int(continue_on_error ? 1 : 0)
BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@name}")
end
def batch_message_append(message, serialized_doc, write_concern)
message.put_binary(serialized_doc.to_s)
end
def batch_message_send(message, op_type, batch_docs, write_concern, continue_on_error)
instrument(:insert, :database => @db.name, :collection => @name, :documents => batch_docs) do
if Mongo::WriteConcern.gle?(write_concern)
@connection.send_message_with_gle(Mongo::Constants::OP_INSERT, message, @db.name, nil, write_concern)
else
@connection.send_message(Mongo::Constants::OP_INSERT, message)
end
end
end
def batch_write_max_sizes(write_concern)
[@connection.max_message_size, @connection.max_message_size, @connection.max_bson_size]
end
end
class CollectionCommandWriter < CollectionWriter
def initialize(collection)
super(collection)
end
def send_write_command(op_type, selector, doc_or_docs, check_keys, opts, write_concern, collection_name=@name)
if op_type == :insert
argument = [doc_or_docs].flatten(1).compact
elsif op_type == :update
argument = [{:q => selector, :u => doc_or_docs, :multi => !!opts[:multi]}]
argument.first.merge!(:upsert => opts[:upsert]) if opts[:upsert]
elsif op_type == :delete
argument = [{:q => selector, :limit => (opts[:limit] || 0)}]
else
raise ArgumentError, "Write operation type must be :insert, :update or :delete"
end
request = BSON::OrderedHash[op_type, collection_name, WRITE_COMMAND_ARG_KEY[op_type], argument]
request.merge!(:writeConcern => write_concern, :ordered => !opts[:continue_on_error])
request.merge!(opts)
instrument(op_type, :database => @db.name, :collection => collection_name, :selector => selector, :documents => doc_or_docs) do
@db.command(request)
end
end
def bulk_execute(ops, options, opts = {})
errors = []
write_concern_errors = []
exchanges = []
ops = (options[:ordered] == false) ? sort_by_first_sym(ops) : ops # sort by write-type
ordered_group_by_first(ops).each do |op_type, documents|
documents.collect! {|doc| {:d => @collection.pk_factory.create_pk(doc[:d]), :ord => doc[:ord]} } if op_type == :insert
error_docs, batch_errors, batch_write_concern_errors, batch_exchanges =
batch_write(op_type, documents, check_keys = false, opts.merge(:ordered => options[:ordered]))
errors += batch_errors
write_concern_errors += batch_write_concern_errors
exchanges += batch_exchanges
break if options[:ordered] && !batch_errors.empty?
end
[errors, write_concern_errors, exchanges]
end
private
def batch_message_initialize(message, op_type, continue_on_error, write_concern)
message.clear!.clear
@bson_empty ||= BSON::BSON_CODER.serialize({})
message.put_binary(@bson_empty.to_s)
message.unfinish!.array!(WRITE_COMMAND_ARG_KEY[op_type])
end
def batch_message_append(message, serialized_doc, write_concern)
message.push_doc!(serialized_doc)
end
def batch_message_send(message, op_type, batch_docs, write_concern, continue_on_error)
message.finish!
request = BSON::OrderedHash[op_type, @name, :bson, message]
request.merge!(:writeConcern => write_concern, :ordered => !continue_on_error)
instrument(:insert, :database => @db.name, :collection => @name, :documents => batch_docs) do
@db.command(request)
end
end
def batch_write_max_sizes(write_concern)
[MongoClient::COMMAND_HEADROOM, MongoClient::APPEND_HEADROOM, MongoClient::SERIALIZE_HEADROOM].collect{|h| @connection.max_bson_size + h}
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'mongo/connection/socket'
require 'mongo/connection/node'
require 'mongo/connection/pool'
require 'mongo/connection/pool_manager'
require 'mongo/connection/sharding_pool_manager'
ruby-mongo-1.10.0/lib/mongo/connection/node.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
class Node
attr_accessor :host, :port, :address, :client, :socket, :last_state
def initialize(client, host_port)
@client = client
@manager = @client.local_manager
@host, @port = Support.normalize_seeds(host_port)
@address = "#{@host}:#{@port}"
@config = nil
@socket = nil
@node_mutex = Mutex.new
end
def eql?(other)
(other.is_a?(Node) && @address == other.address)
end
alias :== :eql?
def =~(other)
if other.is_a?(String)
h, p = Support.normalize_seeds(other)
h == @host && p == @port
else
false
end
end
def host_string
address
end
def config
connect unless connected?
set_config unless @config || !connected?
@config
end
def inspect
"<Mongo::Node:0x#{self.object_id.to_s(16)} @host=#{@host} @port=#{@port}>"
end
# Create a connection to the provided node,
# and, if successful, return the socket. Otherwise,
# return nil.
def connect
@node_mutex.synchronize do
begin
@socket = @client.socket_class.new(@host, @port,
@client.op_timeout,
@client.connect_timeout,
@client.socket_opts)
rescue ConnectionTimeoutError, OperationTimeout, ConnectionFailure, OperationFailure,
SocketError, SystemCallError, IOError => ex
@client.log(:debug, "Failed connection to #{host_string} with #{ex.class}, #{ex.message}.")
close
end
end
end
# This should only be called within a mutex
def close
if @socket && !@socket.closed?
@socket.close
end
@socket = nil
@config = nil
end
def connected?
@socket != nil && !@socket.closed?
end
def active?
begin
result = @client['admin'].command({:ping => 1}, :socket => @socket)
rescue OperationFailure, SocketError, SystemCallError, IOError
return nil
end
result['ok'] == 1
end
# Get the configuration for the provided node as returned by the
# ismaster command. Additionally, check that the replica set name
# matches with the name provided.
def set_config
@node_mutex.synchronize do
begin
if @config
@last_state = @config['ismaster'] ? :primary : :other
end
if @client.connect_timeout
Timeout::timeout(@client.connect_timeout, OperationTimeout) do
@config = @client['admin'].command({:ismaster => 1}, :socket => @socket)
end
else
@config = @client['admin'].command({:ismaster => 1}, :socket => @socket)
end
update_max_sizes
if @config['msg']
@client.log(:warn, "#{config['msg']}")
end
unless @client.mongos?
check_set_membership(@config)
check_set_name(@config)
end
rescue ConnectionFailure, OperationFailure, OperationTimeout, SocketError, SystemCallError, IOError => ex
@client.log(:warn, "Attempted connection to node #{host_string} raised " +
"#{ex.class}: #{ex.message}")
# Socket may already be nil from issuing command
close
end
end
end
# Return a list of replica set nodes from the config.
# Note: this excludes arbiters.
def node_list
nodes = []
nodes += config['hosts'] if config['hosts']
nodes += config['passives'] if config['passives']
nodes += ["#{@host}:#{@port}"] if @client.mongos?
nodes
end
def arbiters
return [] unless config['arbiters']
config['arbiters'].map do |arbiter|
Support.normalize_seeds(arbiter)
end
end
def primary?
config['ismaster'] == true || config['ismaster'] == 1
end
def secondary?
config['secondary'] == true || config['secondary'] == 1
end
def tags
config['tags'] || {}
end
def host_port
[@host, @port]
end
def hash
address.hash
end
def healthy?
connected? && config
end
def max_bson_size
@max_bson_size || DEFAULT_MAX_BSON_SIZE
end
def max_message_size
@max_message_size || max_bson_size * MESSAGE_SIZE_FACTOR
end
def max_wire_version
@max_wire_version || 0
end
def min_wire_version
@min_wire_version || 0
end
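# True if +feature+ (a wire protocol version number) lies within this node's
# supported range; e.g. (a hedged illustration) wire_version_feature?(2) checks
# support for the write commands introduced at wire version 2.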
def wire_version_feature?(feature)
min_wire_version <= feature && feature <= max_wire_version
end
def max_write_batch_size
@max_write_batch_size || Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE
end
protected
# Ensure that this node is a healthy member of a replica set.
def check_set_membership(config)
if !config.has_key?('hosts')
message = "Will not connect to #{host_string} because it's not a member " +
"of a replica set."
raise ConnectionFailure, message
elsif config['hosts'].length == 1 && !config['ismaster'] &&
!config['secondary']
message = "Attempting to connect to an unhealthy, single-node replica set."
raise ConnectionFailure, message
end
end
# Ensure that this node is part of a replica set of the expected name.
def check_set_name(config)
if @client.replica_set_name
if !config['setName']
@client.log(:warn, "Could not verify replica set name for member #{host_string} " +
"because ismaster does not return name in this version of MongoDB")
elsif @client.replica_set_name != config['setName']
message = "Attempting to connect to replica set '#{config['setName']}' on member #{host_string} " +
"but expected '#{@client.replica_set_name}'"
raise ReplicaSetConnectionError, message
end
end
end
private
def update_max_sizes
@max_bson_size = config['maxBsonObjectSize'] || DEFAULT_MAX_BSON_SIZE
@max_message_size = config['maxMessageSizeBytes'] || @max_bson_size * MESSAGE_SIZE_FACTOR
@max_wire_version = config['maxWireVersion'] || 0
@min_wire_version = config['minWireVersion'] || 0
@max_write_batch_size = config['maxWriteBatchSize'] || Mongo::MongoClient::DEFAULT_MAX_WRITE_BATCH_SIZE
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection/pool.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
class Pool
PING_ATTEMPTS = 6
MAX_PING_TIME = 1_000_000
PRUNE_INTERVAL = 10_000
attr_accessor :host,
:port,
:address,
:size,
:timeout,
:checked_out,
:client,
:node
# Create a new pool of connections.
def initialize(client, host, port, opts={})
@client = client
@host, @port = host, port
# A Mongo::Node object.
@node = opts[:node]
# The string address
@address = "#{@host}:#{@port}"
# Pool size and timeout.
@size = opts.fetch(:size, 20)
@timeout = opts.fetch(:timeout, 30)
# Mutex for synchronizing pool access
@connection_mutex = Mutex.new
# Mutex for synchronizing pings
@ping_mutex = Mutex.new
# Condition variable for signal and wait
@queue = ConditionVariable.new
@sockets = []
@checked_out = []
@ping_time = nil
@last_ping = nil
@closed = false
@thread_ids_to_sockets = {}
@checkout_counter = 0
end
# Close this pool.
#
# @option opts [Boolean] :soft (false) If true,
# close only those sockets that are not checked out.
def close(opts={})
@connection_mutex.synchronize do
if opts[:soft] && !@checked_out.empty?
@closing = true
close_sockets(@sockets - @checked_out)
else
close_sockets(@sockets)
@closed = true
end
@node.close if @node
end
true
end
def tags
@node.tags
end
def healthy?
close if @sockets.all?(&:closed?)
!closed? && node.healthy?
end
def closed?
@closed
end
def up?
!@closed
end
def inspect
"#<Mongo::Pool:0x#{self.object_id.to_s(16)} @host=#{@host} @port=#{@port}>"
end
def host_string
"#{@host}:#{@port}"
end
def host_port
[@host, @port]
end
# Refresh ping time only if we haven't
# checked within the last five minutes.
def ping_time
@ping_mutex.synchronize do
if !@last_ping || (Time.now - @last_ping) > 300
@ping_time = refresh_ping_time
@last_ping = Time.now
end
end
@ping_time
end
# Return the time it takes on average
# to do a round-trip against this node.
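# The fastest and slowest of the PING_ATTEMPTS samples are discarded and the
# remainder averaged (a trimmed mean); e.g. (a hedged illustration) samples of
# [1, 2, 3, 4, 5, 6] ms yield ceil((2+3+4+5)/4) => 4.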
def refresh_ping_time
trials = []
PING_ATTEMPTS.times do
t1 = Time.now
if !self.ping
return MAX_PING_TIME
end
trials << (Time.now - t1) * 1000
end
trials.sort!
# Delete shortest and longest times
trials.delete_at(trials.length-1)
trials.delete_at(0)
total = 0.0
trials.each { |t| total += t }
(total / trials.length).ceil
end
def ping
begin
return self.client['admin'].command({:ping => 1}, :socket => @node.socket, :timeout => MAX_PING_TIME)
rescue ConnectionFailure, OperationFailure, SocketError, SystemCallError, IOError
return false
end
end
# Return a socket to the pool.
def checkin(socket)
@connection_mutex.synchronize do
if @checked_out.delete(socket)
@queue.broadcast
else
return false
end
end
true
end
# Adds a new socket to the pool and checks it out.
#
# This method is called exclusively from #checkout;
# therefore, it runs within a mutex.
def checkout_new_socket
begin
socket = @client.socket_class.new(@host, @port, @client.op_timeout,
@client.connect_timeout,
@client.socket_opts)
socket.pool = self
rescue => ex
socket.close if socket
@node.close if @node
raise ConnectionFailure, "Failed to connect to host #{@host} and port #{@port}: #{ex}"
end
# If any saved authentications exist, we want to apply those
# when creating new sockets and process logouts.
check_auths(socket)
@sockets << socket
@checked_out << socket
@thread_ids_to_sockets[Thread.current.object_id] = socket
socket
end
# If a user calls DB#authenticate, and several sockets exist,
# then we need a way to apply the authentication on each socket.
# So we store the apply_authentication method, and this will be
# applied right before the next use of each socket.
#
# @deprecated This method has been replaced by Pool#check_auths (private)
# and it isn't necessary to ever invoke this method directly.
def authenticate_existing
@connection_mutex.synchronize do
@sockets.each do |socket|
check_auths(socket)
end
end
end
# Store the logout op for each existing socket to be applied before
# the next use of each socket.
#
# @deprecated This method has been replaced by Pool#check_auths (private)
# and it isn't necessary to ever invoke this method directly.
def logout_existing(database)
@connection_mutex.synchronize do
@sockets.each do |socket|
check_auths(socket)
end
end
end
# Checks out the first available socket from the pool.
#
# If the pid has changed, remove the socket and check out
# new one.
#
# This method is called exclusively from #checkout;
# therefore, it runs within a mutex.
def checkout_existing_socket(socket=nil)
if !socket
available = @sockets - @checked_out
socket = available[rand(available.length)]
end
if socket.pid != Process.pid
@sockets.delete(socket)
if socket
socket.close unless socket.closed?
end
checkout_new_socket
else
@checked_out << socket
@thread_ids_to_sockets[Thread.current.object_id] = socket
socket
end
end
def prune_threads
live_threads = Thread.list.map(&:object_id)
@thread_ids_to_sockets.reject! do |key, value|
!live_threads.include?(key)
end
end
def check_prune
if @checkout_counter > PRUNE_INTERVAL
@checkout_counter = 0
prune_threads
else
@checkout_counter += 1
end
end
# Check out an existing socket or create a new socket if the maximum
# pool size has not been exceeded. Otherwise, wait for the next
# available socket.
def checkout
@client.connect if !@client.connected?
start_time = Time.now
loop do
if (Time.now - start_time) > @timeout
raise ConnectionTimeoutError, "could not obtain connection within " +
"#{@timeout} seconds. The max pool size is currently #{@size}; " +
"consider increasing the pool size or timeout."
end
@connection_mutex.synchronize do
check_prune
socket = nil
if socket_for_thread = @thread_ids_to_sockets[Thread.current.object_id]
if !@checked_out.include?(socket_for_thread)
socket = checkout_existing_socket(socket_for_thread)
end
else
if @sockets.size < @size
socket = checkout_new_socket
elsif @checked_out.size < @sockets.size
socket = checkout_existing_socket
end
end
if socket
check_auths(socket)
if socket.closed?
@checked_out.delete(socket)
@sockets.delete(socket)
@thread_ids_to_sockets.delete(Thread.current.object_id)
socket = checkout_new_socket
end
return socket
else
# Otherwise, wait
@queue.wait(@connection_mutex)
end
end
end
end
private
# Helper method to handle keeping track of auths/logouts for sockets.
#
# @param socket [Socket] The socket instance to be checked.
#
# @return [Socket] The authenticated socket instance.
def check_auths(socket)
# find and handle logouts
(socket.auths - @client.auths).each do |auth|
@client.issue_logout(auth[:source], :socket => socket)
socket.auths.delete(auth)
end
# find and handle new auths
(@client.auths - socket.auths).each do |auth|
@client.issue_authentication(auth, :socket => socket)
socket.auths.add(auth)
end
socket
end
def close_sockets(sockets)
sockets.each do |socket|
@sockets.delete(socket)
begin
socket.close unless socket.closed?
rescue IOError => ex
warn "IOError when attempting to close socket connected to #{@host}:#{@port}: #{ex.inspect}"
end
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection/pool_manager.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
class PoolManager
include ThreadLocalVariableManager
attr_reader :client,
:primary,
:primary_pool,
:seeds,
:max_bson_size,
:max_message_size,
:max_wire_version,
:min_wire_version
# Create a new set of connection pools.
#
# The pool manager will by default use the original seed list passed
# to the connection objects, accessible via connection.seeds. In addition,
# the user may pass an additional list of seeds nodes discovered in real
# time. The union of these lists will be used when attempting to connect,
# with the newly-discovered nodes being used first.
def initialize(client, seeds=[])
@client = client
@seeds = seeds
@pools = Set.new
@primary = nil
@primary_pool = nil
@secondaries = Set.new
@secondary_pools = []
@hosts = Set.new
@members = Set.new
@refresh_required = false
@max_bson_size = DEFAULT_MAX_BSON_SIZE
@max_message_size = @max_bson_size * MESSAGE_SIZE_FACTOR
@max_wire_version = 0
@min_wire_version = 0
@connect_mutex = Mutex.new
thread_local[:locks][:connecting_manager] = false
end
def inspect
"<Mongo::PoolManager:0x#{self.object_id.to_s(16)} @seeds=#{@seeds}>"
end
def connect
@connect_mutex.synchronize do
begin
thread_local[:locks][:connecting_manager] = true
@refresh_required = false
disconnect_old_members
connect_to_members
initialize_pools(@members)
update_max_sizes
@seeds = discovered_seeds
ensure
thread_local[:locks][:connecting_manager] = false
end
end
end
def refresh!(additional_seeds)
@seeds |= additional_seeds
connect
end
# We're healthy if all members are pingable and if the view
# of the replica set returned by isMaster is equivalent
# to our view. If any of these isn't the case,
# set @refresh_required to true, and return.
def check_connection_health
return if thread_local[:locks][:connecting_manager]
members = copy_members
begin
seed = get_valid_seed_node
rescue ConnectionFailure
@refresh_required = true
return
end
unless current_config = seed.config
@refresh_required = true
seed.close
return
end
if current_config['hosts'].length != members.length
@refresh_required = true
seed.close
return
end
current_config['hosts'].each do |host|
member = members.detect do |m|
m.address == host
end
if member && validate_existing_member(current_config, member)
next
else
@refresh_required = true
seed.close
return
end
end
seed.close
end
# The replica set connection should initiate a full refresh.
def refresh_required?
@refresh_required
end
def closed?
pools.all? { |pool| pool.closed? }
end
def close(opts={})
begin
pools.each { |pool| pool.close(opts) }
rescue ConnectionFailure
end
end
def read
read_pool.host_port
end
def hosts
@connect_mutex.synchronize do
@hosts.nil? ? nil : @hosts.clone
end
end
def pools
@connect_mutex.synchronize do
@pools.nil? ? nil : @pools.clone
end
end
def secondaries
@connect_mutex.synchronize do
@secondaries.nil? ? nil : @secondaries.clone
end
end
def secondary_pools
@connect_mutex.synchronize do
@secondary_pools.nil? ? nil : @secondary_pools.clone
end
end
def arbiters
@connect_mutex.synchronize do
@arbiters.nil? ? nil : @arbiters.clone
end
end
def state_snapshot
@connect_mutex.synchronize do
{ :pools => @pools.nil? ? nil : @pools.clone,
:secondaries => @secondaries.nil? ? nil : @secondaries.clone,
:secondary_pools => @secondary_pools.nil? ? nil : @secondary_pools.clone,
:hosts => @hosts.nil? ? nil : @hosts.clone,
:arbiters => @arbiters.nil? ? nil : @arbiters.clone
}
end
end
private
def update_max_sizes
unless @members.size == 0
@max_bson_size = @members.map(&:max_bson_size).min
@max_message_size = @members.map(&:max_message_size).min
@max_wire_version = @members.map(&:max_wire_version).min
@min_wire_version = @members.map(&:min_wire_version).max
end
end
def validate_existing_member(current_config, member)
if current_config['ismaster'] && member.last_state != :primary
return false
elsif member.last_state != :other
return false
end
return true
end
# For any existing members, close and remove any that are unhealthy or already closed.
def disconnect_old_members
@pools.reject! {|pool| !pool.healthy? }
@members.reject! {|node| !node.healthy? }
end
# Connect to each member of the replica set
# as reported by the given seed node.
def connect_to_members
seed = get_valid_seed_node
seed.node_list.each do |host|
if existing = @members.detect {|node| node =~ host }
if existing.healthy?
# Refresh this node's configuration
existing.set_config
# If we are unhealthy after refreshing our config, drop from the set.
if !existing.healthy?
@members.delete(existing)
else
next
end
else
existing.close
@members.delete(existing)
end
end
node = Mongo::Node.new(self.client, host)
node.connect
@members << node if node.healthy?
end
seed.close
if @members.empty?
raise ConnectionFailure, "Failed to connect to any given member."
end
end
# Initialize the connection pools for the primary and secondary nodes.
def initialize_pools(members)
@primary_pool = nil
@primary = nil
@secondaries.clear
@secondary_pools.clear
@hosts.clear
members.each do |member|
member.last_state = nil
@hosts << member.host_string
if member.primary?
assign_primary(member)
elsif member.secondary?
# A member may be neither primary nor secondary (e.g. an arbiter
# or a recovering node); such members are not assigned to a pool.
assign_secondary(member)
end
end
@arbiters = members.first.arbiters
end
def assign_primary(member)
member.last_state = :primary
@primary = member.host_port
if existing = @pools.detect {|pool| pool.node == member }
@primary_pool = existing
else
@primary_pool = Pool.new(self.client, member.host, member.port,
:size => self.client.pool_size,
:timeout => self.client.pool_timeout,
:node => member
)
@pools << @primary_pool
end
end
def assign_secondary(member)
member.last_state = :secondary
@secondaries << member.host_port
if existing = @pools.detect {|pool| pool.node == member }
@secondary_pools << existing
else
pool = Pool.new(self.client, member.host, member.port,
:size => self.client.pool_size,
:timeout => self.client.pool_timeout,
:node => member
)
@secondary_pools << pool
@pools << pool
end
end
# Iterate through the list of provided seed
# nodes until we've gotten a response from the
# replica set we're trying to connect to.
#
# If we don't get a response, raise an exception.
def get_valid_seed_node
@seeds.each do |seed|
node = Mongo::Node.new(self.client, seed)
node.connect
return node if node.healthy?
end
raise ConnectionFailure, "Cannot connect to a replica set using seeds " +
"#{@seeds.map {|s| "#{s[0]}:#{s[1]}" }.join(', ')}"
end
def discovered_seeds
@members.map(&:host_port)
end
def copy_members
members = Set.new
@connect_mutex.synchronize do
@members.each do |m|
members << m.dup
end
end
members
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection/sharding_pool_manager.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
class ShardingPoolManager < PoolManager
def inspect
"<Mongo::ShardingPoolManager:0x#{self.object_id.to_s(16)} @seeds=#{@seeds}>"
end
# "Best" should be the member with the fastest ping time
# but connect/connect_to_members reinitializes @members
def best(members)
Array(members.first)
end
def connect
@connect_mutex.synchronize do
begin
thread_local[:locks][:connecting_manager] = true
@refresh_required = false
disconnect_old_members
connect_to_members
initialize_pools best(@members)
update_max_sizes
@seeds = discovered_seeds
ensure
thread_local[:locks][:connecting_manager] = false
end
end
end
# Checks that each node is healthy (via check_is_master) and that each
# node is in fact a mongos. If either criterion is not met, a refresh
# is flagged and close() is called on the node.
#
# @return [Boolean] indicating if a refresh is required.
def check_connection_health
@refresh_required = false
@members.each do |member|
begin
config = @client.check_is_master([member.host, member.port])
unless config && config.has_key?('msg')
@refresh_required = true
member.close
end
rescue OperationTimeout
@refresh_required = true
member.close
end
break if @refresh_required
end
@refresh_required
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection/socket.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'mongo/connection/socket/socket_util.rb'
require 'mongo/connection/socket/ssl_socket.rb'
require 'mongo/connection/socket/tcp_socket.rb'
require 'mongo/connection/socket/unix_socket.rb'
ruby-mongo-1.10.0/lib/mongo/connection/socket/socket_util.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'socket'
require 'timeout'
module SocketUtil
attr_accessor :pool, :pid, :auths
def checkout
@pool.checkout if @pool
end
def checkin
@pool.checkin(self) if @pool
end
def close
@socket.close unless closed?
end
def closed?
@socket.closed?
end
end
ruby-mongo-1.10.0/lib/mongo/connection/socket/ssl_socket.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'openssl'
module Mongo
# A basic wrapper over Ruby's SSLSocket that initiates
# a TCP connection over SSL and then provides a basic interface
# mirroring Ruby's TCPSocket, viz., TCPSocket#send and TCPSocket#read.
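#
# A hypothetical construction sketch (the host, port, timeouts and
# file paths below are illustrative, not part of this codebase):
#
#   socket = Mongo::SSLSocket.new('db.example.com', 27017, 30, 5,
#     :cert    => '/path/to/client.pem',   # client certificate (assumed path)
#     :key     => '/path/to/client.key',   # private key (assumed path)
#     :verify  => true,
#     :ca_cert => '/path/to/ca.pem')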
class SSLSocket
include SocketUtil
def initialize(host, port, op_timeout=nil, connect_timeout=nil, opts={})
@op_timeout = op_timeout
@connect_timeout = connect_timeout
@pid = Process.pid
@auths = Set.new
@tcp_socket = ::TCPSocket.new(host, port)
@tcp_socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
@context = OpenSSL::SSL::SSLContext.new
if opts[:cert]
@context.cert = OpenSSL::X509::Certificate.new(File.open(opts[:cert]))
end
if opts[:key]
if opts[:key_pass_phrase]
@context.key = OpenSSL::PKey::RSA.new(File.open(opts[:key]), opts[:key_pass_phrase])
else
@context.key = OpenSSL::PKey::RSA.new(File.open(opts[:key]))
end
end
if opts[:verify]
@context.ca_file = opts[:ca_cert]
@context.verify_mode = OpenSSL::SSL::VERIFY_PEER
end
begin
@socket = OpenSSL::SSL::SSLSocket.new(@tcp_socket, @context)
@socket.sync_close = true
connect
rescue OpenSSL::SSL::SSLError
raise ConnectionFailure, "SSL handshake failed. MongoDB may " +
"not be configured with SSL support."
end
if opts[:verify]
unless OpenSSL::SSL.verify_certificate_identity(@socket.peer_cert, host)
raise ConnectionFailure, "SSL handshake failed. Hostname mismatch."
end
end
self
end
def connect
if @connect_timeout
Timeout::timeout(@connect_timeout, ConnectionTimeoutError) do
@socket.connect
end
else
@socket.connect
end
end
def send(data)
@socket.syswrite(data)
end
def read(length, buffer)
if @op_timeout
Timeout::timeout(@op_timeout, OperationTimeout) do
@socket.sysread(length, buffer)
end
else
@socket.sysread(length, buffer)
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection/socket/tcp_socket.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Wrapper class for Socket
#
# Emulates TCPSocket with operation and connection timeout
# sans Timeout::timeout
#
class TCPSocket
include SocketUtil
def initialize(host, port, op_timeout=nil, connect_timeout=nil, opts={})
@op_timeout = op_timeout
@connect_timeout = connect_timeout
@pid = Process.pid
@auths = Set.new
@socket = handle_connect(host, port)
end
def handle_connect(host, port)
error = nil
# Following python's lead (see PYTHON-356)
family = host == 'localhost' ? Socket::AF_INET : Socket::AF_UNSPEC
addr_info = Socket.getaddrinfo(host, nil, family, Socket::SOCK_STREAM)
addr_info.each do |info|
begin
sock = Socket.new(info[4], Socket::SOCK_STREAM, 0)
sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
socket_address = Socket.pack_sockaddr_in(port, info[3])
connect(sock, socket_address)
return sock
rescue IOError, SystemCallError => e
error = e
sock.close if sock
end
end
raise error
end
def connect(socket, socket_address)
if @connect_timeout
Timeout::timeout(@connect_timeout, ConnectionTimeoutError) do
socket.connect(socket_address)
end
else
socket.connect(socket_address)
end
end
def send(data)
@socket.write(data)
end
def read(maxlen, buffer)
# Block on data to read for @op_timeout seconds
begin
ready = IO.select([@socket], nil, [@socket], @op_timeout)
unless ready
raise OperationTimeout
end
rescue IOError
raise ConnectionFailure
end
# Read data from socket
begin
@socket.sysread(maxlen, buffer)
rescue SystemCallError, IOError => ex
raise ConnectionFailure, ex
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/connection/socket/unix_socket.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
attr_accessor :auths
# Wrapper class for Socket
#
# Emulates UNIXSocket with operation and connection timeout
# sans Timeout::timeout
#
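# A hypothetical construction sketch (the socket path is illustrative):
#
#   socket = Mongo::UNIXSocket.new('/tmp/mongodb-27017.sock')
#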
class UNIXSocket < TCPSocket
def initialize(socket_path, port=:socket, op_timeout=nil, connect_timeout=nil, opts={})
@op_timeout = op_timeout
@connect_timeout = connect_timeout
@pid = Process.pid
@auths = Set.new
@address = socket_path
@port = :socket # purposely override input
@socket_address = Socket.pack_sockaddr_un(@address)
@socket = Socket.new(Socket::AF_UNIX, Socket::SOCK_STREAM, 0)
connect
end
end
end
ruby-mongo-1.10.0/lib/mongo/cursor.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# A cursor over query results. Returned objects are hashes.
class Cursor
include Enumerable
include Mongo::Constants
include Mongo::Conversions
include Mongo::Logging
include Mongo::ReadPreference
attr_reader :collection, :selector, :fields,
:order, :hint, :snapshot, :timeout,
:full_collection_name, :transformer,
:options, :cursor_id, :show_disk_loc,
:comment, :compile_regex, :read, :tag_sets,
:acceptable_latency
# Create a new cursor.
#
# Note: cursors are created when executing queries using [Collection#find] and other
# similar methods. Application developers shouldn't have to create cursors manually.
#
# @return [Cursor]
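#
# A hypothetical sketch of obtaining a cursor via Collection#find
# (the 'users' collection and field names are illustrative):
#
# @example
#   cursor = users.find({'active' => true}, :fields => ['name'])
#   cursor.each { |doc| puts doc['name'] }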
def initialize(collection, opts={})
opts = opts.dup
@cursor_id = opts.delete(:cursor_id)
@db = collection.db
@collection = collection
@connection = @db.connection
@logger = @connection.logger
# Query selector
@selector = opts.delete(:selector) || {}
# Query pre-serialized bson to append
@bson = @selector.delete(:bson)
# Special operators that form part of $query
@order = opts.delete(:order)
@explain = opts.delete(:explain)
@hint = opts.delete(:hint)
@snapshot = opts.delete(:snapshot)
@max_scan = opts.delete(:max_scan)
@return_key = opts.delete(:return_key)
@show_disk_loc = opts.delete(:show_disk_loc)
@comment = opts.delete(:comment)
@compile_regex = opts.key?(:compile_regex) ? opts.delete(:compile_regex) : true
# Wire-protocol settings
@fields = convert_fields_for_query(opts.delete(:fields))
@skip = opts.delete(:skip) || 0
@limit = opts.delete(:limit) || 0
@tailable = opts.delete(:tailable)
@timeout = opts.key?(:timeout) ? opts.delete(:timeout) : true
@options = 0
# Use this socket for the query
@socket = opts.delete(:socket)
@pool = opts.delete(:pool)
@closed = false
@query_run = false
@transformer = opts.delete(:transformer)
@read = opts.delete(:read) || @collection.read
Mongo::ReadPreference::validate(@read)
@tag_sets = opts.delete(:tag_sets) || @collection.tag_sets
@acceptable_latency = opts.delete(:acceptable_latency) || @collection.acceptable_latency
batch_size(opts.delete(:batch_size) || 0)
@full_collection_name = "#{@collection.db.name}.#{@collection.name}"
@cache = opts.delete(:first_batch) || []
@returned = 0
if(!@timeout)
add_option(OP_QUERY_NO_CURSOR_TIMEOUT)
end
if(@read != :primary)
add_option(OP_QUERY_SLAVE_OK)
end
if(@tailable)
add_option(OP_QUERY_TAILABLE)
end
# If a cursor_id is provided, this is a cursor for a command
if @cursor_id
@command_cursor = true
@query_run = true
end
if @collection.name =~ /^\$cmd/ || @collection.name =~ /^system/
@command = true
else
@command = false
end
@opts = opts
end
# Guess whether the cursor is alive on the server.
#
# Note that this method only checks whether we have
# a cursor id. The cursor may still have timed out
# on the server. This will be indicated in the next
# call to Cursor#next.
#
# @return [Boolean]
def alive?
@cursor_id && @cursor_id != 0
end
# Get the next document, as specified by the cursor options.
#
# @return [Hash, Nil] the next document or Nil if no documents remain.
def next
if @cache.length == 0
if @query_run && exhaust?
close
return nil
else
refresh
end
end
doc = @cache.shift
if doc && (err = doc['errmsg'] || doc['$err']) # assignment
code = doc['code']
# If the server has stopped being the master (e.g., it's one of a
# pair but it has died or something like that) then we close that
# connection. The next request will re-open against the new master.
if err.include?("not master")
@connection.close
raise ConnectionFailure.new(err, code, doc)
end
# Handle server side operation execution timeout
if code == 50
raise ExecutionTimeout.new(err, code, doc)
end
raise OperationFailure.new(err, code, doc)
elsif doc && (write_concern_error = doc['writeConcernError']) # assignment
raise WriteConcernError.new(write_concern_error['errmsg'], write_concern_error['code'], doc)
end
if @transformer.nil?
doc
else
@transformer.call(doc) if doc
end
end
alias :next_document :next
# Reset this cursor on the server. Cursor options, such as the
# query string and the values for skip and limit, are preserved.
def rewind!
check_command_cursor
close
@cache.clear
@cursor_id = nil
@closed = false
@query_run = false
@n_received = nil
true
end
# Determine whether this cursor has any remaining results.
#
# @return [Boolean]
def has_next?
num_remaining > 0
end
# Get the size of the result set for this query.
#
# @param [Boolean] skip_and_limit whether or not to take skip or limit into account.
#
# @return [Integer] the number of objects in the result set for this query.
#
# @raise [OperationFailure] on a database error.
def count(skip_and_limit = false)
check_command_cursor
command = BSON::OrderedHash["count", @collection.name, "query", @selector]
if skip_and_limit
command.merge!(BSON::OrderedHash["limit", @limit]) if @limit != 0
command.merge!(BSON::OrderedHash["skip", @skip]) if @skip != 0
end
command.merge!(BSON::OrderedHash["fields", @fields])
response = @db.command(command, :read => @read, :comment => @comment)
return response['n'].to_i if Mongo::Support.ok?(response)
return 0 if response['errmsg'] == "ns missing"
raise OperationFailure.new("Count failed: #{response['errmsg']}", response['code'], response)
end
# Sort this cursor's results.
#
# This method overrides any sort order specified in the Collection#find
# method, and only the last sort applied has an effect.
#
# @param [Symbol, Array, Hash, OrderedHash] order either 1) a key to sort by 2)
# an array of [key, direction] pairs to sort by or 3) a hash of
# field => direction pairs to sort by. Direction should be specified as
# Mongo::ASCENDING (or :ascending / :asc) or Mongo::DESCENDING
# (or :descending / :desc)
#
# @raise [InvalidOperation] if this cursor has already been used.
#
# @raise [InvalidSortValueError] if the specified order is invalid.
def sort(order, direction=nil)
check_modifiable
order = [[order, direction]] unless direction.nil?
@order = order
self
end
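# A hypothetical usage sketch (the collection and field names below
# are illustrative, not part of this API):
#
# @example
#   users.find.sort(:age)   # single key, ascending by default
#   users.find.sort([[:age, Mongo::DESCENDING], [:name, Mongo::ASCENDING]])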
# Limit the number of results to be returned by this cursor.
#
# This method overrides any limit specified in the Collection#find method,
# and only the last limit applied has an effect.
#
# @return [Integer] the current number_to_return if no parameter is given.
#
# @raise [InvalidOperation] if this cursor has already been used.
def limit(number_to_return=nil)
return @limit unless number_to_return
check_modifiable
if (number_to_return != 0) && exhaust?
raise MongoArgumentError, "Limit is incompatible with exhaust option."
end
@limit = number_to_return
self
end
# Skips the first +number_to_skip+ results of this cursor.
# Returns the current number_to_skip if no parameter is given.
#
# This method overrides any skip specified in the Collection#find method,
# and only the last skip applied has an effect.
#
# @return [Integer]
#
# @raise [InvalidOperation] if this cursor has already been used.
def skip(number_to_skip=nil)
return @skip unless number_to_skip
check_modifiable
@skip = number_to_skip
self
end
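# A hypothetical paging sketch combining sort, skip and limit
# (the 'users' collection is illustrative):
#
# @example
#   users.find.sort(:_id).skip(20).limit(10).to_a   # documents 21-30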
# Instruct the server to abort queries after they exceed the specified
# wall-clock execution time.
#
# A query that completes in under its time limit will "roll over"
# remaining time to the first getmore op (which will then "roll over"
# its remaining time to the second getmore op and so on, until the
# time limit is hit).
#
# Cursors returned by successful time-limited queries will still obey
# the default cursor idle timeout (unless the "no cursor idle timeout"
# flag has been set).
#
# @note This will only have an effect in MongoDB 2.5+
#
# @param max_time_ms [Fixnum] max execution time (in milliseconds)
#
# @return [Fixnum, Cursor] either the current max_time_ms or cursor
def max_time_ms(max_time_ms=nil)
return @max_time_ms unless max_time_ms
check_modifiable
@max_time_ms = max_time_ms
self
end
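# A hypothetical usage sketch (the collection name and time limit
# are illustrative):
#
# @example
#   users.find.max_time_ms(500).to_a   # abort server-side after 500ms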
# Set the batch size for server responses.
#
# Note that the batch size will take effect only on queries
# where the number to be returned is greater than 100.
#
# This cannot override MongoDB's limit on the amount of data it will
# return to the client. Depending on server version this can be 4-16MB.
#
# @param [Integer] size either 0 or some integer greater than 1. If 0,
# the server will determine the batch size.
#
# @return [Cursor]
def batch_size(size=nil)
return @batch_size unless size
check_modifiable
if size < 0 || size == 1
raise ArgumentError, "Invalid value for batch_size #{size}; must be 0 or > 1."
else
@batch_size = @limit != 0 && size > @limit ? @limit : size
end
self
end
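# A hypothetical usage sketch (the collection name and batch size
# are illustrative):
#
# @example
#   users.find.batch_size(500).each do |doc|
#     # each OP_GET_MORE round trip returns at most 500 documents
#   end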
# Iterate over each document in this cursor, yielding it to the given
# block, if provided. An Enumerator is returned if no block is given.
#
# Iterating over an entire cursor will close it.
#
# @yield passes each document to a block for processing.
#
# @example if 'comments' represents a collection of comments:
# comments.find.each do |doc|
# puts doc['user']
# end
def each
if block_given? || !defined?(Enumerator)
while doc = self.next
yield doc
end
else
Enumerator.new do |yielder|
while doc = self.next
yielder.yield doc
end
end
end
end
# Receive all the documents from this cursor as an array of hashes.
#
# Notes:
#
# If you've already started iterating over the cursor, the array returned
# by this method contains only the remaining documents. See Cursor#rewind! if you
# need to reset the cursor.
#
# Use of this method is discouraged - in most cases, it's much more
# efficient to retrieve documents as you need them by iterating over the cursor.
#
# @return [Array] an array of documents.
def to_a
super
end
# Get the explain plan for this cursor.
#
# @return [Hash] a document containing the explain plan for this cursor.
def explain
check_command_cursor
c = Cursor.new(@collection,
query_options_hash.merge(:limit => -@limit.abs, :explain => true))
explanation = c.next_document
c.close
explanation
end
# Close the cursor.
#
# Note: if a cursor is read until exhausted (read until Mongo::Constants::OP_QUERY or
# Mongo::Constants::OP_GETMORE returns zero for the cursor id), there is no need to
# close it manually.
#
# Note also: Collection#find takes an optional block argument which can be used to
# ensure that your cursors get closed.
#
# @return [True]
def close
if @cursor_id && @cursor_id != 0
message = BSON::ByteBuffer.new([0, 0, 0, 0])
message.put_int(1)
message.put_long(@cursor_id)
log(:debug, "Cursor#close #{@cursor_id}")
@connection.send_message(
Mongo::Constants::OP_KILL_CURSORS,
message,
:pool => @pool
)
end
@cursor_id = 0
@closed = true
end
# Is this cursor closed?
#
# @return [Boolean]
def closed?
@closed
end
# Returns an integer indicating which query options have been selected.
#
# @return [Integer]
#
# @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY
# The MongoDB wire protocol.
def query_opts
warn "The method Cursor#query_opts has been deprecated " +
"and will removed in v2.0. Use Cursor#options instead."
@options
end
# Add an option to the query options bitfield.
#
# @param opt a valid query option
#
# @raise InvalidOperation if this method is run after the cursor has been
# iterated for the first time.
#
# @return [Integer] the current value of the options bitfield for this cursor.
#
# @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY
def add_option(opt)
check_modifiable
if exhaust?(opt)
if @limit != 0
raise MongoArgumentError, "Exhaust is incompatible with limit."
elsif @connection.mongos?
raise MongoArgumentError, "Exhaust is incompatible with mongos."
end
end
@options |= opt
@options
end
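# A hypothetical usage sketch; a tailable cursor additionally requires
# a capped collection (the 'log' collection is illustrative):
#
# @example
#   cursor = log.find
#   cursor.add_option(Mongo::Constants::OP_QUERY_TAILABLE)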
# Remove an option from the query options bitfield.
#
# @param opt a valid query option
#
# @raise InvalidOperation if this method is run after the cursor has been
# iterated for the first time.
#
# @return [Integer] the current value of the options bitfield for this cursor.
#
# @see http://www.mongodb.org/display/DOCS/Mongo+Wire+Protocol#MongoWireProtocol-Mongo::Constants::OPQUERY
def remove_option(opt)
check_modifiable
@options &= ~opt
@options
end
# Get the query options for this Cursor.
#
# @return [Hash]
def query_options_hash
BSON::OrderedHash[
:selector => @selector,
:fields => @fields,
:skip => @skip,
:limit => @limit,
:order => @order,
:hint => @hint,
:snapshot => @snapshot,
:timeout => @timeout,
:max_scan => @max_scan,
:return_key => @return_key,
:show_disk_loc => @show_disk_loc,
:comment => @comment ]
end
# Clean output for inspect.
def inspect
"<Mongo::Cursor:0x#{object_id.to_s(16)} namespace='#{@full_collection_name}' " +
"@selector=#{@selector.inspect} @cursor_id=#{@cursor_id}>"
end
private
# Convert the +:fields+ parameter from a single field name or an array
# of fields names to a hash, with the field names for keys and '1' for each
# value.
def convert_fields_for_query(fields)
case fields
when String, Symbol
{fields => 1}
when Array
return nil if fields.length.zero?
fields.inject({}) do |hash, field|
field.is_a?(Hash) ? hash.merge!(field) : hash[field] = 1
hash
end
when Hash
return fields
end
end
# Return the number of documents remaining for this cursor.
def num_remaining
if @cache.length == 0
if @query_run && exhaust?
close
return 0
else
refresh
end
end
@cache.length
end
# Refresh the documents in @cache. This means either
# sending the initial query or sending a GET_MORE operation.
def refresh
if !@query_run
send_initial_query
elsif !@cursor_id.zero?
send_get_more
end
end
# Sends initial query -- which is always a read unless it is a command
#
# Upon ConnectionFailure, retries the query up to three times if a socket
# was not provided and the query is either not a command or is a
# secondary_ok command.
#
# Pins pools upon successful read and unpins pool upon ConnectionFailure
#
def send_initial_query
tries = 0
instrument(:find, instrument_payload) do
begin
message = construct_query_message
socket = @socket || checkout_socket_from_connection
results, @n_received, @cursor_id = @connection.receive_message(
Mongo::Constants::OP_QUERY, message, nil, socket, @command,
nil, exhaust?, compile_regex?)
rescue ConnectionFailure => ex
socket.close if socket
@pool = nil
@connection.unpin_pool
@connection.refresh
if tries < 3 && !@socket && (!@command || Mongo::ReadPreference::secondary_ok?(@selector))
tries += 1
retry
else
raise ex
end
rescue OperationFailure, OperationTimeout => ex
raise ex
ensure
socket.checkin unless @socket || socket.nil?
end
if !@socket && !@command
@connection.pin_pool(socket.pool, read_preference)
end
@returned += @n_received
@cache += results
@query_run = true
close_cursor_if_query_complete
end
end
def send_get_more
message = BSON::ByteBuffer.new([0, 0, 0, 0])
# Full collection namespace (db_name.collection_name).
BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@collection.name}")
# Number of results to return.
if @limit > 0
limit = @limit - @returned
if @batch_size > 0
limit = limit < @batch_size ? limit : @batch_size
end
message.put_int(limit)
else
message.put_int(@batch_size)
end
# Cursor id.
message.put_long(@cursor_id)
log(:debug, "cursor.refresh() for cursor #{@cursor_id}") if @logger
socket = @pool.checkout
begin
results, @n_received, @cursor_id = @connection.receive_message(
Mongo::Constants::OP_GET_MORE, message, nil, socket, @command,
nil, exhaust?, compile_regex?)
ensure
socket.checkin
end
@returned += @n_received
@cache += results
close_cursor_if_query_complete
end
def checkout_socket_from_connection
begin
if @pool
socket = @pool.checkout
elsif @command && !Mongo::ReadPreference::secondary_ok?(@selector)
socket = @connection.checkout_reader({:mode => :primary})
else
socket = @connection.checkout_reader(read_preference)
end
rescue SystemStackError, NoMemoryError, SystemCallError => ex
@connection.close
raise ex
end
@pool = socket.pool
socket
end
def checkin_socket(sock)
@connection.checkin(sock)
end
def construct_query_message
message = BSON::ByteBuffer.new("", @connection.max_bson_size + MongoClient::COMMAND_HEADROOM)
message.put_int(@options)
BSON::BSON_RUBY.serialize_cstr(message, "#{@db.name}.#{@collection.name}")
message.put_int(@skip)
@batch_size > 1 ? message.put_int(@batch_size) : message.put_int(@limit)
if query_contains_special_fields? && @bson # costs two serialize calls
query_message = BSON::BSON_CODER.serialize(@selector, false, false, @connection.max_bson_size + MongoClient::APPEND_HEADROOM)
query_message.grow(@bson)
query_spec = construct_query_spec
query_spec.delete('$query')
query_message.grow(BSON::BSON_CODER.serialize(query_spec, false, false, @connection.max_bson_size))
else # costs only one serialize call
spec = query_contains_special_fields? ? construct_query_spec : @selector
spec.merge!(@opts)
query_message = BSON::BSON_CODER.serialize(spec, false, false, @connection.max_bson_size + MongoClient::APPEND_HEADROOM)
query_message.grow(@bson) if @bson
end
message.put_binary(query_message.to_s)
message.put_binary(BSON::BSON_CODER.serialize(@fields, false, false, @connection.max_bson_size).to_s) if @fields
message
end
def instrument_payload
log = { :database => @db.name, :collection => @collection.name, :selector => selector }
log[:fields] = @fields if @fields
log[:skip] = @skip if @skip && (@skip != 0)
log[:limit] = @limit if @limit && (@limit != 0)
log[:order] = @order if @order
log
end
def construct_query_spec
return @selector if @selector.has_key?('$query')
spec = BSON::OrderedHash.new
spec['$query'] = @selector
spec['$orderby'] = Mongo::Support.format_order_clause(@order) if @order
spec['$hint'] = @hint if @hint && @hint.length > 0
spec['$explain'] = true if @explain
spec['$snapshot'] = true if @snapshot
spec['$maxScan'] = @max_scan if @max_scan
spec['$returnKey'] = true if @return_key
spec['$showDiskLoc'] = true if @show_disk_loc
spec['$comment'] = @comment if @comment
spec['$maxTimeMS'] = @max_time_ms if @max_time_ms
if needs_read_pref?
read_pref = Mongo::ReadPreference::mongos(@read, @tag_sets)
spec['$readPreference'] = read_pref if read_pref
end
spec
end
def needs_read_pref?
@connection.mongos? && @read != :primary
end
def query_contains_special_fields?
@order || @explain || @hint || @snapshot || @show_disk_loc ||
@max_scan || @return_key || @comment || @max_time_ms || needs_read_pref?
end
def close_cursor_if_query_complete
if @limit > 0 && @returned >= @limit
close
end
end
# Check whether the exhaust option is set
#
# @return [true, false] The state of the exhaust flag.
def exhaust?(opts = options)
!(opts & OP_QUERY_EXHAUST).zero?
end
def check_modifiable
if @query_run || @closed
raise InvalidOperation, "Cannot modify the query once it has been run or closed."
end
end
def check_command_cursor
if @command_cursor
raise InvalidOperation, "Cannot call #{caller.first} on command cursors"
end
end
def compile_regex?
@compile_regex
end
end
end
ruby-mongo-1.10.0/lib/mongo/db.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# A MongoDB database.
class DB
include Mongo::WriteConcern
SYSTEM_NAMESPACE_COLLECTION = 'system.namespaces'
SYSTEM_INDEX_COLLECTION = 'system.indexes'
SYSTEM_PROFILE_COLLECTION = 'system.profile'
SYSTEM_USER_COLLECTION = 'system.users'
SYSTEM_JS_COLLECTION = 'system.js'
SYSTEM_COMMAND_COLLECTION = '$cmd'
MAX_TIME_MS_CODE = 50
PROFILE_LEVEL = {
:off => 0,
:slow_only => 1,
:all => 2
}
# Counter for generating unique request ids.
@@current_request_id = 0
# Strict mode enforces collection existence checks. When +true+,
# asking for a collection that does not exist, or trying to create a
# collection that already exists, raises an error.
#
# Strict mode is disabled by default, but may be enabled (+true+) at any time.
#
# @deprecated Support for strict will be removed in version 2.0 of the driver.
def strict=(value)
unless ENV['TEST_MODE']
warn "Support for strict mode has been deprecated and will be " +
"removed in version 2.0 of the driver."
end
@strict = value
end
# Returns the value of the +strict+ flag.
#
# @deprecated Support for strict will be removed in version 2.0 of the driver.
def strict?
@strict
end
# The name of the database and the local write concern options.
attr_reader :name, :write_concern
# The Mongo::MongoClient instance connecting to the MongoDB server.
attr_reader :client
# for backward compatibility
alias_method :connection, :client
# The length of time that Collection.ensure_index should cache index calls
attr_accessor :cache_time
# Read Preference
attr_accessor :read, :tag_sets, :acceptable_latency
# Instances of DB are normally obtained by calling Mongo#db.
#
# @param [String] name the database name.
# @param [Mongo::MongoClient] client a connection object pointing to MongoDB. Note
# that databases are usually instantiated via the MongoClient class. See the examples below.
#
# @option opts [Boolean] :strict (False) [DEPRECATED] If true, collections existence checks are
# performed during a number of relevant operations. See DB#collection, DB#create_collection and
# DB#drop_collection.
#
# @option opts [Object, #create_pk(doc)] :pk (BSON::ObjectId) A primary key factory object,
# which should take a hash and return a hash which merges the original hash with any primary key
# fields the factory wishes to inject. (NOTE: if the object already has a primary key,
# the factory should not inject a new key).
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Notes on write concern:
# These write concern options are propagated to Collection objects instantiated off of this DB. If no
# options are provided, the default write concern set on this instance's MongoClient object will be used. This
# default can be overridden upon instantiation of any collection by explicitly setting write concern options
# on initialization or at the time of an operation.
#
# @option opts [Integer] :cache_time (300) Set the time that all ensure_index calls should cache the command.
def initialize(name, client, opts={})
# A database name of '$external' is permitted for some auth types
Support.validate_db_name(name) unless name == '$external'
@name = name
@client = client
@strict = opts[:strict]
@pk_factory = opts[:pk]
@write_concern = get_write_concern(opts, client)
@read = opts[:read] || @client.read
ReadPreference::validate(@read)
@tag_sets = opts.fetch(:tag_sets, @client.tag_sets)
@acceptable_latency = opts.fetch(:acceptable_latency,
@client.acceptable_latency)
@cache_time = opts[:cache_time] || 300 #5 minutes.
end
# Authenticate with the given username and password.
#
# @param username [String] The username.
# @param password [String] The user's password. This is not required for
# some authentication mechanisms.
# @param save_auth [Boolean]
# Save this authentication to the client object using
# MongoClient#add_auth. This will ensure that the authentication will
# be applied to all sockets and upon database reconnect.
# @param source [String] Database with user credentials. This should be
# used to authenticate against a database when the credentials exist
# elsewhere.
# @param mechanism [String] The authentication mechanism to be used.
# @param extra [Hash] An optional hash of extra options to be stored with
# the credential set.
#
# @note The ability to disable the save_auth option has been deprecated.
# With save_auth=false specified, driver authentication behavior during
# failovers and reconnections becomes unreliable. This option still
# exists for API compatibility, but it no longer has any effect if
# disabled and now always uses the default behavior (save_auth=true).
#
# @raise [AuthenticationError] Raised if authentication fails.
# @return [Boolean] The result of the authentication operation.
def authenticate(username, password=nil, save_auth=nil, source=nil, mechanism=nil, extra=nil)
warn "[DEPRECATED] Disabling the 'save_auth' option no longer has " +
"any effect. Please see the API documentation for more details " +
"on this change." unless save_auth.nil?
@client.add_auth(self.name, username, password, source, mechanism, extra)
true
end
# Deauthorizes use for this database for this client connection. Also removes
# the saved authentication in the MongoClient class associated with this
# database.
#
# @return [Boolean]
def logout(opts={})
@client.remove_auth(self.name)
true
end
# Adds a stored JavaScript function to the database which can be executed
# server-side in map_reduce, db.eval and $where clauses.
#
# @param [String] function_name
# @param [String] code
#
# @return [String] the function name saved to the database
def add_stored_function(function_name, code)
self[SYSTEM_JS_COLLECTION].save(
{
"_id" => function_name,
:value => BSON::Code.new(code)
}
)
end
# Removes a stored JavaScript function from the database. Returns
# +false+ if the function does not exist.
#
# @param [String] function_name
#
# @return [Boolean]
def remove_stored_function(function_name)
return false unless self[SYSTEM_JS_COLLECTION].find_one({"_id" => function_name})
self[SYSTEM_JS_COLLECTION].remove({"_id" => function_name}, :w => 1)
end
# Adds a user to this database for use with authentication. If the user already
# exists in the system, the password and any additional fields provided in opts
# will be updated.
#
# @param [String] username
# @param [String] password
# @param [Boolean] read_only
# Create a read-only user.
#
# @param [Hash] opts
# Optional fields for the user document (e.g. +userSource+, or +roles+)
#
# See {http://docs.mongodb.org/manual/reference/privilege-documents}
# for more information.
#
# @note The use of the opts argument to provide or update additional fields
# on the user document requires MongoDB >= 2.4.0
#
# @return [Hash] an object representing the user.
def add_user(username, password=nil, read_only=false, opts={})
begin
user_info = command(:usersInfo => username)
# MongoDB >= 2.5.3 requires the use of commands to manage users.
# "Command not found" error didn't return an error code (59) before
# MongoDB 2.4.7 so we assume that a nil error code means the usersInfo
# command doesn't exist and we should fall back to the legacy add user code.
rescue OperationFailure => ex
raise ex unless ex.error_code == Mongo::ErrorCode::COMMAND_NOT_FOUND || ex.error_code.nil?
return legacy_add_user(username, password, read_only, opts)
end
if user_info.key?('users') && !user_info['users'].empty?
create_or_update_user(:updateUser, username, password, read_only, opts)
else
create_or_update_user(:createUser, username, password, read_only, opts)
end
end
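# A hypothetical usage sketch (the usernames, passwords and roles
# below are illustrative):
#
# @example
#   db.add_user('app', 's3cret')                    # read/write user
#   db.add_user('reporter', 's3cret', true)         # read-only (deprecated on MongoDB >= 2.6)
#   db.add_user('owner', 's3cret', false, :roles => ['dbOwner'])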
# Remove the given user from this database. Returns false if the user
# doesn't exist in the system.
#
# @param [String] username
#
# @return [Boolean]
def remove_user(username)
begin
command(:dropUser => username)
rescue OperationFailure => ex
raise ex unless ex.error_code == Mongo::ErrorCode::COMMAND_NOT_FOUND || ex.error_code.nil?
response = self[SYSTEM_USER_COLLECTION].remove({:user => username}, :w => 1)
response.key?('n') && response['n'] > 0 ? response : false
end
end
# Get an array of collection names in this database.
#
# @return [Array]
def collection_names
names = collections_info.collect { |doc| doc['name'] || '' }
names = names.delete_if {|name| name.index(@name).nil? || name.index('$')}
names.map {|name| name.sub(@name + '.', '')}
end
# Get an array of Collection instances, one for each collection in this database.
#
# @return [Array]
def collections
collection_names.map do |name|
Collection.new(name, self)
end
end
# Get info on system namespaces (collections). This method returns
# a cursor which can be iterated over. For each collection, a hash
# will be yielded containing a 'name' string and, optionally, an 'options' hash.
#
# @param [String] coll_name return info for the specified collection only.
#
# @return [Mongo::Cursor]
def collections_info(coll_name=nil)
selector = {}
selector[:name] = full_collection_name(coll_name) if coll_name
Cursor.new(Collection.new(SYSTEM_NAMESPACE_COLLECTION, self), :selector => selector)
end
# Create a collection. Returns a Collection object representing the
# new collection. If +strict+ is true, will raise an error if the
# collection +name+ already exists.
#
# @param [String, Symbol] name the name of the new collection.
#
# @option opts [Boolean] :capped (False) create a capped collection.
#
# @option opts [Integer] :size (Nil) If +capped+ is +true+,
# specifies the maximum number of bytes for the capped collection.
# If +false+, specifies the number of bytes allocated
# for the initial extent of the collection.
#
# @option opts [Integer] :max (Nil) If +capped+ is +true+, indicates
# the maximum number of records in a capped collection.
#
# @raise [MongoDBError] raised under two conditions:
# either we're in +strict+ mode and the collection
# already exists or collection creation fails on the server.
#
# @return [Mongo::Collection]
def create_collection(name, opts={})
name = name.to_s
if strict? && collection_names.include?(name)
raise MongoDBError, "Collection '#{name}' already exists. (strict=true)"
end
begin
cmd = BSON::OrderedHash.new
cmd[:create] = name
doc = command(cmd.merge(opts || {}))
return Collection.new(name, self, :pk => @pk_factory) if ok?(doc)
rescue OperationFailure => e
return Collection.new(name, self, :pk => @pk_factory) if e.message =~ /exists/
raise e
end
raise MongoDBError, "Error creating collection: #{doc.inspect}"
end
# Get a collection by name.
#
# @param [String, Symbol] name the collection name.
# @param [Hash] opts any valid options that can be passed to Collection#new.
#
# @raise [MongoDBError] if collection does not already exist and we're in
# +strict+ mode.
#
# @return [Mongo::Collection]
def collection(name, opts={})
if strict? && !collection_names.include?(name.to_s)
raise MongoDBError, "Collection '#{name}' doesn't exist. (strict=true)"
else
opts = opts.dup
opts.merge!(:pk => @pk_factory) unless opts[:pk]
Collection.new(name, self, opts)
end
end
alias_method :[], :collection
# Drop a collection by +name+.
#
# @param [String, Symbol] name
#
# @return [Boolean] +true+ on success or +false+ if the collection name doesn't exist.
def drop_collection(name)
return false if strict? && !collection_names.include?(name.to_s)
begin
ok?(command(:drop => name))
rescue OperationFailure
false
end
end
# Run the getlasterror command with the specified replication options.
#
# @option opts [Boolean] :fsync (false)
# @option opts [Integer] :w (nil)
# @option opts [Integer] :wtimeout (nil)
# @option opts [Boolean] :j (false)
#
# @return [Hash] the entire response to getlasterror.
#
# @raise [MongoDBError] if the operation fails.
def get_last_error(opts={})
cmd = BSON::OrderedHash.new
cmd[:getlasterror] = 1
cmd.merge!(opts)
doc = command(cmd, :check_response => false)
raise MongoDBError, "Error retrieving last error: #{doc.inspect}" unless ok?(doc)
doc
end
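# A hypothetical usage sketch (the write concern values are
# illustrative):
#
# @example
#   db.get_last_error(:w => 2, :wtimeout => 200, :j => true)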
# Return +true+ if an error was caused by the most recently executed
# database operation.
#
# @return [Boolean]
def error?
get_last_error['err'] != nil
end
# Get the most recent error to have occurred on this database.
#
# This command only returns errors that have occurred since the last call to
# DB#reset_error_history - returns +nil+ if there is no such error.
#
# @return [String, Nil] the text of the error or +nil+ if no error has occurred.
def previous_error
error = command(:getpreverror => 1)
error["err"] ? error : nil
end
# Reset the error history of this database
#
# Calls to DB#previous_error will only return errors that have occurred
# since the most recent call to this method.
#
# @return [Hash]
def reset_error_history
command(:reseterror => 1)
end
# Dereference a DBRef, returning the document it points to.
#
# @param [Mongo::DBRef] dbref
#
# @return [Hash] the document indicated by the db reference.
#
# @see http://www.mongodb.org/display/DOCS/DB+Ref MongoDB DBRef spec.
def dereference(dbref)
collection(dbref.namespace).find_one("_id" => dbref.object_id)
end
# Evaluate a JavaScript expression in MongoDB.
#
# @param [String, Code] code a JavaScript expression to evaluate server-side.
# @param [Integer, Hash] args any additional argument to be passed to the +code+ expression when
# it's run on the server.
#
# @return [Object] the return value of the function.
def eval(code, *args)
unless code.is_a?(BSON::Code)
code = BSON::Code.new(code)
end
cmd = BSON::OrderedHash.new
cmd[:$eval] = code
cmd.merge!(args.pop) if args.last.respond_to?(:keys) && args.last.key?(:nolock)
cmd[:args] = args
doc = command(cmd)
doc['retval']
end
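# A hypothetical usage sketch (the server typically returns numeric
# results as floats):
#
# @example
#   db.eval("function(x, y) { return x + y; }", 2, 3)   #=> 5.0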
# Rename a collection.
#
# @param [String] from original collection name.
# @param [String] to new collection name.
#
# @return [True] returns +true+ on success.
#
# @raise MongoDBError if there's an error renaming the collection.
def rename_collection(from, to)
cmd = BSON::OrderedHash.new
cmd[:renameCollection] = "#{@name}.#{from}"
cmd[:to] = "#{@name}.#{to}"
doc = DB.new('admin', @client).command(cmd, :check_response => false)
ok?(doc) || raise(MongoDBError, "Error renaming collection: #{doc.inspect}")
end
# Drop an index from a given collection. Normally called from
# Collection#drop_index or Collection#drop_indexes.
#
# @param [String] collection_name
# @param [String] index_name
#
# @return [True] returns +true+ on success.
#
# @raise MongoDBError if there's an error dropping the index.
def drop_index(collection_name, index_name)
cmd = BSON::OrderedHash.new
cmd[:deleteIndexes] = collection_name
cmd[:index] = index_name.to_s
doc = command(cmd, :check_response => false)
ok?(doc) || raise(MongoDBError, "Error with drop_index command: #{doc.inspect}")
end
# Get information on the indexes for the given collection.
# Normally called by Collection#index_information.
#
# @param [String] collection_name
#
# @return [Hash] keys are index names and the values are lists of [key, type] pairs
# defining the index.
def index_information(collection_name)
sel = {:ns => full_collection_name(collection_name)}
info = {}
Cursor.new(Collection.new(SYSTEM_INDEX_COLLECTION, self), :selector => sel).each do |index|
info[index['name']] = index
end
info
end
# Return stats on this database. Uses MongoDB's dbstats command.
#
# @return [Hash]
def stats
self.command(:dbstats => 1)
end
# Return +true+ if the supplied +doc+ contains an 'ok' field with the value 1.
#
# @param [Hash] doc
#
# @return [Boolean]
def ok?(doc)
Mongo::Support.ok?(doc)
end
# Send a command to the database.
#
# Note: DB commands must start with the command name as their first key. For this reason,
# any selector containing more than one key must be an OrderedHash.
#
# Note also that a command in MongoDB is just a kind of query
# that occurs on the system command collection ($cmd). Examine this method's implementation
# to see how it works.
#
# @param [OrderedHash, Hash] selector an OrderedHash, or a standard Hash with just one
# key, specifying the command to be performed. In Ruby 1.9 and above, OrderedHash isn't necessary
# because hashes are ordered by default.
#
# @option opts [Boolean] :check_response (true) If +true+, raises an exception if the
# command fails.
# @option opts [Socket] :socket a socket to use for sending the command. This is mainly for internal use.
# @option opts [:primary, :secondary] :read Read preference for this command. See Collection#find for
# more details.
# @option opts [String] :comment (nil) a comment to include in profiling logs
# @option opts [Boolean] :compile_regex (true) whether BSON regex objects should be compiled into Ruby regexes.
# If false, a BSON::Regex object will be returned instead.
#
# @return [Hash]
def command(selector, opts={})
raise MongoArgumentError, "Command must be given a selector" unless selector.respond_to?(:keys) && !selector.empty?
opts = opts.dup
# deletes :check_response and returns its value; defaults to true (the block result) when the key is absent
check_response = opts.delete(:check_response) { true }
# build up the command hash
command = opts.key?(:socket) ? { :socket => opts.delete(:socket) } : {}
command.merge!(:comment => opts.delete(:comment)) if opts.key?(:comment)
command.merge!(:compile_regex => opts.delete(:compile_regex)) if opts.key?(:compile_regex)
command[:limit] = -1
command[:read] = Mongo::ReadPreference::cmd_read_pref(opts.delete(:read), selector) if opts.key?(:read)
if RUBY_VERSION < '1.9' && selector.class != BSON::OrderedHash
if selector.keys.length > 1
raise MongoArgumentError, "DB#command requires an OrderedHash when hash contains multiple keys"
end
if opts.keys.size > 0
# extra opts will be merged into the selector, so make sure it's an OH in versions < 1.9
selector = selector.dup
selector = BSON::OrderedHash.new.merge!(selector)
end
end
# arbitrary opts are merged into the selector
command[:selector] = selector.merge!(opts)
begin
result = Cursor.new(system_command_collection, command).next_document
rescue OperationFailure => ex
if check_response
raise ex.class.new("Database command '#{selector.keys.first}' failed: #{ex.message}", ex.error_code, ex.result)
else
result = ex.result
end
end
raise OperationFailure,
"Database command '#{selector.keys.first}' failed: returned null." unless result
if check_response && (!ok?(result) || result['writeErrors'] || result['writeConcernError'])
message = "Database command '#{selector.keys.first}' failed: ("
message << result.map do |key, value|
"#{key}: '#{value}'"
end.join('; ')
message << ').'
code = result['code'] || result['assertionCode']
raise ExecutionTimeout.new(message, code, result) if code == MAX_TIME_MS_CODE
raise OperationFailure.new(message, code, result)
end
result
end
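# A hypothetical usage sketch; because the command name must be the
# first key, a multi-key selector uses an OrderedHash (the 'users'
# collection and result values are illustrative):
#
# @example
#   cmd = BSON::OrderedHash.new
#   cmd[:count] = 'users'
#   cmd[:query] = { 'active' => true }
#   db.command(cmd)   #=> {"n"=>42.0, "ok"=>1.0}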
# A shortcut returning db plus dot plus collection name.
#
# @param [String] collection_name
#
# @return [String]
def full_collection_name(collection_name)
"#{@name}.#{collection_name}"
end
# The primary key factory object (or +nil+).
#
# @return [Object, Nil]
def pk_factory
@pk_factory
end
# Specify a primary key factory if not already set.
#
# @raise [MongoArgumentError] if the primary key factory has already been set.
def pk_factory=(pk_factory)
raise MongoArgumentError,
"Cannot change primary key factory once it's been set" if @pk_factory
@pk_factory = pk_factory
end
# Return the current database profiling level. If profiling is enabled, you can
# get the results using DB#profiling_info.
#
# @return [Symbol] :off, :slow_only, or :all
def profiling_level
cmd = BSON::OrderedHash.new
cmd[:profile] = -1
doc = command(cmd, :check_response => false)
raise "Error with profile command: #{doc.inspect}" unless ok?(doc)
level_sym = PROFILE_LEVEL.invert[doc['was'].to_i]
raise "Error: illegal profiling level value #{doc['was']}" unless level_sym
level_sym
end
# Set this database's profiling level. If profiling is enabled, you can
# get the results using DB#profiling_info.
#
# @param [Symbol] level acceptable options are +:off+, +:slow_only+, or +:all+.
def profiling_level=(level)
cmd = BSON::OrderedHash.new
cmd[:profile] = PROFILE_LEVEL[level]
doc = command(cmd, :check_response => false)
ok?(doc) || raise(MongoDBError, "Error with profile command: #{doc.inspect}")
end
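# A hypothetical usage sketch:
#
# @example
#   db.profiling_level = :slow_only
#   db.profiling_level   #=> :slow_only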
# Get the current profiling information.
#
# @return [Array] a list of documents containing profiling information.
def profiling_info
Cursor.new(Collection.new(SYSTEM_PROFILE_COLLECTION, self), :selector => {}).to_a
end
# Validate a named collection.
#
# @param [String] name the collection name.
#
# @return [Hash] validation information.
#
# @raise [MongoDBError] if the command fails or there's a problem with the validation
# data, or if the collection is invalid.
def validate_collection(name)
cmd = BSON::OrderedHash.new
cmd[:validate] = name
cmd[:full] = true
doc = command(cmd, :check_response => false)
raise MongoDBError, "Error with validate command: #{doc.inspect}" unless ok?(doc)
if (doc.has_key?('valid') && !doc['valid']) || (doc['result'] =~ /\b(exception|corrupt)\b/i)
raise MongoDBError, "Error: invalid collection #{name}: #{doc.inspect}"
end
doc
end
private
def system_command_collection
Collection.new(SYSTEM_COMMAND_COLLECTION, self)
end
# Create a new user.
#
# @param username [String] The username.
# @param password [String] The user's password.
# @param read_only [Boolean] Create a read-only user (deprecated in MongoDB >= 2.6)
# @param opts [Hash]
#
# @private
def create_or_update_user(command, username, password, read_only, opts)
if read_only || !opts.key?(:roles)
warn "Creating a user with the read_only option or without roles is " +
"deprecated in MongoDB >= 2.6"
end
# The password is always salted and hashed by the driver.
if opts.key?(:digestPassword)
raise MongoArgumentError,
"The digestPassword option is not available via DB#add_user. " +
"Use DB#command(:createUser => ...) instead for this option."
end
opts = opts.dup
pwd = Mongo::Authentication.hash_password(username, password) if password
cmd_opts = pwd ? { :pwd => pwd } : {}
# specify that the server shouldn't digest the password because the driver does
cmd_opts[:digestPassword] = false
unless opts.key?(:roles)
if name == 'admin'
roles = read_only ? ['readAnyDatabase'] : ['root']
else
roles = read_only ? ['read'] : ["dbOwner"]
end
cmd_opts[:roles] = roles
end
cmd_opts[:writeConcern] =
opts.key?(:writeConcern) ? opts.delete(:writeConcern) : { :w => 1 }
cmd_opts.merge!(opts)
command({ command => username }, cmd_opts)
end
# Create a user in MongoDB versions < 2.5.3.
# Called by #add_user if the 'usersInfo' command fails.
#
# @param username [String] The username.
# @param password [String] (nil) The user's password.
# @param read_only [Boolean] (false) Create a read-only user.
# @param opts [Hash]
#
# @private
def legacy_add_user(username, password=nil, read_only=false, opts={})
users = self[SYSTEM_USER_COLLECTION]
user = users.find_one(:user => username) || {:user => username}
user['pwd'] =
Mongo::Authentication.hash_password(username, password) if password
user['readOnly'] = true if read_only
user.merge!(opts)
begin
users.save(user)
rescue OperationFailure => ex
# adding first admin user fails GLE in MongoDB 2.2
raise ex unless ex.message =~ /login/
end
user
end
end
end
ruby-mongo-1.10.0/lib/mongo/exception.rb
# Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Generic Mongo Ruby Driver exception class.
class MongoRubyError < StandardError; end
# Raised when MongoDB itself has returned an error.
class MongoDBError < RuntimeError
# @return The entire failed command's response object, if available.
attr_reader :result
# @return The failed command's error code, if available.
attr_reader :error_code
def initialize(message=nil, error_code=nil, result=nil)
@error_code = error_code
@result = result
super(message)
end
end
# Raised on fatal errors to GridFS.
class GridError < MongoRubyError; end
# Raised when a GridFS file cannot be found.
class GridFileNotFound < GridError; end
# Raised when GridFS MD5 validation fails.
class GridMD5Failure < GridError; end
# Raised when invalid arguments are sent to Mongo Ruby methods.
class MongoArgumentError < MongoRubyError; end
# Raised on failures in connection to the database server.
class ConnectionError < MongoRubyError; end
# Raised on failures in connection to a replica set.
class ReplicaSetConnectionError < ConnectionError; end
# Raised when a connection attempt to the database server times out.
class ConnectionTimeoutError < MongoRubyError; end
# Raised when no tags in a read preference map to a given connection.
class NodeWithTagsNotFound < MongoRubyError; end
# Raised when a connection operation fails.
class ConnectionFailure < MongoDBError; end
# Raised when authentication fails.
class AuthenticationError < MongoDBError; end
# Raised when a database operation fails.
class OperationFailure < MongoDBError; end
# Raised when a database operation exceeds maximum specified time.
class ExecutionTimeout < OperationFailure; end
# Raised when a database operation has a write concern error.
class WriteConcernError < OperationFailure; end
# Raised when a socket read operation times out.
class OperationTimeout < SocketError; end
# Raised when a client attempts to perform an invalid operation.
class InvalidOperation < MongoDBError; end
# Raised when an invalid collection or database name is used (invalid namespace name).
class InvalidNSName < RuntimeError; end
# Raised when the client supplies an invalid value to sort by.
class InvalidSortValueError < MongoRubyError; end
# Raised for bulk write errors.
class BulkWriteError < OperationFailure; end
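# A minimal sketch of rescuing at different levels of this hierarchy
# (the collection and logger names are illustrative):
#   begin
#     collection.insert(document)
#   rescue Mongo::ConnectionFailure => ex
#     logger.warn("network problem, may be transient: #{ex.message}")
#   rescue Mongo::OperationFailure => ex
#     logger.error("server rejected operation (code #{ex.error_code}): #{ex.message}")
#   end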
end
ruby-mongo-1.10.0/lib/mongo/functional.rb 0000664 0000000 0000000 00000001521 12334610061 0020236 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'mongo/functional/authentication'
require 'mongo/functional/logging'
require 'mongo/functional/read_preference'
require 'mongo/functional/write_concern'
require 'mongo/functional/uri_parser'
require 'mongo/functional/sasl_java' if RUBY_PLATFORM =~ /java/
ruby-mongo-1.10.0/lib/mongo/functional/ 0000775 0000000 0000000 00000000000 12334610061 0017712 5 ustar 00root root 0000000 0000000 ruby-mongo-1.10.0/lib/mongo/functional/authentication.rb 0000664 0000000 0000000 00000026617 12334610061 0023272 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'digest/md5'
module Mongo
module Authentication
DEFAULT_MECHANISM = 'MONGODB-CR'
MECHANISMS = ['GSSAPI', 'MONGODB-CR', 'MONGODB-X509', 'PLAIN']
EXTRA = { 'GSSAPI' => [:gssapi_service_name, :canonicalize_host_name] }
# authentication module methods
class << self
# Helper to validate an authentication mechanism and optionally
# raise an error if invalid.
#
# @param mechanism [String] The authentication mechanism to validate.
# @param raise_error [Boolean] Whether to raise an error if the mechanism is invalid.
#
# @raise [ArgumentError] if raise_error and not a valid auth mechanism.
# @return [Boolean] returns the validation result.
def validate_mechanism(mechanism, raise_error=false)
return true if MECHANISMS.include?(mechanism.upcase)
if raise_error
raise ArgumentError,
"Invalid authentication mechanism provided. Must be one of " +
"#{Mongo::Authentication::MECHANISMS.join(', ')}."
end
false
end
# Helper to validate and normalize credential sets.
#
# @param auth [Hash] A hash containing the credential set.
#
# @raise [MongoArgumentError] if the credential set is invalid.
# @return [Hash] The validated credential set.
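#
# @example (illustrative values):
#   Mongo::Authentication.validate_credentials(
#     :username => 'bob', :password => 'secret', :db_name => 'app_db')
#   # => credential hash with :mechanism => 'MONGODB-CR' and
#   #    :source => 'app_db' filled in by default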
def validate_credentials(auth)
# set the default auth mechanism if not defined
auth[:mechanism] ||= DEFAULT_MECHANISM
# set the default auth source if not defined
auth[:source] = auth[:source] || auth[:db_name] || 'admin'
if (auth[:mechanism] == 'MONGODB-CR' || auth[:mechanism] == 'PLAIN') && !auth[:password]
raise MongoArgumentError,
"When using the authentication mechanism #{auth[:mechanism]} " +
"both username and password are required."
end
# if extra opts exist, validate them
allowed_keys = EXTRA[auth[:mechanism]]
if auth[:extra] && !auth[:extra].empty?
invalid_opts = []
auth[:extra].keys.each { |k| invalid_opts << k unless allowed_keys.include?(k) }
raise MongoArgumentError,
"Invalid extra option(s): #{invalid_opts} found. Please check the extra options" +
" passed and try again." unless invalid_opts.empty?
end
auth
end
# Generate an MD5 for authentication.
#
# @param username [String] The username.
# @param password [String] The user's password.
# @param nonce [String] The nonce value.
#
# @return [String] MD5 key for db authentication.
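#
# @example (illustrative values; a real nonce comes from the getnonce command):
#   nonce = 'ab1cd2'
#   Mongo::Authentication.auth_key('bob', 'secret', nonce)
#   # => MD5 hex digest of "#{nonce}bob#{hash_password('bob', 'secret')}"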
def auth_key(username, password, nonce)
Digest::MD5.hexdigest("#{nonce}#{username}#{hash_password(username, password)}")
end
# Return a hashed password for auth.
#
# @param username [String] The username.
# @param password [String] The user's password.
#
# @return [String] The hashed password value.
def hash_password(username, password)
Digest::MD5.hexdigest("#{username}:mongo:#{password}")
end
end
# Saves a cache of authentication credentials to the current
# client instance. This method is called automatically by DB#authenticate.
#
# @param db_name [String] The current database name.
# @param username [String] The current username.
# @param password [String] (nil) The user's password (not required for
# all authentication mechanisms).
# @param source [String] (nil) The authentication source database
# (if different than the current database).
# @param mechanism [String] (nil) The authentication mechanism being used
# (default: 'MONGODB-CR').
# @param extra [Hash] (nil) An optional hash of extra options to be stored with
# the credential set.
#
# @raise [MongoArgumentError] Raised if the database has already been used
# for authentication. A log out is required before additional auths can
# be issued against a given database.
# @raise [AuthenticationError] Raised if authentication fails.
# @return [Hash] a hash representing the authentication just added.
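#
# @example (illustrative; client is a connected MongoClient):
#   client.add_auth('app_db', 'bob', 'secret')
#   client.add_auth('$external', 'CN=bob,O=Example', nil, nil, 'MONGODB-X509')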
def add_auth(db_name, username, password=nil, source=nil, mechanism=nil, extra=nil)
auth = Authentication.validate_credentials({
:db_name => db_name,
:username => username,
:password => password,
:source => source,
:mechanism => mechanism,
:extra => extra
})
if @auths.any? {|a| a[:source] == auth[:source]}
raise MongoArgumentError,
"Another user has already authenticated to the database " +
"'#{auth[:source]}' and multiple authentications are not " +
"permitted. Please logout first."
end
begin
socket = self.checkout_reader(:mode => :primary_preferred)
self.issue_authentication(auth, :socket => socket)
ensure
socket.checkin if socket
end
@auths << auth
auth
end
# Remove a saved authentication for this connection.
#
# @param db_name [String] The database name.
#
# @return [Boolean] The result of the operation.
def remove_auth(db_name)
return false unless @auths
@auths.reject! { |a| a[:source] == db_name } ? true : false
end
# Remove all authentication information stored in this connection.
#
# @return [Boolean] result of the operation.
def clear_auths
@auths = Set.new
true
end
# Method to handle and issue logout commands.
#
# @note This method should not be called directly. Use DB#logout.
#
# @param db_name [String] The database name.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @option opts [Socket] socket (nil) Optional socket instance to use.
#
# @raise [MongoDBError] Raised if the logout operation fails.
# @return [Boolean] The result of the logout operation.
def issue_logout(db_name, opts={})
doc = db(db_name).command({:logout => 1}, :socket => opts[:socket])
unless Support.ok?(doc)
raise MongoDBError, "Error logging out on DB #{db_name}."
end
true # somewhat pointless, but here to preserve the existing API
end
# Method to handle and issue authentication commands.
#
# @note This method should not be called directly. Use DB#authenticate.
#
# @param auth [Hash] The authentication credentials to be used.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @option opts [Socket] socket (nil) Optional socket instance to use.
#
# @raise [AuthenticationError] Raised if the authentication fails.
# @return [Boolean] Result of the authentication operation.
def issue_authentication(auth, opts={})
result = case auth[:mechanism]
when 'MONGODB-CR'
issue_cr(auth, opts)
when 'MONGODB-X509'
issue_x509(auth, opts)
when 'PLAIN'
issue_plain(auth, opts)
when 'GSSAPI'
issue_gssapi(auth, opts)
end
unless Support.ok?(result)
raise AuthenticationError,
"Failed to authenticate user '#{auth[:username]}' " +
"on db '#{auth[:source]}'."
end
true
end
private
# Handles issuing authentication commands for the MONGODB-CR auth mechanism.
#
# @param auth [Hash] The authentication credentials to be used.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @option opts [Socket] socket (nil) Optional socket instance to use.
#
# @return [Boolean] Result of the authentication operation.
#
# @private
def issue_cr(auth, opts={})
database = db(auth[:source])
nonce = get_nonce(database, opts)
# build auth command document
cmd = BSON::OrderedHash.new
cmd['authenticate'] = 1
cmd['user'] = auth[:username]
cmd['nonce'] = nonce
cmd['key'] = Authentication.auth_key(auth[:username],
auth[:password],
nonce)
database.command(cmd, :check_response => false,
:socket => opts[:socket])
end
# Handles issuing authentication commands for the MONGODB-X509 auth mechanism.
#
# @param auth [Hash] The authentication credentials to be used.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @private
def issue_x509(auth, opts={})
database = db('$external')
cmd = BSON::OrderedHash.new
cmd[:authenticate] = 1
cmd[:mechanism] = auth[:mechanism]
cmd[:user] = auth[:username]
database.command(cmd, :check_response => false,
:socket => opts[:socket])
end
# Handles issuing authentication commands for the PLAIN auth mechanism.
#
# @param auth [Hash] The authentication credentials to be used.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @option opts [Socket] socket (nil) Optional socket instance to use.
#
# @return [Boolean] Result of the authentication operation.
#
# @private
def issue_plain(auth, opts={})
database = db(auth[:source])
payload = "\x00#{auth[:username]}\x00#{auth[:password]}"
cmd = BSON::OrderedHash.new
cmd[:saslStart] = 1
cmd[:mechanism] = auth[:mechanism]
cmd[:payload] = BSON::Binary.new(payload)
cmd[:autoAuthorize] = 1
database.command(cmd, :check_response => false,
:socket => opts[:socket])
end
# Handles issuing authentication commands for the GSSAPI auth mechanism.
#
# @param auth [Hash] The authentication credentials to be used.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @private
def issue_gssapi(auth, opts={})
raise NotImplementedError,
"The #{auth[:mechanism]} authentication mechanism is only supported " +
"for JRuby." unless RUBY_PLATFORM =~ /java/
Mongo::Sasl::GSSAPI.authenticate(auth[:username], self, opts[:socket], auth[:extra] || {})
end
# Helper to fetch a nonce value from a given database instance.
#
# @param database [Mongo::DB] The DB instance to use when issuing the nonce command.
# @param opts [Hash] Hash of optional settings and configuration values.
#
# @option opts [Socket] socket (nil) Optional socket instance to use.
#
# @raise [MongoDBError] Raised if there is an error executing the command.
# @return [String] Returns the nonce value.
#
# @private
def get_nonce(database, opts={})
doc = database.command({:getnonce => 1}, :check_response => false,
:socket => opts[:socket])
unless Support.ok?(doc)
raise MongoDBError, "Error retrieving nonce: #{doc}"
end
doc['nonce']
end
end
end
ruby-mongo-1.10.0/lib/mongo/functional/logging.rb 0000664 0000000 0000000 00000005105 12334610061 0021666 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module Logging
module Instrumenter
def self.instrument(name, payload = {})
yield
end
end
@instrumenter = Instrumenter
def write_logging_startup_message
log(:debug, "Logging level is currently :debug which could negatively impact " +
"client-side performance. You should set your logging level no lower than " +
":info in production.")
end
# Log a message with the given level.
def log(level, msg)
return unless @logger
case level
when :fatal then
@logger.fatal "MONGODB [FATAL] #{msg}"
when :error then
@logger.error "MONGODB [ERROR] #{msg}"
when :warn then
@logger.warn "MONGODB [WARNING] #{msg}"
when :info then
@logger.info "MONGODB [INFO] #{msg}"
when :debug then
@logger.debug "MONGODB [DEBUG] #{msg}"
else
@logger.debug "MONGODB [DEBUG] #{msg}"
end
end
# Execute the block and log the operation described by name and payload.
def instrument(name, payload = {})
start_time = Time.now
res = Logging.instrumenter.instrument(name, payload) do
yield
end
duration = Time.now - start_time
log_operation(name, payload, duration)
res
end
def self.instrumenter
@instrumenter
end
def self.instrumenter=(instrumenter)
@instrumenter = instrumenter
end
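# As a sketch, an application using the activesupport gem could redirect
# driver instrumentation through ActiveSupport::Notifications:
#   Mongo::Logging.instrumenter = ActiveSupport::Notifications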
protected
def log_operation(name, payload, duration)
@logger && @logger.debug do
msg = "MONGODB "
msg << "(%.1fms) " % (duration * 1000)
msg << "#{payload[:database]}['#{payload[:collection]}'].#{name}("
msg << payload.values_at(:selector, :document, :documents, :fields ).compact.map(&:inspect).join(', ') + ")"
msg << ".skip(#{payload[:skip]})" if payload[:skip]
msg << ".limit(#{payload[:limit]})" if payload[:limit]
msg << ".sort(#{payload[:order]})" if payload[:order]
msg
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/functional/read_preference.rb 0000664 0000000 0000000 00000012502 12334610061 0023350 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module ReadPreference
READ_PREFERENCES = [
:primary,
:primary_preferred,
:secondary,
:secondary_preferred,
:nearest
]
MONGOS_MODES = {
:primary => 'primary',
:primary_preferred => 'primaryPreferred',
:secondary => 'secondary',
:secondary_preferred => 'secondaryPreferred',
:nearest => 'nearest'
}
# Commands that may be sent to replica-set secondaries, depending on
# read preference and tags. All other commands are always run on the primary.
SECONDARY_OK_COMMANDS = [
'group',
'aggregate',
'collstats',
'dbstats',
'count',
'distinct',
'geonear',
'geosearch',
'geowalk',
'mapreduce',
'replsetgetstatus',
'ismaster',
'parallelcollectionscan'
]
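# For example (illustrative): mongos(:secondary, [{'dc' => 'east'}]) yields
# {:mode => 'secondary', :tags => [{'dc' => 'east'}]}, while
# mongos(:secondary_preferred, []) returns nil since that is the mongos default.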
def self.mongos(mode, tag_sets)
if mode != :secondary_preferred || !tag_sets.empty?
mongos_read_preference = BSON::OrderedHash[:mode => MONGOS_MODES[mode]]
mongos_read_preference[:tags] = tag_sets if !tag_sets.empty?
end
mongos_read_preference
end
def self.validate(value)
if READ_PREFERENCES.include?(value)
return true
else
raise MongoArgumentError, "#{value} is not a valid read preference. " +
"Please specify one of the following read preferences as a symbol: #{READ_PREFERENCES}"
end
end
# Returns true if it's ok to run the command on a secondary
def self.secondary_ok?(selector)
command = selector.keys.first.to_s.downcase
if command == 'mapreduce'
out = selector.select { |k, v| k.to_s.downcase == 'out' }.first.last
# the server only looks at the first key in the out object
return out.respond_to?(:keys) && out.keys.first.to_s.downcase == 'inline'
elsif command == 'aggregate'
return selector['pipeline'].none? { |op| op.key?('$out') || op.key?(:$out) }
end
SECONDARY_OK_COMMANDS.member?(command)
end
# Returns true if the command should be rerouted to the primary.
def self.reroute_cmd_primary?(read_pref, selector)
return false if read_pref == :primary
!secondary_ok?(selector)
end
# Given a command and read preference, possibly reroute to primary.
def self.cmd_read_pref(read_pref, selector)
ReadPreference::validate(read_pref)
if reroute_cmd_primary?(read_pref, selector)
warn "Database command '#{selector.keys.first}' rerouted to primary node"
read_pref = :primary
end
read_pref
end
def read_preference
{
:mode => @read,
:tags => @tag_sets,
:latency => @acceptable_latency
}
end
def read_pool(read_preference_override={})
return primary_pool if mongos?
read_pref = read_preference.merge(read_preference_override)
if pinned_pool && pinned_pool[:read_preference] == read_pref
pool = pinned_pool[:pool]
else
unpin_pool
pool = select_pool(read_pref)
end
unless pool
raise ConnectionFailure, "No replica set member available for query " +
"with read preference matching mode #{read_pref[:mode]} and tags " +
"matching #{read_pref[:tags]}."
end
pool
end
def select_pool(read_pref)
if read_pref[:mode] == :primary && !read_pref[:tags].empty?
raise MongoArgumentError, "Read preference :primary cannot be combined with tags"
end
case read_pref[:mode]
when :primary
primary_pool
when :primary_preferred
primary_pool || select_secondary_pool(secondary_pools, read_pref)
when :secondary
select_secondary_pool(secondary_pools, read_pref)
when :secondary_preferred
select_secondary_pool(secondary_pools, read_pref) || primary_pool
when :nearest
select_near_pool(pools, read_pref)
end
end
def select_secondary_pool(candidates, read_pref)
tag_sets = read_pref[:tags]
if !tag_sets.empty?
matches = []
tag_sets.detect do |tag_set|
matches = candidates.select do |candidate|
tag_set.none? { |k,v| candidate.tags[k.to_s] != v } &&
candidate.ping_time
end
!matches.empty?
end
else
matches = candidates
end
matches.empty? ? nil : select_near_pool(matches, read_pref)
end
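# Select a pool at random from the candidates inside the latency window.
# For example (illustrative numbers): with ping times of 5ms, 12ms and 30ms
# and :latency => 15, the window is 5 + 15 = 20ms, so only the first two
# pools are eligible and one of them is chosen at random.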
def select_near_pool(candidates, read_pref)
latency = read_pref[:latency]
nearest_pool = candidates.min_by { |candidate| candidate.ping_time }
near_pools = candidates.select do |candidate|
(candidate.ping_time - nearest_pool.ping_time) <= latency
end
near_pools[ rand(near_pools.length) ]
end
end
end
ruby-mongo-1.10.0/lib/mongo/functional/sasl_java.rb 0000664 0000000 0000000 00000003564 12334610061 0022212 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'jruby'
include Java
jar_dir = File.expand_path(File.join(File.dirname(__FILE__), '../../../ext/jsasl'))
require File.join(jar_dir, 'target/jsasl.jar')
module Mongo
module Sasl
module GSSAPI
def self.authenticate(username, client, socket, opts={})
db = client.db('$external')
hostname = socket.pool.host
servicename = opts[:gssapi_service_name] || 'mongodb'
canonicalize = opts[:canonicalize_host_name] ? opts[:canonicalize_host_name] : false
authenticator = org.mongodb.sasl.GSSAPIAuthenticator.new(JRuby.runtime, username, hostname, servicename, canonicalize)
token = BSON::Binary.new(authenticator.initialize_challenge)
cmd = BSON::OrderedHash['saslStart', 1, 'mechanism', 'GSSAPI', 'payload', token, 'autoAuthorize', 1]
response = db.command(cmd, :check_response => false, :socket => socket)
until response['done'] do
token = BSON::Binary.new(authenticator.evaluate_challenge(response['payload'].to_s))
cmd = BSON::OrderedHash['saslContinue', 1, 'conversationId', response['conversationId'], 'payload', token]
response = db.command(cmd, :check_response => false, :socket => socket)
end
response
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/functional/uri_parser.rb 0000664 0000000 0000000 00000032357 12334610061 0022424 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'cgi'
require 'uri'
module Mongo
class URIParser
AUTH_REGEX = /((.+)@)?/
HOST_REGEX = /([-.\w]+)|(\[[^\]]+\])/
PORT_REGEX = /(?::(\w+))?/
NODE_REGEX = /((#{HOST_REGEX}#{PORT_REGEX},?)+)/
PATH_REGEX = /(?:\/([-\w]+))?/
MONGODB_URI_MATCHER = /#{AUTH_REGEX}#{NODE_REGEX}#{PATH_REGEX}/
MONGODB_URI_SPEC = "mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]"
SPEC_ATTRS = [:nodes, :auths]
READ_PREFERENCES = {
'primary' => :primary,
'primarypreferred' => :primary_preferred,
'secondary' => :secondary,
'secondarypreferred' => :secondary_preferred,
'nearest' => :nearest
}
OPT_ATTRS = [
:authmechanism,
:authsource,
:canonicalizehostname,
:connect,
:connecttimeoutms,
:fsync,
:gssapiservicename,
:journal,
:pool_size,
:readpreference,
:replicaset,
:safe,
:slaveok,
:sockettimeoutms,
:ssl,
:w,
:wtimeout,
:wtimeoutms
]
OPT_VALID = {
:authmechanism => lambda { |arg| Mongo::Authentication.validate_mechanism(arg) },
:authsource => lambda { |arg| arg.length > 0 },
:canonicalizehostname => lambda { |arg| ['true', 'false'].include?(arg) },
:connect => lambda { |arg| [ 'direct', 'replicaset', 'true', 'false', true, false ].include?(arg) },
:connecttimeoutms => lambda { |arg| arg =~ /^\d+$/ },
:fsync => lambda { |arg| ['true', 'false'].include?(arg) },
:gssapiservicename => lambda { |arg| arg.length > 0 },
:journal => lambda { |arg| ['true', 'false'].include?(arg) },
:pool_size => lambda { |arg| arg.to_i > 0 },
:readpreference => lambda { |arg| READ_PREFERENCES.keys.include?(arg) },
:replicaset => lambda { |arg| arg.length > 0 },
:safe => lambda { |arg| ['true', 'false'].include?(arg) },
:slaveok => lambda { |arg| ['true', 'false'].include?(arg) },
:sockettimeoutms => lambda { |arg| arg =~ /^\d+$/ },
:ssl => lambda { |arg| ['true', 'false'].include?(arg) },
:w => lambda { |arg| arg =~ /^\w+$/ },
:wtimeout => lambda { |arg| arg =~ /^\d+$/ },
:wtimeoutms => lambda { |arg| arg =~ /^\d+$/ }
}
OPT_ERR = {
:authmechanism => "must be one of #{Mongo::Authentication::MECHANISMS.join(', ')}",
:authsource => "must be a string containing the name of the database being used for authentication",
:canonicalizehostname => "must be 'true' or 'false'",
:connect => "must be 'direct', 'replicaset', 'true', or 'false'",
:connecttimeoutms => "must be an integer specifying milliseconds",
:fsync => "must be 'true' or 'false'",
:gssapiservicename => "must be a string containing the name of the GSSAPI service",
:journal => "must be 'true' or 'false'",
:pool_size => "must be an integer greater than zero",
:readpreference => "must be on of #{READ_PREFERENCES.keys.map(&:inspect).join(",")}",
:replicaset => "must be a string containing the name of the replica set to connect to",
:safe => "must be 'true' or 'false'",
:slaveok => "must be 'true' or 'false'",
:sockettimeoutms => "must be an integer specifying milliseconds",
:ssl => "must be 'true' or 'false'",
:w => "must be an integer indicating number of nodes to replicate to or a string " +
"specifying that replication is required to the majority or nodes with a " +
"particilar getLastErrorMode.",
:wtimeout => "must be an integer specifying milliseconds",
:wtimeoutms => "must be an integer specifying milliseconds"
}
OPT_CONV = {
:authmechanism => lambda { |arg| arg.upcase },
:authsource => lambda { |arg| arg },
:canonicalizehostname => lambda { |arg| arg == 'true' ? true : false },
:connect => lambda { |arg| arg == 'false' ? false : arg }, # convert 'false' to FalseClass
:connecttimeoutms => lambda { |arg| arg.to_f / 1000 }, # stored as seconds
:fsync => lambda { |arg| arg == 'true' ? true : false },
:gssapiservicename => lambda { |arg| arg },
:journal => lambda { |arg| arg == 'true' ? true : false },
:pool_size => lambda { |arg| arg.to_i },
:readpreference => lambda { |arg| READ_PREFERENCES[arg] },
:replicaset => lambda { |arg| arg },
:safe => lambda { |arg| arg == 'true' ? true : false },
:slaveok => lambda { |arg| arg == 'true' ? true : false },
:sockettimeoutms => lambda { |arg| arg.to_f / 1000 }, # stored as seconds
:ssl => lambda { |arg| arg == 'true' ? true : false },
:w => lambda { |arg| Mongo::Support.is_i?(arg) ? arg.to_i : arg.to_sym },
:wtimeout => lambda { |arg| arg.to_i },
:wtimeoutms => lambda { |arg| arg.to_i }
}
attr_reader :auths,
:authmechanism,
:authsource,
:canonicalizehostname,
:connect,
:connecttimeoutms,
:db_name,
:fsync,
:gssapiservicename,
:journal,
:nodes,
:pool_size,
:readpreference,
:replicaset,
:safe,
:slaveok,
:sockettimeoutms,
:ssl,
:w,
:wtimeout,
:wtimeoutms
# Parse a MongoDB URI. This method is used by MongoClient.from_uri.
# Returns an array of nodes and an array of db authorizations, if applicable.
#
# @note Passwords can contain any character except for ','
#
# @param [String] uri The MongoDB URI string.
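#
# @example (hypothetical hosts):
#   parser = Mongo::URIParser.new(
#     'mongodb://bob:secret@db1.example.com:27017,db2.example.com/app_db?replicaSet=rs0&w=majority')
#   parser.nodes      # => [['db1.example.com', 27017], ['db2.example.com', 27017]]
#   parser.db_name    # => 'app_db'
#   parser.replicaset # => 'rs0'
#   parser.w          # => :majority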
def initialize(uri)
if uri.start_with?('mongodb://')
uri = uri[10..-1]
else
raise MongoArgumentError, "MongoDB URI must match this spec: #{MONGODB_URI_SPEC}"
end
hosts, opts = uri.split('?')
parse_options(opts)
parse_hosts(hosts)
validate_connect
end
# Create a Mongo::MongoClient or a Mongo::MongoReplicaSetClient based on the URI.
#
# @note Don't confuse this with attribute getter method #connect.
#
# @return [MongoClient,MongoReplicaSetClient]
def connection(extra_opts={}, legacy = false, sharded = false)
opts = connection_options.merge!(extra_opts)
if(legacy)
if replicaset?
ReplSetConnection.new(node_strings, opts)
else
Connection.new(host, port, opts)
end
else
if sharded
MongoShardedClient.new(node_strings, opts)
elsif replicaset?
MongoReplicaSetClient.new(node_strings, opts)
else
MongoClient.new(host, port, opts)
end
end
end
# Whether this represents a replica set.
# @return [true,false]
def replicaset?
replicaset.is_a?(String) || nodes.length > 1
end
# Whether to immediately connect to the MongoDB node[s]. Defaults to true.
# @return [true, false]
def connect?
connect != false
end
# Whether this represents a direct connection.
#
# @note Specifying :connect => 'direct' has no effect... other than to raise an exception if other variables suggest a replicaset.
#
# @return [true,false]
def direct?
!replicaset?
end
# For direct connections, the host of the (only) node.
# @return [String]
def host
nodes[0][0]
end
# For direct connections, the port of the (only) node.
# @return [Integer]
def port
nodes[0][1].to_i
end
# Options that can be passed to MongoClient.new or MongoReplicaSetClient.new
# @return [Hash]
def connection_options
opts = {}
if @wtimeout
warn "Using wtimeout in a URI is deprecated, please use wtimeoutMS. It will be removed in v2.0."
opts[:wtimeout] = @wtimeout
end
opts[:wtimeout] = @wtimeoutms if @wtimeoutms
opts[:w] = 1 if @safe
opts[:w] = @w if @w
opts[:j] = @journal if @journal
opts[:fsync] = @fsync if @fsync
opts[:connect_timeout] = @connecttimeoutms if @connecttimeoutms
opts[:op_timeout] = @sockettimeoutms if @sockettimeoutms
opts[:pool_size] = @pool_size if @pool_size
opts[:read] = @readpreference if @readpreference
if @slaveok && !@readpreference
unless replicaset?
opts[:slave_ok] = true
else
opts[:read] = :secondary_preferred
end
end
if replicaset.is_a?(String)
opts[:name] = replicaset
end
opts[:db_name] = @db_name if @db_name
opts[:auths] = @auths if @auths
opts[:ssl] = @ssl if @ssl
opts[:connect] = connect?
opts
end
def node_strings
nodes.map { |node| node.join(':') }
end
private
def parse_hosts(uri_without_protocol)
@nodes = []
@auths = Set.new
unless matches = MONGODB_URI_MATCHER.match(uri_without_protocol)
raise MongoArgumentError,
"MongoDB URI must match this spec: #{MONGODB_URI_SPEC}"
end
user_info = matches[2].split(':') if matches[2]
host_info = matches[3].split(',')
@db_name = matches[8]
host_info.each do |host|
if host[0,1] == '['
host, port = host.split(']:') << MongoClient::DEFAULT_PORT
host = host.end_with?(']') ? host[1...-1] : host[1..-1]
else
host, port = host.split(':') << MongoClient::DEFAULT_PORT
end
unless port.to_s =~ /^\d+$/
raise MongoArgumentError,
"Invalid port #{port}; port must be specified as digits."
end
@nodes << [host, port.to_i]
end
if @nodes.empty?
raise MongoArgumentError,
"No nodes specified. Please ensure that you've provided at " +
"least one node."
end
# no user info to parse, exit here
return unless user_info
# check for url encoding for username and password
username, password = user_info
if user_info.size > 2 ||
(username && username.include?('@')) ||
(password && password.include?('@'))
raise MongoArgumentError,
"The characters ':' and '@' in a username or password " +
"must be escaped (RFC 2396)."
end
# if username exists, proceed adding to auth set
unless username.nil? || username.empty?
auth = Authentication.validate_credentials({
:db_name => @db_name,
:username => URI.unescape(username),
:password => password ? URI.unescape(password) : nil,
:source => @authsource,
:mechanism => @authmechanism
})
auth[:extra] = @canonicalizehostname ? { :canonicalize_host_name => @canonicalizehostname } : {}
auth[:extra].merge!(:gssapi_service_name => @gssapiservicename) if @gssapiservicename
@auths << auth
end
end
# This method uses the lambdas defined in OPT_VALID and OPT_CONV to validate
# and convert the given options.
def parse_options(string_opts)
# initialize instance variables for available options
OPT_VALID.keys.each { |k| instance_variable_set("@#{k}", nil) }
string_opts ||= ''
return if string_opts.empty?
if string_opts.include?(';') and string_opts.include?('&')
raise MongoArgumentError, 'must not mix URL separators ; and &'
end
opts = CGI.parse(string_opts).inject({}) do |memo, (key, value)|
value = value.first
memo[key.downcase.to_sym] = value.strip.downcase
memo
end
opts.each do |key, value|
if !OPT_ATTRS.include?(key)
raise MongoArgumentError, "Invalid Mongo URI option #{key}"
end
if OPT_VALID[key].call(value)
instance_variable_set("@#{key}", OPT_CONV[key].call(value))
else
raise MongoArgumentError, "Invalid value #{value.inspect} for #{key}: #{OPT_ERR[key]}"
end
end
end
def validate_connect
if replicaset? and @connect == 'direct'
# Make sure the user doesn't specify something contradictory
raise MongoArgumentError, "connect=direct conflicts with setting a replicaset name"
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/functional/write_concern.rb 0000664 0000000 0000000 00000004216 12334610061 0023103 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module WriteConcern
VALID_KEYS = [:w, :j, :fsync, :wtimeout]
DEFAULT_WRITE_CONCERN = {:w => 1}
attr_reader :legacy_write_concern
@@safe_warn = nil
def write_concern_from_legacy(opts)
# Warn if 'safe' parameter is being used,
if opts.key?(:safe) && !@@safe_warn && !ENV['TEST_MODE']
warn "[DEPRECATED] The 'safe' write concern option has been deprecated in favor of 'w'."
@@safe_warn = true
end
# nil: set :w => 0
# false: set :w => 0
# true: set :w => 1
# hash: set :w => 0 and merge with opts
unless opts.has_key?(:w)
opts[:w] = 0 # legacy default, unacknowledged
safe = opts.delete(:safe)
if(safe && safe.is_a?(Hash))
opts.merge!(safe)
elsif(safe == true)
opts[:w] = 1
end
end
end
# todo: throw exception for conflicting write concern options
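# For illustration (sketch; this method is mixed into the client, DB and
# collection classes):
#   get_write_concern({:w => :majority, :wtimeout => 500, :foo => 1})
#   # => {:w => "majority", :wtimeout => 500} -- :foo is dropped, symbol :w stringified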
def get_write_concern(opts, parent=nil)
write_concern_from_legacy(opts) if opts.key?(:safe) || legacy_write_concern
write_concern = DEFAULT_WRITE_CONCERN.dup
write_concern.merge!(parent.write_concern) if parent
write_concern.merge!(opts.reject {|k,v| !VALID_KEYS.include?(k)})
write_concern[:w] = write_concern[:w].to_s if write_concern[:w].is_a?(Symbol)
write_concern
end
def self.gle?(write_concern)
(write_concern[:w].is_a? Symbol) ||
(write_concern[:w].is_a? String) ||
write_concern[:w] > 0 ||
write_concern[:j] ||
write_concern[:fsync] ||
write_concern[:wtimeout]
end
end
end
ruby-mongo-1.10.0/lib/mongo/gridfs.rb 0000664 0000000 0000000 00000001314 12334610061 0017352 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'mongo/gridfs/grid_ext'
require 'mongo/gridfs/grid'
require 'mongo/gridfs/grid_file_system'
require 'mongo/gridfs/grid_io'
ruby-mongo-1.10.0/lib/mongo/gridfs/ 0000775 0000000 0000000 00000000000 12334610061 0017026 5 ustar 00root root 0000000 0000000 ruby-mongo-1.10.0/lib/mongo/gridfs/grid.rb 0000664 0000000 0000000 00000010270 12334610061 0020300 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Implementation of the MongoDB GridFS specification. A file store.
class Grid
include GridExt::InstanceMethods
DEFAULT_FS_NAME = 'fs'
# Initialize a new Grid instance, consisting of a MongoDB database
# and a filesystem prefix if not using the default.
#
# @see GridFileSystem
def initialize(db, fs_name=DEFAULT_FS_NAME)
raise MongoArgumentError, "db must be a Mongo::DB." unless db.is_a?(Mongo::DB)
@db = db
@files = @db["#{fs_name}.files"]
@chunks = @db["#{fs_name}.chunks"]
@fs_name = fs_name
# This will create indexes only if we're connected to a primary node.
begin
@chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true)
rescue Mongo::ConnectionFailure
end
end
# Store a file in the file store. This method is designed only for writing new files;
# if you need to update a given file, first delete it using Grid#delete.
#
# Note that arbitrary metadata attributes can be saved to the file by passing
# them in as options.
#
# @param [String, #read] data a string or io-like object to store.
#
# @option opts [String] :filename (nil) a name for the file.
# @option opts [Hash] :metadata ({}) any additional data to store with the file.
# @option opts [ObjectId] :_id (ObjectId) a unique id for
# the file to be use in lieu of an automatically generated one.
# @option opts [String] :content_type ('binary/octet-stream') If no content type is specified,
# the content type may be inferred from the filename extension if the mime-types gem can be
# loaded. Otherwise, the content type 'binary/octet-stream' will be used.
# @option opts [Integer] :chunk_size (261120) size of file chunks in bytes.
# @option opts [String, Integer, Symbol] :w (1) Set write concern
#
# Notes on write concern:
# When :w > 0, the chunks sent to the server are validated using an md5 hash.
# If validation fails, an exception will be raised.
#
# @return [BSON::ObjectId] the file's id.
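#
# @example Store a file and read it back (sketch; assumes db is a Mongo::DB instance):
#   grid = Mongo::Grid.new(db)
#   id   = grid.put(File.open('report.pdf'), :filename => 'report.pdf')
#   grid.get(id).read   # => the file's contents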
def put(data, opts={})
begin
# Ensure there is an index on files_id and n, as state may have changed since instantiation of self.
# Recall that index definitions are cached with ensure_index so this statement won't unnecessarily repeat index creation.
@chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true)
opts = opts.dup
filename = opts.delete(:filename)
opts.merge!(default_grid_io_opts)
file = GridIO.new(@files, @chunks, filename, 'w', opts)
file.write(data)
file.close
file.files_id
rescue Mongo::ConnectionFailure => e
raise e, "Failed to create necessary index and write data."
end
end
# Read a file from the file store.
#
# @param id the file's unique id.
#
# @return [Mongo::GridIO]
def get(id)
opts = {:query => {'_id' => id}}.merge!(default_grid_io_opts)
GridIO.new(@files, @chunks, nil, 'r', opts)
end
# Delete a file from the store.
#
# Note that deleting a GridFS file can result in read errors if another process
# is attempting to read a file while it's being deleted. While the odds for this
# kind of race condition are small, it's important to be aware of.
#
# @param id
#
# @return [Boolean]
def delete(id)
@files.remove({"_id" => id})
@chunks.remove({"files_id" => id})
end
private
def default_grid_io_opts
{:fs_name => @fs_name}
end
end
end
ruby-mongo-1.10.0/lib/mongo/gridfs/grid_ext.rb 0000664 0000000 0000000 00000004127 12334610061 0021164 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module GridExt
module InstanceMethods
# Check the existence of a file matching the given query selector.
#
# Note that this method can be used with both the Grid and GridFileSystem classes. Also
# keep in mind that if you're going to be performing lots of existence checks, you should
# keep an instance of Grid or GridFileSystem handy rather than instantiating for each existence
# check. Alternatively, simply keep a reference to the proper files collection and query that
# as needed. That's exactly how this methods works.
#
# @param [Hash] selector a query selector.
#
# @example
#
# # Check for the existence of a given filename
# @grid = Mongo::GridFileSystem.new(@db)
# @grid.exist?(:filename => 'foo.txt')
#
# # Check for existence filename and content type
# @grid = Mongo::GridFileSystem.new(@db)
# @grid.exist?(:filename => 'foo.txt', :content_type => 'image/jpg')
#
# # Check for existence by _id
# @grid = Mongo::Grid.new(@db)
# @grid.exist?(:_id => BSON::ObjectId.from_string('4bddcd24beffd95a7db9b8c8'))
#
# # Check for existence by an arbitrary attribute.
# @grid = Mongo::Grid.new(@db)
# @grid.exist?(:tags => {'$in' => ['nature', 'zen', 'photography']})
#
# @return [nil, Hash] either nil or the file's metadata as a hash.
def exist?(selector)
@files.find_one(selector)
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/gridfs/grid_file_system.rb 0000664 0000000 0000000 00000014675 12334610061 0022720 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# A file store built on the GridFS specification featuring
# an API and behavior similar to that of a traditional file system.
class GridFileSystem
include GridExt::InstanceMethods
# Initialize a new GridFileSystem instance, consisting of a MongoDB database
# and a filesystem prefix if not using the default.
#
# @param [Mongo::DB] db a MongoDB database.
# @param [String] fs_name A name for the file system. The default name, based on
# the specification, is 'fs'.
def initialize(db, fs_name=Grid::DEFAULT_FS_NAME)
raise MongoArgumentError, "db must be a Mongo::DB." unless db.is_a?(Mongo::DB)
@db = db
@files = @db["#{fs_name}.files"]
@chunks = @db["#{fs_name}.chunks"]
@fs_name = fs_name
@default_query_opts = {:sort => [['filename', 1], ['uploadDate', -1]], :limit => 1}
# This will create indexes only if we're connected to a primary node.
begin
@files.ensure_index([['filename', 1], ['uploadDate', -1]])
@chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true)
rescue Mongo::ConnectionFailure
end
end
# Open a file for reading or writing. Note that the options for this method only apply
# when opening in 'w' mode.
#
# Note that arbitrary metadata attributes can be saved to the file by passing
# them in as options.
#
# @param [String] filename the name of the file.
# @param [String] mode either 'r' or 'w' for reading from
# or writing to the file.
# @param [Hash] opts see GridIO#new
#
# @option opts [Hash] :metadata ({}) any additional data to store with the file.
# @option opts [ObjectId] :_id (ObjectId) a unique id for
# the file to be use in lieu of an automatically generated one.
# @option opts [String] :content_type ('binary/octet-stream') If no content type is specified,
# the content type may be inferred from the filename extension if the mime-types gem can be
# loaded. Otherwise, the content type 'binary/octet-stream' will be used.
# @option opts [Integer] :chunk_size (261120) size of file chunks in bytes.
# @option opts [Boolean] :delete_old (false) ensures that old versions of the file are deleted. This option
# only works in 'w' mode. Certain precautions must be taken when deleting GridFS files. See the notes under
# GridFileSystem#delete.
# @option opts [String, Integer, Symbol] :w (1) Set write concern
#
# Notes on write concern:
# When :w > 0, the chunks sent to the server
# will be validated using an md5 hash. If validation fails, an exception will be raised.
# @option opts [Integer] :versions (false) deletes all versions which exceed the number specified to
# retain ordered by uploadDate. This option only works in 'w' mode. Certain precautions must be taken when
# deleting GridFS files. See the notes under GridFileSystem#delete.
#
# @example
#
# # Store the text "Hello, world!" in the grid file system.
# @grid = Mongo::GridFileSystem.new(@db)
# @grid.open('filename', 'w') do |f|
# f.write "Hello, world!"
# end
#
# # Output "Hello, world!"
# @grid = Mongo::GridFileSystem.new(@db)
# @grid.open('filename', 'r') do |f|
# puts f.read
# end
#
# # Write a file on disk to the GridFileSystem
# @file = File.open('image.jpg')
# @grid = Mongo::GridFileSystem.new(@db)
# @grid.open('image.jpg', 'w') do |f|
# f.write @file
# end
#
# @return [Mongo::GridIO]
def open(filename, mode, opts={})
opts = opts.dup
opts.merge!(default_grid_io_opts(filename))
if mode == 'w'
begin
# Ensure there are the appropriate indexes, as state may have changed since instantiation of self.
# Recall that index definitions are cached with ensure_index so this statement won't unnecessarily repeat index creation.
@files.ensure_index([['filename', 1], ['uploadDate', -1]])
@chunks.ensure_index([['files_id', Mongo::ASCENDING], ['n', Mongo::ASCENDING]], :unique => true)
versions = opts.delete(:versions)
if opts.delete(:delete_old) || (versions && versions < 1)
versions = 1
end
rescue Mongo::ConnectionFailure => e
raise e, "Failed to create necessary indexes and write data."
end
end
file = GridIO.new(@files, @chunks, filename, mode, opts)
return file unless block_given?
result = nil
begin
result = yield file
ensure
id = file.close
if versions
self.delete do
@files.find({'filename' => filename, '_id' => {'$ne' => id}}, :fields => ['_id'], :sort => ['uploadDate', -1], :skip => (versions - 1))
end
end
end
result
end
# Delete the file with the given filename. Note that this will delete
# all versions of the file.
#
# Be careful with this. Deleting a GridFS file can result in read errors if another process
# is attempting to read a file while it's being deleted. While the odds for this
# kind of race condition are small, it's important to be aware of.
#
# @param [String] filename
#
# @yield [] pass a block that returns an array of documents to be deleted.
#
# @return [Boolean]
def delete(filename=nil)
if block_given?
files = yield
else
files = @files.find({'filename' => filename}, :fields => ['_id'])
end
files.each do |file|
@files.remove({'_id' => file['_id']})
@chunks.remove({'files_id' => file['_id']})
end
end
alias_method :unlink, :delete
private
def default_grid_io_opts(filename=nil)
{:fs_name => @fs_name, :query => {'filename' => filename}, :query_opts => @default_query_opts}
end
end
end
ruby-mongo-1.10.0/lib/mongo/gridfs/grid_io.rb 0000664 0000000 0000000 00000036611 12334610061 0020776 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'digest/md5'
module Mongo
# GridIO objects represent files in the GridFS specification. This class
# manages the reading and writing of file chunks and metadata.
class GridIO
include Mongo::WriteConcern
DEFAULT_CHUNK_SIZE = 255 * 1024
DEFAULT_CONTENT_TYPE = 'binary/octet-stream'
PROTECTED_ATTRS = [:files_id, :file_length, :client_md5, :server_md5]
attr_reader :content_type, :chunk_size, :upload_date, :files_id, :filename,
:metadata, :server_md5, :client_md5, :file_length, :file_position
# Create a new GridIO object. Note that most users will not need to use this class directly;
# the Grid and GridFileSystem classes will instantiate this class
#
# @param [Mongo::Collection] files a collection for storing file metadata.
# @param [Mongo::Collection] chunks a collection for storing file chunks.
# @param [String] filename the name of the file to open or write.
# @param [String] mode 'r' or 'w' for reading or creating a file.
#
# @option opts [Hash] :query a query selector used when opening the file in 'r' mode.
# @option opts [Hash] :query_opts any query options to be used when opening the file in 'r' mode.
# @option opts [String] :fs_name the file system prefix.
# @option opts [Integer] :chunk_size (261120) size of file chunks in bytes.
# @option opts [Hash] :metadata ({}) any additional data to store with the file.
# @option opts [ObjectId] :_id (ObjectId) a unique id for
# the file to be use in lieu of an automatically generated one.
# @option opts [String] :content_type ('binary/octet-stream') If no content type is specified,
# the content type may be inferred from the filename extension if the mime-types gem can be
# loaded. Otherwise, the content type 'binary/octet-stream' will be used.
# @option opts [String, Integer, Symbol] :w (1) Set the write concern
#
# Notes on write concern:
# When :w > 0, the chunks sent to the server
# will be validated using an md5 hash. If validation fails, an exception will be raised.
def initialize(files, chunks, filename, mode, opts={})
@files = files
@chunks = chunks
@filename = filename
@mode = mode
opts = opts.dup
@query = opts.delete(:query) || {}
@query_opts = opts.delete(:query_opts) || {}
@fs_name = opts.delete(:fs_name) || Grid::DEFAULT_FS_NAME
@write_concern = get_write_concern(opts)
@local_md5 = Digest::MD5.new if Mongo::WriteConcern.gle?(@write_concern)
@custom_attrs = {}
case @mode
when 'r' then init_read
when 'w' then init_write(opts)
else
raise GridError, "Invalid file mode #{@mode}. Mode should be 'r' or 'w'."
end
end
def [](key)
@custom_attrs[key] || instance_variable_get("@#{key.to_s}")
end
def []=(key, value)
if PROTECTED_ATTRS.include?(key.to_sym)
warn "Attempting to overwrite protected value."
return nil
else
@custom_attrs[key] = value
end
end
# Read the data from the file. If a length is specified, it will read from the
# current file position.
#
# @param [Integer] length
#
# @return [String]
# the data in the file
def read(length=nil)
return '' if @file_length.zero?
if length == 0
return ''
elsif length.nil? && @file_position.zero?
read_all
else
read_length(length)
end
end
alias_method :data, :read
# Write the given string (binary) data to the file.
#
# @param [String] io the data to write.
#
# @return [Integer] the number of bytes written.
def write(io)
raise GridError, "file not opened for write" unless @mode[0] == ?w
if io.is_a? String
if Mongo::WriteConcern.gle?(@write_concern)
@local_md5.update(io)
end
write_string(io)
else
length = 0
if Mongo::WriteConcern.gle?(@write_concern)
while(string = io.read(@chunk_size))
@local_md5.update(string)
length += write_string(string)
end
else
while(string = io.read(@chunk_size))
length += write_string(string)
end
end
length
end
end
# Position the file pointer at the provided location.
#
# @param [Integer] pos
# the number of bytes to advance the file pointer. This can be a negative
# number.
# @param [Integer] whence
# one of IO::SEEK_CUR, IO::SEEK_END, or IO::SEEK_SET
#
# @return [Integer] the new file position
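#
# @example (illustrative):
#   file.seek(10)                 # absolute position 10
#   file.seek(-5, IO::SEEK_END)   # five bytes before the end of the file
#   file.seek(3, IO::SEEK_CUR)    # three bytes past the current position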
def seek(pos, whence=IO::SEEK_SET)
raise GridError, "Seek is only allowed in read mode." unless @mode == 'r'
target_pos = case whence
when IO::SEEK_CUR
@file_position + pos
when IO::SEEK_END
@file_length + pos
when IO::SEEK_SET
pos
end
new_chunk_number = (target_pos / @chunk_size).to_i
if new_chunk_number != @current_chunk['n']
save_chunk(@current_chunk) if @mode[0] == ?w
@current_chunk = get_chunk(new_chunk_number)
end
@file_position = target_pos
@chunk_position = @file_position % @chunk_size
@file_position
end
# The current position of the file.
#
# @return [Integer]
def tell
@file_position
end
alias :pos :tell
# Rewind the file. This is equivalent to seeking to the zeroth position.
#
# @return [Integer] the position of the file after rewinding (always zero).
def rewind
raise GridError, "file not opened for read" unless @mode[0] == ?r
seek(0)
end
# Return a boolean indicating whether the position pointer is
# at the end of the file.
#
# @return [Boolean]
def eof
raise GridError, "file not opened for read #{@mode}" unless @mode[0] == ?r
@file_position >= @file_length
end
alias :eof? :eof
# Return the next line from a GridFS file. This probably
# makes sense only if you're storing plain text. This method
# has a somewhat tricky API, which it inherits from Ruby's
# StringIO#gets.
#
# @param [String, Integer] separator or length. If a separator,
# read up to the separator. If a length, read the +length+ number
# of bytes. If nil, read the entire file.
# @param [Integer] length If a separator is provided, then
# read until either finding the separator or
# passing over the +length+ number of bytes.
#
# @return [String]
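#
# @example (illustrative, for a file containing "alpha\nbeta\n"):
#   file.gets      # => "alpha\n"
#   file.gets(4)   # => "beta"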
def gets(separator="\n", length=nil)
if separator.nil?
read_all
elsif separator.is_a?(Integer)
read_length(separator)
elsif separator.length > 1
read_to_string(separator, length)
else
read_to_character(separator, length)
end
end
# Return the next byte from the GridFS file.
#
# @return [String]
def getc
read_length(1)
end
# Creates or updates the document from the files collection that
# stores the chunks' metadata. The file becomes available only after
# this method has been called.
#
# This method will be invoked automatically when
# GridFileSystem#open is passed a block. Otherwise, it must be called manually.
#
# @return [BSON::ObjectId]
def close
if @mode[0] == ?w
if @current_chunk['n'].zero? && @chunk_position.zero?
warn "Warning: Storing a file with zero length."
end
@upload_date = Time.now.utc
id = @files.insert(to_mongo_object)
end
id
end
# Read a chunk of the data from the file and yield it to the given
# block.
#
# Note that this method reads from the current file position.
#
# @yield Yields one chunk per iteration as defined by this file's
# chunk size.
#
# @return [Mongo::GridIO] self
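#
# @example Stream a file chunk by chunk (sketch; sink is a hypothetical IO-like object):
#   file.each { |chunk| sink.write(chunk) }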
def each
return read_all unless block_given?
while chunk = read(chunk_size)
yield chunk
break if chunk.empty?
end
self
end
def inspect
"#"
end
private
def create_chunk(n)
chunk = BSON::OrderedHash.new
chunk['_id'] = BSON::ObjectId.new
chunk['n'] = n
chunk['files_id'] = @files_id
chunk['data'] = ''
@chunk_position = 0
chunk
end
def save_chunk(chunk)
@chunks.save(chunk)
end
def get_chunk(n)
chunk = @chunks.find({'files_id' => @files_id, 'n' => n}).next_document
@chunk_position = 0
chunk
end
# Read a file in its entirety.
def read_all
buf = ''
if @current_chunk
buf << @current_chunk['data'].to_s
while buf.size < @file_length
@current_chunk = get_chunk(@current_chunk['n'] + 1)
break if @current_chunk.nil?
buf << @current_chunk['data'].to_s
end
@file_position = @file_length
end
buf
end
# Read a file incrementally.
def read_length(length)
cache_chunk_data
remaining = (@file_length - @file_position)
if length.nil?
to_read = remaining
else
to_read = length > remaining ? remaining : length
end
return nil unless remaining > 0
buf = ''
while to_read > 0
if @chunk_position == @chunk_data_length
@current_chunk = get_chunk(@current_chunk['n'] + 1)
cache_chunk_data
end
chunk_remainder = @chunk_data_length - @chunk_position
size = (to_read >= chunk_remainder) ? chunk_remainder : to_read
buf << @current_chunk_data[@chunk_position, size]
to_read -= size
@chunk_position += size
@file_position += size
end
buf
end
def read_to_character(character="\n", length=nil)
result = ''
len = 0
while char = getc
result << char
len += 1
break if char == character || (length ? len >= length : false)
end
result.length > 0 ? result : nil
end
def read_to_string(string="\n", length=nil)
result = ''
len = 0
match_idx = 0
match_num = string.length - 1
to_match = string[match_idx].chr
if length
matcher = lambda {|idx, num| idx < num && len < length }
else
matcher = lambda {|idx, num| idx < num}
end
while matcher.call(match_idx, match_num) && char = getc
result << char
len += 1
if char == to_match
while match_idx < match_num do
match_idx += 1
to_match = string[match_idx].chr
char = getc
break if char.nil? # stop cleanly if EOF is reached mid-match
result << char
if char != to_match
match_idx = 0
to_match = string[match_idx].chr
break
end
end
end
end
result.length > 0 ? result : nil
end
def cache_chunk_data
@current_chunk_data = @current_chunk['data'].to_s
if @current_chunk_data.respond_to?(:force_encoding)
@current_chunk_data.force_encoding("binary")
end
@chunk_data_length = @current_chunk['data'].length
end
def write_string(string)
# Ruby 1.9+ strings don't necessarily store one character per byte,
# so force a binary encoding before slicing by byte offsets.
if string.respond_to?(:force_encoding)
string.force_encoding("binary")
end
to_write = string.length
while (to_write > 0) do
if @current_chunk && @chunk_position == @chunk_size
next_chunk_number = @current_chunk['n'] + 1
@current_chunk = create_chunk(next_chunk_number)
end
chunk_available = @chunk_size - @chunk_position
step_size = (to_write > chunk_available) ? chunk_available : to_write
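# Append the next step_size bytes of the input to this chunk's data,
# re-wrapping the combined bytes as a BSON::Binary payload.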
@current_chunk['data'] = BSON::Binary.new((@current_chunk['data'].to_s << string[-to_write, step_size]).unpack("c*"))
@chunk_position += step_size
to_write -= step_size
save_chunk(@current_chunk)
end
string.length - to_write
end
# Initialize the class for reading a file.
def init_read
doc = @files.find(@query, @query_opts).next_document
raise GridFileNotFound, "Could not open file matching #{@query.inspect} #{@query_opts.inspect}" unless doc
@files_id = doc['_id']
@content_type = doc['contentType']
@chunk_size = doc['chunkSize']
@upload_date = doc['uploadDate']
@aliases = doc['aliases']
@file_length = doc['length']
@metadata = doc['metadata']
@md5 = doc['md5']
@filename = doc['filename']
@custom_attrs = doc
@current_chunk = get_chunk(0)
@file_position = 0
end
# Initialize the class for writing a file.
def init_write(opts)
opts = opts.dup
@files_id = opts.delete(:_id) || BSON::ObjectId.new
@content_type = opts.delete(:content_type) || (defined? MIME) && get_content_type || DEFAULT_CONTENT_TYPE
@chunk_size = opts.delete(:chunk_size) || DEFAULT_CHUNK_SIZE
@metadata = opts.delete(:metadata)
@aliases = opts.delete(:aliases)
@file_length = 0
opts.each {|k, v| self[k] = v}
check_existing_file if Mongo::WriteConcern.gle?(@write_concern)
@current_chunk = create_chunk(0)
@file_position = 0
end
def check_existing_file
if @files.find_one('_id' => @files_id)
raise GridError, "Attempting to overwrite with Grid#put. You must delete the file first."
end
end
def to_mongo_object
h = BSON::OrderedHash.new
h['_id'] = @files_id
h['filename'] = @filename if @filename
h['contentType'] = @content_type
h['length'] = @current_chunk ? @current_chunk['n'] * @chunk_size + @chunk_position : 0
h['chunkSize'] = @chunk_size
h['uploadDate'] = @upload_date
h['aliases'] = @aliases if @aliases
h['metadata'] = @metadata if @metadata
h['md5'] = get_md5
h.merge!(@custom_attrs)
h
end
# Get a server-side md5 and validate against the client if running with acknowledged writes
def get_md5
md5_command = BSON::OrderedHash.new
md5_command['filemd5'] = @files_id
md5_command['root'] = @fs_name
@server_md5 = @files.db.command(md5_command)['md5']
if Mongo::WriteConcern.gle?(@write_concern)
@client_md5 = @local_md5.hexdigest
if @client_md5 == @server_md5
@server_md5
else
raise GridMD5Failure, "File on server failed MD5 check"
end
else
@server_md5
end
end
# Determine the content type based on the filename.
def get_content_type
if @filename
if types = MIME::Types.type_for(@filename)
types.first.simplified unless types.empty?
end
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/legacy.rb 0000664 0000000 0000000 00000010037 12334610061 0017342 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module LegacyWriteConcern
@legacy_write_concern = true
def safe=(value)
@write_concern = value
end
def safe
if @write_concern[:w] == 0
return false
elsif @write_concern[:w] == 1
return true
else
return @write_concern
end
end
def self.from_uri(uri = ENV['MONGODB_URI'], extra_opts={})
parser = URIParser.new uri
parser.connection(extra_opts, true)
end
end
end
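# A hedged sketch of how the legacy :safe accessor maps onto write concern
# under this shim (values are illustrative, not a definitive usage guide):
#
# conn = Mongo::Connection.new('localhost', 27017) # deprecated class
# conn.safe = { :w => 2, :wtimeout => 200 } # stored as the write concern
# conn.safe # => { :w => 2, :wtimeout => 200 } (false when :w == 0, true when :w == 1)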
module Mongo
# @deprecated Use Mongo::MongoClient instead. Support will be removed after
# v2.0. Please see old documentation for the Connection class.
class Connection < MongoClient
include Mongo::LegacyWriteConcern
def initialize(*args)
if args.last.is_a?(Hash)
opts = args.pop
write_concern_from_legacy(opts)
args.push(opts)
end
super
end
end
# @deprecated Use Mongo::MongoReplicaSetClient instead. Support will be
# removed after v2.0. Please see old documentation for the
# ReplSetConnection class.
class ReplSetConnection < MongoReplicaSetClient
include Mongo::LegacyWriteConcern
def initialize(*args)
if args.last.is_a?(Hash)
opts = args.pop
write_concern_from_legacy(opts)
args.push(opts)
end
super
end
end
# @deprecated Use Mongo::MongoShardedClient instead. Support will be removed
# after v2.0. Please see old documentation for the ShardedConnection class.
class ShardedConnection < MongoShardedClient
include Mongo::LegacyWriteConcern
def initialize(*args)
if args.last.is_a?(Hash)
opts = args.pop
write_concern_from_legacy(opts)
args.push(opts)
end
super
end
end
class MongoClient
# @deprecated This method is no longer in use and never needs to be called
# directly. Support will be removed after v2.0
def authenticate_pools
@primary_pool.authenticate_existing
end
# @deprecated This method is no longer in use and never needs to be called
# directly. Support will be removed after v2.0
def logout_pools(database)
@primary_pool.logout_existing(database)
end
# @deprecated This method is no longer in use and never needs to be called
# directly. Support will be removed after v2.0
def apply_saved_authentication
true
end
end
class MongoReplicaSetClient
# @deprecated This method is no longer in use and never needs to be called
# directly. Support will be removed after v2.0
def authenticate_pools
@manager.pools.each { |pool| pool.authenticate_existing }
end
# @deprecated This method is no longer in use and never needs to be called
# directly. Support will be removed after v2.0
def logout_pools(database)
@manager.pools.each { |pool| pool.logout_existing(database) }
end
end
class DB
# @deprecated Please use MongoClient#issue_authentication instead. Support
# will be removed after v2.0
def issue_authentication(username, password, save_auth=true, opts={})
auth = Authentication.validate_credentials({
:db_name => self.name,
:username => username,
:password => password
})
opts[:save_auth] = save_auth
@client.issue_authentication(auth, opts)
end
# @deprecated Please use MongoClient#issue_logout instead. Support will be
# removed after v2.0
def issue_logout(opts={})
@client.issue_logout(self.name, opts)
end
end
end
ruby-mongo-1.10.0/lib/mongo/mongo_client.rb 0000664 0000000 0000000 00000061145 12334610061 0020561 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Instantiates and manages connections to MongoDB.
class MongoClient
include Mongo::Logging
include Mongo::Networking
include Mongo::WriteConcern
include Mongo::Authentication
# Wire version
RELEASE_2_4_AND_BEFORE = 0 # Everything before we started tracking.
AGG_RETURNS_CURSORS = 1 # The aggregation command may now be requested to return cursors.
BATCH_COMMANDS = 2 # insert, update, and delete batch command
MAX_WIRE_VERSION = BATCH_COMMANDS # supported by this client implementation
MIN_WIRE_VERSION = RELEASE_2_4_AND_BEFORE # supported by this client implementation
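# Feature support can be tested against the server's advertised range, e.g.
# (an illustrative sketch; assumes +client+ is a connected MongoClient):
# client.wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS) # => true on 2.6+ servers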
# Server command headroom
COMMAND_HEADROOM = 16_384
APPEND_HEADROOM = COMMAND_HEADROOM / 2
SERIALIZE_HEADROOM = APPEND_HEADROOM / 2
DEFAULT_MAX_WRITE_BATCH_SIZE = 1000
Mutex = ::Mutex
ConditionVariable = ::ConditionVariable
DEFAULT_HOST = 'localhost'
DEFAULT_PORT = 27017
DEFAULT_DB_NAME = 'test'
GENERIC_OPTS = [:auths, :logger, :connect, :db_name]
TIMEOUT_OPTS = [:timeout, :op_timeout, :connect_timeout]
SSL_OPTS = [:ssl, :ssl_key, :ssl_cert, :ssl_verify, :ssl_ca_cert, :ssl_key_pass_phrase]
POOL_OPTS = [:pool_size, :pool_timeout]
READ_PREFERENCE_OPTS = [:read, :tag_sets, :secondary_acceptable_latency_ms]
WRITE_CONCERN_OPTS = [:w, :j, :fsync, :wtimeout]
CLIENT_ONLY_OPTS = [:slave_ok]
mongo_thread_local_accessor :connections
attr_reader :logger,
:size,
:auths,
:primary,
:write_concern,
:host_to_try,
:pool_size,
:connect_timeout,
:pool_timeout,
:primary_pool,
:socket_class,
:socket_opts,
:op_timeout,
:tag_sets,
:acceptable_latency,
:read,
:max_wire_version,
:min_wire_version,
:max_write_batch_size
# Create a connection to single MongoDB instance.
#
# If no args are provided, it will check ENV["MONGODB_URI"]
.
#
# You may specify whether connection to slave is permitted.
# In all cases, the default host is "localhost" and the default port is 27017.
#
# If you're connecting to a replica set, you'll need to use MongoReplicaSetClient.new instead.
#
# Once connected to a replica set, you can find out which nodes are primary, secondary, and
# arbiters with the corresponding accessors: MongoClient#primary, MongoClient#secondaries, and
# MongoClient#arbiters. This is useful if your application needs to connect manually to nodes other
# than the primary.
#
# @overload initialize(host, port, opts={})
# @param [String] host hostname for the target MongoDB server.
# @param [Integer] port specify a port number here if only one host is being specified.
# @param [Hash] opts hash of optional settings and configuration values.
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Notes about Write-Concern Options:
# Write concern options are propagated to objects instantiated from this MongoClient.
# These defaults can be overridden upon instantiation of any object by explicitly setting an options hash
# on initialization.
#
# @option opts [Boolean] :ssl (false) If true, create the connection to the server using SSL.
# @option opts [String] :ssl_cert (nil) The certificate file used to identify the local connection against MongoDB.
# @option opts [String] :ssl_key (nil) The private keyfile used to identify the local connection against MongoDB.
# Note that even if the key is stored in the same file as the certificate, both need to be explicitly specified.
# @option opts [String] :ssl_key_pass_phrase (nil) A passphrase for the private key.
# @option opts [Boolean] :ssl_verify (nil) Specifies whether or not peer certification validation should occur.
# @option opts [String] :ssl_ca_cert (nil) The ca_certs file contains a set of concatenated "certification authority"
# certificates, which are used to validate certificates passed from the other end of the connection.
# Required for :ssl_verify.
# @option opts [Boolean] :slave_ok (false) Must be set to +true+ when connecting
# to a single, slave node.
# @option opts [Logger, #debug] :logger (nil) A Logger instance for debugging driver ops. Note that
# logging negatively impacts performance; therefore, it should not be used for high-performance apps.
# @option opts [Integer] :pool_size (1) The maximum number of socket connections allowed per
# connection pool. Note: this setting is relevant only for multi-threaded applications.
# @option opts [Float] :pool_timeout (5.0) When all of the connections in a pool are checked out,
# this is the number of seconds to wait for a new connection to be released before throwing an exception.
# Note: this setting is relevant only for multi-threaded applications.
# @option opts [Float] :op_timeout (nil) The number of seconds to wait for a read operation to time out.
# Disabled by default.
# @option opts [Float] :connect_timeout (nil) The number of seconds to wait before timing out a
# connection attempt.
#
# @example localhost, 27017 (or ENV["MONGODB_URI"]
if available)
# MongoClient.new
#
# @example localhost, 27017
# MongoClient.new("localhost")
#
# @example localhost, 3000, max 5 connections, with max 5 seconds of wait time.
# MongoClient.new("localhost", 3000, :pool_size => 5, :pool_timeout => 5)
#
# @example localhost, 3000, where this node may be a slave
# MongoClient.new("localhost", 3000, :slave_ok => true)
#
# @example Unix Domain Socket
# MongoClient.new("/var/run/mongodb.sock")
#
# @see http://api.mongodb.org/ruby/current/file.REPLICA_SETS.html Replica sets in Ruby
#
# @raise [ReplicaSetConnectionError] This is raised if a replica set name is specified and the
# driver fails to connect to a replica set with that name.
#
# @raise [MongoArgumentError] If called with no arguments and ENV["MONGODB_URI"]
implies a replica set.
def initialize(*args)
opts = args.last.is_a?(Hash) ? args.pop : {}
@host, @port = parse_init(args[0], args[1], opts)
# Lock for request ids.
@id_lock = Mutex.new
# Connection pool for primary node
@primary = nil
@primary_pool = nil
@mongos = false
# Not set for direct connection
@tag_sets = []
@acceptable_latency = 15
@max_bson_size = nil
@max_message_size = nil
@max_wire_version = nil
@min_wire_version = nil
@max_write_batch_size = nil
check_opts(opts)
setup(opts.dup)
end
# DEPRECATED
#
# Initialize a connection to a MongoDB replica set using an array of seed nodes.
#
# The seed nodes specified will be used on the initial connection to the replica set, but note
# that this list of nodes will be replaced by the list of canonical nodes returned by running the
# is_master command on the replica set.
#
# @param nodes [Array] An array of arrays, each of which specifies a host and port.
# @param opts [Hash] Any of the available options that can be passed to MongoClient.new.
#
# @option opts [String] :rs_name (nil) The name of the replica set to connect to. An exception will be
# raised if unable to connect to a replica set with this name.
# @option opts [Boolean] :read_secondary (false) When true, this connection object will pick a random slave
# to send reads to.
#
# @example
# Mongo::MongoClient.multi([["db1.example.com", 27017], ["db2.example.com", 27017]])
#
# @example This connection will read from a random secondary node.
# Mongo::MongoClient.multi([["db1.example.com", 27017], ["db2.example.com", 27017], ["db3.example.com", 27017]],
# :read_secondary => true)
#
# @return [Mongo::MongoClient]
#
# @deprecated
def self.multi(nodes, opts={})
warn 'MongoClient.multi is now deprecated and will be removed in v2.0. Please use MongoReplicaSetClient.new instead.'
MongoReplicaSetClient.new(nodes, opts)
end
# Initialize a connection to MongoDB using the MongoDB URI spec.
#
# Since MongoClient.new cannot be used with any ENV["MONGODB_URI"]
that has multiple hosts (implying a replicaset),
# you may use this when the type of your connection varies by environment and should be determined solely from ENV["MONGODB_URI"]
.
#
# @param uri [String]
# A string of the format mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database]
#
# @param [Hash] extra_opts Any of the options available for MongoClient.new
#
# @return [Mongo::MongoClient, Mongo::MongoReplicaSetClient]
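#
# @example Connect from a URI (illustrative values):
# client = Mongo::MongoClient.from_uri('mongodb://user:pass@localhost:27017/test')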
def self.from_uri(uri = ENV['MONGODB_URI'], extra_opts={})
parser = URIParser.new(uri)
parser.connection(extra_opts)
end
# The host name used for this connection.
#
# @return [String]
def host
@primary_pool.host
end
# The port used for this connection.
#
# @return [Integer]
def port
@primary_pool.port
end
def host_port
[@host, @port]
end
# Flush all pending writes to datafiles.
#
# @return [BSON::OrderedHash] the command response
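#
# @example Lock, check, and unlock (a hedged sketch; assumes +client+ is a
# connected MongoClient with access to the admin database):
# client.lock!
# client.locked? # => true
# client.unlock!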
def lock!
cmd = BSON::OrderedHash.new
cmd[:fsync] = 1
cmd[:lock] = true
self['admin'].command(cmd)
end
# Is this database locked against writes?
#
# @return [Boolean]
def locked?
[1, true].include? self['admin']['$cmd.sys.inprog'].find_one['fsyncLock']
end
# Unlock a previously fsync-locked mongod process.
#
# @return [BSON::OrderedHash] command response
def unlock!
self['admin']['$cmd.sys.unlock'].find_one
end
# Return a hash with all database names
# and their respective sizes on disk.
#
# @return [Hash]
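#
# @example (illustrative output):
# client.database_info # => { "admin" => 83886080, "test" => 218103808 }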
def database_info
doc = self['admin'].command({:listDatabases => 1})
doc['databases'].inject({}) do |info, db|
info[db['name']] = db['sizeOnDisk'].to_i
info
end
end
# Return an array of database names.
#
# @return [Array]
def database_names
database_info.keys
end
# Return a database with the given name.
# See DB#new for valid options hash parameters.
#
# @param name [String] The name of the database.
# @param opts [Hash] A hash of options to be passed to the DB constructor.
#
# @return [DB] The DB instance.
def db(name = nil, opts = {})
DB.new(name || @db_name || DEFAULT_DB_NAME, self, opts)
end
# Shortcut for returning a database. Use MongoClient#db to accept options.
#
# @param name [String] The name of the database.
#
# @return [DB] The DB instance.
def [](name)
DB.new(name, self)
end
def refresh; end
def pinned_pool
@primary_pool
end
def pin_pool(pool, read_prefs); end
def unpin_pool; end
# Drop a database.
#
# @param database [String] name of an existing database.
def drop_database(database)
self[database].command(:dropDatabase => 1)
end
# Copy the database +from+ to +to+ on localhost. The +from+ database is
# assumed to be on localhost, but an alternate host can be specified.
#
# @param from [String] name of the database to copy from.
# @param to [String] name of the database to copy to.
# @param from_host [String] host of the 'from' database.
# @param username [String] username (applies to 'from' db)
# @param password [String] password (applies to 'from' db)
#
# @note This command only supports the MONGODB-CR authentication mechanism.
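#
# @example Copy a database between hosts (illustrative host and credentials):
# client.copy_database('app_db', 'app_db_copy', 'db1.example.com', 'admin', 'secret')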
def copy_database(from, to, from_host=DEFAULT_HOST, username=nil, password=nil)
oh = BSON::OrderedHash.new
oh[:copydb] = 1
oh[:fromhost] = from_host
oh[:fromdb] = from
oh[:todb] = to
if username || password
unless username && password
raise MongoArgumentError,
'Both username and password must be supplied for authentication.'
end
nonce_cmd = BSON::OrderedHash.new
nonce_cmd[:copydbgetnonce] = 1
nonce_cmd[:fromhost] = from_host
result = self['admin'].command(nonce_cmd)
oh[:nonce] = result['nonce']
oh[:username] = username
oh[:key] = Mongo::Authentication.auth_key(username, password, oh[:nonce])
end
self['admin'].command(oh)
end
# Checks if a server is alive. This command will return immediately
# even if the server is in a lock.
#
# @return [Hash]
def ping
self['admin'].command({:ping => 1})
end
# Get the build information for the current connection.
#
# @return [Hash]
def server_info
self['admin'].command({:buildinfo => 1})
end
# Get the build version of the current server.
#
# @return [Mongo::ServerVersion]
# object allowing easy comparability of version.
def server_version
ServerVersion.new(server_info['version'])
end
# Is it okay to connect to a slave?
#
# @return [Boolean]
def slave_ok?
@slave_ok
end
def mongos?
@mongos
end
# Create a new socket and attempt to connect to master.
# If successful, sets host and port to master and returns the socket.
#
# If connecting to a replica set, this method will replace the
# initially-provided seed list with any nodes known to the set.
#
# @raise [ConnectionFailure] if unable to connect to any host or port.
def connect
close
config = check_is_master(host_port)
if config
if config['ismaster'] == 1 || config['ismaster'] == true
@read_primary = true
elsif @slave_ok
@read_primary = false
end
if config.has_key?('msg') && config['msg'] == 'isdbgrid'
@mongos = true
end
@max_bson_size = config['maxBsonObjectSize']
@max_message_size = config['maxMessageSizeBytes']
@max_wire_version = config['maxWireVersion']
@min_wire_version = config['minWireVersion']
@max_write_batch_size = config['maxWriteBatchSize']
check_wire_version_in_range
set_primary(host_port)
end
unless connected?
raise ConnectionFailure,
"Failed to connect to a master node at #{host_port.join(":")}"
end
true
end
alias :reconnect :connect
# NOTE: a client currently counts as connected when its primary pool exists
# and is open. This check probably doesn't need to be more stringent, since
# if any node raises a connection failure, all nodes will be closed.
def connected?
!!(@primary_pool && !@primary_pool.closed?)
end
# Determine if the connection is active. In a normal case the *server_info* operation
# will be performed without issues, but if the connection was dropped by the server or
# for some reason the sockets are unsynchronized, a ConnectionFailure will be raised and
# the return will be false.
#
# @return [Boolean]
def active?
return false unless connected?
ping
true
rescue ConnectionFailure
false
end
# Determine whether we're reading from a primary node. If false,
# this connection connects to a secondary node and @slave_ok is true.
#
# @return [Boolean]
def read_primary?
@read_primary
end
alias :primary? :read_primary?
# The socket pool that this connection reads from.
#
# @return [Mongo::Pool]
def read_pool
@primary_pool
end
# Close the connection to the database.
def close
@primary_pool.close if @primary_pool
@primary_pool = nil
@primary = nil
end
# Returns the maximum BSON object size as returned by the core server.
# Use the 4MB default when the server doesn't report this.
#
# @return [Integer]
def max_bson_size
@max_bson_size || DEFAULT_MAX_BSON_SIZE
end
def max_message_size
@max_message_size || max_bson_size * MESSAGE_SIZE_FACTOR
end
def max_wire_version
@max_wire_version || 0
end
def min_wire_version
@min_wire_version || 0
end
def max_write_batch_size
@max_write_batch_size || DEFAULT_MAX_WRITE_BATCH_SIZE
end
def wire_version_feature?(feature)
min_wire_version <= feature && feature <= max_wire_version
end
def primary_wire_version_feature?(feature)
min_wire_version <= feature && feature <= max_wire_version
end
def use_write_command?(write_concern)
write_concern[:w] != 0 && primary_wire_version_feature?(Mongo::MongoClient::BATCH_COMMANDS)
end
# Checkout a socket for reading (i.e., a secondary node).
# Note: this is overridden in MongoReplicaSetClient.
def checkout_reader(read_preference)
connect unless connected?
@primary_pool.checkout
end
# Checkout a socket for writing (i.e., a primary node).
# Note: this is overridden in MongoReplicaSetClient.
def checkout_writer
connect unless connected?
@primary_pool.checkout
end
# Check a socket back into its pool.
# Note: this is overridden in MongoReplicaSetClient.
def checkin(socket)
if @primary_pool && socket && socket.pool
socket.checkin
end
end
# Internal method for checking isMaster() on a given node.
#
# @param node [Array] Port and host for the target node
# @return [Hash] Response from isMaster()
#
# @private
def check_is_master(node)
begin
host, port = *node
config = nil
socket = @socket_class.new(host, port, @op_timeout, @connect_timeout, @socket_opts)
if @connect_timeout
Timeout::timeout(@connect_timeout, OperationTimeout) do
config = self['admin'].command({:isMaster => 1}, :socket => socket)
end
else
config = self['admin'].command({:isMaster => 1}, :socket => socket)
end
rescue OperationFailure, SocketError, SystemCallError, IOError
close
ensure
socket.close unless socket.nil? || socket.closed?
end
config
end
protected
def valid_opts
GENERIC_OPTS +
CLIENT_ONLY_OPTS +
POOL_OPTS +
READ_PREFERENCE_OPTS +
WRITE_CONCERN_OPTS +
TIMEOUT_OPTS +
SSL_OPTS
end
def check_opts(opts)
bad_opts = opts.keys.reject { |opt| valid_opts.include?(opt) }
unless bad_opts.empty?
bad_opts.each {|opt| warn "#{opt} is not a valid option for #{self.class}"}
end
end
# Parse option hash
def setup(opts)
@slave_ok = opts.delete(:slave_ok)
@ssl = opts.delete(:ssl)
@unix = @host ? @host.end_with?('.sock') : false
# if ssl options are present, but ssl is nil/false raise for misconfig
ssl_opts = opts.keys.select { |k| k.to_s.start_with?('ssl') }
if ssl_opts.size > 0 && !@ssl
raise MongoArgumentError, "SSL has not been enabled (:ssl=false) " +
"but the following SSL related options were " +
"specified: #{ssl_opts.join(', ')}"
end
@socket_opts = {}
if @ssl
# construct ssl socket opts
@socket_opts[:key] = opts.delete(:ssl_key)
@socket_opts[:cert] = opts.delete(:ssl_cert)
@socket_opts[:verify] = opts.delete(:ssl_verify)
@socket_opts[:ca_cert] = opts.delete(:ssl_ca_cert)
@socket_opts[:key_pass_phrase] = opts.delete(:ssl_key_pass_phrase)
# verify peer requires ca_cert, raise if only one is present
if @socket_opts[:verify] && !@socket_opts[:ca_cert]
raise MongoArgumentError,
'If :ssl_verify_mode has been specified, then you must include ' +
':ssl_ca_cert in order to perform server validation.'
end
# if we have a keyfile passphrase but no key file, raise
if @socket_opts[:key_pass_phrase] && !@socket_opts[:key]
raise MongoArgumentError,
'If :ssl_key_pass_phrase has been specified, then you must include ' +
':ssl_key, the passphrase-protected keyfile.'
end
@socket_class = Mongo::SSLSocket
elsif @unix
@socket_class = Mongo::UNIXSocket
else
@socket_class = Mongo::TCPSocket
end
@db_name = opts.delete(:db_name)
@auths = opts.delete(:auths) || Set.new
# Pool size and timeout.
@pool_size = opts.delete(:pool_size) || 1
if opts[:timeout]
warn 'The :timeout option has been deprecated ' +
'and will be removed in the 2.0 release. ' +
'Use :pool_timeout instead.'
end
@pool_timeout = opts.delete(:pool_timeout) || opts.delete(:timeout) || 5.0
# Timeout on socket read operation.
@op_timeout = opts.delete(:op_timeout)
# Timeout on socket connect.
@connect_timeout = opts.delete(:connect_timeout) || 30
@logger = opts.delete(:logger)
if @logger
write_logging_startup_message
end
# Determine read preference
if defined?(@slave_ok) && (@slave_ok) || defined?(@read_secondary) && @read_secondary
@read = :secondary_preferred
else
@read = opts.delete(:read) || :primary
end
Mongo::ReadPreference::validate(@read)
@tag_sets = opts.delete(:tag_sets) || []
@acceptable_latency = opts.delete(:secondary_acceptable_latency_ms) || 15
# Connection level write concern options.
@write_concern = get_write_concern(opts)
connect if opts.fetch(:connect, true)
end
private
# Parses client initialization info from MONGODB_URI env variable
def parse_init(host, port, opts)
if host.nil? && port.nil? && ENV.has_key?('MONGODB_URI')
parser = URIParser.new(ENV['MONGODB_URI'])
if parser.replicaset?
raise MongoArgumentError,
'ENV[\'MONGODB_URI\'] implies a replica set.'
end
opts.merge!(parser.connection_options)
[parser.host, parser.port]
else
host = host[1...-1] if host && host[0,1] == '[' # ipv6 support
[host || DEFAULT_HOST, port || DEFAULT_PORT]
end
end
# Set the specified node as primary
def set_primary(node)
host, port = *node
@primary = [host, port]
@primary_pool = Pool.new(self, host, port, :size => @pool_size, :timeout => @pool_timeout)
end
# Ensure the client's supported wire-version range overlaps the server's.
def check_wire_version_in_range
unless MIN_WIRE_VERSION <= max_wire_version &&
MAX_WIRE_VERSION >= min_wire_version
close
raise ConnectionFailure,
"Client wire-version range #{MIN_WIRE_VERSION} to " +
"#{MAX_WIRE_VERSION} does not support server range " +
"#{min_wire_version} to #{max_wire_version}, please update " +
"clients or servers"
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/mongo_replica_set_client.rb 0000664 0000000 0000000 00000043034 12334610061 0023130 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Instantiates and manages connections to a MongoDB replica set.
class MongoReplicaSetClient < MongoClient
include ReadPreference
include ThreadLocalVariableManager
REPL_SET_OPTS = [
:refresh_mode,
:refresh_interval,
:read_secondary,
:rs_name,
:name
]
attr_reader :replica_set_name,
:seeds,
:refresh_interval,
:refresh_mode,
:refresh_version,
:manager
# Create a connection to a MongoDB replica set.
#
# If no args are provided, it will check ENV["MONGODB_URI"]
.
#
# Once connected to a replica set, you can find out which nodes are primary, secondary, and
# arbiters with the corresponding accessors: MongoClient#primary, MongoClient#secondaries, and
# MongoClient#arbiters. This is useful if your application needs to connect manually to nodes other
# than the primary.
#
# @overload initialize(seeds=ENV["MONGODB_URI"], opts={})
# @param [Array<String>, Array<Array(String, Integer)>] seeds
#
# @option opts [String, Integer, Symbol] :w (1) Set default number of nodes to which a write
# should be acknowledged.
# @option opts [Integer] :wtimeout (nil) Set replica set acknowledgement timeout.
# @option opts [Boolean] :j (false) If true, block until write operations have been committed
# to the journal. Cannot be used in combination with 'fsync'. Prior to MongoDB 2.6 this option was
# ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will
# fail with an exception if this option is used when the server is running without journaling.
# @option opts [Boolean] :fsync (false) If true, and the server is running without journaling, blocks until
# the server has synced all data files to disk. If the server is running with journaling, this acts the same as
# the 'j' option, blocking until write operations have been committed to the journal.
# Cannot be used in combination with 'j'.
#
# Notes about write concern options:
# Write concern options are propagated to objects instantiated from this MongoReplicaSetClient.
# These defaults can be overridden upon instantiation of any object by explicitly setting an options hash
# on initialization.
# @option opts [:primary, :primary_preferred, :secondary, :secondary_preferred, :nearest] :read (:primary)
# A "read preference" determines the candidate replica set members to which a query or command can be sent.
# [:primary]
# * Read from primary only.
# * Cannot be combined with tags.
# [:primary_preferred]
# * Read from primary if available, otherwise read from a secondary.
# [:secondary]
# * Read from secondary if available.
# [:secondary_preferred]
# * Read from a secondary if available, otherwise read from the primary.
# [:nearest]
# * Read from any member.
# @option opts [Array<Hash{ String, Symbol => Tag Value }>] :tag_sets ([])
# Read from replica-set members with these tags.
# @option opts [Integer] :secondary_acceptable_latency_ms (15) The acceptable
# latency in milliseconds beyond that of the nearest available member for a
# member to be considered "near".
# @option opts [Logger] :logger (nil) Logger instance to receive driver operation log.
# @option opts [Integer] :pool_size (1) The maximum number of socket connections allowed per
# connection pool. Note: this setting is relevant only for multi-threaded applications.
# @option opts [Float] :pool_timeout (5.0) When all of the connections in a pool are checked out,
# this is the number of seconds to wait for a new connection to be released before throwing an exception.
# Note: this setting is relevant only for multi-threaded applications.
# @option opts [Float] :op_timeout (nil) The number of seconds to wait for a read operation to time out.
# @option opts [Float] :connect_timeout (30) The number of seconds to wait before timing out a
# connection attempt.
# @option opts [Boolean] :ssl (false) If true, create the connection to the server using SSL.
# @option opts [String] :ssl_cert (nil) The certificate file used to identify the local connection against MongoDB.
# @option opts [String] :ssl_key (nil) The private keyfile used to identify the local connection against MongoDB.
# Note that even if the key is stored in the same file as the certificate, both need to be explicitly specified.
# @option opts [String] :ssl_key_pass_phrase (nil) A passphrase for the private key.
# @option opts [Boolean] :ssl_verify (nil) Specifies whether or not peer certification validation should occur.
# @option opts [String] :ssl_ca_cert (nil) The ca_certs file contains a set of concatenated "certification authority"
# certificates, which are used to validate certificates passed from the other end of the connection.
# Required for :ssl_verify.
# @option opts [Boolean] :refresh_mode (false) Set this to :sync to periodically update the
# state of the connection every :refresh_interval seconds. Replica set connection failures
# will always trigger a complete refresh. This option is useful when you want to add new nodes
# or remove replica set nodes not currently in use by the driver.
# @option opts [Integer] :refresh_interval (90) If :refresh_mode is enabled, this is the number of seconds
# between calls to check the replica set's state.
# @note the number of seed nodes does not have to be equal to the number of replica set members.
# The purpose of seed nodes is to permit the driver to find at least one replica set member even if a member is down.
#
# @example Connect to a replica set and provide two seed nodes.
# MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'])
#
# @example Connect to a replica set providing two seed nodes and ensuring a connection to the replica set named 'prod':
# MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'], :name => 'prod')
#
# @example Connect to a replica set providing two seed nodes and allowing reads from a secondary node:
# MongoReplicaSetClient.new(['localhost:30000', 'localhost:30001'], :read => :secondary)
#
# @see http://api.mongodb.org/ruby/current/file.REPLICA_SETS.html Replica sets in Ruby
#
# @raise [MongoArgumentError] This is raised for usage errors.
#
# @raise [ConnectionFailure] This is raised for the various connection failures.
def initialize(*args)
opts = args.last.is_a?(Hash) ? args.pop : {}
nodes = args.shift || []
raise MongoArgumentError, "Too many arguments" unless args.empty?
# This is temporary until support for the old format is dropped
@seeds = nodes.collect do |node|
if node.is_a?(Array)
warn "Initiating a MongoReplicaSetClient with seeds passed as individual [host, port] array arguments is deprecated."
warn "Please specify hosts as an array of 'host:port' strings; the old format will be removed in v2.0"
node
elsif node.is_a?(String)
Support.normalize_seeds(node)
else
raise MongoArgumentError "Bad seed format!"
end
end
if @seeds.empty? && ENV.has_key?('MONGODB_URI')
parser = URIParser.new ENV['MONGODB_URI']
if parser.direct?
raise MongoArgumentError,
"ENV['MONGODB_URI'] implies a direct connection."
end
opts = parser.connection_options.merge! opts
@seeds = parser.nodes
end
if @seeds.length.zero?
raise MongoArgumentError, "A MongoReplicaSetClient requires at least one seed node."
end
@seeds.freeze
# Refresh
@last_refresh = Time.now
@refresh_version = 0
# No connection manager by default.
@manager = nil
# Lock for request ids.
@id_lock = Mutex.new
@connected = false
@connect_mutex = Mutex.new
@mongos = false
check_opts(opts)
setup(opts.dup)
end
def valid_opts
super + REPL_SET_OPTS - CLIENT_ONLY_OPTS
end
def inspect
"<Mongo::MongoReplicaSetClient:0x#{self.object_id.to_s(16)} @seeds=#{@seeds.inspect} @connected=#{@connected}>"
end
# Initiate a connection to the replica set.
def connect(force = !connected?)
return unless force
log(:info, "Connecting...")
# Prevent recursive connection attempts from the same thread.
# This is done rather than using a Monitor to prevent potentially recursing
# infinitely while attempting to connect and continually failing. Instead, fail fast.
raise ConnectionFailure, "Failed to get node data." if thread_local[:locks][:connecting] == true
current_version = @refresh_version
@connect_mutex.synchronize do
# don't try to connect if another thread has done so while we were waiting for the lock
return unless current_version == @refresh_version
begin
thread_local[:locks][:connecting] = true
if @manager
ensure_manager
@manager.refresh!(@seeds)
else
@manager = PoolManager.new(self, @seeds)
ensure_manager
@manager.connect
end
ensure
thread_local[:locks][:connecting] = false
end
@refresh_version += 1
if @manager.pools.empty?
close
raise ConnectionFailure, "Failed to connect to any node."
end
check_wire_version_in_range
@connected = true
end
end
# Determine whether a replica set refresh is
# required. If so, run a hard refresh. You can
# force a hard refresh by running
# MongoReplicaSetClient#hard_refresh!
#
# @return [Boolean] +true+ unless a hard refresh
# is run and the refresh lock can't be acquired.
def refresh(opts={})
if !connected?
log(:info, "Trying to check replica set health but not " +
"connected...")
return hard_refresh!
end
log(:debug, "Checking replica set connection health...")
ensure_manager
@manager.check_connection_health
if @manager.refresh_required?
return hard_refresh!
end
return true
end
# Force a hard refresh of this connection's view
# of the replica set.
#
# @return [Boolean] +true+ if hard refresh
# occurred. +false+ is returned when unable
# to get the refresh lock.
def hard_refresh!
log(:info, "Initiating hard refresh...")
connect(true)
return true
end
def connected?
@connected && !@manager.pools.empty?
end
# @deprecated
def connecting?
warn "MongoReplicaSetClient#connecting? is deprecated and will be removed in v2.0."
false
end
# The replica set primary's host name.
#
# @return [String]
def host
@manager.primary_pool.host
end
# The replica set primary's port.
#
# @return [Integer]
def port
@manager.primary_pool.port
end
def nodes
warn "MongoReplicaSetClient#nodes is DEPRECATED and will be removed in v2.0. " +
"Please use MongoReplicaSetClient#seeds instead."
@seeds
end
# Determine whether we're reading from a primary node. If false,
# this connection connects to a secondary node and @read_secondaries is true.
#
# @return [Boolean]
def read_primary?
read_pool == primary_pool
end
alias :primary? :read_primary?
# Close the connection to the database.
def close(opts={})
if opts[:soft]
@manager.close(:soft => true) if @manager
else
@manager.close if @manager
end
# Clear the reference to this object.
thread_local[:managers].delete(self)
unpin_pool
@connected = false
end
# If a ConnectionFailure is raised, this method will be called
# to close the connection and reset connection values.
# @deprecated
def reset_connection
close
warn "MongoReplicaSetClient#reset_connection is now deprecated and will be removed in v2.0. " +
"Use MongoReplicaSetClient#close instead."
end
# Returns +true+ if it's okay to read from a secondary node.
#
# This method exists primarily so that Cursor objects will
# generate query messages with a slaveOkay value of +true+.
#
# @return [Boolean]
def slave_ok?
@read != :primary
end
# Generic socket checkout
# Takes a block that returns a socket from pool
def checkout
ensure_manager
connected? ? sync_refresh : connect
begin
socket = yield
rescue => ex
checkin(socket) if socket
raise ex
end
if socket
return socket
else
@connected = false
raise ConnectionFailure.new("Could not checkout a socket.")
end
end
def checkout_reader(read_pref={})
checkout do
pool = read_pool(read_pref)
get_socket_from_pool(pool)
end
end
# Checkout a socket for writing (i.e., a primary node).
def checkout_writer
checkout do
get_socket_from_pool(primary_pool)
end
end
# Checkin a socket used for reading.
def checkin(socket)
if socket && socket.pool
socket.checkin
end
sync_refresh
end
def ensure_manager
thread_local[:managers][self] = @manager
end
def pinned_pool
thread_local[:pinned_pools][@manager.object_id] if @manager
end
def pin_pool(pool, read_preference)
if @manager
thread_local[:pinned_pools][@manager.object_id] = {
:pool => pool,
:read_preference => read_preference
}
end
end
def unpin_pool
thread_local[:pinned_pools].delete @manager.object_id if @manager
end
def get_socket_from_pool(pool)
begin
pool.checkout if pool
rescue ConnectionFailure
nil
end
end
def local_manager
thread_local[:managers][self]
end
def arbiters
local_manager.arbiters.nil? ? [] : local_manager.arbiters
end
def primary
local_manager ? local_manager.primary : nil
end
# Note: might want to freeze these after connecting.
def secondaries
local_manager ? local_manager.secondaries : []
end
def hosts
local_manager ? local_manager.hosts : []
end
def primary_pool
local_manager ? local_manager.primary_pool : nil
end
def secondary_pool
local_manager ? local_manager.secondary_pool : nil
end
def secondary_pools
local_manager ? local_manager.secondary_pools : []
end
def pools
local_manager ? local_manager.pools : []
end
def tag_map
local_manager ? local_manager.tag_map : {}
end
def max_bson_size
return local_manager.max_bson_size if local_manager
DEFAULT_MAX_BSON_SIZE
end
def max_message_size
return local_manager.max_message_size if local_manager
max_bson_size * MESSAGE_SIZE_FACTOR
end
def max_wire_version
return local_manager.max_wire_version if local_manager
0
end
def min_wire_version
return local_manager.min_wire_version if local_manager
0
end
def primary_wire_version_feature?(feature)
local_manager && local_manager.primary_pool && local_manager.primary_pool.node.wire_version_feature?(feature)
end
def max_write_batch_size
local_manager && local_manager.primary_pool && local_manager.primary_pool.node.max_write_batch_size
end
private
# Parse option hash
def setup(opts)
# Refresh
@refresh_mode = opts.delete(:refresh_mode) || false
@refresh_interval = opts.delete(:refresh_interval) || 90
if @refresh_mode && @refresh_interval < 60
@refresh_interval = 60 unless ENV['TEST_MODE'] == 'TRUE'
end
if @refresh_mode == :async
warn ":async refresh mode has been deprecated. Refresh
mode will be disabled."
elsif ![:sync, false].include?(@refresh_mode)
raise MongoArgumentError,
"Refresh mode must be either :sync or false."
end
if opts[:read_secondary]
warn ":read_secondary options has now been deprecated and will " +
"be removed in driver v2.0. Use the :read option instead."
@read_secondary = opts.delete(:read_secondary) || false
end
# Replica set name
if opts[:rs_name]
warn ":rs_name option has been deprecated and will be removed in v2.0. " +
"Please use :name instead."
@replica_set_name = opts.delete(:rs_name)
else
@replica_set_name = opts.delete(:name)
end
super opts
end
def sync_refresh
if @refresh_mode == :sync &&
((Time.now - @last_refresh) > @refresh_interval)
@last_refresh = Time.now
refresh
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/mongo_sharded_client.rb 0000664 0000000 0000000 00000011077 12334610061 0022252 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Instantiates and manages connections to a MongoDB sharded cluster for high availability.
class MongoShardedClient < MongoReplicaSetClient
include ThreadLocalVariableManager
SHARDED_CLUSTER_OPTS = [:refresh_mode, :refresh_interval, :tag_sets, :read]
attr_reader :seeds, :refresh_interval, :refresh_mode,
:refresh_version, :manager
def initialize(*args)
opts = args.last.is_a?(Hash) ? args.pop : {}
nodes = args.flatten
if nodes.empty? && ENV.has_key?('MONGODB_URI')
parser = URIParser.new ENV['MONGODB_URI']
opts = parser.connection_options.merge! opts
nodes = parser.node_strings
end
unless nodes.length > 0
raise MongoArgumentError, "A MongoShardedClient requires at least one seed node."
end
@seeds = nodes.map do |host_port|
Support.normalize_seeds(host_port)
end
# TODO: add a method for replacing this list of node.
@seeds.freeze
# Refresh
@last_refresh = Time.now
@refresh_version = 0
# No connection manager by default.
@manager = nil
# Lock for request ids.
@id_lock = Mutex.new
@connected = false
@connect_mutex = Mutex.new
@mongos = true
check_opts(opts)
setup(opts)
end
def valid_opts
super + SHARDED_CLUSTER_OPTS
end
def inspect
"<Mongo::MongoShardedClient:0x#{self.object_id.to_s(16)} @seeds=#{@seeds.inspect} @connected=#{@connected}>"
end
# Initiate a connection to the sharded cluster.
def connect(force = !connected?)
return unless force
log(:info, "Connecting...")
# Prevent recursive connection attempts from the same thread.
# This is done rather than using a Monitor to prevent potentially recursing
# infinitely while attempting to connect and continually failing. Instead, fail fast.
raise ConnectionFailure, "Failed to get node data." if thread_local[:locks][:connecting]
@connect_mutex.synchronize do
begin
thread_local[:locks][:connecting] = true
if @manager
thread_local[:managers][self] = @manager
@manager.refresh! @seeds
else
@manager = ShardingPoolManager.new(self, @seeds)
ensure_manager
@manager.connect
check_wire_version_in_range
end
ensure
thread_local[:locks][:connecting] = false
end
@refresh_version += 1
@last_refresh = Time.now
@connected = true
end
end
# Force a hard refresh of this connection's view
# of the sharded cluster.
#
# @return [Boolean] +true+ if hard refresh
# occurred. +false+ is returned when unable
# to get the refresh lock.
def hard_refresh!
log(:info, "Initiating hard refresh...")
connect(true)
return true
end
def connected?
!!(@connected && @manager.primary_pool)
end
# Returns +true+ if it's okay to read from a secondary node.
# Since this is a sharded cluster, this must always be false.
#
# This method exists primarily so that Cursor objects will
# generate query messages with a slaveOkay value of +false+.
#
# @return [Boolean] +false+
def slave_ok?
false
end
def checkout(&block)
tries = 0
begin
super(&block)
rescue ConnectionFailure
tries +=1
tries < 2 ? retry : raise
end
end
# Initialize a connection to MongoDB using the MongoDB URI spec.
#
# @param uri [ String ] string of the format:
# mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/database]
#
# @param options [ Hash ] Any of the options available for MongoShardedClient.new
#
# @return [ Mongo::MongoShardedClient ] The sharded client.
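#
# @example Connect to two mongos routers (illustrative hosts):
# Mongo::MongoShardedClient.from_uri('mongodb://router1:27017,router2:27017/test')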
def self.from_uri(uri, options={})
uri ||= ENV['MONGODB_URI']
URIParser.new(uri).connection(options, false, true)
end
end
end
ruby-mongo-1.10.0/lib/mongo/networking.rb 0000664 0000000 0000000 00000031013 12334610061 0020262 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module Networking
STANDARD_HEADER_SIZE = 16
RESPONSE_HEADER_SIZE = 20
# Counter for generating unique request ids.
@@current_request_id = 0
# Send a message to MongoDB, adding the necessary headers.
#
# @param [Integer] operation a MongoDB opcode.
# @param [BSON::ByteBuffer] message a message to send to the database.
#
# @option opts [Symbol] :connection (:writer) The connection to which
# this message should be sent. Valid options are :writer and :reader.
#
# @return [Integer] number of bytes sent
def send_message(operation, message, opts={})
if opts.is_a?(String)
warn "MongoClient#send_message no longer takes a string log message. " +
"Logging is now handled within the Collection and Cursor classes."
opts = {}
end
add_message_headers(message, operation)
packed_message = message.to_s
sock = nil
pool = opts.fetch(:pool, nil)
begin
if pool
#puts "send_message pool.port:#{pool.port}"
sock = pool.checkout
else
sock ||= checkout_writer
end
send_message_on_socket(packed_message, sock)
rescue SystemStackError, NoMemoryError, SystemCallError => ex
close
raise ex
ensure
if sock
sock.checkin
end
end
true
end
# Sends a message to the database, waits for a response, and raises
# an exception if the operation has failed.
#
# @param [Integer] operation a MongoDB opcode.
# @param [BSON::ByteBuffer] message a message to send to the database.
# @param [String] db_name the name of the database. used on call to get_last_error.
# @param [String] log_message this is currently a no-op and will be removed.
# @param [Hash] write_concern write concern.
#
# @see DB#get_last_error for valid last error params.
#
# @return [Hash] The document returned by the call to getlasterror.
def send_message_with_gle(operation, message, db_name, log_message=nil, write_concern=false)
docs = num_received = cursor_id = ''
add_message_headers(message, operation)
last_error_message = build_get_last_error_message(db_name, write_concern)
last_error_id = add_message_headers(last_error_message, Mongo::Constants::OP_QUERY)
packed_message = message.append!(last_error_message).to_s
sock = nil
begin
sock = checkout_writer
send_message_on_socket(packed_message, sock)
docs, num_received, cursor_id = receive(sock, last_error_id)
checkin(sock)
rescue ConnectionFailure, OperationFailure, OperationTimeout => ex
checkin(sock)
raise ex
rescue SystemStackError, NoMemoryError, SystemCallError => ex
close
raise ex
end
if num_received == 1
error = docs[0]['err'] || docs[0]['errmsg']
if error && error.include?("not master")
close
raise ConnectionFailure.new(docs[0]['code'].to_s + ': ' + error, docs[0]['code'], docs[0])
elsif (note = docs[0]['jnote'] || docs[0]['wnote']) # assignment
code = docs[0]['code'] || Mongo::ErrorCode::BAD_VALUE # as of server version 2.5.5
raise WriteConcernError.new(code.to_s + ': ' + note, code, docs[0])
elsif error
code = docs[0]['code'] || Mongo::ErrorCode::UNKNOWN_ERROR
error = "wtimeout" if error == "timeout"
raise WriteConcernError.new(code.to_s + ': ' + error, code, docs[0]) if error == "wtimeout"
raise OperationFailure.new(code.to_s + ': ' + error, code, docs[0])
end
end
docs[0]
end
# Sends a message to the database and waits for the response.
#
# @param [Integer] operation a MongoDB opcode.
# @param [BSON::ByteBuffer] message a message to send to the database.
# @param [String] log_message this is currently a no-op and will be removed.
# @param [Socket] socket a socket to use in lieu of checking out a new one.
# @param [Boolean] command (false) indicate whether this is a command. If this is a command,
# the message will be sent to the primary node.
# @param [Symbol] read the read preference.
# @param [Boolean] exhaust (false) indicate whether the cursor should be exhausted. Set
# this to true only when the OP_QUERY_EXHAUST flag is set.
# @param [Boolean] compile_regex whether BSON regex objects should be compiled into Ruby regexes.
#
# @return [Array]
# An array whose indexes include [0] documents returned, [1] number of documents received,
# and [2] a cursor_id.
def receive_message(operation, message, log_message=nil, socket=nil, command=false,
read=:primary, exhaust=false, compile_regex=true)
request_id = add_message_headers(message, operation)
packed_message = message.to_s
opts = { :exhaust => exhaust,
:compile_regex => compile_regex }
result = ''
begin
send_message_on_socket(packed_message, socket)
result = receive(socket, request_id, opts)
rescue ConnectionFailure => ex
socket.close
checkin(socket)
raise ex
rescue SystemStackError, NoMemoryError, SystemCallError => ex
close
raise ex
rescue Exception => ex
if defined?(IRB)
close if ex.class == IRB::Abort
end
raise ex
end
result
end
private
def receive(sock, cursor_id, opts={})
exhaust = !!opts.delete(:exhaust)
if exhaust
docs = []
num_received = 0
while(cursor_id != 0) do
receive_header(sock, cursor_id, exhaust)
number_received, cursor_id = receive_response_header(sock)
new_docs, n = read_documents(number_received, sock, opts)
docs += new_docs
num_received += n
end
return [docs, num_received, cursor_id]
else
receive_header(sock, cursor_id, exhaust)
number_received, cursor_id = receive_response_header(sock)
docs, num_received = read_documents(number_received, sock, opts)
return [docs, num_received, cursor_id]
end
end
def receive_header(sock, expected_response, exhaust=false)
header = receive_message_on_socket(STANDARD_HEADER_SIZE, sock)
# unpacks to size, request_id, response_to
response_to = header.unpack('VVV')[2]
if !exhaust && expected_response != response_to
raise Mongo::ConnectionFailure, "Expected response #{expected_response} but got #{response_to}"
end
unless header.size == STANDARD_HEADER_SIZE
raise "Short read for DB response header: " +
"expected #{STANDARD_HEADER_SIZE} bytes, saw #{header.size}"
end
nil
end
def receive_response_header(sock)
header_buf = receive_message_on_socket(RESPONSE_HEADER_SIZE, sock)
if header_buf.length != RESPONSE_HEADER_SIZE
raise "Short read for DB response header; " +
"expected #{RESPONSE_HEADER_SIZE} bytes, saw #{header_buf.length}"
end
# unpacks to flags, cursor_id_a, cursor_id_b, starting_from, number_remaining
flags, cursor_id_a, cursor_id_b, _, number_remaining = header_buf.unpack('VVVVV')
check_response_flags(flags)
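# The 64-bit cursor id arrives as two little-endian 32-bit halves;
# recombine the high half (cursor_id_b) with the low half (cursor_id_a).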
cursor_id = (cursor_id_b << 32) + cursor_id_a
[number_remaining, cursor_id]
end
def check_response_flags(flags)
if flags & Mongo::Constants::REPLY_CURSOR_NOT_FOUND != 0
raise Mongo::OperationFailure, "Query response returned CURSOR_NOT_FOUND. " +
"Either an invalid cursor was specified, or the cursor may have timed out on the server."
elsif flags & Mongo::Constants::REPLY_QUERY_FAILURE != 0
# Mongo query reply failures are handled in Cursor#next.
end
end
def read_documents(number_received, sock, opts)
docs = []
number_remaining = number_received
while number_remaining > 0 do
buf = receive_message_on_socket(4, sock)
size = buf.unpack('V')[0]
buf << receive_message_on_socket(size - 4, sock)
number_remaining -= 1
docs << BSON::BSON_CODER.deserialize(buf, opts)
end
[docs, number_received]
end
def build_command_message(db_name, query, projection=nil, skip=0, limit=-1)
message = BSON::ByteBuffer.new("", max_message_size)
message.put_int(0)
BSON::BSON_RUBY.serialize_cstr(message, "#{db_name}.$cmd")
message.put_int(skip)
message.put_int(limit)
message.put_binary(BSON::BSON_CODER.serialize(query, false, false, max_bson_size).to_s)
message.put_binary(BSON::BSON_CODER.serialize(projection, false, false, max_bson_size).to_s) if projection
message
end
# Constructs a getlasterror message. This method is used exclusively by
# MongoClient#send_message_with_gle.
def build_get_last_error_message(db_name, write_concern)
gle = BSON::OrderedHash.new
gle[:getlasterror] = 1
if write_concern.is_a?(Hash)
write_concern.assert_valid_keys(:w, :wtimeout, :fsync, :j)
gle.merge!(write_concern)
gle.delete(:w) if gle[:w] == 1
end
gle[:w] = gle[:w].to_s if gle[:w].is_a?(Symbol)
build_command_message(db_name, gle)
end
# Prepares a message for transmission to MongoDB by
# constructing a valid message header.
#
# Note: this method modifies message by reference.
#
# @return [Integer] the request id used in the header
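#
# The header is four little-endian int32s, mirroring the pack below, e.g.
# (illustrative): [16 + body.size, request_id, 0, opcode].pack('VVVV')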
def add_message_headers(message, operation)
headers = [
# Message size.
16 + message.size,
# Unique request id.
request_id = get_request_id,
# Response id.
0,
# Opcode.
operation
].pack('VVVV')
message.prepend!(headers)
request_id
end
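# Example (illustrative): the header is four little-endian int32s. For a
# 10-byte OP_QUERY body (opcode 2004) with request id 42:
#
# [16 + 10, 42, 0, 2004].pack('VVVV')
# # => "\x1A\x00\x00\x00*\x00\x00\x00\x00\x00\x00\x00\xD4\a\x00\x00"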
# Increment and return the next available request id.
#
# @return [Integer]
def get_request_id
request_id = nil
@id_lock.synchronize do
request_id = @@current_request_id += 1
end
request_id
end
# Low-level method for sending a message on a socket.
# Requires a packed message and an available socket.
#
# @return [Integer] number of bytes sent
def send_message_on_socket(packed_message, socket)
begin
total_bytes_sent = socket.send(packed_message)
if total_bytes_sent != packed_message.size
packed_message.slice!(0, total_bytes_sent)
while packed_message.size > 0
bytes_sent = socket.send(packed_message)
total_bytes_sent += bytes_sent
packed_message.slice!(0, bytes_sent)
end
end
total_bytes_sent
rescue => ex
socket.close
raise ConnectionFailure, "Operation failed with the following exception: #{ex.class}: #{ex.message}"
end
end
# Low-level method for receiving data from socket.
# Requires length and an available socket.
def receive_message_on_socket(length, socket)
begin
message = receive_data(length, socket)
rescue OperationTimeout, ConnectionFailure => ex
socket.close
if ex.is_a?(OperationTimeout)
raise OperationTimeout, "Timed out waiting on socket read."
else
raise ConnectionFailure, "Operation failed with the following exception: #{ex}"
end
end
message
end
# Reads exactly +length+ bytes from +socket+. A single read may return
# fewer bytes than requested, so keep reading until the buffer is full.
def receive_data(length, socket)
message = new_binary_string
socket.read(length, message)
raise ConnectionFailure, "connection closed" unless message && message.length > 0
if message.length < length
chunk = new_binary_string
while message.length < length
socket.read(length - message.length, chunk)
raise ConnectionFailure, "connection closed" unless chunk.length > 0
message << chunk
end
end
message
end
if defined?(Encoding)
BINARY_ENCODING = Encoding.find("binary")
def new_binary_string
"".force_encoding(BINARY_ENCODING)
end
else
def new_binary_string
""
end
end
end
end
ruby-mongo-1.10.0/lib/mongo/utils.rb 0000664 0000000 0000000 00000001401 12334610061 0017231 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
require 'mongo/utils/conversions'
require 'mongo/utils/core_ext'
require 'mongo/utils/server_version'
require 'mongo/utils/support'
require 'mongo/utils/thread_local_variable_manager'
ruby-mongo-1.10.0/lib/mongo/utils/ 0000775 0000000 0000000 00000000000 12334610061 0016710 5 ustar 00root root 0000000 0000000 ruby-mongo-1.10.0/lib/mongo/utils/conversions.rb 0000664 0000000 0000000 00000007436 12334610061 0021617 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo #:nodoc:
# Utility module to include when needing to convert certain types of
# objects to mongo-friendly parameters.
module Conversions
ASCENDING_CONVERSION = ["ascending", "asc", "1"]
DESCENDING_CONVERSION = ["descending", "desc", "-1"]
# Allows sort parameters to be defined as a Hash.
# Unordered hashes are not allowed, so Ruby 1.8.x
# users must use BSON::OrderedHash.
#
# Example:
#
# hash_as_sort_parameters({:field1 => :asc, "field2" => :desc}) =>
# { "field1" => 1, "field2" => -1}
def hash_as_sort_parameters(value)
if RUBY_VERSION < '1.9' && !value.is_a?(BSON::OrderedHash)
raise InvalidSortValueError.new(
"Hashes used to supply sort order must maintain ordering." +
"Use BSON::OrderedHash."
)
else
order_by = value.inject({}) do |memo, (key, direction)|
memo[key.to_s] = sort_value(direction)
memo
end
end
order_by
end
# Converts the supplied +Array+ to a +Hash+ to pass to mongo as
# sorting parameters. The returned +Hash+ will vary depending
# on whether the passed +Array+ is one or two dimensional.
#
# Example:
#
# array_as_sort_parameters([["field1", :asc], ["field2", :desc]]) =>
# { "field1" => 1, "field2" => -1}
def array_as_sort_parameters(value)
order_by = BSON::OrderedHash.new
if value.first.is_a? Array
value.each do |param|
if param.is_a?(String)
order_by[param] = 1
else
order_by[param[0]] = sort_value(param[1]) unless param[1].nil?
end
end
elsif !value.empty?
if value.size == 1
order_by[value.first] = 1
else
order_by[value.first] = sort_value(value[1])
end
end
order_by
end
# Converts the supplied +String+ or +Symbol+ to a +Hash+ to pass to mongo as
# a sorting parameter with ascending order. If the +String+
# is empty then an empty +Hash+ will be returned.
#
# *DEPRECATED*
#
# Example:
#
# string_as_sort_parameters("field") => { "field" => 1 }
# string_as_sort_parameters("") => {}
def string_as_sort_parameters(value)
return {} if (str = value.to_s).empty?
{ str => 1 }
end
# Converts the +String+, +Symbol+, or +Integer+ to the
# corresponding sort value in MongoDB.
#
# Valid conversions (case-insensitive):
#
# ascending, asc, :ascending, :asc, 1 => 1
# descending, desc, :descending, :desc, -1 => -1
#
# If the value is invalid then an error will be raised.
def sort_value(value)
return value if value.is_a?(Hash)
val = value.to_s.downcase
return 1 if ASCENDING_CONVERSION.include?(val)
return -1 if DESCENDING_CONVERSION.include?(val)
raise InvalidSortValueError.new(
"#{self} was supplied as a sort direction when acceptable values are: " +
"Mongo::ASCENDING, 'ascending', 'asc', :ascending, :asc, 1, Mongo::DESCENDING, " +
"'descending', 'desc', :descending, :desc, -1.")
end
end
end
ruby-mongo-1.10.0/lib/mongo/utils/core_ext.rb 0000664 0000000 0000000 00000002666 12334610061 0021057 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#:nodoc:
class Object
#:nodoc:
def tap
yield self
self
end unless respond_to? :tap
end
#:nodoc:
class Hash
#:nodoc:
def assert_valid_keys(*valid_keys)
unknown_keys = keys - [valid_keys].flatten
raise(ArgumentError, "Unknown key(s): #{unknown_keys.join(", ")}") unless unknown_keys.empty?
end
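# Example (illustrative):
#
# { :w => 1, :wtimeout => 200 }.assert_valid_keys(:w, :wtimeout, :fsync, :j) # passes
# { :w => 1, :bogus => true }.assert_valid_keys(:w) # => ArgumentError: Unknown key(s): bogus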
end
#:nodoc:
class String
#:nodoc:
def to_bson_code
BSON::Code.new(self)
end
end
#:nodoc:
class Class
def mongo_thread_local_accessor name, options = {}
m = Module.new
m.module_eval do
class_variable_set :"@@#{name}", Hash.new {|h,k| h[k] = options[:default] }
end
m.module_eval %{
def #{name}
@@#{name}[Thread.current.object_id]
end
def #{name}=(val)
@@#{name}[Thread.current.object_id] = val
end
}
class_eval do
include m
extend m
end
end
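# Example (illustrative): state is keyed by Thread.current.object_id, so each
# thread sees its own value.
#
# class Counter
# mongo_thread_local_accessor :count, :default => 0
# end
# Counter.count # => 0 in a fresh thread
# Counter.count = 5 # visible only to the current thread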
end
ruby-mongo-1.10.0/lib/mongo/utils/server_version.rb 0000664 0000000 0000000 00000003461 12334610061 0022314 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
# Simple class for comparing server versions.
class ServerVersion
include Comparable
def initialize(version)
@version = version
end
# Implements comparable.
def <=>(new)
local, new = self.to_a, to_array(new)
for n in 0...local.size do
break if elements_include_mods?(local[n], new[n])
if local[n] < new[n].to_i
result = -1
break
elsif local[n] > new[n].to_i
result = 1
break
end
end
result || 0
end
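# Example (illustrative): Comparable supplies the usual operators on top of <=>.
#
# ServerVersion.new('2.4.6') < '2.6.0' # => true
# ServerVersion.new('2.6.0-rc1') <=> '2.6.0' # => 0 (mod suffix halts comparison)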
# Return an array representation of this server version.
def to_a
to_array(@version)
end
# Return a string representation of this server version.
def to_s
@version
end
private
# Returns true if any elements include mod symbols (-, +)
def elements_include_mods?(*elements)
elements.any? { |n| n =~ /[\-\+]/ }
end
# Converts argument to an array of integers,
# appending any mods as the final element.
def to_array(version)
array = version.split(".").map {|n| (n =~ /^\d+$/) ? n.to_i : n }
if array.last =~ /(\d+)([\-\+])/
array[array.length-1] = $1.to_i
array << $2
end
array
end
end
end
ruby-mongo-1.10.0/lib/mongo/utils/support.rb 0000664 0000000 0000000 00000005054 12334610061 0020755 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
module Support
include Mongo::Conversions
extend self
def validate_db_name(db_name)
unless [String, Symbol].include?(db_name.class)
raise TypeError, "db_name must be a string or symbol"
end
[" ", ".", "$", "/", "\\"].each do |invalid_char|
if db_name.include? invalid_char
raise Mongo::InvalidNSName, "database names cannot contain the character '#{invalid_char}'"
end
end
raise Mongo::InvalidNSName, "database name cannot be the empty string" if db_name.empty?
db_name
end
def format_order_clause(order)
case order
when Hash, BSON::OrderedHash then hash_as_sort_parameters(order)
when String, Symbol then string_as_sort_parameters(order)
when Array then array_as_sort_parameters(order)
else
raise InvalidSortValueError, "Illegal sort clause, '#{order.class.name}'; must be of the form " +
"[['field1', '(ascending|descending)'], ['field2', '(ascending|descending)']]"
end
end
def normalize_seeds(seeds)
pairs = Array(seeds)
pairs = [ seeds ] if pairs.last.is_a?(Fixnum)
pairs = pairs.collect do |hostport|
if hostport.is_a?(String)
if hostport[0,1] == '['
host, port = hostport.split(']:') << MongoClient::DEFAULT_PORT
host = host.end_with?(']') ? host[1...-1] : host[1..-1]
else
host, port = hostport.split(':') << MongoClient::DEFAULT_PORT
end
[ host, port.to_i ]
else
hostport
end
end
pairs.length > 1 ? pairs : pairs.first
end
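# Example (illustrative; assumes MongoClient::DEFAULT_PORT is 27017):
#
# normalize_seeds('localhost:27017') # => ['localhost', 27017]
# normalize_seeds(['db1', 'db2:27018']) # => [['db1', 27017], ['db2', 27018]]
# normalize_seeds('[::1]:27017') # => ['::1', 27017]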
def is_i?(value)
return !!(value =~ /^\d+$/)
end
# Determine if a database command has succeeded by
# checking the document response.
#
# @param [Hash] doc
#
# @return [Boolean] true if the 'ok' key is either 1 or *true*.
def ok?(doc)
ok = doc['ok']
ok == 1 || ok == 1.0 || ok == true
end
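# Example (illustrative):
#
# ok?('ok' => 1.0) # => true
# ok?('ok' => 0, 'errmsg' => 'not master') # => false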
end
end
ruby-mongo-1.10.0/lib/mongo/utils/thread_local_variable_manager.rb 0000664 0000000 0000000 00000001476 12334610061 0025225 0 ustar 00root root 0000000 0000000 # Copyright (C) 2009-2013 MongoDB, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#:nodoc:
module Mongo
module ThreadLocalVariableManager
def thread_local
Thread.current[:mongo_thread_locals] ||= Hash.new do |hash, key|
hash[key] = Hash.new
end
end
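# Example (illustrative; pool and socket stand in for real objects):
#
# thread_local[:locks][:connecting] = Mutex.new
# thread_local[:sockets][pool] = socket # isolated per Thread.current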
end
end