MongoDB-v1.2.2/Changes

# Change history for the MongoDB Perl driver:

v1.2.2 2016-01-26 15:33:30-05:00 America/New_York

  [Bug fixes]
  - PERL-602 Support legacy Cpanel::JSON::XS booleans (before 2.3404)
  - PERL-604 Improve detection of stale primaries when a replica set election protocol version is being upgraded/downgraded.
  - Fix uninitialized 'inserted_count' in MongoDB::InsertManyResult

  [Documentation]
  - Fixed broken link in POD for MongoDB::DataTypes

v1.2.1 2015-12-18 11:32:19-05:00 America/New_York

  [Bug fixes]
  - PERL-599 Fix bson/bson-error.c compilation problem on Win32

v1.2.0 2015-12-07 12:55:11-05:00 America/New_York

  [Additions]
  - PERL-561 Add support for bypassDocumentValidation option to relevant CRUD methods.
  - PERL-564 Add support for readConcern (for MongoDB 3.2 only).
  - PERL-569 Add 'batch' method to QueryResult for retrieving a chunk of results instead of just one (via 'next') or all.
  - PERL-594 Add maxAwaitTimeMS option for tailable-await cursors on MongoDB 3.2 servers.
  - Add find_id method to MongoDB::Collection for easy retrieval of a single document by _id.
  - Add support for write concern for find-and-modify-style methods (for MongoDB 3.2 only)

  [Bug fixes]
  - PERL-493 Don't send writeConcern if it is not set; this allows the user to get the default write concern set on the server.
  - PERL-571 Add -D_GNU_SOURCE to ccflags if needed.
  - PERL-597 Check findAndModify-type command results for writeConcernErrors (for MongoDB 3.2 only).

  [Changes]
  - PERL-595 Change limit/batchSize behavior to match the CRUD spec; most users won't notice the difference, but generally speaking, when there is both a limit and a batch size, under MongoDB 3.2, the batch size is respected if it is smaller than the limit. Previously, in some cases, the batch size was ignored and the limit used instead.

  [Documentation]
  - PERL-570 Update MongoDB::Cursor::info documentation.
  - Replace term 'slave' with 'secondary' in docs.

  [Testing]
  - Skip fsync test on inMemory storage engine.

  [~ Internal changes ~]
  - PERL-558 Implement fsyncUnlock as a command for MongoDB 3.2+.
  - PERL-563 Implement find/getMore/killCursors as commands for MongoDB 3.2+.
  - Verify that server replies are less than maxMessageSizeBytes.

v1.1.1 2015-12-01 20:24:04-05:00 America/New_York (TRIAL RELEASE)

v1.1.0 2015-11-18 10:37:37-05:00 America/New_York (TRIAL RELEASE)

v1.0.4 2015-12-02 10:21:03-05:00 America/New_York

  [Bug fixes]
  - PERL-571 Add -D_GNU_SOURCE to ccflags if needed.

  [Documentation]
  - Fixed SYNOPSIS bug in MongoDB::IndexView for create_many

v1.0.3 2015-11-03 22:25:12-05:00 America/New_York

  [Bug fixes]
  - Fixed BSON encoding tests for big-endian platforms.

v1.0.2 2015-10-14 15:26:30-04:00 America/New_York

  [Bug fixes]
  - PERL-198 Validate user-constructed MongoDB::OID objects; also coerces to lower case for consistency with generated OIDs.
  - PERL-495 Preserve fractional seconds when using dt_type 'raw'
  - PERL-571 Include limits.h explicitly rather than relying on other headers to load it.
  - PERL-526 Detect stale primaries by election_id (only supported by MongoDB 3.0 or later)
  - PERL-575 Copy inflated booleans instead of aliasing them.
  - Fix a failing test in the case where a user is running a single-node replica set.
  [Documentation]
  - PERL-532 Document loss of precision when serializing long doubles
  - Noted that IPv6 support requires IO::Socket::IP (core since Perl v5.20.0).

  [Prerequisites]
  - PERL-579 Require at least version 0.25 of boolean.pm

  [~ Internal changes ~]
  - PERL-475 Optimize 'all' QueryResult method

v1.0.1 2015-09-22 12:55:08-04:00 America/New_York

  [Bug fixes]
  - PERL-567 Fixed a failing test in the case where a user is running a replica set on the default port 27017.

  [Documentation]
  - PERL-568 Fixed SYNOPSIS of MongoDB.pm
  - Clarified some confusing sections of MongoDB::Tutorial and added hyperlinks to documentation for methods used in the tutorial.
  - Clarified some sections of MongoDB::Collection and MongoDB::Cursor and added some hyperlinks.

v1.0.0 2015-09-21 16:15:04-04:00 America/New_York

  [!!! Incompatible Changes !!!]
  - The v1.0.0 driver includes numerous incompatible changes; users are STRONGLY encouraged to read MongoDB::Upgrading for advice on upgrading applications written for the 'v0' driver.
  - PERL-221 The 'inflate_regexps' MongoDB::MongoClient option has been removed. BSON regular expressions always decode to MongoDB::BSON::Regexp objects. This ensures safety and consistency with other drivers.
  - PERL-330 The driver now uses pure-Perl networking; SSL and SASL are now implemented via the optional CPAN modules IO::Socket::SSL and Authen::SASL.
  - PERL-442 Connection string options have been revised to match MongoClient options; connection string options always take precedence over MongoClient constructor arguments.
  - PERL-470 The MongoDB::Cursor globals "slave_ok" and "timeout" no longer have any effect and have been removed.
  - PERL-471 The MongoDB::Cursor 'snapshot' method now requires a boolean argument.
  - PERL-505 When bulk inserting a document without an '_id' field, the _id will be added during BSON encoding, but the original document will NOT be changed. (This was the case for regular insertion in the v0.x series, but not for the Bulk API.)
  - PERL-519 The $MongoDB::BSON::use_binary global variable has been removed. Binary data always decodes to MongoDB::BSON::Binary objects (which now overload stringification). This ensures that binary data will correctly round-trip.
  - PERL-520 The $MongoDB::BSON::utf8_flag_on global variable has been removed. BSON strings will always be decoded to Perl character strings. This ensures that string data will correctly round-trip.
  - PERL-523 A replica set name is now required explicitly to connect to a replica set. Connecting to a single host is always in 'direct' mode otherwise.
  - PERL-546 MongoDB::DBRef objects no longer have a 'fetch' method or 'client' attribute. This is consistent with the design of the MongoDB drivers for other languages. For the Perl driver, specifically, it decouples the BSON model from the MongoClient model, eliminates a circular reference, and avoids Perl memory bugs when using weak references under threads.
  - MongoDB::MongoClient configuration options are now read-only and may not be modified after client construction.
  - The $MongoDB::BSON::looks_like_number and $MongoDB::BSON::char global variables now ONLY have an effect at MongoDB::MongoClient construction. Changing them later does not change BSON encoding. Both are deprecated as well and should not be used in new code. Instead, the enhanced MongoDB::BSON codec class has attributes that encapsulate these behaviors.
  - The 'dt_type' MongoDB::MongoClient option has been deprecated and made read-only.
    It now only takes effect when the client constructs a MongoDB::BSON codec object, and it is read-only so that any code that relied on changing it after client construction will fail rather than being silently ignored.
  - The 'inflate_dbrefs' MongoDB::MongoClient option has been removed. By default, dbrefs are always inflated to MongoDB::DBRef objects.
  - The MongoDB::MongoClient 'read_preference' method is no longer a mutator. It is now only an accessor for a MongoDB::ReadPreference object constructed from 'read_preference_mode' and 'read_preference_tag_sets'.
  - The legacy read preference constants in MongoDB::MongoClient have been removed, as they are no longer used with the new MongoDB::ReadPreference class.
  - The MongoDB::MongoClient 'authenticate' method has been removed; credentials must now be passed via configuration options and authentication is automatic on server connection.
  - The MongoDB::Cursor class has been split. Actual result iteration is done via a new MongoDB::QueryResult class.
  - MongoDB::Error exception objects are now used consistently throughout the driver, replacing other error mechanisms and raw "die" calls.
  - The MongoDB::WriteResult class was renamed to MongoDB::BulkWriteResult.
  - The long-deprecated MongoDB::Connection class has been removed.
  - Low-level client functions have been removed.

  [*** Deprecations ***]
  - PERL-398 The MongoDB::MongoClient 'timeout' and 'query_timeout' options are deprecated in favor of new, more explicit 'connect_timeout_ms' and 'socket_timeout_ms' options.
  - PERL-424 The MongoDB::Cursor 'count' method has been deprecated.
  - PERL-464 The MongoDB::Database 'last_error' method has been deprecated.
  - PERL-507 The MongoDB::Collection 'get_collection' method is deprecated; it implied sub-collections, which don't actually exist in MongoDB.
  - PERL-511 The old CRUD method names for the MongoDB::Bulk API have been deprecated in favor of names that match the new MongoDB::Collection CRUD API.
  - PERL-516 The MongoDB::Collection index management methods have been deprecated in favor of the new MongoDB::IndexView API.
  - PERL-533 The MongoDB::Collection 'save' method has been deprecated.
  - PERL-534 The MongoDB::Collection 'validate' method has been deprecated.
  - PERL-559 The MongoDB::Database 'eval' method has been deprecated, as the MongoDB server version 3.0 deprecated the '$eval' command.
  - The MongoDB::MongoClient 'sasl' and 'sasl_mechanism' config options have been deprecated in favor of the more generic 'auth_mechanism' option.
  - Legacy MongoDB::Collection CRUD methods (insert, update, etc.) have been deprecated in favor of new CRUD API methods.
  - MongoDB::CommandResult changed the name of the accessor for the document returned by the server to 'output' instead of 'result' for clarity. The 'result' method is deprecated.
  - As mentioned above, 'dt_type', '$MongoDB::BSON::looks_like_number' and '$MongoDB::BSON::char' have been deprecated in addition to their other behavior changes.

  [Additions]
  - PERL-93 Implemented awaitData cursor support.
  - PERL-135 Added the ability to set write_concern at database and collection level, rather than only in MongoDB::MongoClient.
  - PERL-233 Implemented SSL certificate support via IO::Socket::SSL options.
  - PERL-375 Added support for cursor options to the MongoDB::Collection 'find_one' method.
  - PERL-378 Implemented the cross-driver Server Discovery and Monitoring specification.
  - PERL-379 Implemented the cross-driver Server Selection specification.
  - PERL-406 Allowed count methods to work with query hints.
  - PERL-408 Implemented SCRAM-SHA-1 and revised handshake for MongoDB 3.0 and later.
  - PERL-413 Added max_time_ms as a MongoDB::MongoClient configuration option to set a default for database and collection objects.
  - PERL-422 Added support for specifying read preferences in the connection string URI.
  - PERL-465 Added support for arbitrary options on index creation.
  - PERL-466 Added the ability to set read preference at the database and collection level.
  - PERL-486 Added 'has_modified_count' method to MongoDB::UpdateResult and MongoDB::BulkWriteResult to ease detection of whether that attribute is supported by a server or not.
  - PERL-490 Added 'list_collections' method to MongoDB::Database.
  - PERL-500 Added 'topology_status' method to MongoDB::MongoClient.
  - PERL-502 and PERL-503 Implemented the new common driver CRUD API specification in MongoDB::Collection (see the example below).
  - PERL-506 Added support for serializing/deserializing Time::Moment objects.
  - PERL-515 Added the new MongoDB::IndexView API.
  - PERL-554 Implemented the 'server_selection_try_once' configuration option on MongoDB::MongoClient.
  - Added an optional read preference argument to 'run_command'.
  - Added 'db' and 'coll' methods as aliases for 'get_database' and 'get_collection' on MongoDB::MongoClient and MongoDB::Database, respectively.
  - Added the 'get_namespace' method to MongoDB::MongoClient (with the alias 'ns'), to get a MongoDB::Collection object directly from a MongoDB::MongoClient object.
  - Added a 'connect' class method to the MongoDB class for syntactic sugar to create a client object.
  - Added a 'with_codec' method to MongoDB::Collection for easier localized changes to BSON codec attributes.
  - Added a 'reconnect' method to MongoClient to handle reconnection after a fork or thread spawn.
  - Added support for correctly encoding boolean objects from JSON::XS, Cpanel::JSON::XS, JSON::PP, JSON::Tiny and Mojo::JSON.
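  As a quick illustration of the new client constructor, 'ns' helper and CRUD
  methods listed above, a minimal program might look like the sketch below
  (illustrative only; see MongoDB::Upgrading and the module documentation for
  authoritative usage):

      use MongoDB;

      # New 'connect' constructor plus the 'ns' ("get_namespace") helper
      my $client = MongoDB->connect('mongodb://localhost:27017');
      my $coll   = $client->ns('test.people');

      # New CRUD API methods replace the legacy insert/update/remove
      $coll->insert_one( { name => 'Alice', age => 30 } );
      my $doc = $coll->find_one( { name => 'Alice' } );
      $coll->update_one( { name => 'Alice' }, { '$set' => { age => 31 } } );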
  [Bug Fixes]
  - PERL-146 Normalized server addresses to lower case.
  - PERL-401 Fixed index creation to always have a non-zero write concern.
  - PERL-409 Added missing declarations for MinGW on Windows.
  - PERL-410 Fixed BSON encoding/decoding to detect and throw an error if invalid UTF-8 is detected.
  - PERL-429 Fixed read preference tag sets logic.
  - PERL-435 Switched to http-style SSL certificate name validation.
  - PERL-454 Prevented warnings when creating BSON datetimes at the epoch.
  - PERL-477 Fixed list_indexes and list_collections when responses are too big to fit in a single database response.
  - PERL-480 Fixed GridFS bug: retrieving a GridFS file now throws an error if no chunks exist instead of returning an empty string.
  - PERL-489 Fixed fatal BSON encoding bug serializing references to dual-vars.
  - PERL-531 Bulk update/replace documents would not validate properly when $MongoDB::BSON::char was not '$'. While that functionality has moved to the MongoDB::BSON codec instead of the global variable, all update/replace documents (bulk and CRUD API) are now validated after key munging.
  - PERL-536 Fixed GridFS to stop throwing an error when a known empty file has no chunks; errors will still be thrown if a non-empty file has no chunks.
  - PERL-540 Fixed memory leak in DateTime::Tiny inflation.
  - PERL-543 Fixed a bug serializing undef from Tie::IxHash objects.
  - PERL-556 Fixed serialization of thread-shared variables.
  - Fixed t/cursor.t for new explain format.
  - Removed storage engine dependent code and tests.
  - Fixed MSVC compilation: fixed unused vars and statements before declarations; removed the unused, but problematic, bcon.c and bcon.h; used Perl memory allocation functions instead of malloc.
  - Made conflicting 'multi' and 'multiple' update options fatal.
  - Fixed use of slave_ok and $readPreference for communicating read preferences to a mongos.
  - Fixed t/database.t for change in server error message.
  - Ensured topology type is correct whenever a server is marked unavailable.
  - Fixed incorrect 'matched_count' result attribute for upserts.
  - Fixed failing BSON element tests on 32-bit perls.
  - Fixed bug in MongoDB::MongoClient::database_names error handling.
  - Use of the -Wall compiler flag during smoke testing has been restricted to gcc compilers only.
  - Fixed encoding to raise an error if an array-reference document contains duplicate keys.
  - Stopped encoding scalar-ref objects as BSON BINARY type. (Throws an error instead about an unhandled type.)
  - Fixed incorrect configuration test for GCC atomic operations.
  - Fixed bug numifying wtimeout in write concern serialization.
  - Fixed BSON double tests on Perls with long-doubles enabled.
  - Fixed t/gridfs to work around a bug in MongoDB 3.1.2.
  - Fixed a number of XS memory leaks from non-mortalized variables during BSON encoding.

  [Changes]
  - PERL-127 Integers that fit in 32 bits are now encoded as BSON Int32; larger integers are encoded as BSON Int64; Math::BigInt objects are always encoded as BSON Int64.
  - PERL-331 The MongoDB::BSON package is now a full class, implementing a BSON encoder-decoder (codec). It can be supplied as an attribute to MongoDB::MongoClient, MongoDB::Database and MongoDB::Collection objects.
  - PERL-488 The MongoDB::WriteConcern method 'is_safe' was renamed to 'is_acknowledged'.
  - PERL-527 A database name is now optional for MongoDB::DBRef, which is consistent with the DBRef specification.
  - PERL-529 Connection string option keys are now parsed case-insensitively.
  - PERL-530 The driver now warns on unsupported connection options.
  - PERL-550 DBRefs allow extra fields (for compatibility); this is not recommended for new DBRefs.
  - Renamed DocumentSizeError to a more general DocumentError.
  - MongoDB::Collection attributes that should not be set in the constructor have been made private, but with public accessors for backwards compatibility. Private attributes that are set in the constructor (e.g. 'database') are now public.
  - Failure to create indexes when constructing a GridFS object is now ignored rather than being a fatal error.
  - Calls to Carp::confess() or die() have been replaced with exceptions from MongoDB::Error subclasses, typically MongoDB::UsageError.
  - Generic MongoDB::Error exceptions have been replaced with subclasses that have a specific, documented purpose, including: MongoDB::AuthError, MongoDB::GridFSError, MongoDB::InternalError and MongoDB::UsageError.
  - Configuration options representing times have stricter validation, such that options that should be non-negative will raise exceptions when given negative numbers.
  - BSON code derived from libbson has been updated to libbson 1.1.7.
  - Returns MongoDB::UnacknowledgedResult from unacknowledged writes (i.e. { w => 0 } write concern) instead of the corresponding result object (i.e. MongoDB::InsertResult for inserts).
  - Loads Authen::SCRAM::Client only on demand, as its Unicode module dependencies are costly when not needed.
  - MongoDB::QueryResult attributes have become private, as they are an implementation detail and not intended for end-users to inspect.
  - Aborts Makefile.PL on Windows before Vista/2008 for a better error message than subsequent compilation/test failures.
  - Changes default connect_timeout_ms to 10,000.
  - Credential details omitted from usage error messages.

  [Documentation]
  - PERL-423 Improved documentation of cursor snapshot mode, as it doesn't do what many people think it does.
  - PERL-425 Documented deprecation of the 'drop_dups' option for index creation.
  - PERL-524 Updated legacy author emails in docs and metadata.
  - Added contributors section to MongoDB main documentation based on git commit logs.
  - Added MongoDB::Upgrading document with changes from v0.x.
  - Documented how to disable returning _id from queries.
  - Rearranged Collection and Database documentation.
  - Corrected errors in MongoDB::Cursor documentation.

  [Prerequisites]
  - Added core modules IO::Socket and MIME::Base64 to the dependency list for completeness.
  - Added Class::XSAccessor, Moo and Type::Tiny::XS.
  - Enforced minimum versions for configuration requirements.
  - Moved DateTime::Tiny from a test_requires dependency to a test_recommends dependency.
  - Removed core modules File::Copy, File::Path and File::Spec from the list of test dependencies.
  - Removed Class::MOP::Class as an explicit dependency (still used internally by Moose).
  - Removed Data::Types and Data::Dump as test dependencies.
  - Removed File::Slurp.
  - Removed JSON module in favor of JSON::MaybeXS.
  - Removed Moose, Syntax::Keyword::Junction and Throwable.
  - Removed Test::Warn.
  - Updated Path::Tiny minimum version to 0.054 (rather than unspecified).
  - Updated IO::Socket requirement on Windows to 1.31.
  - Updated Authen::SCRAM::Client minimum version to 0.003.

  [Removals]
  - PERL-467 Removed outdated MongoDB::Indexing document.
  - PERL-497 The $MongoDB::BSON::use_boolean global never worked; BSON boolean values were always deserialized as boolean.pm objects. Rather than "fix" this and break people's code, the option has been removed and documented as such.
  - The MongoDB::MongoClient 'auto_connect', 'auto_reconnect', and 'find_master' methods have been removed, as server discovery and selection is now automatic.

  [Testing]
  - PERL-371 Added tests for parsing localhost:port.
  - PERL-492 Implemented server selection tests.
  - PERL-513 Added maxTimeMS tests for CRUD API methods.
  - Changed text index test to use the $text operator, not the text command (which was removed in MongoDB 3.0).
  - Changed t/max_time_ms.t to skip unless $ENV{FAILPOINT_TESTING} is true.
  - Reduced the number of threads used in threads testing to avoid out-of-memory errors on memory-constrained systems.

  [~ Internal changes ~]
  - PERL-133 Implemented a test that the client can connect to replica sets without a primary.
  - PERL-259 Implemented write commands for MongoDB 2.6+ (i.e. doing writes via database commands versus via the wire protocol with OP_INSERT, OP_UPDATE, etc.).
  - PERL-325 Updated vendored ppport.h to version 3.31.
  - PERL-433 Updated listCollections to use command form on 3.0 servers.
  - PERL-434 Updated listIndexes to use command form on 3.0 servers.
  - PERL-436 Bumped maxWireProtocolVersion for 3.0 support.
  - PERL-455 Changed to use the connect timeout as the socket timeout for topology scans.
  - Implemented a 5 second "cooldown" period after a network error during topology scanning, during which new connection attempts will not be made. This avoids excessive blocking in the driver when it's unlikely that the server will be available right away.
  - Removed unused vendored libyajl files.
  - Refactored and reorganized perl-mongo.h and perl_mongo.c.
    Removed unused functions and macros.
  - Disabled many internal class type constraints and runtime assertions unless the PERL_MONGO_WITH_ASSERTS environment variable is true.
  - Changed use of 'strerror_s' to 'strerror' to attempt to get C/XS linking on Windows XP.
  - Changed all Moose classes to Moo classes for speed and to minimize the deep dependency tree.
  - Changed argument handling for CRUD API methods to stop coercing inputs to Tie::IxHash. This makes them significantly faster.
  - Optimized networking code paths substantially.
  - Consolidated various constants to MongoDB::_Constants.
  - Inlined and adapted the Throwable CPAN module to avoid deep dependencies for MongoDB::Error.

v0.999.999.6 2015-08-24 10:42:35-04:00 America/New_York (TRIAL RELEASE)
v0.999.999.5 2015-08-13 17:11:49-04:00 America/New_York (TRIAL RELEASE)
v0.999.999.4 2015-07-31 17:02:21-04:00 America/New_York (TRIAL RELEASE)
v0.999.999.3 2015-06-29 12:21:07-04:00 America/New_York (TRIAL RELEASE)
v0.999.999.2 2015-06-17 11:34:48-04:00 America/New_York (TRIAL RELEASE)
v0.999.999.1 2015-06-10 09:48:53-06:00 America/Denver (TRIAL RELEASE)
v0.999.998.6 2015-05-20 14:28:18-04:00 America/New_York (TRIAL RELEASE)
v0.999.998.5 2015-04-30 06:12:39-04:00 America/New_York (TRIAL RELEASE)
v0.999.998.4 2015-03-25 14:37:52-04:00 America/New_York (TRIAL RELEASE)
v0.999.998.3 2015-03-25 14:30:00-05:00 America/New_York (TRIAL RELEASE)
v0.999.998.2 2015-02-23 17:10:36-05:00 America/New_York (TRIAL RELEASE)
v0.999.998.1 2014-11-12 15:10:40-05:00 America/New_York (TRIAL RELEASE)

v0.708.4.0 2015-08-11 16:06:55-04:00 America/New_York

  [Bug fixes]
  - Fixes handling of the 'safe' option for 'remove'
  - PERL-555 Fixes serialization of thread-shared scalars (and likely other tied/magic-using scalars) on Perls before 5.18

v0.708.3.0 2015-07-14 15:42:16-04:00 America/New_York

  [Bug fixes]
  - PERL-543 fix serialization of undef in tied hashes
  - PERL-553 fix duplicate _id bug with Tie::IxHash and array reference documents with an existing _id
  - Fix BSON tests on Perls with long doubles enabled

v0.708.2.0 2015-06-05 16:39:00-04:00 America/New_York

  [Bug fixes]
  - PERL-536 fix GridFS to stop throwing an error when a known empty file has no chunks; errors will still be thrown if a non-empty file has no chunks.
  - PERL-541 fixed remove() to respect MongoClient write concern

  [Documentation]
  - PERL-525 updated legacy author emails in docs and metadata

v0.708.1.0 2015-04-29 16:51:52-04:00 America/New_York

  [Bug fixes]
  - PERL-479 retrieving a GridFS file now throws an error if no chunks exist instead of returning an empty string

  [Removals]
  - PERL-496 The $MongoDB::BSON::use_boolean global never worked; BSON boolean values were always deserialized as boolean.pm objects. Rather than "fix" this and break people's code, the option has been removed and documented as such.

v0.708.0.0 2015-01-20 16:57:11-05:00 America/New_York

  [Additions]
  - Added 'get_namespace' method (and 'ns' alias) to MongoDB::MongoClient for getting a collection directly without an intermediate database object.
  - Added 'db' and 'coll' aliases for 'get_database' and 'get_collection'

  [Bug fixes]
  - PERL-489 references to scalars with both number and string slots internally would crash on BSON encoding, rather than encode the string part as a binary.

  [Diagnostics]
  - Added parenthetical note to "can't get db response" errors to disambiguate where they occur.

v0.707.2.0 2014-12-22 05:35:31-05:00 America/New_York

  [Bug fixes]
  - PERL-476 fixed getting lists of collections and indices for changes in MongoDB 2.8-rc3 and later.
v0.707.1.0 2014-12-10 12:50:45-05:00 America/New_York

  [Bug fixes]
  - PERL-465 allowed arbitrary options on index creation
  - Fixed t/database.t for change in error message for missing commands
  - Fixed undef warning from get_indexes on older MongoDB versions

  [Prerequisites]
  - Removed Data::Types as a test dependency as it was barely used and not necessary

v0.707.0.0 2014-11-12 15:04:46-05:00 America/New_York

  [Additions]
  - Supports MongoDB 3.0; in addition to prior feature support, this release fixes tests that were broken against non-default storage engines

  [Bug fixes]
  - PERL-454 suppress warnings storing datetimes at the epoch

v0.706.0.0 2014-10-28 11:30:42-04:00 America/New_York

  [*** Deprecations ***]
  - PERL-425 the 'drop_dups' indexing option is deprecated because it is ignored as of server version 2.7.5

  [Additions]
  - PERL-408 added support for SCRAM-SHA-1 (for MongoDB 2.7.8+)

  [Bug fixes]
  - PERL-409 fixed compilation on MSWin32 using the MinGW compiler
  - Fixed compilation errors on MSWin32 using the MSVC compiler
  - Fixed construction of Makefile LIBS argument for some platforms
  - Fixed parallel scan and explain tests for changes in the MongoDB 2.7.x development series

  [Diagnostics]
  - Passing the "ssl" parameter to MongoDB::MongoClient will now warn if SSL support is not available.

  [Documentation]
  - Revised "run_command" documentation to explain that array references or Tie::IxHash should be used.

  [Prerequisites]
  - Added dependency on Authen::SCRAM::Client 0.003
  - Removed (test) dependency on File::Slurp
  - Minimum required versions of configuration dependencies Path::Tiny and Config::AutoConf are now enforced in the code, not just specified in META.json

  [~ Internal changes ~]
  - PERL-433 uses the listCollections command if available (MongoDB 2.7.8+)
  - PERL-434 uses the listIndexes command if available (MongoDB 2.7.8+)
  - PERL-436 bumped supported maxWireProtocolVersion to 3 (MongoDB 2.7.8+)

v0.705.0.0 2014-09-09 10:04:59-04:00 America/New_York

  [Additions]
  - PERL-406 allow count() to use hints

  [Prerequisites]
  - Clarified that Test::Deep 0.111 is required rather than any version

v0.704.5.0 2014-08-19 14:17:00-04:00 America/New_York

  [Bug fixes]
  - PERL-407 fixed request_id race condition under threads
  - PERL-410 dies on BSON encoding/decoding if invalid UTF-8 is detected

v0.704.4.0 2014-07-30 05:43:11-04:00 America/New_York

  [Testing]
  - Restores behavior of skipping tests if no mongod is available for testing

v0.704.3.0 2014-07-28 17:02:13-04:00 America/New_York

  [Additions]
  - PERL-130 improved support for connection string URI; added support for options: ssl, connectTimeoutMS, w, wtimeoutMS, and journal

  [Bug fixes]
  - PERL-130 fixed parsing of connection string to allow for usernames containing : and passwords containing @ if they are percent encoded (RFC 3986)
  - PERL-166 fixed tailable cursors with no initial results
  - PERL-290 when find_master is 0, the driver now consistently picks the first server in the list
  - PERL-387 made database_names() retry up to three times if the server returns a lock error

v0.704.2.0 2014-07-08 12:04:02-04:00 America/New_York

  [Bug fixes]
  - PERL-376 fixed fatal error loading the MongoDB::MongoClient module before loading the top-level MongoDB module
  - Fixed cursor to catch query or timeout errors that occur after the initial query batch is received
  - Fixed primary server selection to retry for 60 seconds instead of immediately failing with an error
  - Changed bulk insert to shallow copy inserted documents before adding an '_id' field (if it didn't exist) to avoid
    modifying the original

  [Testing]
  - Fixed t/database.t for old versions of mongos
  - PERL-355 Added support for parallel testing
  - Finished converting from Test::Exception to the more robust Test::Fatal
  - Improved test coverage

v0.704.1.0 2014-06-17 21:55:18-04:00 America/New_York

  [Bug fixes]
  - PERL-336 fixed unknown command exception with index creation on 2.2 and older servers; we now correctly fall back to legacy index creation
  - PERL-349 fixed request ID misordering when reconnecting to a server; this fixes the known issue regarding test failures with threads under find_master
  - PERL-368 changed all query docs to be coerced to Tie::IxHash; this ensures that command queries are properly ordered and fixes a crashing bug when using command helpers in concert with read preference
  - PERL-369 fixed segfaults deserializing 64-bit integers from BSON on pure 32-bit perls
  - PERL-370 fixed bulk update results for upserts with non-OID _id on servers prior to 2.6
  - Fixed stale detection of write command support for bulk operations
  - Fixed wire version checks and max BSON size inspection for replica sets with multiple hosts in the connection URI

  [Documentation]
  - PERL-366 documented bulk write initializers in Collection docs
  - Updated Example.pod docs for field projection (Johann Rolschewski)

  [Testing]
  - PERL-348 tests report MongoDB version in test diagnostics
  - PERL-351 fixed test failures if the local database has auth enabled; tests will skip instead of fail
  - PERL-356 enabled additional tests if the test database is a replica set or sharded cluster
  - Added test for field projection (Johann Rolschewski)
  - Fixed various tests to run against a sharded cluster
  - Moved unused orchestration tests out of the main test suite

  [~ Internal changes ~]
  - PERL-357 added developer tools for testing different cluster configurations

v0.704.0.0 2014-05-27 13:54:01-04:00 America/New_York

  [!!! Incompatible Changes !!!]
  - PERL-108 removed previously-deprecated AUTOLOAD functions

  [*** Deprecations ***]
  - PERL-320 low-level protocol functions in MongoDB.pm are deprecated

  [Additions]
  - PERL-221 added MongoDB::Regex class to represent stored regexes
  - PERL-251 implemented support for aggregation command cursors
  - PERL-252 added 'max_time_ms' method to cursors
  - PERL-258 added support for the '$out' aggregation pipeline operator
  - PERL-262 read preference implementation
  - PERL-278 added explain support for aggregation queries
  - PERL-298 implemented parallel_scan method for collections
  - PERL-299 implemented new bulk write API
  - Added nolock support to eval in MongoDB::Database (Ashley Willis)

  [Bug Fixes]
  - PERL-233 Fix find_and_modify error handling
  - PERL-260 ensure_index no longer ignores weights, default_language, and language_override options
  - PERL-267 memory leak fixes (Casey Rojas)
  - PERL-307 fix drop_dups option for ensure_index
  - PERL-315 require DateTime 0.78 or later
  - PERL-318 fix compiler warnings
  - PERL-319 fix compilation failures on some platforms
  - PERL-322 change return value and document low-level recv
  - PERL-323 fixed possible socket leaks on communication errors
  - PERL-336 fixed index creation legacy fallback against older mongos
  - Cached client constructor arguments for replica set connections
  - Cleaned Moose class namespaces of imported methods
  - Ensured internal run_command exceptions include correct error string
  - Fixed a bug that would serialize an index direction as a string on some older Perls
  - Fixed clock race in OID unit test
  - Fixed exception handling for internal run_command calls
  - Fixed fatal error in DESTROY with find_master and down server
  - Fixed gridfs test for unique keys
  - Fixed hint tests for MongoDB >= 2.5.4
  - Fixed index creation and drop on MongoDB 2.6
  - Fixed memory corruption error on Perl 5.19.1+
  - Fixed several compiler warnings on Perl 5.8
  - Fixed use of re::regexp_pattern for 5.10.0
  - Made t/dbref.t use fresh test database
  - Prevented GridFS MD5 calculation when 'safe' is not set (mapbuh)
  - Provided backwards compatible HeUTF8 macro for Perl v5.10 and v5.8.8 and earlier
  - Removed hard-coded compiler flags for Darwin
  - Updated ppport.h to version 3.22

  [Documentation]
  - PERL-217 improved documentation of GridFS::get
  - PERL-287 updated "j" and "fsync" option docs for MongoDB 2.6
  - PERL-311 fix legacy docs authentication link
  - PERL-317 Clarified support for threads
  - PERL-341 Added install documentation including use of non-standard C library paths
  - Added abstract to MongoDB::BSON::Regexp documentation
  - Revised main MongoDB module and MongoDB::MongoClient docs
  - Updated Changes file with changes since 0.701

  [Prerequisites]
  - Added namespace::clean
  - Added Test::Deep
  - Added Test::Fatal
  - Added Throwable
  - Added Syntax::Keyword::Junction
  - Changed Test::More requirement to 0.96
  - Removed Devel::Size

  [~ Internal changes ~]
  - PERL-261 use setVersion field in isMaster for replica set discovery (David Storch)
  - PERL-264 test for closing connection when MongoClient object leaves scope.
    (Ashley Willis)
  - PERL-269 test libraries for replica set and sharded cluster testing
  - PERL-285 added wire protocol check
  - PERL-296 implemented new index creation command for MongoDB 2.6
  - PERL-312 default GridFS chunk size changed from 1mb -> 255kb
  - Switched BSON implementation to libbson and bundled patched libbson 0.6.4 to avoid external library dependency

  [~ Known Issues ~]
  - PERL-233 SSL certificate validation not yet implemented
  - PERL-349 changes to the testing framework revealed a bug when threads are used with 'find_master' on the client; the offending test is marked TODO and the bug will be addressed in the next stable release.
  - Some platforms may not compile, including Windows and some Solaris and OpenBSD systems; these issues will be addressed in a future release

v0.703.5 2014-05-23 06:26:46-04:00 America/New_York (TRIAL RELEASE)
v0.703.4 2014-04-07 20:12:27-04:00 America/New_York (TRIAL RELEASE)
v0.703.3 2014-04-01 10:32:49-04:00 America/New_York (TRIAL RELEASE)
0.703_2 (TRIAL RELEASE)

0.702.2

  [Bug Fixes]
  - Fix double-from-buffer alignment issue on ARM platform (Robin Lee)
  - Set BSON_MINKEY to 255 if char is unsigned (Robin Lee)
  - Fix test plans in connection.t and delegation.t (Robin Lee)

  [Internal]
  - Copyright update s/10gen/MongoDB/ due to company name change

0.702.1

  [Bug Fixes]
  - Query fields accept Tie::IxHash and hashrefs (Colin Cyr)
  - Fix for gridfs and creation of indexes (mapbuh)

0.702.0

  [Enhancements]
  - SASL PLAIN support added
  - Makefile.PL can enable SSL/SASL builds via environment variables

  [Bug Fixes]
  - PERL-162 set_timeout fix
  - PERL-245 fix fractional seconds in BSON datetime deserialization
  - Fix specifying index keys as an array ref (D. Ilmari Mannsåker)
  - Prevent legacy auth when in SASL mode
  - Drop all created collections in dbref.t (D. Ilmari Mannsåker)

  [Documentation]
  - Deprecated AUTOLOAD functions removed from documentation
  - Various module docs revised and updated

  [Internal]
  - Refactored boilerplate test code to a separate testing module

0.701

  [Enhancements]
  - Support for Kerberos authentication on Linux (EXPERIMENTAL)
  - Add a get_collection method to MongoDB::Collection (@sanbeg, pull #52)
  - Optimizations on inserts and fetch (@ilmari, pull #66, PERL-129)
  - Hash ordering fixes (@ilmari, pull #64)
  - Double and int type helpers (@kenahoo, pull #65, PERL-227)
  - TTL index support (@drtz, pull #60, PERL-222)
  - Restored support for Perl 5.8.
  - Support for native DBRefs.

  [Bug Fixes]
  - UTF-8 fixes (@ilmari, pull #67, #68)
  - DateTime fixes (@kenahoo, pull #65)
  - Don't do aggregation tests when running against MongoDB < 2.2.

0.47 - 0.503.4

  [Enhancements]
  - Ordered hash support for MongoDB::Cursor::hint() (Colin Syr)
  - timegm() implementation for Windows (Stevie-O)
  - aggregate() helper method
  - find_and_modify helper
  - Connection URI support enhancements (Tianon Gravi)
  - MongoClient new top-level object
  - Removing AUTOLOAD method examples from documentation
  - Replacing $conn examples with $client in docs.
  - Deprecation warning for MongoDB::Connection
  - Removed dependence on Any::Moose
  - Support for fsyncLock/unlock (Casey Rojas)
  - Support for dt_type param, DateTime::Tiny and raw epoch times
  - Support for UTF8 hash keys (Roman Yerin)
  - Support for 'j' param to turn on journaling (Casey Rojas)

  [Bug Fixes]
  - Miscellaneous documentation fixes (Andrey Khozov, others)
  - Fixed socket timeout bug (nightlord)
  - Fixed broken regex test for Perls < 5.14.
  - More accurate isUTF8 function (Jan Anderssen)
  - Proper serialization of regex flags via re::regexp_pattern

0.46

  [Enhancements]
  - Added SSL support (Casey Rojas). See new documentation on MongoDB::Connection's ssl attribute.
  - Added MongoDB::BSON::Binary type and MongoDB::BSON::use_binary option. See the Data Types documentation on using the Binary type instead of string refs for binary data.
  - Change default binary type from 2 to 0. See MongoDB::BSON::Binary for more information about the implications of this change.

  [Bug Fixes]
  - Fix auth connection issues (Olly Stephens)
  - Fix driver creating duplicate connections when port isn't specified (Olly Stephens)
  - Fix authentication check on some versions of Perl (Olly Stephens)

0.45 - September 7, 2011

  This is a recommended upgrade. There are no backwards-breaking changes, only bug fixes and enhancements.

  [Enhancements]
  - Perl 5.8.4 and higher is now officially supported (5.8.7 was the previous minimum version).
  - Improved the way that connecting handles an interrupt signal. The driver now continues to attempt connection for the remaining duration of the timeout, instead of erroring out immediately.

  [Bug Fixes]
  - Fixed MaxKey and MinKey deserialization. Deserializing these types would seg fault if they hadn't been serialized previously.
  - Fixed Windows compilation (Taro Nishino)
  - Fixed MakeMaker arguments which were causing build problems on 5.14.

0.44 - July 26, 2011

  This is a recommended upgrade. There are no backwards-breaking changes, only bug-fixes and enhancements.

  [Enhancements]
  - Added MongoDB::BSON::looks_like_number flag. The Perl driver has always been coy about turning strings into numbers. If you would like aggressive number parsing (if it looks like a number, send it to the DB as a number), you can set MongoDB::BSON::looks_like_number to 1 (defaults to 0, the previous behavior). See the MongoDB::DataTypes pod for more info.
  - Tests should now clean up after themselves, leaving no test databases behind.

  [Bug Fixes]
  - Setting a sort in the arguments to MongoDB::Collection::find is now passed through correctly to the cursor.
  - Fixed segmentation fault in array serialization: caused by specifying an _id field on insert and using an array (not a hash or Tie::IxHash).
  - Fixed segmentation fault in threading: if Mouse was used instead of Moose, version 0.43 of the driver would segfault if multiple threads were used.
  - MongoDB::Cursor now inherits the $Mongo::Cursor::slave_okay global setting, as well as checking if slave_okay is set on the cursor instance.
  - Fix GridFS functions to only ensure GridFS indexes on writes, allowing GridFS API usage on slaves.

0.43 - May 31, 2011

  This is a recommended upgrade. There are no backwards-breaking changes, only bug-fixes and enhancements.

  [Enhancements]
  - Auto-detects max BSON size for inserts, which means documents larger than 4MB can now be inserted. See the driver documentation for details.
  - Added the MongoDB::Cursor 'info' method, which returns meta information about the results being returned.

  [Bug Fixes]
  - When high UTF-8 values are used as hash keys, the driver now croaks instead of segfaulting.
  - Added 'use IO::File' before IO::File is used (Michael Langner)
  - Fixed Perl 5.14 compile (Chip Salzenberg)

0.42

  - Fixes for Sparc architecture
  - Fixed PVIV misinterpretations

0.41

  - Re-discover master on "not master" errors
  - Make driver thread safe (Florian Ragwitz)
  - POD fix (Ronald Kimball)
  - Fix GridFS warning (Graham Barr)
  - Allow auto_connect => 0 for replica sets (Graham Barr)

0.40

  - DateTime floating timezones now warn on serialization
  - Attempting to serialize unrecognized object types now croaks
  - MongoDB::Cursor::explain now resets cursor properly
  - Added BSON::encode_bson and BSON::decode_bson (Jason Toffaletti)
  - Safe writes return a hash of information instead of 1 (on success)
  - Improved last_error/safe docs
  - Fixed doc spelling errors (Stefan Hornburg)

0.39

  - Fixed memory leak

0.38

  - Fixed indexing subdocuments (x.y.z)
  - Fixed GridFS to accept non-fs prefixes (Olly Stephens)
  - Fixed compile for old C compilers (Taro Nishino)
  - Added MongoDB::read_documents for handling db replies (Graham Barr)

0.37

  - Fixed cursor not found error condition
  - Fixed compile for old C compilers
  - Fixed weird file behavior on some machines

0.36

  - Replica set support
  - Deserialize booleans as booleans (instead of ints) (Andrew Page)
  - Fixed OS X build (Todd Caine)
  - Added background option for index creation (Graham Barr)
  - Fixed slurp tests (Josh Rabinowitz)
  - Added MongoDB::Timestamp type

0.35 - 02 July 2010

  - Added MongoDB::BSON::utf8_flag_on (Pan Fan)
  - Added MongoDB::GridFS::File::slurp (Pan Fan)
  - Fixed memory leak

0.34 - 17 June 2010

  - $conn->foo->bar->baz now gets the bar.baz collection in the foo database
  - Slight speed improvements on inserts
  - Added $conn->query_timeout option to control timeout lengths for all queries done over a given connection
  - MongoDB::Cursor::tailable and MongoDB::Cursor::immortal
  - Added TO_JSON function to MongoDB::OID
  - Fixed safe save (Othello Maurer)
  - BACKWARD-BREAKING: removed old indexing syntax (if you started using the driver less than a year ago, this shouldn't affect you. If you're an old-timer, make sure you're not using the syntax that has been deprecated for a year).

0.33  26 April 2010

  - Fixed tests

0.32  21 April 2010

  - BACKWARD COMPATIBILITY BREAK: croak on failed safe update/insert/remove/ensure_index (Eric Wilhelm)
  - w and wtimeout (see MongoDB::Connection::w)
  - die correctly on MongoCollection::count errors (help from Josh Rabinowitz)
  - Added MongoDB::Collection::find (same as query)
  - Added get, put, and delete methods to MongoDB::GridFS
  - Perl 5.12 compatibility

0.31  05 April 2010

  - C89 fix (Taro Nishino)
  - Added MongoDB::Code type
  - Use connection format: mongodb://host1,host2:port2,host3...
  - Arbitrary number of hosts supported
  - Auto-reauthentication on dropped connection
  - ensure_index name option

0.30  10 March 2010

  - Support BigInt
  - On 64-bit machines, support 64-bit nums w/out BigInt (Ryan Olson)
  - Added connection timeout option (Othello Maurer)
  - Added clarifying docs on fields (Josh Rabinowitz)

0.29  01 March 2010

  - Added safe options for remove, update, and ensure_index
  - Added save method
  - Fixed bug in UTF8 checking
  - Fixed serialization of "tie %hash, 'Tie:IxHash'"

0.28  28 Jan 2010

  - Fixed undef values (Andrew Bryan)
  - Added GridFS multi-chunk test using File::Temp (Josh Rabinowitz)
  - Allow tie(%h, 'Tie::IxHash') to be used as well as Tie::IxHash->new
  - Fixed GridFS indexes and added chunkSize and uploadDate to metadata
  - Fixed batch_insert doc (Eric Wilhelm)
  - Fixed big endian build

0.27  22 Dec 2009

  - Indexes: Improved ensure_index syntax, added drop_dups option
  - Inserts: Added safe insert, checks object is < 4 MB before inserting
  - Fixed socket closing bug
  - Big-endian support
  - $ can be replaced by any character using MongoDB::BSON::char
  - MongoDB::OIDs: Fixed undefined behavior in serialization (Peter Edwards), added OID::get_time
  - 5.8.7-compatible memory allocation (Peter Edwards)
  - Added MongoDB::MaxKey and MongoDB::MinKey support

0.26  09 Nov 2009

  - Don't force i386 arch (Needed to compile on OS X with x86_64) (Graham Barr)
  - Include inc/ dir for CPAN
  - Memory leak fixes
  - Added tutorial

0.24  15 Oct 2009

  - Fix for uninitialized array values (David Morrison)
  - Boolean support
  - Connection memory leak fix
  - Added MongoDB::Cursor::count

0.23  25 Sept 2009

  Changes in this version by Ask Bjørn Hansen, Florian Ragwitz, Orlando Vazquez, Kristina Chodorow, and Eric Wilhelm:

  - Make inserting doubles (floats/NVs), undefined/null, Tie::IxHash values
  - Query sorting, snapshot, explain, and hint
  - Added non-unique ensure_index
  - Added GridFS
  - Added regex support
  - find_one takes optional fields parameter
  - DateTime used for dates
  - No C++ driver dependency

0.01  06 May 2009

  - Initial release.

# vim: set ts=2 sts=2 sw=2 et tw=75:

MongoDB-v1.2.2/CONTRIBUTING.md

# Contributing Guidelines

## Introduction

`mongo-perl-driver` is the official client-side driver for talking to MongoDB with Perl. It is free software released under the Apache 2.0 license and available on CPAN under the distribution name `MongoDB`.

## Installation

See [INSTALL.md](INSTALL.md) for more detailed installation instructions.

## How to Ask for Help

If you are having difficulty building the driver after reading the instructions below, please email the [mongodb-user mailing list](https://groups.google.com/forum/#!forum/mongodb-user) to ask for help.

Please include in your email **all** of the following information:

- The version of the driver you are trying to build (branch or tag).
  - Examples: _maint-v0 branch_, _v0.704.2.0 tag_
- The output of _perl -V_
- How your version of perl was built or installed.
  - Examples: _plenv_, _perlbrew_, _built from source_
- The error you encountered. This may be compiler, Config::AutoConf, or other output.

Failure to include the relevant information will result in additional round-trip communications to ascertain the necessary details, delaying a useful response.

## How to Contribute

The code for `mongo-perl-driver` is hosted on GitHub at:

https://github.com/mongodb/mongo-perl-driver/

If you would like to contribute code, documentation, tests, or bugfixes, follow these steps:
1. Fork the project on GitHub.
2. Clone the fork to your local machine.
3. Make your changes and push them back up to your GitHub account.
4. Send a "pull request" with a brief description of your changes, and a link to a JIRA ticket if there is one.

If you are unfamiliar with GitHub, start with their excellent documentation here:

https://help.github.com/articles/fork-a-repo

## Working with the Repository

You will need to install Config::AutoConf and Path::Tiny to be able to run the Makefile.PL. While this distribution is shipped using Dist::Zilla, you do not need to install it or use it for testing.

    $ cpan Config::AutoConf Path::Tiny
    $ perl Makefile.PL
    $ make
    $ make test

MongoDB-v1.2.2/INSTALL.md

# Installation Instructions for the MongoDB Perl Driver

## Supported platforms

The driver requires Perl v5.8.4 or later for most Unix-like platforms. It is known to build successfully on the following operating systems:

* Linux
* FreeBSD, OpenBSD, NetBSD
* Mac OSX
* Windows Vista/2008+ with Strawberry Perl 5.14 or later

Please see the [CPAN Testers Matrix](http://matrix.cpantesters.org/?dist=MongoDB) for more details on platform/perl compatibility.

The driver has not been tested on big-endian platforms. Big-endian platforms will require Perl 5.10 or later.

## Compiler tool requirements

This module requires `make` and a compiler.

For example, Debian and Ubuntu users should issue the following command:

    $ sudo apt-get install build-essential

Users of Red Hat based distributions (RHEL, CentOS, Amazon Linux, Oracle Linux, Fedora, etc.) should issue the following command:

    $ sudo yum install make gcc

On Windows, [StrawberryPerl](http://strawberryperl.com/) ships with a GCC compiler.

## Configuration requirements

Configuration requires the following Perl modules:

* Config::AutoConf
* Path::Tiny

If you are using a modern CPAN client (anything since Perl v5.12), these will be installed automatically as needed. If you have an older CPAN client or are doing manual installation, install these before running `Makefile.PL`.

## Testing with a database

Most tests will skip unless a MongoDB database is available either on the default localhost and port or on an alternate `host:port` specified by the `MONGOD` environment variable:

    $ export MONGOD=localhost:31017
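To check that the test server is reachable before running the test suite, a short script along these lines should work (an illustrative sketch; it assumes the default port 27017 when `MONGOD` is not set and uses the driver's `connect` helper and `run_command`):

    use MongoDB;

    # Use the same MONGOD setting the test suite honors, falling back to
    # the default local port.
    my $host   = $ENV{MONGOD} || 'localhost:27017';
    my $client = MongoDB->connect("mongodb://$host");
    my $reply  = $client->get_database('admin')->run_command( [ ping => 1 ] );
    print "mongod at $host looks reachable\n" if $reply->{ok};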
## Installing as a non-privileged user

If you do not have write permissions to your Perl's site library directory (`perl -V:sitelib`), then you will need to use your CPAN client or run `make install` as root or with `sudo`.

Alternatively, you can configure a local library. See [local::lib](https://metacpan.org/pod/local::lib#The-bootstrapping-technique) on CPAN for more details.

## Installing from CPAN

You can install the latest stable release by installing the `MongoDB` package:

    $ cpan MongoDB

To install a development release, specify it by author and tarball path. For example:

    $ cpan MONGODB/MongoDB-v0.999.999.4-TRIAL.tar.gz

## Installing from a tarball downloaded from CPAN

You can install using a CPAN client. Unpack the tarball and, from inside the unpacked directory, run your CPAN client with `.` as the target:

    $ cpan .

To install manually, first install the configuration requirements listed above. Then run the `Makefile.PL` manually:

    $ perl Makefile.PL

This will report any missing prerequisites and you will need to install them all. You can then run `make`, etc. as usual:

    $ make
    $ make test
    $ make install

## Installing from the git repository

If you have checked out the git repository (or downloaded a tarball from Github), you will need to install the configuration requirements and follow the manual procedure described above.

## SSL and/or SASL support

SSL support requires installing the [IO::Socket::SSL](http://p3rl.org/IO::Socket::SSL) module. You will need to have the libssl-dev package or equivalent installed for that to build successfully.

SASL support requires [Authen::SASL](http://p3rl.org/Authen::SASL) and possibly a Kerberos-capable backend.

The [Authen::SASL::Perl](http://p3rl.org/Authen::SASL::Perl) backend comes with Authen::SASL and requires the [GSSAPI](http://p3rl.org/GSSAPI) CPAN module for GSSAPI support. Installing the GSSAPI module from CPAN rather than an OS package requires libkrb5 and the krb5-config utility (available for Debian/RHEL systems in the libkrb5 development package or equivalent).

Alternatively, the [Authen::SASL::XS](http://p3rl.org/Authen::SASL::XS) or [Authen::SASL::Cyrus](http://p3rl.org/Authen::SASL::Cyrus) modules may be used. Both rely on Cyrus libsasl. Authen::SASL::XS is preferred.

Installing Authen::SASL::XS or Authen::SASL::Cyrus from CPAN requires libsasl. On Debian systems, it is available from libsasl2-dev; on RHEL, it is available in cyrus-sasl-devel.

MongoDB-v1.2.2/LICENSE

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship.
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. MongoDB-v1.2.2/Makefile.PL000644 000765 000024 00000005021 12651754051 015377 0ustar00davidstaff000000 000000 use strict; use warnings; BEGIN { if ( $^O eq "MSWin32" ) { my (undef, $major, undef, undef, $id ) = Win32::GetOSVersion(); die "OS unsupported. Windows Vista or later is required.\n" unless $id > 2 || $major > 5; } } use inc::Module::Install; name 'MongoDB'; perl_version '5.8.4'; author 'Florian Ragwitz '; author 'Kristina Chodorow '; author 'Mike Friedman '; author 'David.Golden '; license 'Apache'; all_from 'lib/MongoDB.pm'; requires 'Authen::SCRAM::Client' => '0.003'; requires 'Carp'; requires 'Class::XSAccessor'; requires 'DateTime' => '0.78'; requires 'Digest::MD5'; requires 'Encode'; requires 'Exporter' => '5.57'; requires 'IO::File'; requires 'IO::Socket' => ( $^O eq 'MSWin32' ? 
'1.31' : '0' ); requires 'List::Util'; requires 'MIME::Base64'; requires 'Moo' => '2'; requires 'Moo::Role'; requires 'Safe::Isa'; requires 'Scalar::Util'; requires 'Socket'; requires 'Tie::IxHash'; requires 'Time::HiRes'; requires 'Try::Tiny'; requires 'Type::Library'; requires 'Type::Tiny' => '1'; requires 'Type::Tiny::XS' if $] ge '5.010001'; requires 'Type::Utils'; requires 'Types::Standard'; requires 'XSLoader'; requires 'boolean' => 0.25; requires 'constant'; requires 'if'; requires 'namespace::clean'; requires 'overload'; requires 're'; requires 'strict'; requires 'version'; requires 'warnings'; test_requires 'Data::Dumper'; test_requires 'Devel::Peek'; test_requires 'ExtUtils::MakeMaker'; test_requires 'File::Spec'; test_requires 'File::Temp' => '0.17'; test_requires 'FileHandle'; test_requires 'JSON::MaybeXS' => '1.002005'; test_requires 'Math::BigInt'; test_requires 'Path::Tiny' => '0.054'; test_requires 'Test::Deep' => '0.111'; test_requires 'Test::Fatal'; test_requires 'Test::More' => '0.96'; test_requires 'bigint'; test_requires 'lib'; test_requires 'utf8'; mongo; repository 'git://github.com/mongodb/mongo-perl-driver.git'; tests_recursive; WriteAll; package MY; use Config; # Because we keep our XS in an 'xs' subdirectory, this ensures the object files # are built there, too, which is needed for linking to work. # Originally added by Florian Ragwitz, based on Glib::MakeHelper. See # https://metacpan.org/source/XAOC/Glib-1.304/lib/Glib/MakeHelper.pm#L553 sub const_cccmd { my $inherited = shift->SUPER::const_cccmd(@_); return '' unless $inherited; if ($Config{cc} =~ /^cl\b/) { $inherited .= ' /Fo$@'; } else { $inherited .= ' -o $@'; } return $inherited; } MongoDB-v1.2.2/MANIFEST000644 000765 000024 00000033564 12651754051 014573 0ustar00davidstaff000000 000000 # This file was automatically generated by Dist::Zilla::Plugin::Manifest v5.043. 
CONTRIBUTING.md Changes INSTALL.md LICENSE MANIFEST META.json META.yml Makefile.PL README README.md bson/b64_ntop.h bson/b64_pton.h bson/bson-atomic.c bson/bson-atomic.h bson/bson-clock.c bson/bson-clock.h bson/bson-compat.h bson/bson-config.h.in bson/bson-context-private.h bson/bson-context.c bson/bson-context.h bson/bson-endian.h bson/bson-error.c bson/bson-error.h bson/bson-iso8601-private.h bson/bson-iso8601.c bson/bson-iter.c bson/bson-iter.h bson/bson-keys.c bson/bson-keys.h bson/bson-macros.h bson/bson-md5.c bson/bson-md5.h bson/bson-memory.c bson/bson-memory.h bson/bson-oid.c bson/bson-oid.h bson/bson-private.h bson/bson-reader.c bson/bson-reader.h bson/bson-stdint-win32.h bson/bson-stdint.h bson/bson-string.c bson/bson-string.h bson/bson-thread-private.h bson/bson-timegm-private.h bson/bson-timegm.c bson/bson-types.h bson/bson-utf8.c bson/bson-utf8.h bson/bson-value.c bson/bson-value.h bson/bson-version.c bson/bson-version.h bson/bson-writer.c bson/bson-writer.h bson/bson.c bson/bson.h inc/CheckJiraInChanges.pm inc/Module/AutoInstall.pm inc/Module/Install.pm inc/Module/Install/AutoInstall.pm inc/Module/Install/Base.pm inc/Module/Install/Can.pm inc/Module/Install/Compiler.pm inc/Module/Install/Fetch.pm inc/Module/Install/Include.pm inc/Module/Install/Makefile.pm inc/Module/Install/Metadata.pm inc/Module/Install/PRIVATE/Mongo.pm inc/Module/Install/Win32.pm inc/Module/Install/WriteAll.pm lib/MongoDB.pm lib/MongoDB/BSON.pm lib/MongoDB/BSON/Binary.pm lib/MongoDB/BSON/Regexp.pm lib/MongoDB/BSON/_EncodedDoc.pm lib/MongoDB/BulkWrite.pm lib/MongoDB/BulkWriteResult.pm lib/MongoDB/BulkWriteView.pm lib/MongoDB/Code.pm lib/MongoDB/Collection.pm lib/MongoDB/CommandResult.pm lib/MongoDB/Cursor.pm lib/MongoDB/DBRef.pm lib/MongoDB/DataTypes.pod lib/MongoDB/Database.pm lib/MongoDB/DeleteResult.pm lib/MongoDB/Error.pm lib/MongoDB/Examples.pod lib/MongoDB/GridFS.pm lib/MongoDB/GridFS/File.pm lib/MongoDB/IndexView.pm lib/MongoDB/InsertManyResult.pm lib/MongoDB/InsertOneResult.pm lib/MongoDB/MongoClient.pm lib/MongoDB/OID.pm lib/MongoDB/Op/_Aggregate.pm lib/MongoDB/Op/_BatchInsert.pm lib/MongoDB/Op/_BulkWrite.pm lib/MongoDB/Op/_Command.pm lib/MongoDB/Op/_Count.pm lib/MongoDB/Op/_CreateIndexes.pm lib/MongoDB/Op/_Delete.pm lib/MongoDB/Op/_Distinct.pm lib/MongoDB/Op/_Explain.pm lib/MongoDB/Op/_FSyncUnlock.pm lib/MongoDB/Op/_FindAndDelete.pm lib/MongoDB/Op/_FindAndUpdate.pm lib/MongoDB/Op/_GetMore.pm lib/MongoDB/Op/_InsertOne.pm lib/MongoDB/Op/_KillCursors.pm lib/MongoDB/Op/_ListCollections.pm lib/MongoDB/Op/_ListIndexes.pm lib/MongoDB/Op/_ParallelScan.pm lib/MongoDB/Op/_Query.pm lib/MongoDB/Op/_Update.pm lib/MongoDB/QueryResult.pm lib/MongoDB/QueryResult/Filtered.pm lib/MongoDB/ReadConcern.pm lib/MongoDB/ReadPreference.pm lib/MongoDB/Role/_BypassValidation.pm lib/MongoDB/Role/_CommandCursorOp.pm lib/MongoDB/Role/_CommandOp.pm lib/MongoDB/Role/_Cursor.pm lib/MongoDB/Role/_DatabaseOp.pm lib/MongoDB/Role/_InsertPreEncoder.pm lib/MongoDB/Role/_LastError.pm lib/MongoDB/Role/_PrivateConstructor.pm lib/MongoDB/Role/_ReadOp.pm lib/MongoDB/Role/_ReadPrefModifier.pm lib/MongoDB/Role/_UpdatePreEncoder.pm lib/MongoDB/Role/_WriteOp.pm lib/MongoDB/Role/_WriteResult.pm lib/MongoDB/Timestamp.pm lib/MongoDB/Tutorial.pod lib/MongoDB/UnacknowledgedResult.pm lib/MongoDB/UpdateResult.pm lib/MongoDB/Upgrading.pod lib/MongoDB/WriteConcern.pm lib/MongoDB/_Constants.pm lib/MongoDB/_Credential.pm lib/MongoDB/_Link.pm lib/MongoDB/_Protocol.pm lib/MongoDB/_Query.pm lib/MongoDB/_Server.pm lib/MongoDB/_Topology.pm 
lib/MongoDB/_Types.pm lib/MongoDB/_URI.pm perl_mongo.c perl_mongo.h ppport.h pstdint.h t/00-report-mongod.t t/00-report-prereqs.dd t/00-report-prereqs.t t/bson.t t/bson_codec/booleans.t t/bson_codec/containers.t t/bson_codec/elements.t t/bson_codec/time_moment.t t/bulk.t t/bypass_doc_validation.t t/collection.t t/connection.t t/crud.t t/crud_spec.t t/cursor.t t/data/CRUD/README.rst t/data/CRUD/read/aggregate.json t/data/CRUD/read/aggregate.yml t/data/CRUD/read/count.json t/data/CRUD/read/count.yml t/data/CRUD/read/distinct.json t/data/CRUD/read/distinct.yml t/data/CRUD/read/find.json t/data/CRUD/read/find.yml t/data/CRUD/write/deleteMany.json t/data/CRUD/write/deleteMany.yml t/data/CRUD/write/deleteOne.json t/data/CRUD/write/deleteOne.yml t/data/CRUD/write/findOneAndDelete.json t/data/CRUD/write/findOneAndDelete.yml t/data/CRUD/write/findOneAndReplace.json t/data/CRUD/write/findOneAndReplace.yml t/data/CRUD/write/findOneAndUpdate.json t/data/CRUD/write/findOneAndUpdate.yml t/data/CRUD/write/insertMany.json t/data/CRUD/write/insertMany.yml t/data/CRUD/write/insertOne.json t/data/CRUD/write/insertOne.yml t/data/CRUD/write/replaceOne.json t/data/CRUD/write/replaceOne.yml t/data/CRUD/write/updateMany.json t/data/CRUD/write/updateMany.yml t/data/CRUD/write/updateOne.json t/data/CRUD/write/updateOne.yml t/data/SDAM/README.rst t/data/SDAM/rs/discover_arbiters.json t/data/SDAM/rs/discover_arbiters.yml t/data/SDAM/rs/discover_passives.json t/data/SDAM/rs/discover_passives.yml t/data/SDAM/rs/discover_primary.json t/data/SDAM/rs/discover_primary.yml t/data/SDAM/rs/discover_secondary.json t/data/SDAM/rs/discover_secondary.yml t/data/SDAM/rs/discovery.json t/data/SDAM/rs/discovery.yml t/data/SDAM/rs/equal_electionids.json t/data/SDAM/rs/equal_electionids.yml t/data/SDAM/rs/ghost_discovered.json t/data/SDAM/rs/ghost_discovered.yml t/data/SDAM/rs/hosts_differ_from_seeds.json t/data/SDAM/rs/hosts_differ_from_seeds.yml t/data/SDAM/rs/member_reconfig.json t/data/SDAM/rs/member_reconfig.yml t/data/SDAM/rs/member_standalone.json t/data/SDAM/rs/member_standalone.yml t/data/SDAM/rs/new_primary.json t/data/SDAM/rs/new_primary.yml t/data/SDAM/rs/new_primary_new_electionid.json t/data/SDAM/rs/new_primary_new_electionid.yml t/data/SDAM/rs/new_primary_new_setversion.json t/data/SDAM/rs/new_primary_new_setversion.yml t/data/SDAM/rs/new_primary_wrong_set_name.json t/data/SDAM/rs/new_primary_wrong_set_name.yml t/data/SDAM/rs/non_rs_member.json t/data/SDAM/rs/non_rs_member.yml t/data/SDAM/rs/normalize_case.json t/data/SDAM/rs/normalize_case.yml t/data/SDAM/rs/null_election_id.json t/data/SDAM/rs/null_election_id.yml t/data/SDAM/rs/primary_becomes_standalone.json t/data/SDAM/rs/primary_becomes_standalone.yml t/data/SDAM/rs/primary_changes_set_name.json t/data/SDAM/rs/primary_changes_set_name.yml t/data/SDAM/rs/primary_disconnect.json t/data/SDAM/rs/primary_disconnect.yml t/data/SDAM/rs/primary_disconnect_electionid.json t/data/SDAM/rs/primary_disconnect_electionid.yml t/data/SDAM/rs/primary_disconnect_setversion.json t/data/SDAM/rs/primary_disconnect_setversion.yml t/data/SDAM/rs/primary_mismatched_me.json t/data/SDAM/rs/primary_mismatched_me.yml t/data/SDAM/rs/primary_to_no_primary_mismatched_me.json t/data/SDAM/rs/primary_to_no_primary_mismatched_me.yml t/data/SDAM/rs/primary_wrong_set_name.json t/data/SDAM/rs/primary_wrong_set_name.yml t/data/SDAM/rs/response_from_removed.json t/data/SDAM/rs/response_from_removed.yml t/data/SDAM/rs/rsother_discovered.json t/data/SDAM/rs/rsother_discovered.yml 
t/data/SDAM/rs/sec_not_auth.json t/data/SDAM/rs/sec_not_auth.yml t/data/SDAM/rs/secondary_mismatched_me.json t/data/SDAM/rs/secondary_mismatched_me.yml t/data/SDAM/rs/secondary_wrong_set_name.json t/data/SDAM/rs/secondary_wrong_set_name.yml t/data/SDAM/rs/secondary_wrong_set_name_with_primary.json t/data/SDAM/rs/secondary_wrong_set_name_with_primary.yml t/data/SDAM/rs/setversion_without_electionid.json t/data/SDAM/rs/setversion_without_electionid.yml t/data/SDAM/rs/stepdown_change_set_name.json t/data/SDAM/rs/stepdown_change_set_name.yml t/data/SDAM/rs/unexpected_mongos.json t/data/SDAM/rs/unexpected_mongos.yml t/data/SDAM/rs/use_setversion_without_electionid.json t/data/SDAM/rs/use_setversion_without_electionid.yml t/data/SDAM/rs/wrong_set_name.json t/data/SDAM/rs/wrong_set_name.yml t/data/SDAM/sharded/mongos_disconnect.json t/data/SDAM/sharded/mongos_disconnect.yml t/data/SDAM/sharded/multiple_mongoses.json t/data/SDAM/sharded/multiple_mongoses.yml t/data/SDAM/sharded/non_mongos_removed.json t/data/SDAM/sharded/non_mongos_removed.yml t/data/SDAM/sharded/normalize_uri_case.json t/data/SDAM/sharded/normalize_uri_case.yml t/data/SDAM/single/direct_connection_external_ip.json t/data/SDAM/single/direct_connection_external_ip.yml t/data/SDAM/single/direct_connection_mongos.json t/data/SDAM/single/direct_connection_mongos.yml t/data/SDAM/single/direct_connection_rsarbiter.json t/data/SDAM/single/direct_connection_rsarbiter.yml t/data/SDAM/single/direct_connection_rsprimary.json t/data/SDAM/single/direct_connection_rsprimary.yml t/data/SDAM/single/direct_connection_rssecondary.json t/data/SDAM/single/direct_connection_rssecondary.yml t/data/SDAM/single/direct_connection_slave.json t/data/SDAM/single/direct_connection_slave.yml t/data/SDAM/single/direct_connection_standalone.json t/data/SDAM/single/direct_connection_standalone.yml t/data/SDAM/single/not_ok_response.json t/data/SDAM/single/not_ok_response.yml t/data/SDAM/single/standalone_removed.json t/data/SDAM/single/standalone_removed.yml t/data/SDAM/single/unavailable_seed.json t/data/SDAM/single/unavailable_seed.yml t/data/SS/README.rst t/data/SS/rtt/first_value.json t/data/SS/rtt/first_value.yml t/data/SS/rtt/first_value_zero.json t/data/SS/rtt/first_value_zero.yml t/data/SS/rtt/value_test_1.json t/data/SS/rtt/value_test_1.yml t/data/SS/rtt/value_test_2.json t/data/SS/rtt/value_test_2.yml t/data/SS/rtt/value_test_3.json t/data/SS/rtt/value_test_3.yml t/data/SS/rtt/value_test_4.json t/data/SS/rtt/value_test_4.yml t/data/SS/rtt/value_test_5.json t/data/SS/rtt/value_test_5.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest_non_matching.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest_non_matching.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/Primary.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/Primary.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred_non_matching.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred_non_matching.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred.json 
t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred_non_matching.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred_non_matching.yml t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary_non_matching.json t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary_non_matching.yml t/data/SS/server_selection/ReplicaSetNoPrimary/write/SecondaryPreferred.json t/data/SS/server_selection/ReplicaSetNoPrimary/write/SecondaryPreferred.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest_non_matching.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest_non_matching.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/Primary.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/Primary.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred_non_matching.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred_non_matching.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred_non_matching.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred_non_matching.yml t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary_non_matching.json t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary_non_matching.yml t/data/SS/server_selection/ReplicaSetWithPrimary/write/SecondaryPreferred.json t/data/SS/server_selection/ReplicaSetWithPrimary/write/SecondaryPreferred.yml t/data/SS/server_selection/Sharded/read/SecondaryPreferred.json t/data/SS/server_selection/Sharded/read/SecondaryPreferred.yml t/data/SS/server_selection/Sharded/write/SecondaryPreferred.json t/data/SS/server_selection/Sharded/write/SecondaryPreferred.yml t/data/SS/server_selection/Single/read/SecondaryPreferred.json t/data/SS/server_selection/Single/read/SecondaryPreferred.yml t/data/SS/server_selection/Single/write/SecondaryPreferred.json t/data/SS/server_selection/Single/write/SecondaryPreferred.yml t/data/SS/server_selection/Unknown/read/SecondaryPreferred.json t/data/SS/server_selection/Unknown/read/SecondaryPreferred.yml t/data/SS/server_selection/Unknown/write/SecondaryPreferred.json t/data/SS/server_selection/Unknown/write/SecondaryPreferred.yml t/data/gridfs/img.png t/data/gridfs/input.txt t/database.t t/dbref.t t/deprecated/bulk.t t/deprecated/collection.t t/deprecated/indexes.t t/dt_types.t t/errors.t t/fsync.t t/gridfs.t t/indexview.t t/lib/MongoDBTest.pm t/lib/TestBSON.pm t/max_time_ms.t t/parallel_scan.t t/readpref.t t/regexp_obj.t t/sdam_spec.t t/ss_spec.t t/testrules.yml t/threads/basic.t t/threads/bson.t t/threads/cursor.t t/threads/oid.t t/types.t t/unit/configuration.t t/unit/link.t t/unit/read_preference.t t/unit/uri.t t/unit/write_concern.t xs/BSON.xs xt/author/circular-refs.t xt/author/pod-syntax.t xt/author/test-version.t xt/release/check-jira-in-changes.t xt/release/minimum-version.t 
MongoDB-v1.2.2/META.json000644 000765 000024 00000030106 12651754051 015050 0ustar00davidstaff000000 000000 { "abstract" : "Official MongoDB Driver for Perl", "author" : [ "David Golden ", "Mike Friedman ", "Kristina Chodorow ", "Florian Ragwitz " ], "dynamic_config" : 0, "generated_by" : "Dist::Zilla version 5.043, CPAN::Meta::Converter version 2.150001", "license" : [ "apache_2_0" ], "meta-spec" : { "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec", "version" : 2 }, "name" : "MongoDB", "no_index" : { "directory" : [ "devel", "inc", "t", "xt" ] }, "prereqs" : { "configure" : { "requires" : { "Config::AutoConf" : "0.22", "ExtUtils::MakeMaker" : "0", "Path::Tiny" : "0.052" } }, "develop" : { "requires" : { "Test::Memory::Cycle" : "0", "Test::More" : "0", "Test::Pod" : "1.41", "Test::Version" : "1", "lib" : "0" } }, "runtime" : { "recommends" : { "IO::Socket::IP" : "0.25", "IO::Socket::SSL" : "1.42", "Mozilla::CA" : "20130114", "Net::SSLeay" : "1.49" }, "requires" : { "Authen::SCRAM::Client" : "0.003", "Carp" : "0", "Class::XSAccessor" : "0", "DateTime" : "0.78", "Digest::MD5" : "0", "Encode" : "0", "Exporter" : "5.57", "IO::File" : "0", "IO::Socket" : "0", "JSON::PP" : "2.27300", "List::Util" : "0", "MIME::Base64" : "0", "Moo" : "2", "Moo::Role" : "0", "Safe::Isa" : "0", "Scalar::Util" : "0", "Socket" : "0", "Sub::Quote" : "0", "Tie::IxHash" : "0", "Time::HiRes" : "0", "Try::Tiny" : "0", "Type::Library" : "0", "Type::Tiny::XS" : "0", "Type::Utils" : "0", "Types::Standard" : "0", "XSLoader" : "0", "boolean" : "0.25", "constant" : "0", "if" : "0", "namespace::clean" : "0", "overload" : "0", "perl" : "v5.8.0", "re" : "0", "strict" : "0", "version" : "0", "warnings" : "0" }, "suggests" : { "IO::Socket::SSL" : "1.56" } }, "test" : { "recommends" : { "CPAN::Meta" : "2.120900", "DateTime::Tiny" : "1", "Test::Harness" : "3.31", "Time::Moment" : "0.22" }, "requires" : { "Data::Dumper" : "0", "ExtUtils::MakeMaker" : "0", "File::Spec" : "0", "File::Temp" : "0", "FileHandle" : "0", "JSON::MaybeXS" : "0", "Math::BigInt" : "0", "Path::Tiny" : "0.054", "Test::Deep" : "0.111", "Test::Fatal" : "0", "Test::More" : "0.96", "bigint" : "0", "lib" : "0", "threads::shared" : "0", "utf8" : "0" } } }, "provides" : { "MongoDB" : { "file" : "lib/MongoDB.pm", "version" : "v1.2.2" }, "MongoDB::AuthError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::BSON" : { "file" : "lib/MongoDB/BSON.pm", "version" : "v1.2.2" }, "MongoDB::BSON::Binary" : { "file" : "lib/MongoDB/BSON/Binary.pm", "version" : "v1.2.2" }, "MongoDB::BSON::Regexp" : { "file" : "lib/MongoDB/BSON/Regexp.pm", "version" : "v1.2.2" }, "MongoDB::BulkWrite" : { "file" : "lib/MongoDB/BulkWrite.pm", "version" : "v1.2.2" }, "MongoDB::BulkWriteResult" : { "file" : "lib/MongoDB/BulkWriteResult.pm", "version" : "v1.2.2" }, "MongoDB::BulkWriteView" : { "file" : "lib/MongoDB/BulkWriteView.pm", "version" : "v1.2.2" }, "MongoDB::Code" : { "file" : "lib/MongoDB/Code.pm", "version" : "v1.2.2" }, "MongoDB::Collection" : { "file" : "lib/MongoDB/Collection.pm", "version" : "v1.2.2" }, "MongoDB::CommandResult" : { "file" : "lib/MongoDB/CommandResult.pm", "version" : "v1.2.2" }, "MongoDB::ConnectionError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::Cursor" : { "file" : "lib/MongoDB/Cursor.pm", "version" : "v1.2.2" }, "MongoDB::CursorNotFoundError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::DBRef" : { "file" : "lib/MongoDB/DBRef.pm", "version" : "v1.2.2" }, "MongoDB::Database" : { "file" : 
"lib/MongoDB/Database.pm", "version" : "v1.2.2" }, "MongoDB::DatabaseError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::DecodingError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::DeleteResult" : { "file" : "lib/MongoDB/DeleteResult.pm", "version" : "v1.2.2" }, "MongoDB::DocumentError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::DuplicateKeyError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::Error" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::ExecutionTimeout" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::GridFS" : { "file" : "lib/MongoDB/GridFS.pm", "version" : "v1.2.2" }, "MongoDB::GridFS::File" : { "file" : "lib/MongoDB/GridFS/File.pm", "version" : "v1.2.2" }, "MongoDB::GridFSError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::HandshakeError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::IndexView" : { "file" : "lib/MongoDB/IndexView.pm", "version" : "v1.2.2" }, "MongoDB::InsertManyResult" : { "file" : "lib/MongoDB/InsertManyResult.pm", "version" : "v1.2.2" }, "MongoDB::InsertOneResult" : { "file" : "lib/MongoDB/InsertOneResult.pm", "version" : "v1.2.2" }, "MongoDB::InternalError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::MongoClient" : { "file" : "lib/MongoDB/MongoClient.pm", "version" : "v1.2.2" }, "MongoDB::NetworkError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::NetworkTimeout" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::NotMasterError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::OID" : { "file" : "lib/MongoDB/OID.pm", "version" : "v1.2.2" }, "MongoDB::ProtocolError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::QueryResult" : { "file" : "lib/MongoDB/QueryResult.pm", "version" : "v1.2.2" }, "MongoDB::QueryResult::Filtered" : { "file" : "lib/MongoDB/QueryResult/Filtered.pm", "version" : "v1.2.2" }, "MongoDB::ReadConcern" : { "file" : "lib/MongoDB/ReadConcern.pm", "version" : "v1.2.2" }, "MongoDB::ReadPreference" : { "file" : "lib/MongoDB/ReadPreference.pm", "version" : "v1.2.2" }, "MongoDB::SelectionError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::TimeoutError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::Timestamp" : { "file" : "lib/MongoDB/Timestamp.pm", "version" : "v1.2.2" }, "MongoDB::UnacknowledgedResult" : { "file" : "lib/MongoDB/UnacknowledgedResult.pm", "version" : "v1.2.2" }, "MongoDB::UpdateResult" : { "file" : "lib/MongoDB/UpdateResult.pm", "version" : "v1.2.2" }, "MongoDB::UsageError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::WriteConcern" : { "file" : "lib/MongoDB/WriteConcern.pm", "version" : "v1.2.2" }, "MongoDB::WriteConcernError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::WriteError" : { "file" : "lib/MongoDB/Error.pm", "version" : "v1.2.2" }, "MongoDB::WriteResult" : { "file" : "lib/MongoDB/BulkWriteResult.pm", "version" : "v1.2.2" } }, "release_status" : "stable", "resources" : { "bugtracker" : { "web" : "https://jira.mongodb.org/browse/PERL" }, "homepage" : "https://github.com/mongodb/mongo-perl-driver", "repository" : { "type" : "git", "url" : "https://github.com/mongodb/mongo-perl-driver.git", "web" : "https://github.com/mongodb/mongo-perl-driver" } }, "version" : "v1.2.2", "x_contributors" : [ "Andrew Page ", 
"Andrey Khozov ", "Ashley Willis ", "Ask Bjørn Hansen ", "Bernard Gorman ", "Brendan W. McAdams ", "Casey Rojas ", "Christian Hansen ", "Christian Sturm ", "Christian Walde ", "Colin Cyr ", "Danny Raetzsch ", "David Morrison ", "David Nadle ", "David Steinbrunner ", "David Storch ", "D. Ilmari Mannsåker ", "Eric Daniels ", "Gerard Goossen ", "Glenn Fowler ", "Graham Barr ", "Hao Wu ", "Jason Carey ", "Jason Toffaletti ", "Johann Rolschewski ", "Joseph Harnish ", "Josh Matthews ", "Joshua Juran ", "J. Stewart ", "Kamil Slowikowski ", "Ken Williams ", "Matthew Shopsin ", "Michael Langner ", "Michael Rotmanov ", "Mike Dirolf ", "Mohammad S Anwar ", "Nickola Trupcheff ", "Nigel Gregoire ", "Niko Tyni ", "Nuno Carvalho ", "Orlando Vazquez ", "Othello Maurer ", "Pan Fan ", "Rahul Dhodapkar ", "Robin Lee ", "Roman Yerin ", "Ronald J Kimball ", "Ryan Chipman ", "Stephen Oberholtzer ", "Steve Sanbeg ", "Stuart Watt ", "Uwe Voelker ", "Whitney Jackson ", "Xtreak ", "Zhihong Zhang " ] } MongoDB-v1.2.2/META.yml000644 000765 000024 00000020155 12651754051 014703 0ustar00davidstaff000000 000000 --- abstract: 'Official MongoDB Driver for Perl' author: - 'David Golden ' - 'Mike Friedman ' - 'Kristina Chodorow ' - 'Florian Ragwitz ' build_requires: Data::Dumper: '0' ExtUtils::MakeMaker: '0' File::Spec: '0' File::Temp: '0' FileHandle: '0' JSON::MaybeXS: '0' Math::BigInt: '0' Path::Tiny: '0.054' Test::Deep: '0.111' Test::Fatal: '0' Test::More: '0.96' bigint: '0' lib: '0' threads::shared: '0' utf8: '0' configure_requires: Config::AutoConf: '0.22' ExtUtils::MakeMaker: '0' Path::Tiny: '0.052' dynamic_config: 0 generated_by: 'Dist::Zilla version 5.043, CPAN::Meta::Converter version 2.150001' license: apache meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: '1.4' name: MongoDB no_index: directory: - devel - inc - t - xt provides: MongoDB: file: lib/MongoDB.pm version: v1.2.2 MongoDB::AuthError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::BSON: file: lib/MongoDB/BSON.pm version: v1.2.2 MongoDB::BSON::Binary: file: lib/MongoDB/BSON/Binary.pm version: v1.2.2 MongoDB::BSON::Regexp: file: lib/MongoDB/BSON/Regexp.pm version: v1.2.2 MongoDB::BulkWrite: file: lib/MongoDB/BulkWrite.pm version: v1.2.2 MongoDB::BulkWriteResult: file: lib/MongoDB/BulkWriteResult.pm version: v1.2.2 MongoDB::BulkWriteView: file: lib/MongoDB/BulkWriteView.pm version: v1.2.2 MongoDB::Code: file: lib/MongoDB/Code.pm version: v1.2.2 MongoDB::Collection: file: lib/MongoDB/Collection.pm version: v1.2.2 MongoDB::CommandResult: file: lib/MongoDB/CommandResult.pm version: v1.2.2 MongoDB::ConnectionError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::Cursor: file: lib/MongoDB/Cursor.pm version: v1.2.2 MongoDB::CursorNotFoundError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::DBRef: file: lib/MongoDB/DBRef.pm version: v1.2.2 MongoDB::Database: file: lib/MongoDB/Database.pm version: v1.2.2 MongoDB::DatabaseError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::DecodingError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::DeleteResult: file: lib/MongoDB/DeleteResult.pm version: v1.2.2 MongoDB::DocumentError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::DuplicateKeyError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::Error: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::ExecutionTimeout: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::GridFS: file: lib/MongoDB/GridFS.pm version: v1.2.2 MongoDB::GridFS::File: file: lib/MongoDB/GridFS/File.pm version: v1.2.2 
MongoDB::GridFSError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::HandshakeError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::IndexView: file: lib/MongoDB/IndexView.pm version: v1.2.2 MongoDB::InsertManyResult: file: lib/MongoDB/InsertManyResult.pm version: v1.2.2 MongoDB::InsertOneResult: file: lib/MongoDB/InsertOneResult.pm version: v1.2.2 MongoDB::InternalError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::MongoClient: file: lib/MongoDB/MongoClient.pm version: v1.2.2 MongoDB::NetworkError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::NetworkTimeout: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::NotMasterError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::OID: file: lib/MongoDB/OID.pm version: v1.2.2 MongoDB::ProtocolError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::QueryResult: file: lib/MongoDB/QueryResult.pm version: v1.2.2 MongoDB::QueryResult::Filtered: file: lib/MongoDB/QueryResult/Filtered.pm version: v1.2.2 MongoDB::ReadConcern: file: lib/MongoDB/ReadConcern.pm version: v1.2.2 MongoDB::ReadPreference: file: lib/MongoDB/ReadPreference.pm version: v1.2.2 MongoDB::SelectionError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::TimeoutError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::Timestamp: file: lib/MongoDB/Timestamp.pm version: v1.2.2 MongoDB::UnacknowledgedResult: file: lib/MongoDB/UnacknowledgedResult.pm version: v1.2.2 MongoDB::UpdateResult: file: lib/MongoDB/UpdateResult.pm version: v1.2.2 MongoDB::UsageError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::WriteConcern: file: lib/MongoDB/WriteConcern.pm version: v1.2.2 MongoDB::WriteConcernError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::WriteError: file: lib/MongoDB/Error.pm version: v1.2.2 MongoDB::WriteResult: file: lib/MongoDB/BulkWriteResult.pm version: v1.2.2 recommends: IO::Socket::IP: '0.25' IO::Socket::SSL: '1.42' Mozilla::CA: '20130114' Net::SSLeay: '1.49' requires: Authen::SCRAM::Client: '0.003' Carp: '0' Class::XSAccessor: '0' DateTime: '0.78' Digest::MD5: '0' Encode: '0' Exporter: '5.57' IO::File: '0' IO::Socket: '0' JSON::PP: '2.27300' List::Util: '0' MIME::Base64: '0' Moo: '2' Moo::Role: '0' Safe::Isa: '0' Scalar::Util: '0' Socket: '0' Sub::Quote: '0' Tie::IxHash: '0' Time::HiRes: '0' Try::Tiny: '0' Type::Library: '0' Type::Tiny::XS: '0' Type::Utils: '0' Types::Standard: '0' XSLoader: '0' boolean: '0.25' constant: '0' if: '0' namespace::clean: '0' overload: '0' perl: v5.8.0 re: '0' strict: '0' version: '0' warnings: '0' resources: bugtracker: https://jira.mongodb.org/browse/PERL homepage: https://github.com/mongodb/mongo-perl-driver repository: https://github.com/mongodb/mongo-perl-driver.git version: v1.2.2 x_contributors: - 'Andrew Page ' - 'Andrey Khozov ' - 'Ashley Willis ' - 'Ask Bjørn Hansen ' - 'Bernard Gorman ' - 'Brendan W. McAdams ' - 'Casey Rojas ' - 'Christian Hansen ' - 'Christian Sturm ' - 'Christian Walde ' - 'Colin Cyr ' - 'Danny Raetzsch ' - 'David Morrison ' - 'David Nadle ' - 'David Steinbrunner ' - 'David Storch ' - 'D. Ilmari Mannsåker ' - 'Eric Daniels ' - 'Gerard Goossen ' - 'Glenn Fowler ' - 'Graham Barr ' - 'Hao Wu ' - 'Jason Carey ' - 'Jason Toffaletti ' - 'Johann Rolschewski ' - 'Joseph Harnish ' - 'Josh Matthews ' - 'Joshua Juran ' - 'J. 
Stewart ' - 'Kamil Slowikowski ' - 'Ken Williams ' - 'Matthew Shopsin ' - 'Michael Langner ' - 'Michael Rotmanov ' - 'Mike Dirolf ' - 'Mohammad S Anwar ' - 'Nickola Trupcheff ' - 'Nigel Gregoire ' - 'Niko Tyni ' - 'Nuno Carvalho ' - 'Orlando Vazquez ' - 'Othello Maurer ' - 'Pan Fan ' - 'Rahul Dhodapkar ' - 'Robin Lee ' - 'Roman Yerin ' - 'Ronald J Kimball ' - 'Ryan Chipman ' - 'Stephen Oberholtzer ' - 'Steve Sanbeg ' - 'Stuart Watt ' - 'Uwe Voelker ' - 'Whitney Jackson ' - 'Xtreak ' - 'Zhihong Zhang ' MongoDB-v1.2.2/perl_mongo.c000644 000765 000024 00000116021 12651754051 015735 0ustar00davidstaff000000 000000 /* * Copyright 2009-2015 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson.h" #include "EXTERN.h" #include "perl.h" #include "XSUB.h" #include "regcomp.h" #include "string.h" #include "limits.h" /* load after other Perl headers */ #include "ppport.h" /* adapted from perl.h and must come after it */ #if !defined(Strtoll) # ifdef __hpux # define Strtoll __strtoll # endif # ifdef WIN32 # define Strtoll _strtoi64 # endif # if !defined(Strtoll) && defined(HAS_STRTOLL) # define Strtoll strtoll # endif # if !defined(Strtoll) && defined(HAS_STRTOQ) # define Strtoll strtoq # endif # if !defined(Strtoll) # error strtoll not available # endif #endif /* whether to add an _id field */ #define PREP 1 #define NO_PREP 0 /* define regex macros for Perl 5.8 */ #ifndef RX_PRECOMP #define RX_PRECOMP(re) ((re)->precomp) #define RX_PRELEN(re) ((re)->prelen) #endif #define SUBTYPE_BINARY_DEPRECATED 2 #define SUBTYPE_BINARY 0 /* struct for circular ref checks */ typedef struct _stackette { void *ptr; struct _stackette *prev; } stackette; #define EMPTY_STACK 0 /* convenience functions taken from Text::CSV_XS by H.M. Brand */ #define _is_arrayref(f) ( f && \ (SvROK (f) || (SvRMAGICAL (f) && (mg_get (f), 1) && SvROK (f))) && \ SvOK (f) && SvTYPE (SvRV (f)) == SVt_PVAV ) #define _is_hashref(f) ( f && \ (SvROK (f) || (SvRMAGICAL (f) && (mg_get (f), 1) && SvROK (f))) && \ SvOK (f) && SvTYPE (SvRV (f)) == SVt_PVHV ) #define _is_coderef(f) ( f && \ (SvROK (f) || (SvRMAGICAL (f) && (mg_get (f), 1) && SvROK (f))) && \ SvOK (f) && SvTYPE (SvRV (f)) == SVt_PVCV ) /* shorthand for getting an SV* from a hash and key */ #define _hv_fetchs_sv(h,k) \ (((svp = hv_fetchs(h, k, FALSE)) && *svp) ? *svp : 0) #include "perl_mongo.h" /* perl call helpers * * For convenience, these functions encapsulate the verbose stack * manipulation code necessary to call perl functions from C. * */ static SV * call_method_va(SV *self, const char *method, int num, ...); static SV * call_method_with_pairs(SV *self, const char *method, ...); static SV * new_object_from_pairs(const char *klass, ...); static SV * _call_method_with_pairs (SV *self, const char *method, va_list args); static SV * call_sv_va (SV *func, int num, ...); static SV * call_pv_va (char *func, int num, ...); #define call_perl_reader(s,m) call_method_va(s,m,0) /* BSON encoding * * Public function perl_mongo_sv_to_bson is the entry point. 
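 * (Illustrative only, not code from this file: a hypothetical XS-level caller
 * might write
 *   bson_t *b = bson_new();
 *   perl_mongo_sv_to_bson(b, docref_sv, opts_hv);
 *   bson_destroy(b);
 * using the document between the two calls, where docref_sv is assumed to be a
 * Perl hash reference and opts_hv an options hash supplied by the caller.)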
It calls one of * the container encoding functions, hv_to_bson, ixhash_to_bson or * avdoc_to_bson. Those iterate their contents, encoding them with * sv_to_bson_elem. sv_to_bson_elem delegates to various append_* functions * for particular types. * * Other functions are utility functions used during encoding. */ static void _hv_to_bson(bson_t * bson, SV *sv, HV *opts, stackette *stack, bool subdoc); static void _ixhash_to_bson(bson_t * bson, SV *sv, HV *opts, stackette *stack, bool subdoc); #define hvdoc_to_bson(b,d,o,s) _hv_to_bson((b),(d),(o),(s),0) #define hv_to_bson(b,d,o,s) _hv_to_bson((b),(d),(o),(s),1) #define ixhashdoc_to_bson(b,d,o,s) _ixhash_to_bson((b),(d),(o),(s),0) #define ixhash_to_bson(b,d,o,s) _ixhash_to_bson((b),(d),(o),(s),1) static void avdoc_to_bson(bson_t * bson, SV *sv, HV *opts, stackette *stack); static void sv_to_bson_elem (bson_t * bson, const char *key, SV *sv, HV *opts, stackette *stack); const char * maybe_append_first_key(bson_t *bson, HV *opts, stackette *stack); static void append_binary(bson_t * bson, const char * key, bson_subtype_t subtype, SV * sv); static void append_regex(bson_t * bson, const char *key, REGEXP *re, SV * sv); static void append_decomposed_regex(bson_t *bson, const char *key, const char *pattern, const char *flags); static void assert_valid_key(const char* str, STRLEN len); static const char * bson_key(const char * str, HV *opts); static void get_regex_flags(char * flags, SV *sv); static stackette * check_circular_ref(void *ptr, stackette *stack); /* BSON decoding * * Public function perl_mongo_bson_to_sv is the entry point. It calls * bson_doc_to_hashref, which constructs a container and fills it using * bson_elem_to_sv. That may call bson_doc_to_hashref or * bson_array_to_arrayref to decode sub-containers. * * The bson_oid_to_sv function manually constructs a MongoDB::OID object to * avoid the overhead of calling its constructor. This optimization is * fragile and might need to be reconsidered. * */ static SV * bson_doc_to_hashref(bson_iter_t * iter, HV *opts); static SV * bson_array_to_arrayref(bson_iter_t * iter, HV *opts); static SV * bson_elem_to_sv(const bson_iter_t * iter, HV *opts); static SV * bson_oid_to_sv(const bson_iter_t * iter); /******************************************************************** * Some C libraries (e.g. MSVCRT) do not have a "timegm" function. * Here is a surrogate implementation. ********************************************************************/ #if defined(WIN32) || defined(sun) static int is_leap_year(unsigned year) { year += 1900; return (year % 4) == 0 && ((year % 100) != 0 || (year % 400) == 0); } static time_t timegm(struct tm *tm) { static const unsigned month_start[2][12] = { { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 }, { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335 }, }; time_t ret = 0; int i; for (i = 70; i < tm->tm_year; ++i) ret += is_leap_year(i) ? 366 : 365; ret += month_start[is_leap_year(tm->tm_year)][tm->tm_mon]; ret += tm->tm_mday - 1; ret *= 24; ret += tm->tm_hour; ret *= 60; ret += tm->tm_min; ret *= 60; ret += tm->tm_sec; return ret; } #endif /* WIN32 */ /******************************************************************** * perl call helpers ********************************************************************/ /* call_method_va -- calls a method with a variable number * of SV * arguments. The SV* arguments are NOT mortalized.
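 * (A hypothetical illustration, not code from this distribution: assuming an
 * SV *self and two caller-owned SVs a_sv and b_sv are in scope,
 *   SV *sum = call_method_va(self, "add", 2, a_sv, b_sv);
 * invokes $self->add($a, $b) and returns the result with its refcount bumped.)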
* Must give the number of arguments before the variable list */ static SV * call_method_va (SV *self, const char *method, int num, ...) { dSP; SV *ret; I32 count; va_list args; ENTER; SAVETMPS; PUSHMARK (SP); XPUSHs (self); va_start (args, num); for( ; num > 0; num-- ) { XPUSHs (va_arg( args, SV* )); } va_end(args); PUTBACK; count = call_method (method, G_SCALAR); SPAGAIN; if (count != 1) { croak ("method didn't return a value"); } ret = POPs; SvREFCNT_inc (ret); PUTBACK; FREETMPS; LEAVE; return ret; } /* call_method_with_pairs -- calls a method with a variable number * of key/value pairs as paired char* and SV* arguments. The SV* arguments * are NOT mortalized. The final argument must be a NULL key. */ static SV * call_method_with_pairs (SV *self, const char *method, ...) { SV *ret; va_list args; va_start (args, method); ret = _call_method_with_pairs(self, method, args); va_end(args); return ret; } /* new_object_from_pairs -- calls 'new' with a variable number * of key/value pairs as paired char* and SV* arguments. The SV* arguments * are NOT mortalized. The final argument must be a NULL key. */ static SV * new_object_from_pairs(const char *klass, ...) { SV *ret; va_list args; va_start (args, klass); ret = _call_method_with_pairs(sv_2mortal(newSVpv(klass,0)), "new", args); va_end(args); return ret; } static SV * _call_method_with_pairs (SV *self, const char *method, va_list args) { dSP; SV *ret = NULL; char *key; I32 count; ENTER; SAVETMPS; PUSHMARK (SP); XPUSHs (self); while ((key = va_arg (args, char *))) { mXPUSHp (key, strlen (key)); XPUSHs (va_arg (args, SV *)); } PUTBACK; count = call_method (method, G_SCALAR); SPAGAIN; if (count != 1) { croak ("method didn't return a value"); } ret = POPs; SvREFCNT_inc (ret); PUTBACK; FREETMPS; LEAVE; return ret; } static SV * call_sv_va (SV *func, int num, ...) { dSP; SV *ret; I32 count; va_list args; ENTER; SAVETMPS; PUSHMARK (SP); va_start (args, num); for( ; num > 0; num-- ) { XPUSHs (va_arg( args, SV* )); } va_end(args); PUTBACK; count = call_sv(func, G_SCALAR); SPAGAIN; if (count != 1) { croak ("method didn't return a value"); } ret = POPs; SvREFCNT_inc (ret); PUTBACK; FREETMPS; LEAVE; return ret; } static SV * call_pv_va (char *func, int num, ...) { dSP; SV *ret; I32 count; va_list args; ENTER; SAVETMPS; PUSHMARK (SP); va_start (args, num); for( ; num > 0; num-- ) { XPUSHs (va_arg( args, SV* )); } va_end(args); PUTBACK; count = call_pv(func, G_SCALAR); SPAGAIN; if (count != 1) { croak ("function %s didn't return a value", func); } ret = POPs; SvREFCNT_inc (ret); PUTBACK; FREETMPS; LEAVE; return ret; } /******************************************************************** * BSON encoding ********************************************************************/ void perl_mongo_sv_to_bson (bson_t * bson, SV *sv, HV *opts) { if (!SvROK (sv)) { croak ("not a reference"); } if ( !
sv_isobject(sv) ) { switch ( SvTYPE(SvRV(sv)) ) { case SVt_PVHV: hvdoc_to_bson (bson, sv, opts, EMPTY_STACK); break; case SVt_PVAV: avdoc_to_bson(bson, sv, opts, EMPTY_STACK); break; default: sv_dump(sv); croak ("type unhandled"); } } else { SV *obj; char *class; obj = SvRV(sv); class = HvNAME(SvSTASH(obj)); if ( strEQ(class, "Tie::IxHash") ) { ixhashdoc_to_bson(bson, sv, opts, EMPTY_STACK); } else if ( strEQ(class, "MongoDB::BSON::_EncodedDoc") ) { STRLEN str_len; SV **svp; SV *encoded; const char *bson_str; bson_t *child; encoded = _hv_fetchs_sv((HV *)obj, "bson"); bson_str = SvPV(encoded, str_len); child = bson_new_from_data((uint8_t*) bson_str, str_len); bson_concat(bson, child); bson_destroy(child); } else if (SvTYPE(obj) == SVt_PVHV) { hvdoc_to_bson(bson, sv, opts, EMPTY_STACK); } else { croak ("type (%s) unhandled", class); } } } static void _hv_to_bson(bson_t * bson, SV *sv, HV *opts, stackette *stack, bool subdoc) { HE *he; HV *hv; const char *first_key = NULL; hv = (HV*)SvRV(sv); if (!(stack = check_circular_ref(hv, stack))) { croak("circular ref"); } if ( ! subdoc ) { first_key = maybe_append_first_key(bson, opts, stack); } (void)hv_iterinit (hv); while ((he = hv_iternext (hv))) { SV **hval; STRLEN len; const char *key = HePV (he, len); uint32_t utf8 = HeUTF8(he); assert_valid_key(key, len); /* if we've already added the first key, continue */ if (first_key && strcmp(key, first_key) == 0) { continue; } /* * HeVAL doesn't return the correct value for tie(%foo, 'Tie::IxHash') * so we're using hv_fetch */ if ((hval = hv_fetch(hv, key, utf8 ? -len : len, 0)) == 0) { croak("could not find hash value for key %s, len:%lu", key, len); } if (!utf8) { key = (const char *) bytes_to_utf8((U8 *)key, &len); } if ( ! is_utf8_string((const U8*)key,len)) { croak( "Invalid UTF-8 detected while encoding BSON" ); } sv_to_bson_elem (bson, key, *hval, opts, stack); if (!utf8) { Safefree(key); } } /* free the hv elem */ Safefree(stack); } /* This is for an array reference of key/value pairs given as a document * instead of a hash reference or Tie::Ixhash, not for an array ref contained * within* a document. 
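 * (For example, a top-level document supplied from Perl as an ordered list of
 * pairs, e.g. [ first => 1, second => 2 ], is encoded here with its key order
 * preserved; a plain array value nested inside a document is handled by
 * av_to_bson further below instead.)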
*/ static void avdoc_to_bson (bson_t * bson, SV *sv, HV *opts, stackette *stack) { I32 i; HV* seen; const char *first_key = NULL; AV *av = (AV *)SvRV (sv); if ((av_len (av) % 2) == 0) { croak ("odd number of elements in structure"); } first_key = maybe_append_first_key(bson, opts, stack); /* XXX handle first key here */ seen = (HV *) sv_2mortal((SV *) newHV()); for (i = 0; i <= av_len (av); i += 2) { SV **key, **val; STRLEN len; const char *str; if ( !((key = av_fetch (av, i, 0)) && (val = av_fetch (av, i + 1, 0))) ) { croak ("failed to fetch array element"); } if ( hv_exists_ent(seen, *key, 0) ) { croak ("duplicate key '%s' in array document", SvPV_nolen(*key)); } else { hv_store_ent(seen, *key, newSV(0), 0); } str = SvPVutf8(*key, len); assert_valid_key(str, len); if (first_key && strcmp(str, first_key) == 0) { continue; } sv_to_bson_elem (bson, str, *val, opts, EMPTY_STACK); } } static void _ixhash_to_bson(bson_t * bson, SV *sv, HV *opts, stackette *stack, bool subdoc) { int i; SV **keys_sv, **values_sv; AV *array, *keys, *values; const char *first_key = NULL; /* * a Tie::IxHash is of the form: * [ {hash}, [keys], [order], 0 ] */ array = (AV*)SvRV(sv); /* check if we're in an infinite loop */ if (!(stack = check_circular_ref(array, stack))) { croak("circular ref"); } /* keys in order, from position 1 */ keys_sv = av_fetch(array, 1, 0); keys = (AV*)SvRV(*keys_sv); /* values in order, from position 2 */ values_sv = av_fetch(array, 2, 0); values = (AV*)SvRV(*values_sv); if ( ! subdoc ) { first_key = maybe_append_first_key(bson, opts, stack); } for (i=0; i<=av_len(keys); i++) { SV **k, **v; STRLEN len; const char *str; if (!(k = av_fetch(keys, i, 0)) || !(v = av_fetch(values, i, 0))) { croak ("failed to fetch associative array value"); } str = SvPVutf8(*k, len); assert_valid_key(str,len); if (first_key && strcmp(str, first_key) == 0) { continue; } sv_to_bson_elem(bson, str, *v, opts, stack); } /* free the ixhash elem */ Safefree(stack); } /* This is for an array reference contained *within* a document */ static void av_to_bson (bson_t * bson, AV *av, HV *opts, stackette *stack) { I32 i; if (!(stack = check_circular_ref(av, stack))) { croak("circular ref"); } for (i = 0; i <= av_len (av); i++) { SV **sv; SV *key = sv_2mortal(newSViv (i)); if (!(sv = av_fetch (av, i, 0))) sv_to_bson_elem (bson, SvPV_nolen(key), newSV(0), opts, stack); else sv_to_bson_elem (bson, SvPV_nolen(key), *sv, opts, stack); } /* free the av elem */ Safefree(stack); } /* verify and transform key, if necessary */ static const char * bson_key(const char * str, HV *opts) { SV **svp; SV *tempsv; STRLEN len; /* first swap op_char if necessary */ if ( (tempsv = _hv_fetchs_sv(opts, "op_char")) && SvOK(tempsv) && SvPV_nolen(tempsv)[0] == str[0] ) { char *out = savepv(str); SAVEFREEPV(out); *out = '$'; str = out; } /* then check for validity */ if ( (tempsv = _hv_fetchs_sv(opts, "invalid_chars")) && SvOK(tempsv) && (len = sv_len(tempsv)) ) { STRLEN i; const char *invalid = SvPV_nolen(tempsv); for (i=0; i=12 REGEXP * re = SvRX(sv); #else REGEXP * re = (REGEXP *) mg_find((SV*)SvRV(sv), PERL_MAGIC_qr)->mg_obj; #endif append_regex(bson, key, re, sv); } else if (sv_isa(sv, "MongoDB::BSON::Regexp") ) { /* Abstract regexp object */ SV *pattern, *flags; pattern = sv_2mortal(call_perl_reader( sv, "pattern" )); flags = sv_2mortal(call_perl_reader( sv, "flags" )); append_decomposed_regex( bson, key, SvPV_nolen( pattern ), SvPV_nolen( flags ) ); } else { croak ("type (%s) unhandled", HvNAME(SvSTASH(SvRV(sv)))); } } else { SV *deref = 
SvRV(sv); switch (SvTYPE (deref)) { case SVt_PVHV: { /* hash */ bson_t child; bson_append_document_begin(bson, key, -1, &child); /* don't add a _id to inner objs */ hv_to_bson (&child, sv, opts, stack); bson_append_document_end(bson, &child); break; } case SVt_PVAV: { /* array */ bson_t child; bson_append_array_begin(bson, key, -1, &child); av_to_bson (&child, (AV *)SvRV (sv), opts, stack); bson_append_array_end(bson, &child); break; } default: { if ( SvPOK(deref) ) { /* binary */ append_binary(bson, key, BSON_SUBTYPE_BINARY, deref); } else { sv_dump(deref); croak ("type (ref) unhandled"); } } } } } else { SV *tempsv; int is_string = 0, aggressively_number = 0; #if PERL_REVISION==5 && PERL_VERSION<=10 /* Flags usage changed in Perl 5.10.1. In Perl 5.8, there is no way to tell from flags whether something is a string or an int! Therefore, for 5.8, we check: if (isString(sv) and number(sv) == 0 and string(sv) != '0') { return string; } else { return number; } This will incorrectly return '0' as a number in 5.8. */ if (SvPOK(sv) && ((SvNOK(sv) && SvNV(sv) == 0) || (SvIOK(sv) && SvIV(sv) == 0)) && strcmp(SvPV_nolen(sv), "0") != 0) { is_string = 1; } #endif #if PERL_REVISION==5 && PERL_VERSION<=18 /* Before 5.18, get magic would clear public flags. This restores them * from private flags but ONLY if there is no public flag already, as * we have nothing else to go on for serialization. */ if (!(SvFLAGS(sv) & (SVf_IOK|SVf_NOK|SVf_POK))) { SvFLAGS(sv) |= (SvFLAGS(sv) & (SVp_IOK|SVp_NOK|SVp_POK)) >> PRIVSHIFT; } #endif if ( (tempsv = _hv_fetchs_sv(opts, "prefer_numeric")) && SvTRUE (tempsv) ) { aggressively_number = looks_like_number(sv); } switch (SvTYPE (sv)) { /* double */ case SVt_PV: case SVt_NV: case SVt_PVNV: { if ((aggressively_number & IS_NUMBER_NOT_INT) || (!is_string && SvNOK(sv))) { bson_append_double(bson, key, -1, (double)SvNV(sv)); break; } } /* int */ case SVt_IV: case SVt_PVIV: case SVt_PVLV: case SVt_PVMG: { if ((aggressively_number & IS_NUMBER_NOT_INT) || (!is_string && SvNOK(sv))) { bson_append_double(bson, key, -1, (double)SvNV(sv)); break; } /* if it's publicly an int OR (privately an int AND not publicly a string) */ if (aggressively_number || (!is_string && (SvIOK(sv) || (SvIOKp(sv) && !SvPOK(sv))))) { #if defined(MONGO_USE_64_BIT_INT) IV i = SvIV(sv); /* intentionally use -INT32_MAX to avoid the weird most negative number */ if ( i >= -INT32_MAX && i <= INT32_MAX) { bson_append_int32(bson, key, -1, (int)i); } else { bson_append_int64(bson, key, -1, (int64_t)i); } #else bson_append_int32(bson, key, -1, (int)SvIV(sv)); #endif break; } /* string */ if (sv_len (sv) != strlen (SvPV_nolen (sv))) { append_binary(bson, key, SUBTYPE_BINARY, sv); } else { STRLEN len; const char *str = SvPVutf8(sv, len); if ( ! 
is_utf8_string((const U8*)str,len)) { croak( "Invalid UTF-8 detected while encoding BSON" ); } bson_append_utf8(bson, key, -1, str, len); } break; } default: sv_dump(sv); croak ("type (sv) unhandled"); } } } const char * maybe_append_first_key(bson_t *bson, HV *opts, stackette *stack) { SV *tempsv; SV **svp; const char *first_key = NULL; if ( (tempsv = _hv_fetchs_sv(opts, "first_key")) && SvOK (tempsv) ) { STRLEN len; first_key = SvPVutf8(tempsv, len); assert_valid_key(first_key, len); if ( (tempsv = _hv_fetchs_sv(opts, "first_value")) ) { sv_to_bson_elem(bson, first_key, tempsv, opts, stack); } else { bson_append_null(bson, first_key, -1); } } return first_key; } static void append_decomposed_regex(bson_t *bson, const char *key, const char *pattern, const char *flags ) { size_t pattern_length = strlen( pattern ); char *buf; Newx(buf, pattern_length + 1, char ); Copy(pattern, buf, pattern_length, char ); buf[ pattern_length ] = '\0'; bson_append_regex(bson, key, -1, buf, flags); Safefree(buf); } static void append_regex(bson_t * bson, const char *key, REGEXP *re, SV * sv) { char flags[] = {0,0,0,0,0}; char *buf; int i, j; get_regex_flags(flags, sv); /* sort flags -- how cool to write a sort algorithm by hand! Since we're * only sorting a tiny array, who cares if it's n-squared? */ for ( i=0; flags[i]; i++ ) { for ( j=i+1; flags[j] ; j++ ) { if ( flags[i] > flags[j] ) { char t = flags[j]; flags[j] = flags[i]; flags[i] = t; } } } Newx(buf, (RX_PRELEN(re) + 1), char ); Copy(RX_PRECOMP(re), buf, RX_PRELEN(re), char ); buf[RX_PRELEN(re)] = '\0'; bson_append_regex(bson, key, -1, buf, flags); Safefree(buf); } static void append_binary(bson_t * bson, const char * key, bson_subtype_t subtype, SV * sv) { STRLEN len; uint8_t * bytes = (uint8_t *) SvPVbyte(sv, len); bson_append_binary(bson, key, -1, subtype, bytes, len); } static void assert_valid_key(const char* str, STRLEN len) { if(strlen(str) < len) { croak("key contains null char"); } if (len == 0) { croak("empty key name, did you use a $ with double quotes?"); } } static void get_regex_flags(char * flags, SV *sv) { unsigned int i = 0, f = 0; #if PERL_REVISION == 5 && PERL_VERSION < 10 /* pre-5.10 doesn't have the re API */ STRLEN string_length; char *re_string = SvPV( sv, string_length ); /* pre-5.14 regexes are stringified in the format: (?ix-sm:foo) where everything between ? and - are the current flags. The format changed around 5.14, but for everything after 5.10 we use the re API anyway. */ for( i = 2; i < string_length && re_string[i] != '-'; i++ ) { if ( re_string[i] == 'i' || re_string[i] == 'm' || re_string[i] == 'x' || re_string[i] == 's' ) { flags[f++] = re_string[i]; } else if ( re_string[i] == ':' ) { break; } } #else /* 5.10 added an API to extract flags, so we use that */ int ret_count; SV *flags_sv; SV *pat_sv; char *flags_tmp; dSP; ENTER; SAVETMPS; PUSHMARK (SP); XPUSHs (sv); PUTBACK; ret_count = call_pv( "re::regexp_pattern", G_ARRAY ); SPAGAIN; if ( ret_count != 2 ) { croak( "error introspecting regex" ); } /* regexp_pattern returns two items (in list context), the pattern and a list of flags */ flags_sv = POPs; pat_sv = POPs; /* too bad we throw this away */ flags_tmp = SvPVutf8_nolen(flags_sv); for ( i = 0; i < sizeof( flags_tmp ); i++ ) { if ( flags_tmp[i] == 0 ) break; /* MongoDB supports only flags /imxs, so warn if we get anything else and discard them. 
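 * (For instance, the 'i' from a qr//i pattern passes through, the default 'u'
 * charset flag is silently ignored, and any other flag reported by
 * re::regexp_pattern would be dropped with a warning; these examples are
 * illustrative, not exhaustive.)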
*/ if ( flags_tmp[i] == 'i' || flags_tmp[i] == 'm' || flags_tmp[i] == 'x' || flags_tmp[i] == 's' ) { flags[f++] = flags_tmp[i]; } else if ( flags_tmp[i] == 'u' ) { /* do nothing as this is default */ } else { warn( "stripped unsupported regex flag /%c from MongoDB regex\n", flags_tmp[i] ); } } PUTBACK; FREETMPS; LEAVE; #endif } /** * checks if a ptr has been parsed already and, if not, adds it to the stack. If * we do have a circular ref, this function returns 0. */ static stackette* check_circular_ref(void *ptr, stackette *stack) { stackette *ette, *start = stack; while (stack) { if (ptr == stack->ptr) { return 0; } stack = stack->prev; } /* push this onto the circular ref stack */ Newx(ette, 1, stackette); ette->ptr = ptr; /* if stack has not been initialized, stack will be 0 so this will work out */ ette->prev = start; return ette; } /******************************************************************** * BSON decoding ********************************************************************/ SV * perl_mongo_bson_to_sv (const bson_t * bson, HV *opts) { bson_iter_t iter; if ( ! bson_iter_init(&iter, bson) ) { croak( "error creating BSON iterator" ); } return bson_doc_to_hashref(&iter, opts); } static SV * bson_doc_to_hashref(bson_iter_t * iter, HV *opts) { SV **svp; SV *cb; SV *ret; HV *hv = newHV(); int is_dbref = 1; int key_num = 0; while (bson_iter_next(iter)) { const char *name; SV *value; name = bson_iter_key(iter); if ( ! is_utf8_string((const U8*)name,strlen(name))) { croak( "Invalid UTF-8 detected while decoding BSON" ); } key_num++; /* check if this is a DBref. We must see the keys $ref, $id, and optionally $db in that order, with no extra keys */ if ( key_num == 1 && strcmp( name, "$ref" ) ) is_dbref = 0; if ( key_num == 2 && is_dbref == 1 && strcmp( name, "$id" ) ) is_dbref = 0; /* get value and store into hash */ value = bson_elem_to_sv(iter, opts); if (!hv_store (hv, name, 0-strlen(name), value, 0)) { croak ("failed storing value in hash"); } } ret = newRV_noinc ((SV *)hv); /* XXX shouldn't need to limit to size 3 */ if ( key_num >= 2 && is_dbref == 1 && (cb = _hv_fetchs_sv(opts, "dbref_callback")) && SvOK(cb) ) { SV *dbref = call_sv_va(cb, 1, ret); return dbref; } return ret; } static SV * bson_array_to_arrayref(bson_iter_t * iter, HV *opts) { AV *ret = newAV (); while (bson_iter_next(iter)) { SV *sv; /* get value */ if ((sv = bson_elem_to_sv(iter, opts ))) { av_push (ret, sv); } } return newRV_noinc ((SV *)ret); } static SV * bson_elem_to_sv (const bson_iter_t * iter, HV *opts ) { SV **svp; SV *value = 0; switch(bson_iter_type(iter)) { case BSON_TYPE_OID: { value = bson_oid_to_sv(iter); break; } case BSON_TYPE_DOUBLE: { value = newSVnv(bson_iter_double(iter)); break; } case BSON_TYPE_SYMBOL: case BSON_TYPE_UTF8: { const char * str; uint32_t len; if (bson_iter_type(iter) == BSON_TYPE_SYMBOL) { str = bson_iter_symbol(iter, &len); } else { str = bson_iter_utf8(iter, &len); } if ( ! 
is_utf8_string((const U8*)str,len)) { croak( "Invalid UTF-8 detected while decoding BSON" ); } /* this makes a copy of the buffer */ /* len includes \0 */ value = newSVpvn(str, len); SvUTF8_on(value); break; } case BSON_TYPE_DOCUMENT: { bson_iter_t child; bson_iter_recurse(iter, &child); value = bson_doc_to_hashref(&child, opts); break; } case BSON_TYPE_ARRAY: { bson_iter_t child; bson_iter_recurse(iter, &child); value = bson_array_to_arrayref(&child, opts); break; } case BSON_TYPE_BINARY: { const char * buf; uint32_t len; bson_subtype_t type; bson_iter_binary(iter, &type, &len, (const uint8_t **)&buf); value = new_object_from_pairs( "MongoDB::BSON::Binary", "data", sv_2mortal(newSVpvn(buf, len)), "subtype", sv_2mortal(newSViv(type)), NULL ); break; } case BSON_TYPE_BOOL: { value = bson_iter_bool(iter) ? newSVsv(get_sv("MongoDB::BSON::_boolean_true", GV_ADD)) : newSVsv(get_sv("MongoDB::BSON::_boolean_false", GV_ADD)); break; } case BSON_TYPE_UNDEFINED: case BSON_TYPE_NULL: { value = newSV(0); break; } case BSON_TYPE_INT32: { value = newSViv(bson_iter_int32(iter)); break; } case BSON_TYPE_INT64: { #if defined(MONGO_USE_64_BIT_INT) value = newSViv(bson_iter_int64(iter)); #else char buf[22]; SV *as_str; SV *big_int; sprintf(buf,"%" PRIi64,bson_iter_int64(iter)); as_str = sv_2mortal(newSVpv(buf,0)); big_int = sv_2mortal(newSVpvs("Math::BigInt")); value = call_method_va(big_int, "new", 1, as_str); #endif break; } case BSON_TYPE_DATE_TIME: { const int64_t msec = bson_iter_date_time(iter); SV *tempsv; const char *dt_type = NULL; if ( (tempsv = _hv_fetchs_sv(opts, "dt_type")) && SvOK(tempsv) ) { dt_type = SvPV_nolen(tempsv); } if ( dt_type == NULL ) { /* raw epoch */ value = (msec % 1000 == 0) ? newSViv(msec / 1000) : newSVnv((double) msec / 1000); } else if ( strcmp( dt_type, "Time::Moment" ) == 0 ) { SV *tm = sv_2mortal(newSVpvs("Time::Moment")); SV *sec = sv_2mortal(newSViv(msec / 1000)); SV *nos = sv_2mortal(newSViv((msec % 1000) * 1000000)); value = call_method_va(tm, "from_epoch", 2, sec, nos); } else if ( strcmp( dt_type, "DateTime::Tiny" ) == 0 ) { time_t epoch; struct tm *dt; epoch = msec / 1000; dt = gmtime( &epoch ); value = new_object_from_pairs( dt_type, "year", sv_2mortal(newSViv( dt->tm_year + 1900 )), "month", sv_2mortal(newSViv( dt->tm_mon + 1 )), "day", sv_2mortal(newSViv( dt->tm_mday )), "hour", sv_2mortal(newSViv( dt->tm_hour )), "minute", sv_2mortal(newSViv( dt->tm_min )), "second", sv_2mortal(newSViv( dt->tm_sec )), NULL ); } else if ( strcmp( dt_type, "DateTime" ) == 0 ) { SV *epoch = sv_2mortal(newSVnv((NV)msec / 1000)); value = call_method_with_pairs( sv_2mortal(newSVpv(dt_type,0)), "from_epoch", "epoch", epoch, NULL ); } else { croak( "Invalid dt_type \"%s\"", dt_type ); } break; } case BSON_TYPE_REGEX: { const char * regex_str; const char * options; regex_str = bson_iter_regex(iter, &options); /* always make a MongoDB::BSON::Regexp object instead of a native Perl * regexp to prevent the risk of compilation failure as well as * security risks compiling unknown regular expressions. 
*/ value = new_object_from_pairs( "MongoDB::BSON::Regexp", "pattern", sv_2mortal(newSVpv(regex_str,0)), "flags", sv_2mortal(newSVpv(options,0)), NULL ); break; } case BSON_TYPE_CODE: { const char * code; uint32_t len; SV *code_sv; code = bson_iter_code(iter, &len); code_sv = sv_2mortal(newSVpvn(code, len)); value = new_object_from_pairs("MongoDB::Code", "code", code_sv, NULL); break; } case BSON_TYPE_CODEWSCOPE: { const char * code; const uint8_t * scope; uint32_t code_len, scope_len; SV * code_sv; SV * scope_sv; bson_t bson; bson_iter_t child; code = bson_iter_codewscope(iter, &code_len, &scope_len, &scope); code_sv = sv_2mortal(newSVpvn(code, code_len)); if ( ! ( bson_init_static(&bson, scope, scope_len) && bson_iter_init(&child, &bson) ) ) { croak("error iterating BSON type %d\n", bson_iter_type(iter)); } scope_sv = bson_doc_to_hashref(&child, opts); value = new_object_from_pairs("MongoDB::Code", "code", code_sv, "scope", scope_sv, NULL); break; } case BSON_TYPE_TIMESTAMP: { SV *sec_sv, *inc_sv; uint32_t sec, inc; bson_iter_timestamp(iter, &sec, &inc); sec_sv = sv_2mortal(newSViv(sec)); inc_sv = sv_2mortal(newSViv(inc)); value = new_object_from_pairs("MongoDB::Timestamp", "sec", sec_sv, "inc", inc_sv, NULL); break; } case BSON_TYPE_MINKEY: { HV *stash = gv_stashpv("MongoDB::MinKey", GV_ADD); value = sv_bless(newRV((SV*)newHV()), stash); break; } case BSON_TYPE_MAXKEY: { HV *stash = gv_stashpv("MongoDB::MaxKey", GV_ADD); value = sv_bless(newRV((SV*)newHV()), stash); break; } default: { croak("type %d not supported\n", bson_iter_type(iter)); /* give up, it'll be trouble if we keep going */ } } return value; } static SV * bson_oid_to_sv (const bson_iter_t * iter) { HV *stash, *id_hv; char oid_s[25]; const bson_oid_t * oid = bson_iter_oid(iter); bson_oid_to_string(oid, oid_s); id_hv = newHV(); (void)hv_stores(id_hv, "value", newSVpvn(oid_s, 24)); stash = gv_stashpv("MongoDB::OID", 0); return sv_bless(newRV_noinc((SV *)id_hv), stash); } /* vim: set ts=2 sts=2 sw=2 et tw=75: */ MongoDB-v1.2.2/perl_mongo.h000644 000765 000024 00000001677 12651754051 015754 0ustar00davidstaff000000 000000 /* * Copyright 2009 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef PERL_MONGO_H #define PERL_MONGO_H #define PERL_GCC_BRACE_GROUPS_FORBIDDEN #include "bson.h" #include "EXTERN.h" #include "perl.h" #include "XSUB.h" void perl_mongo_init(); SV * perl_mongo_bson_to_sv (const bson_t * bson, HV *opts); void perl_mongo_sv_to_bson (bson_t * bson, SV *sv, HV *opts); #endif /* vim: set ts=2 sts=2 sw=2 et tw=75: */ MongoDB-v1.2.2/ppport.h000644 000765 000024 00000575616 12651754051 015150 0ustar00davidstaff000000 000000 #if 0 <<'SKIP'; #endif /* ---------------------------------------------------------------------- ppport.h -- Perl/Pollution/Portability Version 3.31 Automatically created by Devel::PPPort running under perl 5.020002. Do NOT edit this file directly! -- Edit PPPort_pm.PL and the includes in parts/inc/ instead. Use 'perldoc ppport.h' to view the documentation below. 
----------------------------------------------------------------------

SKIP

=pod

=head1 NAME

ppport.h - Perl/Pollution/Portability version 3.31

=head1 SYNOPSIS

  perl ppport.h [options] [source files]

Searches current directory for files if no [source files] are given

  --help                      show short help
  --version                   show version
  --patch=file                write one patch file with changes
  --copy=suffix               write changed copies with suffix
  --diff=program              use diff program and options
  --compat-version=version    provide compatibility with Perl version
  --cplusplus                 accept C++ comments
  --quiet                     don't output anything except fatal errors
  --nodiag                    don't show diagnostics
  --nohints                   don't show hints
  --nochanges                 don't suggest changes
  --nofilter                  don't filter input files
  --strip                     strip all script and doc functionality from ppport.h
  --list-provided             list provided API
  --list-unsupported          list unsupported API
  --api-info=name             show Perl API portability information

=head1 COMPATIBILITY

This version of F<ppport.h> is designed to support operation with Perl
installations back to 5.003, and has been tested up to 5.20.

=head1 OPTIONS

=head2 --help

Display a brief usage summary.

=head2 --version

Display the version of F<ppport.h>.

=head2 --patch=I<file>

If this option is given, a single patch file will be created if any
changes are suggested. This requires a working diff program to be
installed on your system.

=head2 --copy=I<suffix>

If this option is given, a copy of each file will be saved with the given
suffix that contains the suggested changes. This does not require any
external programs. Note that this does not automagically add a dot
between the original filename and the suffix. If you want the dot, you
have to include it in the option argument.

If neither C<--patch> or C<--copy> are given, the default is to simply
print the diffs for each file. This requires either C<Text::Diff> or a
C<diff> program to be installed.

=head2 --diff=I<program>

Manually set the diff program and options to use. The default is to use
C<diff -u>, when installed, and output unified context diffs.

=head2 --compat-version=I<version>

Tell F<ppport.h> to check for compatibility with the given Perl version.
The default is to check for compatibility with Perl version 5.003. You
can use this option to reduce the output of F<ppport.h> if you intend to
be backward compatible only down to a certain Perl version.

=head2 --cplusplus

Usually, F<ppport.h> will detect C++ style comments and replace them with
C style comments for portability reasons. Using this option instructs
F<ppport.h> to leave C++ comments untouched.

=head2 --quiet

Be quiet. Don't print anything except fatal errors.

=head2 --nodiag

Don't output any diagnostic messages. Only portability alerts will be
printed.

=head2 --nohints

Don't output any hints. Hints often contain useful portability notes.
Warnings will still be displayed.

=head2 --nochanges

Don't suggest any changes. Only give diagnostic output and hints unless
these are also deactivated.

=head2 --nofilter

Don't filter the list of input files. By default, files not looking like
source code (i.e. not *.xs, *.c, *.cc, *.cpp or *.h) are skipped.

=head2 --strip

Strip all script and documentation functionality from F<ppport.h>. This
reduces the size of F<ppport.h> dramatically and may be useful if you
want to include F<ppport.h> in smaller modules without increasing their
distribution size too much. The stripped F<ppport.h> will have a
C<--unstrip> option that allows you to undo the stripping, but only if an
appropriate C<Devel::PPPort> module is installed.

=head2 --list-provided

Lists the API elements for which compatibility is provided by F<ppport.h>.
Also lists if it must be explicitly requested, if it has dependencies,
and if there are hints or warnings for it.

=head2 --list-unsupported

Lists the API elements that are known not to be supported by F<ppport.h>
and below which version of Perl they probably won't be available or work.

=head2 --api-info=I<name>

Show portability information for API elements matching I<name>. If
I<name> is surrounded by slashes, it is interpreted as a regular
expression.

=head1 DESCRIPTION

In order for a Perl extension (XS) module to be as portable as possible
across differing versions of Perl itself, certain steps need to be taken.

=over 4

=item *

Including this header is the first major one. This alone will give you
access to a large part of the Perl API that hasn't been available in
earlier Perl releases. Use

  perl ppport.h --list-provided

to see which API elements are provided by ppport.h.

=item *

You should avoid using deprecated parts of the API. For example, using
global Perl variables without the C<PL_> prefix is deprecated. Also, some
API functions used to have a C<perl_> prefix. Using this form is also
deprecated. You can safely use the supported API, as F<ppport.h> will
provide wrappers for older Perl versions.

=item *

If you use one of a few functions or variables that were not present in
earlier versions of Perl, and that can't be provided using a macro, you
have to explicitly request support for these functions by adding one or
more C<#define>s in your source code before the inclusion of F<ppport.h>.

These functions or variables will be marked C<explicit> in the list shown
by C<--list-provided>.

Depending on whether your module has a single or multiple files that use
such functions or variables, you want either C<static> or global
variants.

For a C<static> function or variable (used only in a single source file),
use:

  #define NEED_function
  #define NEED_variable

For a global function or variable (used in multiple source files), use:

  #define NEED_function_GLOBAL
  #define NEED_variable_GLOBAL

Note that you mustn't have more than one global request for the same
function or variable in your project.
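
As a concrete illustration of the request mechanism just described (an
editorial sketch, not part of the generated file; C<newCONSTSUB()> is
used purely as an example, and the C<My::Module> and
C<define_constants()> names below are hypothetical), a single source
file would request the compatibility implementation like this, and the
table that follows lists every API element that accepts such a request:

  /* Ask ppport.h to supply newCONSTSUB() on perls that lack it.
   * The define must appear before ppport.h is included; a project
   * spanning several source files would instead put
   * NEED_newCONSTSUB_GLOBAL in exactly one of them.
   */
  #define NEED_newCONSTSUB

  #include "EXTERN.h"
  #include "perl.h"
  #include "XSUB.h"
  #include "ppport.h"

  static void
  define_constants(pTHX)
  {
      /* resolves to the native newCONSTSUB() on modern perls and to
       * the ppport.h fallback on very old ones */
      newCONSTSUB(gv_stashpv("My::Module", GV_ADD),
                  "ANSWER", newSViv(42));
  }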

  Function / Variable         Static Request               Global Request
  ---------------------------------------------------------------------------------------
  PL_parser                   NEED_PL_parser               NEED_PL_parser_GLOBAL
  PL_signals                  NEED_PL_signals              NEED_PL_signals_GLOBAL
  caller_cx()                 NEED_caller_cx               NEED_caller_cx_GLOBAL
  eval_pv()                   NEED_eval_pv                 NEED_eval_pv_GLOBAL
  grok_bin()                  NEED_grok_bin                NEED_grok_bin_GLOBAL
  grok_hex()                  NEED_grok_hex                NEED_grok_hex_GLOBAL
  grok_number()               NEED_grok_number             NEED_grok_number_GLOBAL
  grok_numeric_radix()        NEED_grok_numeric_radix      NEED_grok_numeric_radix_GLOBAL
  grok_oct()                  NEED_grok_oct                NEED_grok_oct_GLOBAL
  load_module()               NEED_load_module             NEED_load_module_GLOBAL
  mg_findext()                NEED_mg_findext              NEED_mg_findext_GLOBAL
  my_snprintf()               NEED_my_snprintf             NEED_my_snprintf_GLOBAL
  my_sprintf()                NEED_my_sprintf              NEED_my_sprintf_GLOBAL
  my_strlcat()                NEED_my_strlcat              NEED_my_strlcat_GLOBAL
  my_strlcpy()                NEED_my_strlcpy              NEED_my_strlcpy_GLOBAL
  newCONSTSUB()               NEED_newCONSTSUB             NEED_newCONSTSUB_GLOBAL
  newRV_noinc()               NEED_newRV_noinc             NEED_newRV_noinc_GLOBAL
  newSV_type()                NEED_newSV_type              NEED_newSV_type_GLOBAL
  newSVpvn_flags()            NEED_newSVpvn_flags          NEED_newSVpvn_flags_GLOBAL
  newSVpvn_share()            NEED_newSVpvn_share          NEED_newSVpvn_share_GLOBAL
  pv_display()                NEED_pv_display              NEED_pv_display_GLOBAL
  pv_escape()                 NEED_pv_escape               NEED_pv_escape_GLOBAL
  pv_pretty()                 NEED_pv_pretty               NEED_pv_pretty_GLOBAL
  sv_2pv_flags()              NEED_sv_2pv_flags            NEED_sv_2pv_flags_GLOBAL
  sv_2pvbyte()                NEED_sv_2pvbyte              NEED_sv_2pvbyte_GLOBAL
  sv_catpvf_mg()              NEED_sv_catpvf_mg            NEED_sv_catpvf_mg_GLOBAL
  sv_catpvf_mg_nocontext()    NEED_sv_catpvf_mg_nocontext  NEED_sv_catpvf_mg_nocontext_GLOBAL
  sv_pvn_force_flags()        NEED_sv_pvn_force_flags      NEED_sv_pvn_force_flags_GLOBAL
  sv_setpvf_mg()              NEED_sv_setpvf_mg            NEED_sv_setpvf_mg_GLOBAL
  sv_setpvf_mg_nocontext()    NEED_sv_setpvf_mg_nocontext  NEED_sv_setpvf_mg_nocontext_GLOBAL
  sv_unmagicext()             NEED_sv_unmagicext           NEED_sv_unmagicext_GLOBAL
  vload_module()              NEED_vload_module            NEED_vload_module_GLOBAL
  vnewSVpvf()                 NEED_vnewSVpvf               NEED_vnewSVpvf_GLOBAL
  warner()                    NEED_warner                  NEED_warner_GLOBAL

To avoid namespace conflicts, you can change the namespace of the
explicitly exported functions / variables using the C<DPPP_NAMESPACE>
macro. Just C<#define> the macro before including C<ppport.h>:

  #define DPPP_NAMESPACE MyOwnNamespace_
  #include "ppport.h"

The default namespace is C<DPPP_>.

=back

The good thing is that most of the above can be checked by running
F<ppport.h> on your source code. See the next section for details.

=head1 EXAMPLES

To verify whether F<ppport.h> is needed for your module, whether you
should make any changes to your code, and whether any special defines
should be used, F<ppport.h> can be run as a Perl script to check your
source code. Simply say:

  perl ppport.h

The result will usually be a list of patches suggesting changes that
should at least be acceptable, if not necessarily the most efficient
solution, or a fix for all possible problems.

If you know that your XS module uses features only available in newer
Perl releases, if you're aware that it uses C++ comments, and if you want
all suggestions as a single patch file, you could use something like
this:

  perl ppport.h --compat-version=5.6.0 --cplusplus --patch=test.diff

If you only want your code to be scanned without any suggestions for
changes, use:

  perl ppport.h --nochanges

You can specify a different C<diff> program or options, using the
C<--diff> option:

  perl ppport.h --diff='diff -C 10'

This would output context diffs with 10 lines of context.
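
To give an idea of the kind of change those patches propose (the snippet
below is an editorial sketch rather than output of the script, and the
C<pv_or_null()> helper is hypothetical), a typical suggestion is to
switch from long-deprecated unprefixed globals such as C<na> or
C<sv_undef> to the C<PL_>-prefixed names, which F<ppport.h> also supplies
on very old perls:

  #include "EXTERN.h"
  #include "perl.h"
  #include "XSUB.h"
  #include "ppport.h"

  /* Return the string value of an SV, or NULL for undef.  Written
   * against the modern PL_-prefixed globals; ppport.h maps PL_sv_undef
   * (and friends such as PL_na) back onto perls that only knew the
   * unprefixed names.
   */
  static char *
  pv_or_null(pTHX_ SV *sv)
  {
      if (sv == &PL_sv_undef)
          return NULL;
      return SvPV_nolen(sv);
  }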

If you want to create patched copies of your files instead, use:

  perl ppport.h --copy=.new

To display portability information for the C<newSVpvn> function, use:

  perl ppport.h --api-info=newSVpvn

Since the argument to C<--api-info> can be a regular expression, you can
use

  perl ppport.h --api-info=/_nomg$/

to display portability information for all C<_nomg> functions or

  perl ppport.h --api-info=/./

to display information for all known API elements.

=head1 BUGS

If this version of F<ppport.h> is causing failure during the compilation
of this module, please check if newer versions of either this module or
C<Devel::PPPort> are available on CPAN before sending a bug report.

If F<ppport.h> was generated using the latest version of C<Devel::PPPort>
and is causing failure of this module, please file a bug report here: L

Please include the following information:

=over 4

=item 1.

The complete output from running "perl -V"

=item 2.

This file.

=item 3.

The name and version of the module you were trying to build.

=item 4.

A full log of the build that failed.

=item 5.

Any other information that you think could be relevant.

=back

For the latest version of this code, please get the C<Devel::PPPort>
module from CPAN.

=head1 COPYRIGHT

Version 3.x, Copyright (c) 2004-2013, Marcus Holland-Moritz.

Version 2.x, Copyright (C) 2001, Paul Marquess.

Version 1.x, Copyright (C) 1999, Kenneth Albanowski.

This program is free software; you can redistribute it and/or modify it
under the same terms as Perl itself.

=head1 SEE ALSO

See L<Devel::PPPort>.

=cut

use strict;

# Disable broken TRIE-optimization
BEGIN { eval '${^RE_TRIE_MAXBUF} = -1' if $] >= 5.009004 && $] <= 5.009005 }

my $VERSION = 3.31;

my %opt = (
  quiet     => 0,
  diag      => 1,
  hints     => 1,
  changes   => 1,
  cplusplus => 0,
  filter    => 1,
  strip     => 0,
  version   => 0,
);

my($ppport) = $0 =~ /([\w.]+)$/;
my $LF = '(?:\r\n|[\r\n])';  # line feed
my $HS = "[ \t]";            # horizontal whitespace

# Never use C comments in this file!
my $ccs  = '/'.'*';
my $cce  = '*'.'/';
my $rccs = quotemeta $ccs;
my $rcce = quotemeta $cce;

eval {
  require Getopt::Long;
  Getopt::Long::GetOptions(\%opt, qw(
    help quiet diag! filter! hints! changes! cplusplus strip version
    patch=s copy=s diff=s compat-version=s
    list-provided list-unsupported api-info=s
  )) or usage();
};

if ($@ and grep /^-/, @ARGV) {
  usage() if "@ARGV" =~ /^--?h(?:elp)?$/;
  die "Getopt::Long not found. Please don't use any options.\n";
}

if ($opt{version}) {
  print "This is $0 $VERSION.\n";
  exit 0;
}

usage() if $opt{help};
strip() if $opt{strip};

if (exists $opt{'compat-version'}) {
  my($r,$v,$s) = eval { parse_version($opt{'compat-version'}) };
  if ($@) {
    die "Invalid version number format: '$opt{'compat-version'}'\n";
  }
  die "Only Perl 5 is supported\n" if $r != 5;
  die "Invalid version number: $opt{'compat-version'}\n" if $v >= 1000 || $s >= 1000;
  $opt{'compat-version'} = sprintf "%d.%03d%03d", $r, $v, $s;
}
else {
  $opt{'compat-version'} = 5;
}

my %API = map { /^(\w+)\|([^|]*)\|([^|]*)\|(\w*)$/
                ? ( $1 => {
                      ($2                  ? ( base     => $2 ) : ()),
                      ($3                  ? ( todo     => $3 ) : ()),
                      (index($4, 'v') >= 0 ? ( varargs  => 1  ) : ()),
                      (index($4, 'p') >= 0 ? ( provided => 1  ) : ()),
                      (index($4, 'n') >= 0 ?
( nothxarg => 1 ) : ()), } ) : die "invalid spec: $_" } qw( ASCII_TO_NEED||5.007001|n AvFILLp|5.004050||p AvFILL||| BhkDISABLE||5.021008| BhkENABLE||5.021008| BhkENTRY_set||5.021008| BhkENTRY||| BhkFLAGS||| CALL_BLOCK_HOOKS||| CLASS|||n CPERLscope|5.005000||p CX_CURPAD_SAVE||| CX_CURPAD_SV||| CopFILEAV|5.006000||p CopFILEGV_set|5.006000||p CopFILEGV|5.006000||p CopFILESV|5.006000||p CopFILE_set|5.006000||p CopFILE|5.006000||p CopSTASHPV_set|5.006000||p CopSTASHPV|5.006000||p CopSTASH_eq|5.006000||p CopSTASH_set|5.006000||p CopSTASH|5.006000||p CopyD|5.009002|5.004050|p Copy||| CvPADLIST||5.008001| CvSTASH||| CvWEAKOUTSIDE||| DEFSV_set|5.010001||p DEFSV|5.004050||p END_EXTERN_C|5.005000||p ENTER||| ERRSV|5.004050||p EXTEND||| EXTERN_C|5.005000||p F0convert|||n FREETMPS||| GIMME_V||5.004000|n GIMME|||n GROK_NUMERIC_RADIX|5.007002||p G_ARRAY||| G_DISCARD||| G_EVAL||| G_METHOD|5.006001||p G_NOARGS||| G_SCALAR||| G_VOID||5.004000| GetVars||| GvAV||| GvCV||| GvHV||| GvSVn|5.009003||p GvSV||| Gv_AMupdate||5.011000| HEf_SVKEY|5.003070||p HeHASH||5.003070| HeKEY||5.003070| HeKLEN||5.003070| HePV||5.004000| HeSVKEY_force||5.003070| HeSVKEY_set||5.004000| HeSVKEY||5.003070| HeUTF8|5.010001|5.008000|p HeVAL||5.003070| HvENAMELEN||5.015004| HvENAMEUTF8||5.015004| HvENAME||5.013007| HvNAMELEN_get|5.009003||p HvNAMELEN||5.015004| HvNAMEUTF8||5.015004| HvNAME_get|5.009003||p HvNAME||| INT2PTR|5.006000||p IN_LOCALE_COMPILETIME|5.007002||p IN_LOCALE_RUNTIME|5.007002||p IN_LOCALE|5.007002||p IN_PERL_COMPILETIME|5.008001||p IS_NUMBER_GREATER_THAN_UV_MAX|5.007002||p IS_NUMBER_INFINITY|5.007002||p IS_NUMBER_IN_UV|5.007002||p IS_NUMBER_NAN|5.007003||p IS_NUMBER_NEG|5.007002||p IS_NUMBER_NOT_INT|5.007002||p IVSIZE|5.006000||p IVTYPE|5.006000||p IVdf|5.006000||p LEAVE||| LINKLIST||5.013006| LVRET||| MARK||| MULTICALL||5.021008| MUTABLE_PTR|5.010001||p MUTABLE_SV|5.010001||p MY_CXT_CLONE|5.009002||p MY_CXT_INIT|5.007003||p MY_CXT|5.007003||p MoveD|5.009002|5.004050|p Move||| NATIVE_TO_NEED||5.007001|n NOOP|5.005000||p NUM2PTR|5.006000||p NVTYPE|5.006000||p NVef|5.006001||p NVff|5.006001||p NVgf|5.006001||p Newxc|5.009003||p Newxz|5.009003||p Newx|5.009003||p Nullav||| Nullch||| Nullcv||| Nullhv||| Nullsv||| OP_CLASS||5.013007| OP_DESC||5.007003| OP_NAME||5.007003| OP_TYPE_IS_OR_WAS||5.019010| OP_TYPE_IS||5.019007| ORIGMARK||| OpHAS_SIBLING||5.021007| OpSIBLING_set||5.021007| OpSIBLING||5.021007| PAD_BASE_SV||| PAD_CLONE_VARS||| PAD_COMPNAME_FLAGS||| PAD_COMPNAME_GEN_set||| PAD_COMPNAME_GEN||| PAD_COMPNAME_OURSTASH||| PAD_COMPNAME_PV||| PAD_COMPNAME_TYPE||| PAD_RESTORE_LOCAL||| PAD_SAVE_LOCAL||| PAD_SAVE_SETNULLPAD||| PAD_SETSV||| PAD_SET_CUR_NOSAVE||| PAD_SET_CUR||| PAD_SVl||| PAD_SV||| PERLIO_FUNCS_CAST|5.009003||p PERLIO_FUNCS_DECL|5.009003||p PERL_ABS|5.008001||p PERL_BCDVERSION|5.021008||p PERL_GCC_BRACE_GROUPS_FORBIDDEN|5.008001||p PERL_HASH|5.003070||p PERL_INT_MAX|5.003070||p PERL_INT_MIN|5.003070||p PERL_LONG_MAX|5.003070||p PERL_LONG_MIN|5.003070||p PERL_MAGIC_arylen|5.007002||p PERL_MAGIC_backref|5.007002||p PERL_MAGIC_bm|5.007002||p PERL_MAGIC_collxfrm|5.007002||p PERL_MAGIC_dbfile|5.007002||p PERL_MAGIC_dbline|5.007002||p PERL_MAGIC_defelem|5.007002||p PERL_MAGIC_envelem|5.007002||p PERL_MAGIC_env|5.007002||p PERL_MAGIC_ext|5.007002||p PERL_MAGIC_fm|5.007002||p PERL_MAGIC_glob|5.021008||p PERL_MAGIC_isaelem|5.007002||p PERL_MAGIC_isa|5.007002||p PERL_MAGIC_mutex|5.021008||p PERL_MAGIC_nkeys|5.007002||p PERL_MAGIC_overload_elem|5.021008||p PERL_MAGIC_overload_table|5.007002||p 
PERL_MAGIC_overload|5.021008||p PERL_MAGIC_pos|5.007002||p PERL_MAGIC_qr|5.007002||p PERL_MAGIC_regdata|5.007002||p PERL_MAGIC_regdatum|5.007002||p PERL_MAGIC_regex_global|5.007002||p PERL_MAGIC_shared_scalar|5.007003||p PERL_MAGIC_shared|5.007003||p PERL_MAGIC_sigelem|5.007002||p PERL_MAGIC_sig|5.007002||p PERL_MAGIC_substr|5.007002||p PERL_MAGIC_sv|5.007002||p PERL_MAGIC_taint|5.007002||p PERL_MAGIC_tiedelem|5.007002||p PERL_MAGIC_tiedscalar|5.007002||p PERL_MAGIC_tied|5.007002||p PERL_MAGIC_utf8|5.008001||p PERL_MAGIC_uvar_elem|5.007003||p PERL_MAGIC_uvar|5.007002||p PERL_MAGIC_vec|5.007002||p PERL_MAGIC_vstring|5.008001||p PERL_PV_ESCAPE_ALL|5.009004||p PERL_PV_ESCAPE_FIRSTCHAR|5.009004||p PERL_PV_ESCAPE_NOBACKSLASH|5.009004||p PERL_PV_ESCAPE_NOCLEAR|5.009004||p PERL_PV_ESCAPE_QUOTE|5.009004||p PERL_PV_ESCAPE_RE|5.009005||p PERL_PV_ESCAPE_UNI_DETECT|5.009004||p PERL_PV_ESCAPE_UNI|5.009004||p PERL_PV_PRETTY_DUMP|5.009004||p PERL_PV_PRETTY_ELLIPSES|5.010000||p PERL_PV_PRETTY_LTGT|5.009004||p PERL_PV_PRETTY_NOCLEAR|5.010000||p PERL_PV_PRETTY_QUOTE|5.009004||p PERL_PV_PRETTY_REGPROP|5.009004||p PERL_QUAD_MAX|5.003070||p PERL_QUAD_MIN|5.003070||p PERL_REVISION|5.006000||p PERL_SCAN_ALLOW_UNDERSCORES|5.007003||p PERL_SCAN_DISALLOW_PREFIX|5.007003||p PERL_SCAN_GREATER_THAN_UV_MAX|5.007003||p PERL_SCAN_SILENT_ILLDIGIT|5.008001||p PERL_SHORT_MAX|5.003070||p PERL_SHORT_MIN|5.003070||p PERL_SIGNALS_UNSAFE_FLAG|5.008001||p PERL_SUBVERSION|5.006000||p PERL_SYS_INIT3||5.006000| PERL_SYS_INIT||| PERL_SYS_TERM||5.021008| PERL_UCHAR_MAX|5.003070||p PERL_UCHAR_MIN|5.003070||p PERL_UINT_MAX|5.003070||p PERL_UINT_MIN|5.003070||p PERL_ULONG_MAX|5.003070||p PERL_ULONG_MIN|5.003070||p PERL_UNUSED_ARG|5.009003||p PERL_UNUSED_CONTEXT|5.009004||p PERL_UNUSED_DECL|5.007002||p PERL_UNUSED_VAR|5.007002||p PERL_UQUAD_MAX|5.003070||p PERL_UQUAD_MIN|5.003070||p PERL_USE_GCC_BRACE_GROUPS|5.009004||p PERL_USHORT_MAX|5.003070||p PERL_USHORT_MIN|5.003070||p PERL_VERSION|5.006000||p PL_DBsignal|5.005000||p PL_DBsingle|||pn PL_DBsub|||pn PL_DBtrace|||pn PL_Sv|5.005000||p PL_bufend|5.021008||p PL_bufptr|5.021008||p PL_check||5.006000| PL_compiling|5.004050||p PL_comppad_name||5.017004| PL_comppad||5.008001| PL_copline|5.021008||p PL_curcop|5.004050||p PL_curpad||5.005000| PL_curstash|5.004050||p PL_debstash|5.004050||p PL_defgv|5.004050||p PL_diehook|5.004050||p PL_dirty|5.004050||p PL_dowarn|||pn PL_errgv|5.004050||p PL_error_count|5.021008||p PL_expect|5.021008||p PL_hexdigit|5.005000||p PL_hints|5.005000||p PL_in_my_stash|5.021008||p PL_in_my|5.021008||p PL_keyword_plugin||5.011002| PL_last_in_gv|||n PL_laststatval|5.005000||p PL_lex_state|5.021008||p PL_lex_stuff|5.021008||p PL_linestr|5.021008||p PL_modglobal||5.005000|n PL_na|5.004050||pn PL_no_modify|5.006000||p PL_ofsgv|||n PL_opfreehook||5.011000|n PL_parser|5.009005||p PL_peepp||5.007003|n PL_perl_destruct_level|5.004050||p PL_perldb|5.004050||p PL_ppaddr|5.006000||p PL_rpeepp||5.013005|n PL_rsfp_filters|5.021008||p PL_rsfp|5.021008||p PL_rs|||n PL_signals|5.008001||p PL_stack_base|5.004050||p PL_stack_sp|5.004050||p PL_statcache|5.005000||p PL_stdingv|5.004050||p PL_sv_arenaroot|5.004050||p PL_sv_no|5.004050||pn PL_sv_undef|5.004050||pn PL_sv_yes|5.004050||pn PL_tainted|5.004050||p PL_tainting|5.004050||p PL_tokenbuf|5.021008||p POP_MULTICALL||5.021008| POPi|||n POPl|||n POPn|||n POPpbytex||5.007001|n POPpx||5.005030|n POPp|||n POPs|||n PTR2IV|5.006000||p PTR2NV|5.006000||p PTR2UV|5.006000||p PTR2nat|5.009003||p PTR2ul|5.007001||p PTRV|5.006000||p PUSHMARK||| 
PUSH_MULTICALL||5.021008| PUSHi||| PUSHmortal|5.009002||p PUSHn||| PUSHp||| PUSHs||| PUSHu|5.004000||p PUTBACK||| PadARRAY||5.021008| PadMAX||5.021008| PadlistARRAY||5.021008| PadlistMAX||5.021008| PadlistNAMESARRAY||5.021008| PadlistNAMESMAX||5.021008| PadlistNAMES||5.021008| PadlistREFCNT||5.017004| PadnameIsOUR||| PadnameIsSTATE||| PadnameLEN||5.021008| PadnameOURSTASH||| PadnameOUTER||| PadnamePV||5.021008| PadnameREFCNT_dec||5.021008| PadnameREFCNT||5.021008| PadnameSV||5.021008| PadnameTYPE||| PadnameUTF8||5.021007| PadnamelistARRAY||5.021008| PadnamelistMAX||5.021008| PadnamelistREFCNT_dec||5.021008| PadnamelistREFCNT||5.021008| PerlIO_clearerr||5.007003| PerlIO_close||5.007003| PerlIO_context_layers||5.009004| PerlIO_eof||5.007003| PerlIO_error||5.007003| PerlIO_fileno||5.007003| PerlIO_fill||5.007003| PerlIO_flush||5.007003| PerlIO_get_base||5.007003| PerlIO_get_bufsiz||5.007003| PerlIO_get_cnt||5.007003| PerlIO_get_ptr||5.007003| PerlIO_read||5.007003| PerlIO_restore_errno||| PerlIO_save_errno||| PerlIO_seek||5.007003| PerlIO_set_cnt||5.007003| PerlIO_set_ptrcnt||5.007003| PerlIO_setlinebuf||5.007003| PerlIO_stderr||5.007003| PerlIO_stdin||5.007003| PerlIO_stdout||5.007003| PerlIO_tell||5.007003| PerlIO_unread||5.007003| PerlIO_write||5.007003| Perl_signbit||5.009005|n PoisonFree|5.009004||p PoisonNew|5.009004||p PoisonWith|5.009004||p Poison|5.008000||p READ_XDIGIT||5.017006| RETVAL|||n Renewc||| Renew||| SAVECLEARSV||| SAVECOMPPAD||| SAVEPADSV||| SAVETMPS||| SAVE_DEFSV|5.004050||p SPAGAIN||| SP||| START_EXTERN_C|5.005000||p START_MY_CXT|5.007003||p STMT_END|||p STMT_START|||p STR_WITH_LEN|5.009003||p ST||| SV_CONST_RETURN|5.009003||p SV_COW_DROP_PV|5.008001||p SV_COW_SHARED_HASH_KEYS|5.009005||p SV_GMAGIC|5.007002||p SV_HAS_TRAILING_NUL|5.009004||p SV_IMMEDIATE_UNREF|5.007001||p SV_MUTABLE_RETURN|5.009003||p SV_NOSTEAL|5.009002||p SV_SMAGIC|5.009003||p SV_UTF8_NO_ENCODING|5.008001||p SVfARG|5.009005||p SVf_UTF8|5.006000||p SVf|5.006000||p SVt_INVLIST||5.019002| SVt_IV||| SVt_NULL||| SVt_NV||| SVt_PVAV||| SVt_PVCV||| SVt_PVFM||| SVt_PVGV||| SVt_PVHV||| SVt_PVIO||| SVt_PVIV||| SVt_PVLV||| SVt_PVMG||| SVt_PVNV||| SVt_PV||| SVt_REGEXP||5.011000| Safefree||| Slab_Alloc||| Slab_Free||| Slab_to_ro||| Slab_to_rw||| StructCopy||| SvCUR_set||| SvCUR||| SvEND||| SvGAMAGIC||5.006001| SvGETMAGIC|5.004050||p SvGROW||| SvIOK_UV||5.006000| SvIOK_notUV||5.006000| SvIOK_off||| SvIOK_only_UV||5.006000| SvIOK_only||| SvIOK_on||| SvIOKp||| SvIOK||| SvIVX||| SvIV_nomg|5.009001||p SvIV_set||| SvIVx||| SvIV||| SvIsCOW_shared_hash||5.008003| SvIsCOW||5.008003| SvLEN_set||| SvLEN||| SvLOCK||5.007003| SvMAGIC_set|5.009003||p SvNIOK_off||| SvNIOKp||| SvNIOK||| SvNOK_off||| SvNOK_only||| SvNOK_on||| SvNOKp||| SvNOK||| SvNVX||| SvNV_nomg||5.013002| SvNV_set||| SvNVx||| SvNV||| SvOK||| SvOOK_offset||5.011000| SvOOK||| SvPOK_off||| SvPOK_only_UTF8||5.006000| SvPOK_only||| SvPOK_on||| SvPOKp||| SvPOK||| SvPVX_const|5.009003||p SvPVX_mutable|5.009003||p SvPVX||| SvPV_const|5.009003||p SvPV_flags_const_nolen|5.009003||p SvPV_flags_const|5.009003||p SvPV_flags_mutable|5.009003||p SvPV_flags|5.007002||p SvPV_force_flags_mutable|5.009003||p SvPV_force_flags_nolen|5.009003||p SvPV_force_flags|5.007002||p SvPV_force_mutable|5.009003||p SvPV_force_nolen|5.009003||p SvPV_force_nomg_nolen|5.009003||p SvPV_force_nomg|5.007002||p SvPV_force|||p SvPV_mutable|5.009003||p SvPV_nolen_const|5.009003||p SvPV_nolen|5.006000||p SvPV_nomg_const_nolen|5.009003||p SvPV_nomg_const|5.009003||p SvPV_nomg_nolen|5.013007||p 
SvPV_nomg|5.007002||p SvPV_renew|5.009003||p SvPV_set||| SvPVbyte_force||5.009002| SvPVbyte_nolen||5.006000| SvPVbytex_force||5.006000| SvPVbytex||5.006000| SvPVbyte|5.006000||p SvPVutf8_force||5.006000| SvPVutf8_nolen||5.006000| SvPVutf8x_force||5.006000| SvPVutf8x||5.006000| SvPVutf8||5.006000| SvPVx||| SvPV||| SvREFCNT_dec_NN||5.017007| SvREFCNT_dec||| SvREFCNT_inc_NN|5.009004||p SvREFCNT_inc_simple_NN|5.009004||p SvREFCNT_inc_simple_void_NN|5.009004||p SvREFCNT_inc_simple_void|5.009004||p SvREFCNT_inc_simple|5.009004||p SvREFCNT_inc_void_NN|5.009004||p SvREFCNT_inc_void|5.009004||p SvREFCNT_inc|||p SvREFCNT||| SvROK_off||| SvROK_on||| SvROK||| SvRV_set|5.009003||p SvRV||| SvRXOK||5.009005| SvRX||5.009005| SvSETMAGIC||| SvSHARED_HASH|5.009003||p SvSHARE||5.007003| SvSTASH_set|5.009003||p SvSTASH||| SvSetMagicSV_nosteal||5.004000| SvSetMagicSV||5.004000| SvSetSV_nosteal||5.004000| SvSetSV||| SvTAINTED_off||5.004000| SvTAINTED_on||5.004000| SvTAINTED||5.004000| SvTAINT||| SvTHINKFIRST||| SvTRUE_nomg||5.013006| SvTRUE||| SvTYPE||| SvUNLOCK||5.007003| SvUOK|5.007001|5.006000|p SvUPGRADE||| SvUTF8_off||5.006000| SvUTF8_on||5.006000| SvUTF8||5.006000| SvUVXx|5.004000||p SvUVX|5.004000||p SvUV_nomg|5.009001||p SvUV_set|5.009003||p SvUVx|5.004000||p SvUV|5.004000||p SvVOK||5.008001| SvVSTRING_mg|5.009004||p THIS|||n UNDERBAR|5.009002||p UTF8_MAXBYTES|5.009002||p UVSIZE|5.006000||p UVTYPE|5.006000||p UVXf|5.007001||p UVof|5.006000||p UVuf|5.006000||p UVxf|5.006000||p WARN_ALL|5.006000||p WARN_AMBIGUOUS|5.006000||p WARN_ASSERTIONS|5.021008||p WARN_BAREWORD|5.006000||p WARN_CLOSED|5.006000||p WARN_CLOSURE|5.006000||p WARN_DEBUGGING|5.006000||p WARN_DEPRECATED|5.006000||p WARN_DIGIT|5.006000||p WARN_EXEC|5.006000||p WARN_EXITING|5.006000||p WARN_GLOB|5.006000||p WARN_INPLACE|5.006000||p WARN_INTERNAL|5.006000||p WARN_IO|5.006000||p WARN_LAYER|5.008000||p WARN_MALLOC|5.006000||p WARN_MISC|5.006000||p WARN_NEWLINE|5.006000||p WARN_NUMERIC|5.006000||p WARN_ONCE|5.006000||p WARN_OVERFLOW|5.006000||p WARN_PACK|5.006000||p WARN_PARENTHESIS|5.006000||p WARN_PIPE|5.006000||p WARN_PORTABLE|5.006000||p WARN_PRECEDENCE|5.006000||p WARN_PRINTF|5.006000||p WARN_PROTOTYPE|5.006000||p WARN_QW|5.006000||p WARN_RECURSION|5.006000||p WARN_REDEFINE|5.006000||p WARN_REGEXP|5.006000||p WARN_RESERVED|5.006000||p WARN_SEMICOLON|5.006000||p WARN_SEVERE|5.006000||p WARN_SIGNAL|5.006000||p WARN_SUBSTR|5.006000||p WARN_SYNTAX|5.006000||p WARN_TAINT|5.006000||p WARN_THREADS|5.008000||p WARN_UNINITIALIZED|5.006000||p WARN_UNOPENED|5.006000||p WARN_UNPACK|5.006000||p WARN_UNTIE|5.006000||p WARN_UTF8|5.006000||p WARN_VOID|5.006000||p WIDEST_UTYPE|5.015004||p XCPT_CATCH|5.009002||p XCPT_RETHROW|5.009002||p XCPT_TRY_END|5.009002||p XCPT_TRY_START|5.009002||p XPUSHi||| XPUSHmortal|5.009002||p XPUSHn||| XPUSHp||| XPUSHs||| XPUSHu|5.004000||p XSPROTO|5.010000||p XSRETURN_EMPTY||| XSRETURN_IV||| XSRETURN_NO||| XSRETURN_NV||| XSRETURN_PV||| XSRETURN_UNDEF||| XSRETURN_UV|5.008001||p XSRETURN_YES||| XSRETURN|||p XST_mIV||| XST_mNO||| XST_mNV||| XST_mPV||| XST_mUNDEF||| XST_mUV|5.008001||p XST_mYES||| XS_APIVERSION_BOOTCHECK||5.021008| XS_EXTERNAL||5.021008| XS_INTERNAL||5.021008| XS_VERSION_BOOTCHECK||5.021008| XS_VERSION||| XSprePUSH|5.006000||p XS||| XopDISABLE||5.021008| XopENABLE||5.021008| XopENTRYCUSTOM||5.021008| XopENTRY_set||5.021008| XopENTRY||5.021008| XopFLAGS||5.013007| ZeroD|5.009002||p Zero||| _aMY_CXT|5.007003||p _add_range_to_invlist||| _append_range_to_invlist||| _core_swash_init||| _get_encoding||| 
_get_regclass_nonbitmap_data||| _get_swash_invlist||| _invlist_array_init|||n _invlist_contains_cp|||n _invlist_contents||| _invlist_dump||| _invlist_intersection_maybe_complement_2nd||| _invlist_intersection||| _invlist_invert||| _invlist_len|||n _invlist_populate_swatch|||n _invlist_search|||n _invlist_subtract||| _invlist_union_maybe_complement_2nd||| _invlist_union||| _is_cur_LC_category_utf8||| _is_in_locale_category||5.021001| _is_uni_FOO||5.017008| _is_uni_perl_idcont||5.017008| _is_uni_perl_idstart||5.017007| _is_utf8_FOO||5.017008| _is_utf8_char_slow||5.021001|n _is_utf8_idcont||5.021001| _is_utf8_idstart||5.021001| _is_utf8_mark||5.017008| _is_utf8_perl_idcont||5.017008| _is_utf8_perl_idstart||5.017007| _is_utf8_xidcont||5.021001| _is_utf8_xidstart||5.021001| _load_PL_utf8_foldclosures||| _make_exactf_invlist||| _new_invlist_C_array||| _new_invlist||| _pMY_CXT|5.007003||p _setup_canned_invlist||| _swash_inversion_hash||| _swash_to_invlist||| _to_fold_latin1||| _to_uni_fold_flags||5.014000| _to_upper_title_latin1||| _to_utf8_fold_flags||5.019009| _to_utf8_lower_flags||5.019009| _to_utf8_title_flags||5.019009| _to_utf8_upper_flags||5.019009| _warn_problematic_locale|||n aMY_CXT_|5.007003||p aMY_CXT|5.007003||p aTHXR_|5.021008||p aTHXR|5.021008||p aTHX_|5.006000||p aTHX|5.006000||p aassign_common_vars||| add_above_Latin1_folds||| add_cp_to_invlist||| add_data|||n add_multi_match||| add_utf16_textfilter||| adjust_size_and_find_bucket|||n advance_one_SB||| advance_one_WB||| alloc_maybe_populate_EXACT||| alloccopstash||| allocmy||| amagic_call||| amagic_cmp_locale||| amagic_cmp||| amagic_deref_call||5.013007| amagic_i_ncmp||| amagic_is_enabled||| amagic_ncmp||| anonymise_cv_maybe||| any_dup||| ao||| append_utf8_from_native_byte||5.019004|n apply_attrs_my||| apply_attrs_string||5.006001| apply_attrs||| apply||| assert_uft8_cache_coherent||| assignment_type||| atfork_lock||5.007003|n atfork_unlock||5.007003|n av_arylen_p||5.009003| av_clear||| av_create_and_push||5.009005| av_create_and_unshift_one||5.009005| av_delete||5.006000| av_exists||5.006000| av_extend_guts||| av_extend||| av_fetch||| av_fill||| av_iter_p||5.011000| av_len||| av_make||| av_pop||| av_push||| av_reify||| av_shift||| av_store||| av_tindex||5.017009| av_top_index||5.017009| av_undef||| av_unshift||| ax|||n backup_one_SB||| backup_one_WB||| bad_type_gv||| bad_type_pv||| bind_match||| block_end||5.004000| block_gimme||5.004000| block_start||5.004000| blockhook_register||5.013003| boolSV|5.004000||p boot_core_PerlIO||| boot_core_UNIVERSAL||| boot_core_mro||| bytes_cmp_utf8||5.013007| bytes_from_utf8||5.007001| bytes_to_utf8||5.006001| call_argv|5.006000||p call_atexit||5.006000| call_list||5.004000| call_method|5.006000||p call_pv|5.006000||p call_sv|5.006000||p caller_cx|5.013005|5.006000|p calloc||5.007002|n cando||| cast_i32||5.006000|n cast_iv||5.006000|n cast_ulong||5.006000|n cast_uv||5.006000|n check_locale_boundary_crossing||| check_type_and_open||| check_uni||| check_utf8_print||| checkcomma||| ckWARN|5.006000||p ck_entersub_args_core||| ck_entersub_args_list||5.013006| ck_entersub_args_proto_or_list||5.013006| ck_entersub_args_proto||5.013006| ck_warner_d||5.011001|v ck_warner||5.011001|v ckwarn_common||| ckwarn_d||5.009003| ckwarn||5.009003| clear_placeholders||| clear_special_blocks||| clone_params_del|||n clone_params_new|||n closest_cop||| cntrl_to_mnemonic|||n compute_EXACTish|||n construct_ahocorasick_from_trie||| cop_fetch_label||5.015001| cop_free||| cop_hints_2hv||5.013007| 
cop_hints_fetch_pvn||5.013007| cop_hints_fetch_pvs||5.013007| cop_hints_fetch_pv||5.013007| cop_hints_fetch_sv||5.013007| cop_store_label||5.015001| cophh_2hv||5.013007| cophh_copy||5.013007| cophh_delete_pvn||5.013007| cophh_delete_pvs||5.013007| cophh_delete_pv||5.013007| cophh_delete_sv||5.013007| cophh_fetch_pvn||5.013007| cophh_fetch_pvs||5.013007| cophh_fetch_pv||5.013007| cophh_fetch_sv||5.013007| cophh_free||5.013007| cophh_new_empty||5.021008| cophh_store_pvn||5.013007| cophh_store_pvs||5.013007| cophh_store_pv||5.013007| cophh_store_sv||5.013007| core_prototype||| coresub_op||| could_it_be_a_POSIX_class|||n cr_textfilter||| create_eval_scope||| croak_memory_wrap||5.019003|n croak_no_mem|||n croak_no_modify||5.013003|n croak_nocontext|||vn croak_popstack|||n croak_sv||5.013001| croak_xs_usage||5.010001|n croak|||v csighandler||5.009003|n current_re_engine||| curse||| custom_op_desc||5.007003| custom_op_get_field||| custom_op_name||5.007003| custom_op_register||5.013007| custom_op_xop||5.013007| cv_ckproto_len_flags||| cv_clone_into||| cv_clone||| cv_const_sv_or_av|||n cv_const_sv||5.003070|n cv_dump||| cv_forget_slab||| cv_get_call_checker||5.013006| cv_name||5.021005| cv_set_call_checker_flags||5.021004| cv_set_call_checker||5.013006| cv_undef_flags||| cv_undef||| cvgv_from_hek||| cvgv_set||| cvstash_set||| cx_dump||5.005000| cx_dup||| cxinc||| dAXMARK|5.009003||p dAX|5.007002||p dITEMS|5.007002||p dMARK||| dMULTICALL||5.009003| dMY_CXT_SV|5.007003||p dMY_CXT|5.007003||p dNOOP|5.006000||p dORIGMARK||| dSP||| dTHR|5.004050||p dTHXR|5.021008||p dTHXa|5.006000||p dTHXoa|5.006000||p dTHX|5.006000||p dUNDERBAR|5.009002||p dVAR|5.009003||p dXCPT|5.009002||p dXSARGS||| dXSI32||| dXSTARG|5.006000||p deb_curcv||| deb_nocontext|||vn deb_stack_all||| deb_stack_n||| debop||5.005000| debprofdump||5.005000| debprof||| debstackptrs||5.007003| debstack||5.007003| debug_start_match||| deb||5.007003|v defelem_target||| del_sv||| delete_eval_scope||| delimcpy||5.004000|n deprecate_commaless_var_list||| despatch_signals||5.007001| destroy_matcher||| die_nocontext|||vn die_sv||5.013001| die_unwind||| die|||v dirp_dup||| div128||| djSP||| do_aexec5||| do_aexec||| do_aspawn||| do_binmode||5.004050| do_chomp||| do_close||| do_delete_local||| do_dump_pad||| do_eof||| do_exec3||| do_execfree||| do_exec||| do_gv_dump||5.006000| do_gvgv_dump||5.006000| do_hv_dump||5.006000| do_ipcctl||| do_ipcget||| do_join||| do_magic_dump||5.006000| do_msgrcv||| do_msgsnd||| do_ncmp||| do_oddball||| do_op_dump||5.006000| do_open6||| do_open9||5.006000| do_open_raw||| do_openn||5.007001| do_open||5.003070| do_pmop_dump||5.006000| do_print||| do_readline||| do_seek||| do_semop||| do_shmio||| do_smartmatch||| do_spawn_nowait||| do_spawn||| do_sprintf||| do_sv_dump||5.006000| do_sysseek||| do_tell||| do_trans_complex_utf8||| do_trans_complex||| do_trans_count_utf8||| do_trans_count||| do_trans_simple_utf8||| do_trans_simple||| do_trans||| do_vecget||| do_vecset||| do_vop||| docatch||| doeval||| dofile||| dofindlabel||| doform||| doing_taint||5.008001|n dooneliner||| doopen_pm||| doparseform||| dopoptoeval||| dopoptogiven||| dopoptolabel||| dopoptoloop||| dopoptosub_at||| dopoptowhen||| doref||5.009003| dounwind||| dowantarray||| drand48_init_r|||n drand48_r|||n dump_all_perl||| dump_all||5.006000| dump_c_backtrace||| dump_eval||5.006000| dump_exec_pos||| dump_form||5.006000| dump_indent||5.006000|v dump_mstats||| dump_packsubs_perl||| dump_packsubs||5.006000| dump_sub_perl||| dump_sub||5.006000| dump_sv_child||| 
dump_trie_interim_list||| dump_trie_interim_table||| dump_trie||| dump_vindent||5.006000| dumpuntil||| dup_attrlist||| emulate_cop_io||| eval_pv|5.006000||p eval_sv|5.006000||p exec_failed||| expect_number||| fbm_compile||5.005000| fbm_instr||5.005000| feature_is_enabled||| filter_add||| filter_del||| filter_gets||| filter_read||| finalize_optree||| finalize_op||| find_and_forget_pmops||| find_array_subscript||| find_beginning||| find_byclass||| find_default_stash||| find_hash_subscript||| find_in_my_stash||| find_lexical_cv||| find_runcv_where||| find_runcv||5.008001| find_rundefsv2||| find_rundefsvoffset||5.009002| find_rundefsv||5.013002| find_script||| find_uninit_var||| first_symbol|||n fixup_errno_string||| foldEQ_latin1||5.013008|n foldEQ_locale||5.013002|n foldEQ_utf8_flags||5.013010| foldEQ_utf8||5.013002| foldEQ||5.013002|n fold_constants||| forbid_setid||| force_ident_maybe_lex||| force_ident||| force_list||| force_next||| force_strict_version||| force_version||| force_word||| forget_pmop||| form_nocontext|||vn form_short_octal_warning||| form||5.004000|v fp_dup||| fprintf_nocontext|||vn free_c_backtrace||| free_global_struct||| free_tied_hv_pool||| free_tmps||| gen_constant_list||| get_ANYOF_cp_list_for_ssc||| get_and_check_backslash_N_name||| get_aux_mg||| get_av|5.006000||p get_c_backtrace_dump||| get_c_backtrace||| get_context||5.006000|n get_cvn_flags|5.009005||p get_cvs|5.011000||p get_cv|5.006000||p get_db_sub||| get_debug_opts||| get_hash_seed||| get_hv|5.006000||p get_invlist_iter_addr|||n get_invlist_offset_addr|||n get_invlist_previous_index_addr|||n get_mstats||| get_no_modify||| get_num||| get_op_descs||5.005000| get_op_names||5.005000| get_opargs||| get_ppaddr||5.006000| get_re_arg||| get_sv|5.006000||p get_vtbl||5.005030| getcwd_sv||5.007002| getenv_len||| glob_2number||| glob_assign_glob||| gp_dup||| gp_free||| gp_ref||| grok_atoUV|||n grok_bin|5.007003||p grok_bslash_N||| grok_bslash_c||| grok_bslash_o||| grok_bslash_x||| grok_hex|5.007003||p grok_infnan||5.021004| grok_number_flags||5.021002| grok_number|5.007002||p grok_numeric_radix|5.007002||p grok_oct|5.007003||p group_end||| gv_AVadd||| gv_HVadd||| gv_IOadd||| gv_SVadd||| gv_add_by_type||5.011000| gv_autoload4||5.004000| gv_autoload_pvn||5.015004| gv_autoload_pv||5.015004| gv_autoload_sv||5.015004| gv_check||| gv_const_sv||5.009003| gv_dump||5.006000| gv_efullname3||5.003070| gv_efullname4||5.006001| gv_efullname||| gv_fetchfile_flags||5.009005| gv_fetchfile||| gv_fetchmeth_autoload||5.007003| gv_fetchmeth_internal||| gv_fetchmeth_pv_autoload||5.015004| gv_fetchmeth_pvn_autoload||5.015004| gv_fetchmeth_pvn||5.015004| gv_fetchmeth_pv||5.015004| gv_fetchmeth_sv_autoload||5.015004| gv_fetchmeth_sv||5.015004| gv_fetchmethod_autoload||5.004000| gv_fetchmethod_pv_flags||5.015004| gv_fetchmethod_pvn_flags||5.015004| gv_fetchmethod_sv_flags||5.015004| gv_fetchmethod||| gv_fetchmeth||| gv_fetchpvn_flags|5.009002||p gv_fetchpvs|5.009004||p gv_fetchpv||| gv_fetchsv|5.009002||p gv_fullname3||5.003070| gv_fullname4||5.006001| gv_fullname||| gv_handler||5.007001| gv_init_pvn||5.015004| gv_init_pv||5.015004| gv_init_svtype||| gv_init_sv||5.015004| gv_init||| gv_is_in_main||| gv_magicalize_isa||| gv_magicalize||| gv_name_set||5.009004| gv_override||| gv_setref||| gv_stashpvn_internal||| gv_stashpvn|5.003070||p gv_stashpvs|5.009003||p gv_stashpv||| gv_stashsvpvn_cached||| gv_stashsv||| gv_try_downgrade||| handle_regex_sets||| he_dup||| hek_dup||| hfree_next_entry||| hfreeentries||| hsplit||| hv_assert||| 
hv_auxinit_internal|||n hv_auxinit||| hv_backreferences_p||| hv_clear_placeholders||5.009001| hv_clear||| hv_common_key_len||5.010000| hv_common||5.010000| hv_copy_hints_hv||5.009004| hv_delayfree_ent||5.004000| hv_delete_common||| hv_delete_ent||5.003070| hv_delete||| hv_eiter_p||5.009003| hv_eiter_set||5.009003| hv_ename_add||| hv_ename_delete||| hv_exists_ent||5.003070| hv_exists||| hv_fetch_ent||5.003070| hv_fetchs|5.009003||p hv_fetch||| hv_fill||5.013002| hv_free_ent_ret||| hv_free_ent||5.004000| hv_iterinit||| hv_iterkeysv||5.003070| hv_iterkey||| hv_iternext_flags||5.008000| hv_iternextsv||| hv_iternext||| hv_iterval||| hv_kill_backrefs||| hv_ksplit||5.003070| hv_magic_check|||n hv_magic||| hv_name_set||5.009003| hv_notallowed||| hv_placeholders_get||5.009003| hv_placeholders_p||| hv_placeholders_set||5.009003| hv_rand_set||5.018000| hv_riter_p||5.009003| hv_riter_set||5.009003| hv_scalar||5.009001| hv_store_ent||5.003070| hv_store_flags||5.008000| hv_stores|5.009004||p hv_store||| hv_undef_flags||| hv_undef||| ibcmp_locale||5.004000| ibcmp_utf8||5.007003| ibcmp||| incline||| incpush_if_exists||| incpush_use_sep||| incpush||| ingroup||| init_argv_symbols||| init_constants||| init_dbargs||| init_debugger||| init_global_struct||| init_i18nl10n||5.006000| init_i18nl14n||5.006000| init_ids||| init_interp||| init_main_stash||| init_perllib||| init_postdump_symbols||| init_predump_symbols||| init_stacks||5.005000| init_tm||5.007002| inplace_aassign||| instr|||n intro_my||5.004000| intuit_method||| intuit_more||| invert||| invlist_array|||n invlist_clone||| invlist_extend||| invlist_highest|||n invlist_is_iterating|||n invlist_iterfinish|||n invlist_iterinit|||n invlist_iternext|||n invlist_max|||n invlist_previous_index|||n invlist_set_len||| invlist_set_previous_index|||n invlist_trim|||n invoke_exception_hook||| io_close||| isALNUMC|5.006000||p isALNUM_lazy||5.021001| isALPHANUMERIC||5.017008| isALPHA||| isASCII|5.006000||p isBLANK|5.006001||p isCNTRL|5.006000||p isDIGIT||| isFOO_lc||| isFOO_utf8_lc||| isGCB|||n isGRAPH|5.006000||p isGV_with_GP|5.009004||p isIDCONT||5.017008| isIDFIRST_lazy||5.021001| isIDFIRST||| isLOWER||| isOCTAL||5.013005| isPRINT|5.004000||p isPSXSPC|5.006001||p isPUNCT|5.006000||p isSB||| isSPACE||| isUPPER||| isUTF8_CHAR||5.021001| isWB||| isWORDCHAR||5.013006| isXDIGIT|5.006000||p is_an_int||| is_ascii_string||5.011000| is_handle_constructor|||n is_invariant_string||5.021007|n is_lvalue_sub||5.007001| is_safe_syscall||5.019004| is_ssc_worth_it|||n is_uni_alnum_lc||5.006000| is_uni_alnumc_lc||5.017007| is_uni_alnumc||5.017007| is_uni_alnum||5.006000| is_uni_alpha_lc||5.006000| is_uni_alpha||5.006000| is_uni_ascii_lc||5.006000| is_uni_ascii||5.006000| is_uni_blank_lc||5.017002| is_uni_blank||5.017002| is_uni_cntrl_lc||5.006000| is_uni_cntrl||5.006000| is_uni_digit_lc||5.006000| is_uni_digit||5.006000| is_uni_graph_lc||5.006000| is_uni_graph||5.006000| is_uni_idfirst_lc||5.006000| is_uni_idfirst||5.006000| is_uni_lower_lc||5.006000| is_uni_lower||5.006000| is_uni_print_lc||5.006000| is_uni_print||5.006000| is_uni_punct_lc||5.006000| is_uni_punct||5.006000| is_uni_space_lc||5.006000| is_uni_space||5.006000| is_uni_upper_lc||5.006000| is_uni_upper||5.006000| is_uni_xdigit_lc||5.006000| is_uni_xdigit||5.006000| is_utf8_alnumc||5.017007| is_utf8_alnum||5.006000| is_utf8_alpha||5.006000| is_utf8_ascii||5.006000| is_utf8_blank||5.017002| is_utf8_char_buf||5.015008|n is_utf8_char||5.006000|n is_utf8_cntrl||5.006000| is_utf8_common||| is_utf8_digit||5.006000| 
is_utf8_graph||5.006000| is_utf8_idcont||5.008000| is_utf8_idfirst||5.006000| is_utf8_lower||5.006000| is_utf8_mark||5.006000| is_utf8_perl_space||5.011001| is_utf8_perl_word||5.011001| is_utf8_posix_digit||5.011001| is_utf8_print||5.006000| is_utf8_punct||5.006000| is_utf8_space||5.006000| is_utf8_string_loclen||5.009003|n is_utf8_string_loc||5.008001|n is_utf8_string||5.006001|n is_utf8_upper||5.006000| is_utf8_xdigit||5.006000| is_utf8_xidcont||5.013010| is_utf8_xidfirst||5.013010| isa_lookup||| isinfnansv||| isinfnan||5.021004|n items|||n ix|||n jmaybe||| join_exact||| keyword_plugin_standard||| keyword||| leave_common||| leave_scope||| lex_bufutf8||5.011002| lex_discard_to||5.011002| lex_grow_linestr||5.011002| lex_next_chunk||5.011002| lex_peek_unichar||5.011002| lex_read_space||5.011002| lex_read_to||5.011002| lex_read_unichar||5.011002| lex_start||5.009005| lex_stuff_pvn||5.011002| lex_stuff_pvs||5.013005| lex_stuff_pv||5.013006| lex_stuff_sv||5.011002| lex_unstuff||5.011002| listkids||| list||| load_module_nocontext|||vn load_module|5.006000||pv localize||| looks_like_bool||| looks_like_number||| lop||| mPUSHi|5.009002||p mPUSHn|5.009002||p mPUSHp|5.009002||p mPUSHs|5.010001||p mPUSHu|5.009002||p mXPUSHi|5.009002||p mXPUSHn|5.009002||p mXPUSHp|5.009002||p mXPUSHs|5.010001||p mXPUSHu|5.009002||p magic_clear_all_env||| magic_cleararylen_p||| magic_clearenv||| magic_clearhints||| magic_clearhint||| magic_clearisa||| magic_clearpack||| magic_clearsig||| magic_copycallchecker||| magic_dump||5.006000| magic_existspack||| magic_freearylen_p||| magic_freeovrld||| magic_getarylen||| magic_getdebugvar||| magic_getdefelem||| magic_getnkeys||| magic_getpack||| magic_getpos||| magic_getsig||| magic_getsubstr||| magic_gettaint||| magic_getuvar||| magic_getvec||| magic_get||| magic_killbackrefs||| magic_methcall1||| magic_methcall|||v magic_methpack||| magic_nextpack||| magic_regdata_cnt||| magic_regdatum_get||| magic_regdatum_set||| magic_scalarpack||| magic_set_all_env||| magic_setarylen||| magic_setcollxfrm||| magic_setdbline||| magic_setdebugvar||| magic_setdefelem||| magic_setenv||| magic_sethint||| magic_setisa||| magic_setlvref||| magic_setmglob||| magic_setnkeys||| magic_setpack||| magic_setpos||| magic_setregexp||| magic_setsig||| magic_setsubstr||| magic_settaint||| magic_setutf8||| magic_setuvar||| magic_setvec||| magic_set||| magic_sizepack||| magic_wipepack||| make_matcher||| make_trie||| malloc_good_size|||n malloced_size|||n malloc||5.007002|n markstack_grow||5.021001| matcher_matches_sv||| maybe_multimagic_gv||| mayberelocate||| measure_struct||| memEQs|5.009005||p memEQ|5.004000||p memNEs|5.009005||p memNE|5.004000||p mem_collxfrm||| mem_log_common|||n mess_alloc||| mess_nocontext|||vn mess_sv||5.013001| mess||5.006000|v mfree||5.007002|n mg_clear||| mg_copy||| mg_dup||| mg_find_mglob||| mg_findext|5.013008||pn mg_find|||n mg_free_type||5.013006| mg_free||| mg_get||| mg_length||5.005000| mg_localize||| mg_magical|||n mg_set||| mg_size||5.005000| mini_mktime||5.007002|n minus_v||| missingterm||| mode_from_discipline||| modkids||| more_bodies||| more_sv||| moreswitches||| move_proto_attr||| mro_clean_isarev||| mro_gather_and_rename||| mro_get_from_name||5.010001| mro_get_linear_isa_dfs||| mro_get_linear_isa||5.009005| mro_get_private_data||5.010001| mro_isa_changed_in||| mro_meta_dup||| mro_meta_init||| mro_method_changed_in||5.009005| mro_package_moved||| mro_register||5.010001| mro_set_mro||5.010001| mro_set_private_data||5.010001| mul128||| mulexp10|||n multideref_stringify||| 
my_atof2||5.007002| my_atof||5.006000| my_attrs||| my_bcopy|||n my_bytes_to_utf8|||n my_bzero|||n my_chsize||| my_clearenv||| my_cxt_index||| my_cxt_init||| my_dirfd||5.009005|n my_exit_jump||| my_exit||| my_failure_exit||5.004000| my_fflush_all||5.006000| my_fork||5.007003|n my_kid||| my_lstat_flags||| my_lstat||5.021008| my_memcmp|||n my_memset|||n my_pclose||5.003070| my_popen_list||5.007001| my_popen||5.003070| my_setenv||| my_setlocale||| my_snprintf|5.009004||pvn my_socketpair||5.007003|n my_sprintf|5.009003||pvn my_stat_flags||| my_stat||5.021008| my_strerror||5.021001| my_strftime||5.007002| my_strlcat|5.009004||pn my_strlcpy|5.009004||pn my_unexec||| my_vsnprintf||5.009004|n need_utf8|||n newANONATTRSUB||5.006000| newANONHASH||| newANONLIST||| newANONSUB||| newASSIGNOP||| newATTRSUB_x||| newATTRSUB||5.006000| newAVREF||| newAV||| newBINOP||| newCONDOP||| newCONSTSUB_flags||5.015006| newCONSTSUB|5.004050||p newCVREF||| newDEFSVOP||5.021006| newFORM||| newFOROP||5.013007| newGIVENOP||5.009003| newGIVWHENOP||| newGP||| newGVOP||| newGVREF||| newGVgen_flags||5.015004| newGVgen||| newHVREF||| newHVhv||5.005000| newHV||| newIO||| newLISTOP||| newLOGOP||| newLOOPEX||| newLOOPOP||| newMETHOP_internal||| newMETHOP_named||5.021005| newMETHOP||5.021005| newMYSUB||5.017004| newNULLLIST||| newOP||| newPADNAMELIST||5.021007|n newPADNAMEouter||5.021007|n newPADNAMEpvn||5.021007|n newPADOP||| newPMOP||| newPROG||| newPVOP||| newRANGE||| newRV_inc|5.004000||p newRV_noinc|5.004000||p newRV||| newSLICEOP||| newSTATEOP||| newSTUB||| newSUB||| newSVOP||| newSVREF||| newSV_type|5.009005||p newSVavdefelem||| newSVhek||5.009003| newSViv||| newSVnv||| newSVpadname||5.017004| newSVpv_share||5.013006| newSVpvf_nocontext|||vn newSVpvf||5.004000|v newSVpvn_flags|5.010001||p newSVpvn_share|5.007001||p newSVpvn_utf8|5.010001||p newSVpvn|5.004050||p newSVpvs_flags|5.010001||p newSVpvs_share|5.009003||p newSVpvs|5.009003||p newSVpv||| newSVrv||| newSVsv||| newSVuv|5.006000||p newSV||| newUNOP_AUX||5.021007| newUNOP||| newWHENOP||5.009003| newWHILEOP||5.013007| newXS_deffile||| newXS_flags||5.009004| newXS_len_flags||| newXSproto||5.006000| newXS||5.006000| new_collate||5.006000| new_constant||| new_ctype||5.006000| new_he||| new_logop||| new_numeric||5.006000| new_stackinfo||5.005000| new_version||5.009000| new_warnings_bitfield||| next_symbol||| nextargv||| nextchar||| ninstr|||n no_bareword_allowed||| no_fh_allowed||| no_op||| noperl_die|||vn not_a_number||| not_incrementable||| nothreadhook||5.008000| nuke_stacks||| num_overflow|||n oopsAV||| oopsHV||| op_append_elem||5.013006| op_append_list||5.013006| op_clear||| op_contextualize||5.013006| op_convert_list||5.021006| op_dump||5.006000| op_free||| op_integerize||| op_linklist||5.013006| op_lvalue_flags||| op_lvalue||5.013007| op_null||5.007002| op_parent||5.021002|n op_prepend_elem||5.013006| op_refcnt_dec||| op_refcnt_inc||| op_refcnt_lock||5.009002| op_refcnt_unlock||5.009002| op_relocate_sv||| op_scope||5.013007| op_sibling_splice||5.021002|n op_std_init||| op_unscope||| open_script||| openn_cleanup||| openn_setup||| opmethod_stash||| opslab_force_free||| opslab_free_nopad||| opslab_free||| pMY_CXT_|5.007003||p pMY_CXT|5.007003||p pTHX_|5.006000||p pTHX|5.006000||p packWARN|5.007003||p pack_cat||5.007003| pack_rec||| package_version||| package||| packlist||5.008001| pad_add_anon||5.008001| pad_add_name_pvn||5.015001| pad_add_name_pvs||5.015001| pad_add_name_pv||5.015001| pad_add_name_sv||5.015001| pad_add_weakref||| pad_alloc_name||| pad_alloc||| 
pad_block_start||| pad_check_dup||| pad_compname_type||5.009003| pad_findlex||| pad_findmy_pvn||5.015001| pad_findmy_pvs||5.015001| pad_findmy_pv||5.015001| pad_findmy_sv||5.015001| pad_fixup_inner_anons||| pad_free||| pad_leavemy||| pad_new||5.008001| pad_push||| pad_reset||| pad_setsv||| pad_sv||| pad_swipe||| pad_tidy||5.008001| padlist_dup||| padlist_store||| padname_dup||| padname_free||| padnamelist_dup||| padnamelist_fetch||5.021007|n padnamelist_free||| padnamelist_store||5.021007| parse_arithexpr||5.013008| parse_barestmt||5.013007| parse_block||5.013007| parse_body||| parse_fullexpr||5.013008| parse_fullstmt||5.013005| parse_gv_stash_name||| parse_ident||| parse_label||5.013007| parse_listexpr||5.013008| parse_lparen_question_flags||| parse_stmtseq||5.013006| parse_subsignature||| parse_termexpr||5.013008| parse_unicode_opts||| parser_dup||| parser_free_nexttoke_ops||| parser_free||| path_is_searchable|||n peep||| pending_ident||| perl_alloc_using|||n perl_alloc|||n perl_clone_using|||n perl_clone|||n perl_construct|||n perl_destruct||5.007003|n perl_free|||n perl_parse||5.006000|n perl_run|||n pidgone||| pm_description||| pmop_dump||5.006000| pmruntime||| pmtrans||| pop_scope||| populate_ANYOF_from_invlist||| populate_isa|||v pregcomp||5.009005| pregexec||| pregfree2||5.011000| pregfree||| prescan_version||5.011004| printbuf||| printf_nocontext|||vn process_special_blocks||| ptr_hash|||n ptr_table_clear||5.009005| ptr_table_fetch||5.009005| ptr_table_find|||n ptr_table_free||5.009005| ptr_table_new||5.009005| ptr_table_split||5.009005| ptr_table_store||5.009005| push_scope||| put_charclass_bitmap_innards||| put_code_point||| put_range||| pv_display|5.006000||p pv_escape|5.009004||p pv_pretty|5.009004||p pv_uni_display||5.007003| qerror||| qsortsvu||| quadmath_format_needed|||n quadmath_format_single|||n re_compile||5.009005| re_croak2||| re_dup_guts||| re_intuit_start||5.019001| re_intuit_string||5.006000| re_op_compile||| realloc||5.007002|n reentrant_free||5.021008| reentrant_init||5.021008| reentrant_retry||5.021008|vn reentrant_size||5.021008| ref_array_or_hash||| refcounted_he_chain_2hv||| refcounted_he_fetch_pvn||| refcounted_he_fetch_pvs||| refcounted_he_fetch_pv||| refcounted_he_fetch_sv||| refcounted_he_free||| refcounted_he_inc||| refcounted_he_new_pvn||| refcounted_he_new_pvs||| refcounted_he_new_pv||| refcounted_he_new_sv||| refcounted_he_value||| refkids||| refto||| ref||5.021008| reg2Lanode||| reg_check_named_buff_matched|||n reg_named_buff_all||5.009005| reg_named_buff_exists||5.009005| reg_named_buff_fetch||5.009005| reg_named_buff_firstkey||5.009005| reg_named_buff_iter||| reg_named_buff_nextkey||5.009005| reg_named_buff_scalar||5.009005| reg_named_buff||| reg_node||| reg_numbered_buff_fetch||| reg_numbered_buff_length||| reg_numbered_buff_store||| reg_qr_package||| reg_recode||| reg_scan_name||| reg_skipcomment|||n reg_temp_copy||| reganode||| regatom||| regbranch||| regclass_swash||5.009004| regclass||| regcppop||| regcppush||| regcurly|||n regdump_extflags||| regdump_intflags||| regdump||5.005000| regdupe_internal||| regexec_flags||5.005000| regfree_internal||5.009005| reghop3|||n reghop4|||n reghopmaybe3|||n reginclass||| reginitcolors||5.006000| reginsert||| regmatch||| regnext||5.005000| regnode_guts||| regpatws|||n regpiece||| regpposixcc||| regprop||| regrepeat||| regtail_study||| regtail||| regtry||| reg||| repeatcpy|||n report_evil_fh||| report_redefined_cv||| report_uninit||| report_wrongway_fh||| require_pv||5.006000| require_tie_mod||| 
restore_magic||| rninstr|||n rpeep||| rsignal_restore||| rsignal_save||| rsignal_state||5.004000| rsignal||5.004000| run_body||| run_user_filter||| runops_debug||5.005000| runops_standard||5.005000| rv2cv_op_cv||5.013006| rvpv_dup||| rxres_free||| rxres_restore||| rxres_save||| safesyscalloc||5.006000|n safesysfree||5.006000|n safesysmalloc||5.006000|n safesysrealloc||5.006000|n same_dirent||| save_I16||5.004000| save_I32||| save_I8||5.006000| save_adelete||5.011000| save_aelem_flags||5.011000| save_aelem||5.004050| save_aliased_sv||| save_alloc||5.006000| save_aptr||| save_ary||| save_bool||5.008001| save_clearsv||| save_delete||| save_destructor_x||5.006000| save_destructor||5.006000| save_freeop||| save_freepv||| save_freesv||| save_generic_pvref||5.006001| save_generic_svref||5.005030| save_gp||5.004000| save_hash||| save_hdelete||5.011000| save_hek_flags|||n save_helem_flags||5.011000| save_helem||5.004050| save_hints||5.010001| save_hptr||| save_int||| save_item||| save_iv||5.005000| save_lines||| save_list||| save_long||| save_magic_flags||| save_mortalizesv||5.007001| save_nogv||| save_op||5.005000| save_padsv_and_mortalize||5.010001| save_pptr||| save_pushi32ptr||5.010001| save_pushptri32ptr||| save_pushptrptr||5.010001| save_pushptr||5.010001| save_re_context||5.006000| save_scalar_at||| save_scalar||| save_set_svflags||5.009000| save_shared_pvref||5.007003| save_sptr||| save_strlen||| save_svref||| save_vptr||5.006000| savepvn||| savepvs||5.009003| savepv||| savesharedpvn||5.009005| savesharedpvs||5.013006| savesharedpv||5.007003| savesharedsvpv||5.013006| savestack_grow_cnt||5.008001| savestack_grow||| savesvpv||5.009002| sawparens||| scalar_mod_type|||n scalarboolean||| scalarkids||| scalarseq||| scalarvoid||| scalar||| scan_bin||5.006000| scan_commit||| scan_const||| scan_formline||| scan_heredoc||| scan_hex||| scan_ident||| scan_inputsymbol||| scan_num||5.007001| scan_oct||| scan_pat||| scan_str||| scan_subst||| scan_trans||| scan_version||5.009001| scan_vstring||5.009005| scan_word||| search_const||| seed||5.008001| sequence_num||| set_ANYOF_arg||| set_caret_X||| set_context||5.006000|n set_numeric_local||5.006000| set_numeric_radix||5.006000| set_numeric_standard||5.006000| set_padlist|||n setdefout||| share_hek_flags||| share_hek||5.004000| should_warn_nl|||n si_dup||| sighandler|||n simplify_sort||| skipspace_flags||| softref2xv||| sortcv_stacked||| sortcv_xsub||| sortcv||| sortsv_flags||5.009003| sortsv||5.007003| space_join_names_mortal||| ss_dup||| ssc_add_range||| ssc_and||| ssc_anything||| ssc_clear_locale|||n ssc_cp_and||| ssc_finalize||| ssc_init||| ssc_intersection||| ssc_is_anything|||n ssc_is_cp_posixl_init|||n ssc_or||| ssc_union||| stack_grow||| start_glob||| start_subparse||5.004000| stdize_locale||| strEQ||| strGE||| strGT||| strLE||| strLT||| strNE||| str_to_version||5.006000| strip_return||| strnEQ||| strnNE||| study_chunk||| sub_crush_depth||| sublex_done||| sublex_push||| sublex_start||| sv_2bool_flags||5.013006| sv_2bool||| sv_2cv||| sv_2io||| sv_2iuv_common||| sv_2iuv_non_preserve||| sv_2iv_flags||5.009001| sv_2iv||| sv_2mortal||| sv_2num||| sv_2nv_flags||5.013001| sv_2pv_flags|5.007002||p sv_2pv_nolen|5.006000||p sv_2pvbyte_nolen|5.006000||p sv_2pvbyte|5.006000||p sv_2pvutf8_nolen||5.006000| sv_2pvutf8||5.006000| sv_2pv||| sv_2uv_flags||5.009001| sv_2uv|5.004000||p sv_add_arena||| sv_add_backref||| sv_backoff|||n sv_bless||| sv_buf_to_ro||| sv_buf_to_rw||| sv_cat_decode||5.008001| sv_catpv_flags||5.013006| sv_catpv_mg|5.004050||p 
sv_catpv_nomg||5.013006| sv_catpvf_mg_nocontext|||pvn sv_catpvf_mg|5.006000|5.004000|pv sv_catpvf_nocontext|||vn sv_catpvf||5.004000|v sv_catpvn_flags||5.007002| sv_catpvn_mg|5.004050||p sv_catpvn_nomg|5.007002||p sv_catpvn||| sv_catpvs_flags||5.013006| sv_catpvs_mg||5.013006| sv_catpvs_nomg||5.013006| sv_catpvs|5.009003||p sv_catpv||| sv_catsv_flags||5.007002| sv_catsv_mg|5.004050||p sv_catsv_nomg|5.007002||p sv_catsv||| sv_chop||| sv_clean_all||| sv_clean_objs||| sv_clear||| sv_cmp_flags||5.013006| sv_cmp_locale_flags||5.013006| sv_cmp_locale||5.004000| sv_cmp||| sv_collxfrm_flags||5.013006| sv_collxfrm||| sv_copypv_flags||5.017002| sv_copypv_nomg||5.017002| sv_copypv||| sv_dec_nomg||5.013002| sv_dec||| sv_del_backref||| sv_derived_from_pvn||5.015004| sv_derived_from_pv||5.015004| sv_derived_from_sv||5.015004| sv_derived_from||5.004000| sv_destroyable||5.010000| sv_display||| sv_does_pvn||5.015004| sv_does_pv||5.015004| sv_does_sv||5.015004| sv_does||5.009004| sv_dump||| sv_dup_common||| sv_dup_inc_multiple||| sv_dup_inc||| sv_dup||| sv_eq_flags||5.013006| sv_eq||| sv_exp_grow||| sv_force_normal_flags||5.007001| sv_force_normal||5.006000| sv_free2||| sv_free_arenas||| sv_free||| sv_get_backrefs||5.021008|n sv_gets||5.003070| sv_grow||| sv_i_ncmp||| sv_inc_nomg||5.013002| sv_inc||| sv_insert_flags||5.010001| sv_insert||| sv_isa||| sv_isobject||| sv_iv||5.005000| sv_kill_backrefs||| sv_len_utf8_nomg||| sv_len_utf8||5.006000| sv_len||| sv_magic_portable|5.021008|5.004000|p sv_magicext_mglob||| sv_magicext||5.007003| sv_magic||| sv_mortalcopy_flags||| sv_mortalcopy||| sv_ncmp||| sv_newmortal||| sv_newref||| sv_nolocking||5.007003| sv_nosharing||5.007003| sv_nounlocking||| sv_nv||5.005000| sv_only_taint_gmagic|||n sv_or_pv_pos_u2b||| sv_peek||5.005000| sv_pos_b2u_flags||5.019003| sv_pos_b2u_midway||| sv_pos_b2u||5.006000| sv_pos_u2b_cached||| sv_pos_u2b_flags||5.011005| sv_pos_u2b_forwards|||n sv_pos_u2b_midway|||n sv_pos_u2b||5.006000| sv_pvbyten_force||5.006000| sv_pvbyten||5.006000| sv_pvbyte||5.006000| sv_pvn_force_flags|5.007002||p sv_pvn_force||| sv_pvn_nomg|5.007003|5.005000|p sv_pvn||5.005000| sv_pvutf8n_force||5.006000| sv_pvutf8n||5.006000| sv_pvutf8||5.006000| sv_pv||5.006000| sv_recode_to_utf8||5.007003| sv_reftype||| sv_ref||| sv_release_COW||| sv_replace||| sv_report_used||| sv_resetpvn||| sv_reset||| sv_rvweaken||5.006000| sv_sethek||| sv_setiv_mg|5.004050||p sv_setiv||| sv_setnv_mg|5.006000||p sv_setnv||| sv_setpv_mg|5.004050||p sv_setpvf_mg_nocontext|||pvn sv_setpvf_mg|5.006000|5.004000|pv sv_setpvf_nocontext|||vn sv_setpvf||5.004000|v sv_setpviv_mg||5.008001| sv_setpviv||5.008001| sv_setpvn_mg|5.004050||p sv_setpvn||| sv_setpvs_mg||5.013006| sv_setpvs|5.009004||p sv_setpv||| sv_setref_iv||| sv_setref_nv||| sv_setref_pvn||| sv_setref_pvs||5.021008| sv_setref_pv||| sv_setref_uv||5.007001| sv_setsv_cow||| sv_setsv_flags||5.007002| sv_setsv_mg|5.004050||p sv_setsv_nomg|5.007002||p sv_setsv||| sv_setuv_mg|5.004050||p sv_setuv|5.004000||p sv_tainted||5.004000| sv_taint||5.004000| sv_true||5.005000| sv_unglob||| sv_uni_display||5.007003| sv_unmagicext|5.013008||p sv_unmagic||| sv_unref_flags||5.007001| sv_unref||| sv_untaint||5.004000| sv_upgrade||| sv_usepvn_flags||5.009004| sv_usepvn_mg|5.004050||p sv_usepvn||| sv_utf8_decode||5.006000| sv_utf8_downgrade||5.006000| sv_utf8_encode||5.006000| sv_utf8_upgrade_flags_grow||5.011000| sv_utf8_upgrade_flags||5.007002| sv_utf8_upgrade_nomg||5.007002| sv_utf8_upgrade||5.007001| sv_uv|5.005000||p sv_vcatpvf_mg|5.006000|5.004000|p 
sv_vcatpvfn_flags||5.017002| sv_vcatpvfn||5.004000| sv_vcatpvf|5.006000|5.004000|p sv_vsetpvf_mg|5.006000|5.004000|p sv_vsetpvfn||5.004000| sv_vsetpvf|5.006000|5.004000|p svtype||| swallow_bom||| swash_fetch||5.007002| swash_init||5.006000| swash_scan_list_line||| swatch_get||| sync_locale||5.021004| sys_init3||5.010000|n sys_init||5.010000|n sys_intern_clear||| sys_intern_dup||| sys_intern_init||| sys_term||5.010000|n taint_env||| taint_proper||| tied_method|||v tmps_grow_p||| toFOLD_uni||5.007003| toFOLD_utf8||5.019001| toFOLD||5.019001| toLOWER_L1||5.019001| toLOWER_LC||5.004000| toLOWER_uni||5.007003| toLOWER_utf8||5.015007| toLOWER||| toTITLE_uni||5.007003| toTITLE_utf8||5.015007| toTITLE||5.019001| toUPPER_uni||5.007003| toUPPER_utf8||5.015007| toUPPER||| to_byte_substr||| to_lower_latin1|||n to_uni_fold||5.007003| to_uni_lower_lc||5.006000| to_uni_lower||5.007003| to_uni_title_lc||5.006000| to_uni_title||5.007003| to_uni_upper_lc||5.006000| to_uni_upper||5.007003| to_utf8_case||5.007003| to_utf8_fold||5.015007| to_utf8_lower||5.015007| to_utf8_substr||| to_utf8_title||5.015007| to_utf8_upper||5.015007| tokenize_use||| tokeq||| tokereport||| too_few_arguments_pv||| too_many_arguments_pv||| translate_substr_offsets|||n try_amagic_bin||| try_amagic_un||| uiv_2buf|||n unlnk||| unpack_rec||| unpack_str||5.007003| unpackstring||5.008001| unreferenced_to_tmp_stack||| unshare_hek_or_pvn||| unshare_hek||| unsharepvn||5.003070| unwind_handler_stack||| update_debugger_info||| upg_version||5.009005| usage||| utf16_textfilter||| utf16_to_utf8_reversed||5.006001| utf16_to_utf8||5.006001| utf8_distance||5.006000| utf8_hop||5.006000|n utf8_length||5.007001| utf8_mg_len_cache_update||| utf8_mg_pos_cache_update||| utf8_to_bytes||5.006001| utf8_to_uvchr_buf||5.015009| utf8_to_uvchr||5.007001| utf8_to_uvuni_buf||5.015009| utf8_to_uvuni||5.007001| utf8n_to_uvchr||5.007001| utf8n_to_uvuni||5.007001| utilize||| uvchr_to_utf8_flags||5.007003| uvchr_to_utf8||5.007001| uvoffuni_to_utf8_flags||5.019004| uvuni_to_utf8_flags||5.007003| uvuni_to_utf8||5.007001| valid_utf8_to_uvchr||5.015009| valid_utf8_to_uvuni||5.015009| validate_proto||| validate_suid||| varname||| vcmp||5.009000| vcroak||5.006000| vdeb||5.007003| vform||5.006000| visit||| vivify_defelem||| vivify_ref||| vload_module|5.006000||p vmess||5.006000| vnewSVpvf|5.006000|5.004000|p vnormal||5.009002| vnumify||5.009000| vstringify||5.009000| vverify||5.009003| vwarner||5.006000| vwarn||5.006000| wait4pid||| warn_nocontext|||vn warn_sv||5.013001| warner_nocontext|||vn warner|5.006000|5.004000|pv warn|||v was_lvalue_sub||| watch||| whichsig_pvn||5.015004| whichsig_pv||5.015004| whichsig_sv||5.015004| whichsig||| win32_croak_not_implemented|||n with_queued_errors||| wrap_op_checker||5.015008| write_to_stderr||| xs_boot_epilog||| xs_handshake|||vn xs_version_bootcheck||| yyerror_pvn||| yyerror_pv||| yyerror||| yylex||| yyparse||| yyunlex||| yywarn||| ); if (exists $opt{'list-unsupported'}) { my $f; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $API{$f}{todo}; print "$f ", '.'x(40-length($f)), " ", format_version($API{$f}{todo}), "\n"; } exit 0; } # Scan for possible replacement candidates my(%replace, %need, %hints, %warnings, %depends); my $replace = 0; my($hint, $define, $function); sub find_api { my $code = shift; $code =~ s{ / (?: \*[^*]*\*+(?:[^$ccs][^*]*\*+)* / | /[^\r\n]*) | "[^"\\]*(?:\\.[^"\\]*)*" | '[^'\\]*(?:\\.[^'\\]*)*' }{}egsx; grep { exists $API{$_} } $code =~ /(\w+)/mg; } while () { if ($hint) { my $h = $hint->[0] eq 'Hint' ? 
\%hints : \%warnings; if (m{^\s*\*\s(.*?)\s*$}) { for (@{$hint->[1]}) { $h->{$_} ||= ''; # suppress warning with older perls $h->{$_} .= "$1\n"; } } else { undef $hint } } $hint = [$1, [split /,?\s+/, $2]] if m{^\s*$rccs\s+(Hint|Warning):\s+(\w+(?:,?\s+\w+)*)\s*$}; if ($define) { if ($define->[1] =~ /\\$/) { $define->[1] .= $_; } else { if (exists $API{$define->[0]} && $define->[1] !~ /^DPPP_\(/) { my @n = find_api($define->[1]); push @{$depends{$define->[0]}}, @n if @n } undef $define; } } $define = [$1, $2] if m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(.*)}; if ($function) { if (/^}/) { if (exists $API{$function->[0]}) { my @n = find_api($function->[1]); push @{$depends{$function->[0]}}, @n if @n } undef $function; } else { $function->[1] .= $_; } } $function = [$1, ''] if m{^DPPP_\(my_(\w+)\)}; $replace = $1 if m{^\s*$rccs\s+Replace:\s+(\d+)\s+$rcce\s*$}; $replace{$2} = $1 if $replace and m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(\w+)}; $replace{$2} = $1 if m{^\s*#\s*define\s+(\w+)(?:\([^)]*\))?\s+(\w+).*$rccs\s+Replace\s+$rcce}; $replace{$1} = $2 if m{^\s*$rccs\s+Replace (\w+) with (\w+)\s+$rcce\s*$}; if (m{^\s*$rccs\s+(\w+(\s*,\s*\w+)*)\s+depends\s+on\s+(\w+(\s*,\s*\w+)*)\s+$rcce\s*$}) { my @deps = map { s/\s+//g; $_ } split /,/, $3; my $d; for $d (map { s/\s+//g; $_ } split /,/, $1) { push @{$depends{$d}}, @deps; } } $need{$1} = 1 if m{^#if\s+defined\(NEED_(\w+)(?:_GLOBAL)?\)}; } for (values %depends) { my %s; $_ = [sort grep !$s{$_}++, @$_]; } if (exists $opt{'api-info'}) { my $f; my $count = 0; my $match = $opt{'api-info'} =~ m!^/(.*)/$! ? $1 : "^\Q$opt{'api-info'}\E\$"; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $f =~ /$match/; print "\n=== $f ===\n\n"; my $info = 0; if ($API{$f}{base} || $API{$f}{todo}) { my $base = format_version($API{$f}{base} || $API{$f}{todo}); print "Supported at least starting from perl-$base.\n"; $info++; } if ($API{$f}{provided}) { my $todo = $API{$f}{todo} ? format_version($API{$f}{todo}) : "5.003"; print "Support by $ppport provided back to perl-$todo.\n"; print "Support needs to be explicitly requested by NEED_$f.\n" if exists $need{$f}; print "Depends on: ", join(', ', @{$depends{$f}}), ".\n" if exists $depends{$f}; print "\n$hints{$f}" if exists $hints{$f}; print "\nWARNING:\n$warnings{$f}" if exists $warnings{$f}; $info++; } print "No portability information available.\n" unless $info; $count++; } $count or print "Found no API matching '$opt{'api-info'}'."; print "\n"; exit 0; } if (exists $opt{'list-provided'}) { my $f; for $f (sort { lc $a cmp lc $b } keys %API) { next unless $API{$f}{provided}; my @flags; push @flags, 'explicit' if exists $need{$f}; push @flags, 'depend' if exists $depends{$f}; push @flags, 'hint' if exists $hints{$f}; push @flags, 'warning' if exists $warnings{$f}; my $flags = @flags ? ' ['.join(', ', @flags).']' : ''; print "$f$flags\n"; } exit 0; } my @files; my @srcext = qw( .xs .c .h .cc .cpp -c.inc -xs.inc ); my $srcext = join '|', map { quotemeta $_ } @srcext; if (@ARGV) { my %seen; for (@ARGV) { if (-e) { if (-f) { push @files, $_ unless $seen{$_}++; } else { warn "'$_' is not a file.\n" } } else { my @new = grep { -f } glob $_ or warn "'$_' does not exist.\n"; push @files, grep { !$seen{$_}++ } @new; } } } else { eval { require File::Find; File::Find::find(sub { $File::Find::name =~ /($srcext)$/i and push @files, $File::Find::name; }, '.'); }; if ($@) { @files = map { glob "*$_" } @srcext; } } if (!@ARGV || $opt{filter}) { my(@in, @out); my %xsc = map { /(.*)\.xs$/ ? 
("$1.c" => 1, "$1.cc" => 1) : () } @files; for (@files) { my $out = exists $xsc{$_} || /\b\Q$ppport\E$/i || !/($srcext)$/i; push @{ $out ? \@out : \@in }, $_; } if (@ARGV && @out) { warning("Skipping the following files (use --nofilter to avoid this):\n| ", join "\n| ", @out); } @files = @in; } die "No input files given!\n" unless @files; my(%files, %global, %revreplace); %revreplace = reverse %replace; my $filename; my $patch_opened = 0; for $filename (@files) { unless (open IN, "<$filename") { warn "Unable to read from $filename: $!\n"; next; } info("Scanning $filename ..."); my $c = do { local $/; }; close IN; my %file = (orig => $c, changes => 0); # Temporarily remove C/XS comments and strings from the code my @ccom; $c =~ s{ ( ^$HS*\#$HS*include\b[^\r\n]+\b(?:\Q$ppport\E|XSUB\.h)\b[^\r\n]* | ^$HS*\#$HS*(?:define|elif|if(?:def)?)\b[^\r\n]* ) | ( ^$HS*\#[^\r\n]* | "[^"\\]*(?:\\.[^"\\]*)*" | '[^'\\]*(?:\\.[^'\\]*)*' | / (?: \*[^*]*\*+(?:[^$ccs][^*]*\*+)* / | /[^\r\n]* ) ) }{ defined $2 and push @ccom, $2; defined $1 ? $1 : "$ccs$#ccom$cce" }mgsex; $file{ccom} = \@ccom; $file{code} = $c; $file{has_inc_ppport} = $c =~ /^$HS*#$HS*include[^\r\n]+\b\Q$ppport\E\b/m; my $func; for $func (keys %API) { my $match = $func; $match .= "|$revreplace{$func}" if exists $revreplace{$func}; if ($c =~ /\b(?:Perl_)?($match)\b/) { $file{uses_replace}{$1}++ if exists $revreplace{$func} && $1 eq $revreplace{$func}; $file{uses_Perl}{$func}++ if $c =~ /\bPerl_$func\b/; if (exists $API{$func}{provided}) { $file{uses_provided}{$func}++; if (!exists $API{$func}{base} || $API{$func}{base} > $opt{'compat-version'}) { $file{uses}{$func}++; my @deps = rec_depend($func); if (@deps) { $file{uses_deps}{$func} = \@deps; for (@deps) { $file{uses}{$_} = 0 unless exists $file{uses}{$_}; } } for ($func, @deps) { $file{needs}{$_} = 'static' if exists $need{$_}; } } } if (exists $API{$func}{todo} && $API{$func}{todo} > $opt{'compat-version'}) { if ($c =~ /\b$func\b/) { $file{uses_todo}{$func}++; } } } } while ($c =~ /^$HS*#$HS*define$HS+(NEED_(\w+?)(_GLOBAL)?)\b/mg) { if (exists $need{$2}) { $file{defined $3 ? 'needed_global' : 'needed_static'}{$2}++; } else { warning("Possibly wrong #define $1 in $filename") } } for (qw(uses needs uses_todo needed_global needed_static)) { for $func (keys %{$file{$_}}) { push @{$global{$_}{$func}}, $filename; } } $files{$filename} = \%file; } # Globally resolve NEED_'s my $need; for $need (keys %{$global{needs}}) { if (@{$global{needs}{$need}} > 1) { my @targets = @{$global{needs}{$need}}; my @t = grep $files{$_}{needed_global}{$need}, @targets; @targets = @t if @t; @t = grep /\.xs$/i, @targets; @targets = @t if @t; my $target = shift @targets; $files{$target}{needs}{$need} = 'global'; for (@{$global{needs}{$need}}) { $files{$_}{needs}{$need} = 'extern' if $_ ne $target; } } } for $filename (@files) { exists $files{$filename} or next; info("=== Analyzing $filename ==="); my %file = %{$files{$filename}}; my $func; my $c = $file{code}; my $warnings = 0; for $func (sort keys %{$file{uses_Perl}}) { if ($API{$func}{varargs}) { unless ($API{$func}{nothxarg}) { my $changes = ($c =~ s{\b(Perl_$func\s*\(\s*)(?!aTHX_?)(\)|[^\s)]*\))} { $1 . ($2 eq ')' ? 'aTHX' : 'aTHX_ ') . 
$2 }ge); if ($changes) { warning("Doesn't pass interpreter argument aTHX to Perl_$func"); $file{changes} += $changes; } } } else { warning("Uses Perl_$func instead of $func"); $file{changes} += ($c =~ s{\bPerl_$func(\s*)\((\s*aTHX_?)?\s*} {$func$1(}g); } } for $func (sort keys %{$file{uses_replace}}) { warning("Uses $func instead of $replace{$func}"); $file{changes} += ($c =~ s/\b$func\b/$replace{$func}/g); } for $func (sort keys %{$file{uses_provided}}) { if ($file{uses}{$func}) { if (exists $file{uses_deps}{$func}) { diag("Uses $func, which depends on ", join(', ', @{$file{uses_deps}{$func}})); } else { diag("Uses $func"); } } $warnings += hint($func); } unless ($opt{quiet}) { for $func (sort keys %{$file{uses_todo}}) { print "*** WARNING: Uses $func, which may not be portable below perl ", format_version($API{$func}{todo}), ", even with '$ppport'\n"; $warnings++; } } for $func (sort keys %{$file{needed_static}}) { my $message = ''; if (not exists $file{uses}{$func}) { $message = "No need to define NEED_$func if $func is never used"; } elsif (exists $file{needs}{$func} && $file{needs}{$func} ne 'static') { $message = "No need to define NEED_$func when already needed globally"; } if ($message) { diag($message); $file{changes} += ($c =~ s/^$HS*#$HS*define$HS+NEED_$func\b.*$LF//mg); } } for $func (sort keys %{$file{needed_global}}) { my $message = ''; if (not exists $global{uses}{$func}) { $message = "No need to define NEED_${func}_GLOBAL if $func is never used"; } elsif (exists $file{needs}{$func}) { if ($file{needs}{$func} eq 'extern') { $message = "No need to define NEED_${func}_GLOBAL when already needed globally"; } elsif ($file{needs}{$func} eq 'static') { $message = "No need to define NEED_${func}_GLOBAL when only used in this file"; } } if ($message) { diag($message); $file{changes} += ($c =~ s/^$HS*#$HS*define$HS+NEED_${func}_GLOBAL\b.*$LF//mg); } } $file{needs_inc_ppport} = keys %{$file{uses}}; if ($file{needs_inc_ppport}) { my $pp = ''; for $func (sort keys %{$file{needs}}) { my $type = $file{needs}{$func}; next if $type eq 'extern'; my $suffix = $type eq 'global' ? '_GLOBAL' : ''; unless (exists $file{"needed_$type"}{$func}) { if ($type eq 'global') { diag("Files [@{$global{needs}{$func}}] need $func, adding global request"); } else { diag("File needs $func, adding static request"); } $pp .= "#define NEED_$func$suffix\n"; } } if ($pp && ($c =~ s/^(?=$HS*#$HS*define$HS+NEED_\w+)/$pp/m)) { $pp = ''; $file{changes}++; } unless ($file{has_inc_ppport}) { diag("Needs to include '$ppport'"); $pp .= qq(#include "$ppport"\n) } if ($pp) { $file{changes} += ($c =~ s/^($HS*#$HS*define$HS+NEED_\w+.*?)^/$1$pp/ms) || ($c =~ s/^(?=$HS*#$HS*include.*\Q$ppport\E)/$pp/m) || ($c =~ s/^($HS*#$HS*include.*XSUB.*\s*?)^/$1$pp/m) || ($c =~ s/^/$pp/); } } else { if ($file{has_inc_ppport}) { diag("No need to include '$ppport'"); $file{changes} += ($c =~ s/^$HS*?#$HS*include.*\Q$ppport\E.*?$LF//m); } } # put back in our C comments my $ix; my $cppc = 0; my @ccom = @{$file{ccom}}; for $ix (0 .. $#ccom) { if (!$opt{cplusplus} && $ccom[$ix] =~ s!^//!!) { $cppc++; $file{changes} += $c =~ s/$rccs$ix$rcce/$ccs$ccom[$ix] $cce/; } else { $c =~ s/$rccs$ix$rcce/$ccom[$ix]/; } } if ($cppc) { my $s = $cppc != 1 ? 's' : ''; warning("Uses $cppc C++ style comment$s, which is not portable"); } my $s = $warnings != 1 ? 's' : ''; my $warn = $warnings ? 
" ($warnings warning$s)" : ''; info("Analysis completed$warn"); if ($file{changes}) { if (exists $opt{copy}) { my $newfile = "$filename$opt{copy}"; if (-e $newfile) { error("'$newfile' already exists, refusing to write copy of '$filename'"); } else { local *F; if (open F, ">$newfile") { info("Writing copy of '$filename' with changes to '$newfile'"); print F $c; close F; } else { error("Cannot open '$newfile' for writing: $!"); } } } elsif (exists $opt{patch} || $opt{changes}) { if (exists $opt{patch}) { unless ($patch_opened) { if (open PATCH, ">$opt{patch}") { $patch_opened = 1; } else { error("Cannot open '$opt{patch}' for writing: $!"); delete $opt{patch}; $opt{changes} = 1; goto fallback; } } mydiff(\*PATCH, $filename, $c); } else { fallback: info("Suggested changes:"); mydiff(\*STDOUT, $filename, $c); } } else { my $s = $file{changes} == 1 ? '' : 's'; info("$file{changes} potentially required change$s detected"); } } else { info("Looks good"); } } close PATCH if $patch_opened; exit 0; sub try_use { eval "use @_;"; return $@ eq '' } sub mydiff { local *F = shift; my($file, $str) = @_; my $diff; if (exists $opt{diff}) { $diff = run_diff($opt{diff}, $file, $str); } if (!defined $diff and try_use('Text::Diff')) { $diff = Text::Diff::diff($file, \$str, { STYLE => 'Unified' }); $diff = <
$tmp") { print F $str; close F; if (open F, "$prog $file $tmp |") { while () { s/\Q$tmp\E/$file.patched/; $diff .= $_; } close F; unlink $tmp; return $diff; } unlink $tmp; } else { error("Cannot open '$tmp' for writing: $!"); } return undef; } sub rec_depend { my($func, $seen) = @_; return () unless exists $depends{$func}; $seen = {%{$seen||{}}}; return () if $seen->{$func}++; my %s; grep !$s{$_}++, map { ($_, rec_depend($_, $seen)) } @{$depends{$func}}; } sub parse_version { my $ver = shift; if ($ver =~ /^(\d+)\.(\d+)\.(\d+)$/) { return ($1, $2, $3); } elsif ($ver !~ /^\d+\.[\d_]+$/) { die "cannot parse version '$ver'\n"; } $ver =~ s/_//g; $ver =~ s/$/000000/; my($r,$v,$s) = $ver =~ /(\d+)\.(\d{3})(\d{3})/; $v = int $v; $s = int $s; if ($r < 5 || ($r == 5 && $v < 6)) { if ($s % 10) { die "cannot parse version '$ver'\n"; } } return ($r, $v, $s); } sub format_version { my $ver = shift; $ver =~ s/$/000000/; my($r,$v,$s) = $ver =~ /(\d+)\.(\d{3})(\d{3})/; $v = int $v; $s = int $s; if ($r < 5 || ($r == 5 && $v < 6)) { if ($s % 10) { die "invalid version '$ver'\n"; } $s /= 10; $ver = sprintf "%d.%03d", $r, $v; $s > 0 and $ver .= sprintf "_%02d", $s; return $ver; } return sprintf "%d.%d.%d", $r, $v, $s; } sub info { $opt{quiet} and return; print @_, "\n"; } sub diag { $opt{quiet} and return; $opt{diag} and print @_, "\n"; } sub warning { $opt{quiet} and return; print "*** ", @_, "\n"; } sub error { print "*** ERROR: ", @_, "\n"; } my %given_hints; my %given_warnings; sub hint { $opt{quiet} and return; my $func = shift; my $rv = 0; if (exists $warnings{$func} && !$given_warnings{$func}++) { my $warn = $warnings{$func}; $warn =~ s!^!*** !mg; print "*** WARNING: $func\n", $warn; $rv++; } if ($opt{hints} && exists $hints{$func} && !$given_hints{$func}++) { my $hint = $hints{$func}; $hint =~ s/^/ /mg; print " --- hint for $func ---\n", $hint; } $rv; } sub usage { my($usage) = do { local(@ARGV,$/)=($0); <> } =~ /^=head\d$HS+SYNOPSIS\s*^(.*?)\s*^=/ms; my %M = ( 'I' => '*' ); $usage =~ s/^\s*perl\s+\S+/$^X $0/; $usage =~ s/([A-Z])<([^>]+)>/$M{$1}$2$M{$1}/g; print < }; my($copy) = $self =~ /^=head\d\s+COPYRIGHT\s*^(.*?)^=\w+/ms; $copy =~ s/^(?=\S+)/ /gms; $self =~ s/^$HS+Do NOT edit.*?(?=^-)/$copy/ms; $self =~ s/^SKIP.*(?=^__DATA__)/SKIP if (\@ARGV && \$ARGV[0] eq '--unstrip') { eval { require Devel::PPPort }; \$@ and die "Cannot require Devel::PPPort, please install.\\n"; if (eval \$Devel::PPPort::VERSION < $VERSION) { die "$0 was originally generated with Devel::PPPort $VERSION.\\n" . "Your Devel::PPPort is only version \$Devel::PPPort::VERSION.\\n" . 
"Please install a newer version, or --unstrip will not work.\\n"; } Devel::PPPort::WriteFile(\$0); exit 0; } print <$0" or die "cannot strip $0: $!\n"; print OUT "$pl$c\n"; exit 0; } __DATA__ */ #ifndef _P_P_PORTABILITY_H_ #define _P_P_PORTABILITY_H_ #ifndef DPPP_NAMESPACE # define DPPP_NAMESPACE DPPP_ #endif #define DPPP_CAT2(x,y) CAT2(x,y) #define DPPP_(name) DPPP_CAT2(DPPP_NAMESPACE, name) #ifndef PERL_REVISION # if !defined(__PATCHLEVEL_H_INCLUDED__) && !(defined(PATCHLEVEL) && defined(SUBVERSION)) # define PERL_PATCHLEVEL_H_IMPLICIT # include # endif # if !(defined(PERL_VERSION) || (defined(SUBVERSION) && defined(PATCHLEVEL))) # include # endif # ifndef PERL_REVISION # define PERL_REVISION (5) /* Replace: 1 */ # define PERL_VERSION PATCHLEVEL # define PERL_SUBVERSION SUBVERSION /* Replace PERL_PATCHLEVEL with PERL_VERSION */ /* Replace: 0 */ # endif #endif #define _dpppDEC2BCD(dec) ((((dec)/100)<<8)|((((dec)%100)/10)<<4)|((dec)%10)) #define PERL_BCDVERSION ((_dpppDEC2BCD(PERL_REVISION)<<24)|(_dpppDEC2BCD(PERL_VERSION)<<12)|_dpppDEC2BCD(PERL_SUBVERSION)) /* It is very unlikely that anyone will try to use this with Perl 6 (or greater), but who knows. */ #if PERL_REVISION != 5 # error ppport.h only works with Perl version 5 #endif /* PERL_REVISION != 5 */ #ifndef dTHR # define dTHR dNOOP #endif #ifndef dTHX # define dTHX dNOOP #endif #ifndef dTHXa # define dTHXa(x) dNOOP #endif #ifndef pTHX # define pTHX void #endif #ifndef pTHX_ # define pTHX_ #endif #ifndef aTHX # define aTHX #endif #ifndef aTHX_ # define aTHX_ #endif #if (PERL_BCDVERSION < 0x5006000) # ifdef USE_THREADS # define aTHXR thr # define aTHXR_ thr, # else # define aTHXR # define aTHXR_ # endif # define dTHXR dTHR #else # define aTHXR aTHX # define aTHXR_ aTHX_ # define dTHXR dTHX #endif #ifndef dTHXoa # define dTHXoa(x) dTHXa(x) #endif #ifdef I_LIMITS # include #endif #ifndef PERL_UCHAR_MIN # define PERL_UCHAR_MIN ((unsigned char)0) #endif #ifndef PERL_UCHAR_MAX # ifdef UCHAR_MAX # define PERL_UCHAR_MAX ((unsigned char)UCHAR_MAX) # else # ifdef MAXUCHAR # define PERL_UCHAR_MAX ((unsigned char)MAXUCHAR) # else # define PERL_UCHAR_MAX ((unsigned char)~(unsigned)0) # endif # endif #endif #ifndef PERL_USHORT_MIN # define PERL_USHORT_MIN ((unsigned short)0) #endif #ifndef PERL_USHORT_MAX # ifdef USHORT_MAX # define PERL_USHORT_MAX ((unsigned short)USHORT_MAX) # else # ifdef MAXUSHORT # define PERL_USHORT_MAX ((unsigned short)MAXUSHORT) # else # ifdef USHRT_MAX # define PERL_USHORT_MAX ((unsigned short)USHRT_MAX) # else # define PERL_USHORT_MAX ((unsigned short)~(unsigned)0) # endif # endif # endif #endif #ifndef PERL_SHORT_MAX # ifdef SHORT_MAX # define PERL_SHORT_MAX ((short)SHORT_MAX) # else # ifdef MAXSHORT /* Often used in */ # define PERL_SHORT_MAX ((short)MAXSHORT) # else # ifdef SHRT_MAX # define PERL_SHORT_MAX ((short)SHRT_MAX) # else # define PERL_SHORT_MAX ((short) (PERL_USHORT_MAX >> 1)) # endif # endif # endif #endif #ifndef PERL_SHORT_MIN # ifdef SHORT_MIN # define PERL_SHORT_MIN ((short)SHORT_MIN) # else # ifdef MINSHORT # define PERL_SHORT_MIN ((short)MINSHORT) # else # ifdef SHRT_MIN # define PERL_SHORT_MIN ((short)SHRT_MIN) # else # define PERL_SHORT_MIN (-PERL_SHORT_MAX - ((3 & -1) == 3)) # endif # endif # endif #endif #ifndef PERL_UINT_MAX # ifdef UINT_MAX # define PERL_UINT_MAX ((unsigned int)UINT_MAX) # else # ifdef MAXUINT # define PERL_UINT_MAX ((unsigned int)MAXUINT) # else # define PERL_UINT_MAX (~(unsigned int)0) # endif # endif #endif #ifndef PERL_UINT_MIN # define PERL_UINT_MIN ((unsigned int)0) 
#endif #ifndef PERL_INT_MAX # ifdef INT_MAX # define PERL_INT_MAX ((int)INT_MAX) # else # ifdef MAXINT /* Often used in */ # define PERL_INT_MAX ((int)MAXINT) # else # define PERL_INT_MAX ((int)(PERL_UINT_MAX >> 1)) # endif # endif #endif #ifndef PERL_INT_MIN # ifdef INT_MIN # define PERL_INT_MIN ((int)INT_MIN) # else # ifdef MININT # define PERL_INT_MIN ((int)MININT) # else # define PERL_INT_MIN (-PERL_INT_MAX - ((3 & -1) == 3)) # endif # endif #endif #ifndef PERL_ULONG_MAX # ifdef ULONG_MAX # define PERL_ULONG_MAX ((unsigned long)ULONG_MAX) # else # ifdef MAXULONG # define PERL_ULONG_MAX ((unsigned long)MAXULONG) # else # define PERL_ULONG_MAX (~(unsigned long)0) # endif # endif #endif #ifndef PERL_ULONG_MIN # define PERL_ULONG_MIN ((unsigned long)0L) #endif #ifndef PERL_LONG_MAX # ifdef LONG_MAX # define PERL_LONG_MAX ((long)LONG_MAX) # else # ifdef MAXLONG # define PERL_LONG_MAX ((long)MAXLONG) # else # define PERL_LONG_MAX ((long) (PERL_ULONG_MAX >> 1)) # endif # endif #endif #ifndef PERL_LONG_MIN # ifdef LONG_MIN # define PERL_LONG_MIN ((long)LONG_MIN) # else # ifdef MINLONG # define PERL_LONG_MIN ((long)MINLONG) # else # define PERL_LONG_MIN (-PERL_LONG_MAX - ((3 & -1) == 3)) # endif # endif #endif #if defined(HAS_QUAD) && (defined(convex) || defined(uts)) # ifndef PERL_UQUAD_MAX # ifdef ULONGLONG_MAX # define PERL_UQUAD_MAX ((unsigned long long)ULONGLONG_MAX) # else # ifdef MAXULONGLONG # define PERL_UQUAD_MAX ((unsigned long long)MAXULONGLONG) # else # define PERL_UQUAD_MAX (~(unsigned long long)0) # endif # endif # endif # ifndef PERL_UQUAD_MIN # define PERL_UQUAD_MIN ((unsigned long long)0L) # endif # ifndef PERL_QUAD_MAX # ifdef LONGLONG_MAX # define PERL_QUAD_MAX ((long long)LONGLONG_MAX) # else # ifdef MAXLONGLONG # define PERL_QUAD_MAX ((long long)MAXLONGLONG) # else # define PERL_QUAD_MAX ((long long) (PERL_UQUAD_MAX >> 1)) # endif # endif # endif # ifndef PERL_QUAD_MIN # ifdef LONGLONG_MIN # define PERL_QUAD_MIN ((long long)LONGLONG_MIN) # else # ifdef MINLONGLONG # define PERL_QUAD_MIN ((long long)MINLONGLONG) # else # define PERL_QUAD_MIN (-PERL_QUAD_MAX - ((3 & -1) == 3)) # endif # endif # endif #endif /* This is based on code from 5.003 perl.h */ #ifdef HAS_QUAD # ifdef cray #ifndef IVTYPE # define IVTYPE int #endif #ifndef IV_MIN # define IV_MIN PERL_INT_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_INT_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_UINT_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_UINT_MAX #endif # ifdef INTSIZE #ifndef IVSIZE # define IVSIZE INTSIZE #endif # endif # else # if defined(convex) || defined(uts) #ifndef IVTYPE # define IVTYPE long long #endif #ifndef IV_MIN # define IV_MIN PERL_QUAD_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_QUAD_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_UQUAD_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_UQUAD_MAX #endif # ifdef LONGLONGSIZE #ifndef IVSIZE # define IVSIZE LONGLONGSIZE #endif # endif # else #ifndef IVTYPE # define IVTYPE long #endif #ifndef IV_MIN # define IV_MIN PERL_LONG_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_LONG_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_ULONG_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_ULONG_MAX #endif # ifdef LONGSIZE #ifndef IVSIZE # define IVSIZE LONGSIZE #endif # endif # endif # endif #ifndef IVSIZE # define IVSIZE 8 #endif #ifndef LONGSIZE # define LONGSIZE 8 #endif #ifndef PERL_QUAD_MIN # define PERL_QUAD_MIN IV_MIN #endif #ifndef PERL_QUAD_MAX # define PERL_QUAD_MAX IV_MAX #endif #ifndef PERL_UQUAD_MIN # define PERL_UQUAD_MIN UV_MIN #endif #ifndef 
PERL_UQUAD_MAX # define PERL_UQUAD_MAX UV_MAX #endif #else #ifndef IVTYPE # define IVTYPE long #endif #ifndef LONGSIZE # define LONGSIZE 4 #endif #ifndef IV_MIN # define IV_MIN PERL_LONG_MIN #endif #ifndef IV_MAX # define IV_MAX PERL_LONG_MAX #endif #ifndef UV_MIN # define UV_MIN PERL_ULONG_MIN #endif #ifndef UV_MAX # define UV_MAX PERL_ULONG_MAX #endif #endif #ifndef IVSIZE # ifdef LONGSIZE # define IVSIZE LONGSIZE # else # define IVSIZE 4 /* A bold guess, but the best we can make. */ # endif #endif #ifndef UVTYPE # define UVTYPE unsigned IVTYPE #endif #ifndef UVSIZE # define UVSIZE IVSIZE #endif #ifndef sv_setuv # define sv_setuv(sv, uv) \ STMT_START { \ UV TeMpUv = uv; \ if (TeMpUv <= IV_MAX) \ sv_setiv(sv, TeMpUv); \ else \ sv_setnv(sv, (double)TeMpUv); \ } STMT_END #endif #ifndef newSVuv # define newSVuv(uv) ((uv) <= IV_MAX ? newSViv((IV)uv) : newSVnv((NV)uv)) #endif #ifndef sv_2uv # define sv_2uv(sv) ((PL_Sv = (sv)), (UV) (SvNOK(PL_Sv) ? SvNV(PL_Sv) : sv_2nv(PL_Sv))) #endif #ifndef SvUVX # define SvUVX(sv) ((UV)SvIVX(sv)) #endif #ifndef SvUVXx # define SvUVXx(sv) SvUVX(sv) #endif #ifndef SvUV # define SvUV(sv) (SvIOK(sv) ? SvUVX(sv) : sv_2uv(sv)) #endif #ifndef SvUVx # define SvUVx(sv) ((PL_Sv = (sv)), SvUV(PL_Sv)) #endif /* Hint: sv_uv * Always use the SvUVx() macro instead of sv_uv(). */ #ifndef sv_uv # define sv_uv(sv) SvUVx(sv) #endif #if !defined(SvUOK) && defined(SvIOK_UV) # define SvUOK(sv) SvIOK_UV(sv) #endif #ifndef XST_mUV # define XST_mUV(i,v) (ST(i) = sv_2mortal(newSVuv(v)) ) #endif #ifndef XSRETURN_UV # define XSRETURN_UV(v) STMT_START { XST_mUV(0,v); XSRETURN(1); } STMT_END #endif #ifndef PUSHu # define PUSHu(u) STMT_START { sv_setuv(TARG, (UV)(u)); PUSHTARG; } STMT_END #endif #ifndef XPUSHu # define XPUSHu(u) STMT_START { sv_setuv(TARG, (UV)(u)); XPUSHTARG; } STMT_END #endif #ifdef HAS_MEMCMP #ifndef memNE # define memNE(s1,s2,l) (memcmp(s1,s2,l)) #endif #ifndef memEQ # define memEQ(s1,s2,l) (!memcmp(s1,s2,l)) #endif #else #ifndef memNE # define memNE(s1,s2,l) (bcmp(s1,s2,l)) #endif #ifndef memEQ # define memEQ(s1,s2,l) (!bcmp(s1,s2,l)) #endif #endif #ifndef memEQs # define memEQs(s1, l, s2) \ (sizeof(s2)-1 == l && memEQ(s1, (s2 ""), (sizeof(s2)-1))) #endif #ifndef memNEs # define memNEs(s1, l, s2) !memEQs(s1, l, s2) #endif #ifndef MoveD # define MoveD(s,d,n,t) memmove((char*)(d),(char*)(s), (n) * sizeof(t)) #endif #ifndef CopyD # define CopyD(s,d,n,t) memcpy((char*)(d),(char*)(s), (n) * sizeof(t)) #endif #ifdef HAS_MEMSET #ifndef ZeroD # define ZeroD(d,n,t) memzero((char*)(d), (n) * sizeof(t)) #endif #else #ifndef ZeroD # define ZeroD(d,n,t) ((void)memzero((char*)(d), (n) * sizeof(t)), d) #endif #endif #ifndef PoisonWith # define PoisonWith(d,n,t,b) (void)memset((char*)(d), (U8)(b), (n) * sizeof(t)) #endif #ifndef PoisonNew # define PoisonNew(d,n,t) PoisonWith(d,n,t,0xAB) #endif #ifndef PoisonFree # define PoisonFree(d,n,t) PoisonWith(d,n,t,0xEF) #endif #ifndef Poison # define Poison(d,n,t) PoisonFree(d,n,t) #endif #ifndef Newx # define Newx(v,n,t) New(0,v,n,t) #endif #ifndef Newxc # define Newxc(v,n,t,c) Newc(0,v,n,t,c) #endif #ifndef Newxz # define Newxz(v,n,t) Newz(0,v,n,t) #endif #ifndef PERL_UNUSED_DECL # ifdef HASATTRIBUTE # if (defined(__GNUC__) && defined(__cplusplus)) || defined(__INTEL_COMPILER) # define PERL_UNUSED_DECL # else # define PERL_UNUSED_DECL __attribute__((unused)) # endif # else # define PERL_UNUSED_DECL # endif #endif #ifndef PERL_UNUSED_ARG # if defined(lint) && defined(S_SPLINT_S) /* www.splint.org */ # include # define PERL_UNUSED_ARG(x) 
NOTE(ARGUNUSED(x)) # else # define PERL_UNUSED_ARG(x) ((void)x) # endif #endif #ifndef PERL_UNUSED_VAR # define PERL_UNUSED_VAR(x) ((void)x) #endif #ifndef PERL_UNUSED_CONTEXT # ifdef USE_ITHREADS # define PERL_UNUSED_CONTEXT PERL_UNUSED_ARG(my_perl) # else # define PERL_UNUSED_CONTEXT # endif #endif #ifndef NOOP # define NOOP /*EMPTY*/(void)0 #endif #ifndef dNOOP # define dNOOP extern int /*@unused@*/ Perl___notused PERL_UNUSED_DECL #endif #ifndef NVTYPE # if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) # define NVTYPE long double # else # define NVTYPE double # endif typedef NVTYPE NV; #endif #ifndef INT2PTR # if (IVSIZE == PTRSIZE) && (UVSIZE == PTRSIZE) # define PTRV UV # define INT2PTR(any,d) (any)(d) # else # if PTRSIZE == LONGSIZE # define PTRV unsigned long # else # define PTRV unsigned # endif # define INT2PTR(any,d) (any)(PTRV)(d) # endif #endif #ifndef PTR2ul # if PTRSIZE == LONGSIZE # define PTR2ul(p) (unsigned long)(p) # else # define PTR2ul(p) INT2PTR(unsigned long,p) # endif #endif #ifndef PTR2nat # define PTR2nat(p) (PTRV)(p) #endif #ifndef NUM2PTR # define NUM2PTR(any,d) (any)PTR2nat(d) #endif #ifndef PTR2IV # define PTR2IV(p) INT2PTR(IV,p) #endif #ifndef PTR2UV # define PTR2UV(p) INT2PTR(UV,p) #endif #ifndef PTR2NV # define PTR2NV(p) NUM2PTR(NV,p) #endif #undef START_EXTERN_C #undef END_EXTERN_C #undef EXTERN_C #ifdef __cplusplus # define START_EXTERN_C extern "C" { # define END_EXTERN_C } # define EXTERN_C extern "C" #else # define START_EXTERN_C # define END_EXTERN_C # define EXTERN_C extern #endif #if defined(PERL_GCC_PEDANTIC) # ifndef PERL_GCC_BRACE_GROUPS_FORBIDDEN # define PERL_GCC_BRACE_GROUPS_FORBIDDEN # endif #endif #if defined(__GNUC__) && !defined(PERL_GCC_BRACE_GROUPS_FORBIDDEN) && !defined(__cplusplus) # ifndef PERL_USE_GCC_BRACE_GROUPS # define PERL_USE_GCC_BRACE_GROUPS # endif #endif #undef STMT_START #undef STMT_END #ifdef PERL_USE_GCC_BRACE_GROUPS # define STMT_START (void)( /* gcc supports ``({ STATEMENTS; })'' */ # define STMT_END ) #else # if defined(VOIDFLAGS) && (VOIDFLAGS) && (defined(sun) || defined(__sun__)) && !defined(__GNUC__) # define STMT_START if (1) # define STMT_END else (void)0 # else # define STMT_START do # define STMT_END while (0) # endif #endif #ifndef boolSV # define boolSV(b) ((b) ? &PL_sv_yes : &PL_sv_no) #endif /* DEFSV appears first in 5.004_56 */ #ifndef DEFSV # define DEFSV GvSV(PL_defgv) #endif #ifndef SAVE_DEFSV # define SAVE_DEFSV SAVESPTR(GvSV(PL_defgv)) #endif #ifndef DEFSV_set # define DEFSV_set(sv) (DEFSV = (sv)) #endif /* Older perls (<=5.003) lack AvFILLp */ #ifndef AvFILLp # define AvFILLp AvFILL #endif #ifndef ERRSV # define ERRSV get_sv("@",FALSE) #endif /* Hint: gv_stashpvn * This function's backport doesn't support the length parameter, but * rather ignores it. Portability can only be ensured if the length * parameter is used for speed reasons, but the length can always be * correctly computed from the string argument. 
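 * A minimal illustration of that constraint (the package name is made up):
 *
 *     HV *stash = gv_stashpvn("My::Package", 11, 0);
 *
 * is portable because the length matches the full NUL-terminated string;
 * passing a shorter length to select only a prefix would not be, since
 * the fallback below ignores the length argument entirely.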
*/ #ifndef gv_stashpvn # define gv_stashpvn(str,len,create) gv_stashpv(str,create) #endif /* Replace: 1 */ #ifndef get_cv # define get_cv perl_get_cv #endif #ifndef get_sv # define get_sv perl_get_sv #endif #ifndef get_av # define get_av perl_get_av #endif #ifndef get_hv # define get_hv perl_get_hv #endif /* Replace: 0 */ #ifndef dUNDERBAR # define dUNDERBAR dNOOP #endif #ifndef UNDERBAR # define UNDERBAR DEFSV #endif #ifndef dAX # define dAX I32 ax = MARK - PL_stack_base + 1 #endif #ifndef dITEMS # define dITEMS I32 items = SP - MARK #endif #ifndef dXSTARG # define dXSTARG SV * targ = sv_newmortal() #endif #ifndef dAXMARK # define dAXMARK I32 ax = POPMARK; \ register SV ** const mark = PL_stack_base + ax++ #endif #ifndef XSprePUSH # define XSprePUSH (sp = PL_stack_base + ax - 1) #endif #if (PERL_BCDVERSION < 0x5005000) # undef XSRETURN # define XSRETURN(off) \ STMT_START { \ PL_stack_sp = PL_stack_base + ax + ((off) - 1); \ return; \ } STMT_END #endif #ifndef XSPROTO # define XSPROTO(name) void name(pTHX_ CV* cv) #endif #ifndef SVfARG # define SVfARG(p) ((void*)(p)) #endif #ifndef PERL_ABS # define PERL_ABS(x) ((x) < 0 ? -(x) : (x)) #endif #ifndef dVAR # define dVAR dNOOP #endif #ifndef SVf # define SVf "_" #endif #ifndef UTF8_MAXBYTES # define UTF8_MAXBYTES UTF8_MAXLEN #endif #ifndef CPERLscope # define CPERLscope(x) x #endif #ifndef PERL_HASH # define PERL_HASH(hash,str,len) \ STMT_START { \ const char *s_PeRlHaSh = str; \ I32 i_PeRlHaSh = len; \ U32 hash_PeRlHaSh = 0; \ while (i_PeRlHaSh--) \ hash_PeRlHaSh = hash_PeRlHaSh * 33 + *s_PeRlHaSh++; \ (hash) = hash_PeRlHaSh; \ } STMT_END #endif #ifndef PERLIO_FUNCS_DECL # ifdef PERLIO_FUNCS_CONST # define PERLIO_FUNCS_DECL(funcs) const PerlIO_funcs funcs # define PERLIO_FUNCS_CAST(funcs) (PerlIO_funcs*)(funcs) # else # define PERLIO_FUNCS_DECL(funcs) PerlIO_funcs funcs # define PERLIO_FUNCS_CAST(funcs) (funcs) # endif #endif /* provide these typedefs for older perls */ #if (PERL_BCDVERSION < 0x5009003) # ifdef ARGSproto typedef OP* (CPERLscope(*Perl_ppaddr_t))(ARGSproto); # else typedef OP* (CPERLscope(*Perl_ppaddr_t))(pTHX); # endif typedef OP* (CPERLscope(*Perl_check_t)) (pTHX_ OP*); #endif #ifndef isPSXSPC # define isPSXSPC(c) (isSPACE(c) || (c) == '\v') #endif #ifndef isBLANK # define isBLANK(c) ((c) == ' ' || (c) == '\t') #endif #ifdef EBCDIC #ifndef isALNUMC # define isALNUMC(c) isalnum(c) #endif #ifndef isASCII # define isASCII(c) isascii(c) #endif #ifndef isCNTRL # define isCNTRL(c) iscntrl(c) #endif #ifndef isGRAPH # define isGRAPH(c) isgraph(c) #endif #ifndef isPRINT # define isPRINT(c) isprint(c) #endif #ifndef isPUNCT # define isPUNCT(c) ispunct(c) #endif #ifndef isXDIGIT # define isXDIGIT(c) isxdigit(c) #endif #else # if (PERL_BCDVERSION < 0x5010000) /* Hint: isPRINT * The implementation in older perl versions includes all of the * isSPACE() characters, which is wrong. The version provided by * Devel::PPPort always overrides a present buggy version. 
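 * For example, a character such as '\n' (matched by isSPACE()) was also
 * reported as printable by those versions, whereas the replacement
 * defined below restricts the class to the range 32..126.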
*/ # undef isPRINT # endif #ifdef HAS_QUAD # ifdef U64TYPE # define WIDEST_UTYPE U64TYPE # else # define WIDEST_UTYPE Quad_t # endif #else # define WIDEST_UTYPE U32 #endif #ifndef isALNUMC # define isALNUMC(c) (isALPHA(c) || isDIGIT(c)) #endif #ifndef isASCII # define isASCII(c) ((WIDEST_UTYPE) (c) <= 127) #endif #ifndef isCNTRL # define isCNTRL(c) ((WIDEST_UTYPE) (c) < ' ' || (c) == 127) #endif #ifndef isGRAPH # define isGRAPH(c) (isALNUM(c) || isPUNCT(c)) #endif #ifndef isPRINT # define isPRINT(c) (((c) >= 32 && (c) < 127)) #endif #ifndef isPUNCT # define isPUNCT(c) (((c) >= 33 && (c) <= 47) || ((c) >= 58 && (c) <= 64) || ((c) >= 91 && (c) <= 96) || ((c) >= 123 && (c) <= 126)) #endif #ifndef isXDIGIT # define isXDIGIT(c) (isDIGIT(c) || ((c) >= 'a' && (c) <= 'f') || ((c) >= 'A' && (c) <= 'F')) #endif #endif /* Until we figure out how to support this in older perls... */ #if (PERL_BCDVERSION >= 0x5008000) #ifndef HeUTF8 # define HeUTF8(he) ((HeKLEN(he) == HEf_SVKEY) ? \ SvUTF8(HeKEY_sv(he)) : \ (U32)HeKUTF8(he)) #endif #endif #ifndef PERL_SIGNALS_UNSAFE_FLAG #define PERL_SIGNALS_UNSAFE_FLAG 0x0001 #if (PERL_BCDVERSION < 0x5008000) # define D_PPP_PERL_SIGNALS_INIT PERL_SIGNALS_UNSAFE_FLAG #else # define D_PPP_PERL_SIGNALS_INIT 0 #endif #if defined(NEED_PL_signals) static U32 DPPP_(my_PL_signals) = D_PPP_PERL_SIGNALS_INIT; #elif defined(NEED_PL_signals_GLOBAL) U32 DPPP_(my_PL_signals) = D_PPP_PERL_SIGNALS_INIT; #else extern U32 DPPP_(my_PL_signals); #endif #define PL_signals DPPP_(my_PL_signals) #endif /* Hint: PL_ppaddr * Calling an op via PL_ppaddr requires passing a context argument * for threaded builds. Since the context argument is different for * 5.005 perls, you can use aTHXR (supplied by ppport.h), which will * automatically be defined as the correct argument. */ #if (PERL_BCDVERSION <= 0x5005005) /* Replace: 1 */ # define PL_ppaddr ppaddr # define PL_no_modify no_modify /* Replace: 0 */ #endif #if (PERL_BCDVERSION <= 0x5004005) /* Replace: 1 */ # define PL_DBsignal DBsignal # define PL_DBsingle DBsingle # define PL_DBsub DBsub # define PL_DBtrace DBtrace # define PL_Sv Sv # define PL_bufend bufend # define PL_bufptr bufptr # define PL_compiling compiling # define PL_copline copline # define PL_curcop curcop # define PL_curstash curstash # define PL_debstash debstash # define PL_defgv defgv # define PL_diehook diehook # define PL_dirty dirty # define PL_dowarn dowarn # define PL_errgv errgv # define PL_error_count error_count # define PL_expect expect # define PL_hexdigit hexdigit # define PL_hints hints # define PL_in_my in_my # define PL_laststatval laststatval # define PL_lex_state lex_state # define PL_lex_stuff lex_stuff # define PL_linestr linestr # define PL_na na # define PL_perl_destruct_level perl_destruct_level # define PL_perldb perldb # define PL_rsfp_filters rsfp_filters # define PL_rsfp rsfp # define PL_stack_base stack_base # define PL_stack_sp stack_sp # define PL_statcache statcache # define PL_stdingv stdingv # define PL_sv_arenaroot sv_arenaroot # define PL_sv_no sv_no # define PL_sv_undef sv_undef # define PL_sv_yes sv_yes # define PL_tainted tainted # define PL_tainting tainting # define PL_tokenbuf tokenbuf /* Replace: 0 */ #endif /* Warning: PL_parser * For perl versions earlier than 5.9.5, this is an always * non-NULL dummy. Also, it cannot be dereferenced. Don't * use it if you can avoid it and unless you absolutely know * what you're doing. 
* If you always check that PL_parser is non-NULL, you can * define DPPP_PL_parser_NO_DUMMY to avoid the creation of * a dummy parser structure. */ #if (PERL_BCDVERSION >= 0x5009005) # ifdef DPPP_PL_parser_NO_DUMMY # define D_PPP_my_PL_parser_var(var) ((PL_parser ? PL_parser : \ (croak("panic: PL_parser == NULL in %s:%d", \ __FILE__, __LINE__), (yy_parser *) NULL))->var) # else # ifdef DPPP_PL_parser_NO_DUMMY_WARNING # define D_PPP_parser_dummy_warning(var) # else # define D_PPP_parser_dummy_warning(var) \ warn("warning: dummy PL_" #var " used in %s:%d", __FILE__, __LINE__), # endif # define D_PPP_my_PL_parser_var(var) ((PL_parser ? PL_parser : \ (D_PPP_parser_dummy_warning(var) &DPPP_(dummy_PL_parser)))->var) #if defined(NEED_PL_parser) static yy_parser DPPP_(dummy_PL_parser); #elif defined(NEED_PL_parser_GLOBAL) yy_parser DPPP_(dummy_PL_parser); #else extern yy_parser DPPP_(dummy_PL_parser); #endif # endif /* PL_expect, PL_copline, PL_rsfp, PL_rsfp_filters, PL_linestr, PL_bufptr, PL_bufend, PL_lex_state, PL_lex_stuff, PL_tokenbuf depends on PL_parser */ /* Warning: PL_expect, PL_copline, PL_rsfp, PL_rsfp_filters, PL_linestr, PL_bufptr, PL_bufend, PL_lex_state, PL_lex_stuff, PL_tokenbuf * Do not use this variable unless you know exactly what you're * doing. It is internal to the perl parser and may change or even * be removed in the future. As of perl 5.9.5, you have to check * for (PL_parser != NULL) for this variable to have any effect. * An always non-NULL PL_parser dummy is provided for earlier * perl versions. * If PL_parser is NULL when you try to access this variable, a * dummy is being accessed instead and a warning is issued unless * you define DPPP_PL_parser_NO_DUMMY_WARNING. * If DPPP_PL_parser_NO_DUMMY is defined, the code trying to access * this variable will croak with a panic message. 
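 * A minimal sketch of that check (purely illustrative):
 *
 *     if (PL_parser) {
 *         line_t entry_line = PL_copline;
 *     }
 *
 * On perls older than 5.9.5, ppport.h defines PL_parser as a constant
 * non-NULL value (see below), so the same guard compiles everywhere and
 * simply always takes the branch there.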
*/ # define PL_expect D_PPP_my_PL_parser_var(expect) # define PL_copline D_PPP_my_PL_parser_var(copline) # define PL_rsfp D_PPP_my_PL_parser_var(rsfp) # define PL_rsfp_filters D_PPP_my_PL_parser_var(rsfp_filters) # define PL_linestr D_PPP_my_PL_parser_var(linestr) # define PL_bufptr D_PPP_my_PL_parser_var(bufptr) # define PL_bufend D_PPP_my_PL_parser_var(bufend) # define PL_lex_state D_PPP_my_PL_parser_var(lex_state) # define PL_lex_stuff D_PPP_my_PL_parser_var(lex_stuff) # define PL_tokenbuf D_PPP_my_PL_parser_var(tokenbuf) # define PL_in_my D_PPP_my_PL_parser_var(in_my) # define PL_in_my_stash D_PPP_my_PL_parser_var(in_my_stash) # define PL_error_count D_PPP_my_PL_parser_var(error_count) #else /* ensure that PL_parser != NULL and cannot be dereferenced */ # define PL_parser ((void *) 1) #endif #ifndef mPUSHs # define mPUSHs(s) PUSHs(sv_2mortal(s)) #endif #ifndef PUSHmortal # define PUSHmortal PUSHs(sv_newmortal()) #endif #ifndef mPUSHp # define mPUSHp(p,l) sv_setpvn(PUSHmortal, (p), (l)) #endif #ifndef mPUSHn # define mPUSHn(n) sv_setnv(PUSHmortal, (NV)(n)) #endif #ifndef mPUSHi # define mPUSHi(i) sv_setiv(PUSHmortal, (IV)(i)) #endif #ifndef mPUSHu # define mPUSHu(u) sv_setuv(PUSHmortal, (UV)(u)) #endif #ifndef mXPUSHs # define mXPUSHs(s) XPUSHs(sv_2mortal(s)) #endif #ifndef XPUSHmortal # define XPUSHmortal XPUSHs(sv_newmortal()) #endif #ifndef mXPUSHp # define mXPUSHp(p,l) STMT_START { EXTEND(sp,1); sv_setpvn(PUSHmortal, (p), (l)); } STMT_END #endif #ifndef mXPUSHn # define mXPUSHn(n) STMT_START { EXTEND(sp,1); sv_setnv(PUSHmortal, (NV)(n)); } STMT_END #endif #ifndef mXPUSHi # define mXPUSHi(i) STMT_START { EXTEND(sp,1); sv_setiv(PUSHmortal, (IV)(i)); } STMT_END #endif #ifndef mXPUSHu # define mXPUSHu(u) STMT_START { EXTEND(sp,1); sv_setuv(PUSHmortal, (UV)(u)); } STMT_END #endif /* Replace: 1 */ #ifndef call_sv # define call_sv perl_call_sv #endif #ifndef call_pv # define call_pv perl_call_pv #endif #ifndef call_argv # define call_argv perl_call_argv #endif #ifndef call_method # define call_method perl_call_method #endif #ifndef eval_sv # define eval_sv perl_eval_sv #endif /* Replace: 0 */ #ifndef PERL_LOADMOD_DENY # define PERL_LOADMOD_DENY 0x1 #endif #ifndef PERL_LOADMOD_NOIMPORT # define PERL_LOADMOD_NOIMPORT 0x2 #endif #ifndef PERL_LOADMOD_IMPORT_OPS # define PERL_LOADMOD_IMPORT_OPS 0x4 #endif #ifndef G_METHOD # define G_METHOD 64 # ifdef call_sv # undef call_sv # endif # if (PERL_BCDVERSION < 0x5006000) # define call_sv(sv, flags) ((flags) & G_METHOD ? perl_call_method((char *) SvPV_nolen_const(sv), \ (flags) & ~G_METHOD) : perl_call_sv(sv, flags)) # else # define call_sv(sv, flags) ((flags) & G_METHOD ? 
Perl_call_method(aTHX_ (char *) SvPV_nolen_const(sv), \ (flags) & ~G_METHOD) : Perl_call_sv(aTHX_ sv, flags)) # endif #endif /* Replace perl_eval_pv with eval_pv */ #ifndef eval_pv #if defined(NEED_eval_pv) static SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error); static #else extern SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error); #endif #ifdef eval_pv # undef eval_pv #endif #define eval_pv(a,b) DPPP_(my_eval_pv)(aTHX_ a,b) #define Perl_eval_pv DPPP_(my_eval_pv) #if defined(NEED_eval_pv) || defined(NEED_eval_pv_GLOBAL) SV* DPPP_(my_eval_pv)(char *p, I32 croak_on_error) { dSP; SV* sv = newSVpv(p, 0); PUSHMARK(sp); eval_sv(sv, G_SCALAR); SvREFCNT_dec(sv); SPAGAIN; sv = POPs; PUTBACK; if (croak_on_error && SvTRUE(GvSV(errgv))) croak(SvPVx(GvSV(errgv), na)); return sv; } #endif #endif #ifndef vload_module #if defined(NEED_vload_module) static void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args); static #else extern void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args); #endif #ifdef vload_module # undef vload_module #endif #define vload_module(a,b,c,d) DPPP_(my_vload_module)(aTHX_ a,b,c,d) #define Perl_vload_module DPPP_(my_vload_module) #if defined(NEED_vload_module) || defined(NEED_vload_module_GLOBAL) void DPPP_(my_vload_module)(U32 flags, SV *name, SV *ver, va_list *args) { dTHR; dVAR; OP *veop, *imop; OP * const modname = newSVOP(OP_CONST, 0, name); /* 5.005 has a somewhat hacky force_normal that doesn't croak on SvREADONLY() if PL_compling is true. Current perls take care in ck_require() to correctly turn off SvREADONLY before calling force_normal_flags(). This seems a better fix than fudging PL_compling */ SvREADONLY_off(((SVOP*)modname)->op_sv); modname->op_private |= OPpCONST_BARE; if (ver) { veop = newSVOP(OP_CONST, 0, ver); } else veop = NULL; if (flags & PERL_LOADMOD_NOIMPORT) { imop = sawparens(newNULLLIST()); } else if (flags & PERL_LOADMOD_IMPORT_OPS) { imop = va_arg(*args, OP*); } else { SV *sv; imop = NULL; sv = va_arg(*args, SV*); while (sv) { imop = append_elem(OP_LIST, imop, newSVOP(OP_CONST, 0, sv)); sv = va_arg(*args, SV*); } } { const line_t ocopline = PL_copline; COP * const ocurcop = PL_curcop; const int oexpect = PL_expect; #if (PERL_BCDVERSION >= 0x5004000) utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(FALSE, 0), veop, modname, imop); #elif (PERL_BCDVERSION > 0x5003000) utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(), veop, modname, imop); #else utilize(!(flags & PERL_LOADMOD_DENY), start_subparse(), modname, imop); #endif PL_expect = oexpect; PL_copline = ocopline; PL_curcop = ocurcop; } } #endif #endif #ifndef load_module #if defined(NEED_load_module) static void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...); static #else extern void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...); #endif #ifdef load_module # undef load_module #endif #define load_module DPPP_(my_load_module) #define Perl_load_module DPPP_(my_load_module) #if defined(NEED_load_module) || defined(NEED_load_module_GLOBAL) void DPPP_(my_load_module)(U32 flags, SV *name, SV *ver, ...) 
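/* Illustrative use of the load_module() fallback implemented here, in the
 * form documented for the core API (the module name is hypothetical):
 *
 *     load_module(PERL_LOADMOD_NOIMPORT, newSVpv("Some::Module", 0), NULL);
 *
 * The NULL ver argument skips the version check; without
 * PERL_LOADMOD_NOIMPORT the trailing arguments are a NULL-terminated list
 * of SVs naming the imports (or a single OP* with PERL_LOADMOD_IMPORT_OPS).
 */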
{ va_list args; va_start(args, ver); vload_module(flags, name, ver, &args); va_end(args); } #endif #endif #ifndef newRV_inc # define newRV_inc(sv) newRV(sv) /* Replace */ #endif #ifndef newRV_noinc #if defined(NEED_newRV_noinc) static SV * DPPP_(my_newRV_noinc)(SV *sv); static #else extern SV * DPPP_(my_newRV_noinc)(SV *sv); #endif #ifdef newRV_noinc # undef newRV_noinc #endif #define newRV_noinc(a) DPPP_(my_newRV_noinc)(aTHX_ a) #define Perl_newRV_noinc DPPP_(my_newRV_noinc) #if defined(NEED_newRV_noinc) || defined(NEED_newRV_noinc_GLOBAL) SV * DPPP_(my_newRV_noinc)(SV *sv) { SV *rv = (SV *)newRV(sv); SvREFCNT_dec(sv); return rv; } #endif #endif /* Hint: newCONSTSUB * Returns a CV* as of perl-5.7.1. This return value is not supported * by Devel::PPPort. */ /* newCONSTSUB from IO.xs is in the core starting with 5.004_63 */ #if (PERL_BCDVERSION < 0x5004063) && (PERL_BCDVERSION != 0x5004005) #if defined(NEED_newCONSTSUB) static void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv); static #else extern void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv); #endif #ifdef newCONSTSUB # undef newCONSTSUB #endif #define newCONSTSUB(a,b,c) DPPP_(my_newCONSTSUB)(aTHX_ a,b,c) #define Perl_newCONSTSUB DPPP_(my_newCONSTSUB) #if defined(NEED_newCONSTSUB) || defined(NEED_newCONSTSUB_GLOBAL) /* This is just a trick to avoid a dependency of newCONSTSUB on PL_parser */ /* (There's no PL_parser in perl < 5.005, so this is completely safe) */ #define D_PPP_PL_copline PL_copline void DPPP_(my_newCONSTSUB)(HV *stash, const char *name, SV *sv) { U32 oldhints = PL_hints; HV *old_cop_stash = PL_curcop->cop_stash; HV *old_curstash = PL_curstash; line_t oldline = PL_curcop->cop_line; PL_curcop->cop_line = D_PPP_PL_copline; PL_hints &= ~HINT_BLOCK_SCOPE; if (stash) PL_curstash = PL_curcop->cop_stash = stash; newSUB( #if (PERL_BCDVERSION < 0x5003022) start_subparse(), #elif (PERL_BCDVERSION == 0x5003022) start_subparse(0), #else /* 5.003_23 onwards */ start_subparse(FALSE, 0), #endif newSVOP(OP_CONST, 0, newSVpv((char *) name, 0)), newSVOP(OP_CONST, 0, &PL_sv_no), /* SvPV(&PL_sv_no) == "" -- GMB */ newSTATEOP(0, Nullch, newSVOP(OP_CONST, 0, sv)) ); PL_hints = oldhints; PL_curcop->cop_stash = old_cop_stash; PL_curstash = old_curstash; PL_curcop->cop_line = oldline; } #endif #endif /* * Boilerplate macros for initializing and accessing interpreter-local * data from C. All statics in extensions should be reworked to use * this, if you want to make the extension thread-safe. See ext/re/re.xs * for an example of the use of these macros. * * Code that uses these macros is responsible for the following: * 1. #define MY_CXT_KEY to a unique string, e.g. "DynaLoader_guts" * 2. Declare a typedef named my_cxt_t that is a structure that contains * all the data that needs to be interpreter-local. * 3. Use the START_MY_CXT macro after the declaration of my_cxt_t. * 4. Use the MY_CXT_INIT macro such that it is called exactly once * (typically put in the BOOT: section). * 5. Use the members of the my_cxt_t structure everywhere as * MY_CXT.member. * 6. Use the dMY_CXT macro (a declaration) in all the functions that * access MY_CXT. */ #if defined(MULTIPLICITY) || defined(PERL_OBJECT) || \ defined(PERL_CAPI) || defined(PERL_IMPLICIT_CONTEXT) #ifndef START_MY_CXT /* This must appear in all extensions that define a my_cxt_t structure, * right after the definition (i.e. at file scope). The non-threads * case below uses it to declare the data as static. 
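 * A minimal sketch of the six steps listed above (the module and member
 * names are illustrative only):
 *
 *     #define MY_CXT_KEY "Some::Module::_guts" XS_VERSION
 *
 *     typedef struct {
 *         int request_count;
 *     } my_cxt_t;
 *
 *     START_MY_CXT
 *
 *     static int bump_request_count(pTHX)
 *     {
 *         dMY_CXT;
 *         return ++MY_CXT.request_count;
 *     }
 *
 * with MY_CXT_INIT run exactly once, typically from the XS BOOT: section:
 *
 *     BOOT:
 *         {
 *             MY_CXT_INIT;
 *             MY_CXT.request_count = 0;
 *         }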
*/ #define START_MY_CXT #if (PERL_BCDVERSION < 0x5004068) /* Fetches the SV that keeps the per-interpreter data. */ #define dMY_CXT_SV \ SV *my_cxt_sv = get_sv(MY_CXT_KEY, FALSE) #else /* >= perl5.004_68 */ #define dMY_CXT_SV \ SV *my_cxt_sv = *hv_fetch(PL_modglobal, MY_CXT_KEY, \ sizeof(MY_CXT_KEY)-1, TRUE) #endif /* < perl5.004_68 */ /* This declaration should be used within all functions that use the * interpreter-local data. */ #define dMY_CXT \ dMY_CXT_SV; \ my_cxt_t *my_cxtp = INT2PTR(my_cxt_t*,SvUV(my_cxt_sv)) /* Creates and zeroes the per-interpreter data. * (We allocate my_cxtp in a Perl SV so that it will be released when * the interpreter goes away.) */ #define MY_CXT_INIT \ dMY_CXT_SV; \ /* newSV() allocates one more than needed */ \ my_cxt_t *my_cxtp = (my_cxt_t*)SvPVX(newSV(sizeof(my_cxt_t)-1));\ Zero(my_cxtp, 1, my_cxt_t); \ sv_setuv(my_cxt_sv, PTR2UV(my_cxtp)) /* This macro must be used to access members of the my_cxt_t structure. * e.g. MYCXT.some_data */ #define MY_CXT (*my_cxtp) /* Judicious use of these macros can reduce the number of times dMY_CXT * is used. Use is similar to pTHX, aTHX etc. */ #define pMY_CXT my_cxt_t *my_cxtp #define pMY_CXT_ pMY_CXT, #define _pMY_CXT ,pMY_CXT #define aMY_CXT my_cxtp #define aMY_CXT_ aMY_CXT, #define _aMY_CXT ,aMY_CXT #endif /* START_MY_CXT */ #ifndef MY_CXT_CLONE /* Clones the per-interpreter data. */ #define MY_CXT_CLONE \ dMY_CXT_SV; \ my_cxt_t *my_cxtp = (my_cxt_t*)SvPVX(newSV(sizeof(my_cxt_t)-1));\ Copy(INT2PTR(my_cxt_t*, SvUV(my_cxt_sv)), my_cxtp, 1, my_cxt_t);\ sv_setuv(my_cxt_sv, PTR2UV(my_cxtp)) #endif #else /* single interpreter */ #ifndef START_MY_CXT #define START_MY_CXT static my_cxt_t my_cxt; #define dMY_CXT_SV dNOOP #define dMY_CXT dNOOP #define MY_CXT_INIT NOOP #define MY_CXT my_cxt #define pMY_CXT void #define pMY_CXT_ #define _pMY_CXT #define aMY_CXT #define aMY_CXT_ #define _aMY_CXT #endif /* START_MY_CXT */ #ifndef MY_CXT_CLONE #define MY_CXT_CLONE NOOP #endif #endif #ifndef IVdf # if IVSIZE == LONGSIZE # define IVdf "ld" # define UVuf "lu" # define UVof "lo" # define UVxf "lx" # define UVXf "lX" # elif IVSIZE == INTSIZE # define IVdf "d" # define UVuf "u" # define UVof "o" # define UVxf "x" # define UVXf "X" # else # error "cannot define IV/UV formats" # endif #endif #ifndef NVef # if defined(USE_LONG_DOUBLE) && defined(HAS_LONG_DOUBLE) && \ defined(PERL_PRIfldbl) && (PERL_BCDVERSION != 0x5006000) /* Not very likely, but let's try anyway. */ # define NVef PERL_PRIeldbl # define NVff PERL_PRIfldbl # define NVgf PERL_PRIgldbl # else # define NVef "e" # define NVff "f" # define NVgf "g" # endif #endif #ifndef SvREFCNT_inc # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ if (_sv) \ (SvREFCNT(_sv))++; \ _sv; \ }) # else # define SvREFCNT_inc(sv) \ ((PL_Sv=(SV*)(sv)) ? (++(SvREFCNT(PL_Sv)),PL_Sv) : NULL) # endif #endif #ifndef SvREFCNT_inc_simple # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_simple(sv) \ ({ \ if (sv) \ (SvREFCNT(sv))++; \ (SV *)(sv); \ }) # else # define SvREFCNT_inc_simple(sv) \ ((sv) ? 
(SvREFCNT(sv)++,(SV*)(sv)) : NULL) # endif #endif #ifndef SvREFCNT_inc_NN # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_NN(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ SvREFCNT(_sv)++; \ _sv; \ }) # else # define SvREFCNT_inc_NN(sv) \ (PL_Sv=(SV*)(sv),++(SvREFCNT(PL_Sv)),PL_Sv) # endif #endif #ifndef SvREFCNT_inc_void # ifdef PERL_USE_GCC_BRACE_GROUPS # define SvREFCNT_inc_void(sv) \ ({ \ SV * const _sv = (SV*)(sv); \ if (_sv) \ (void)(SvREFCNT(_sv)++); \ }) # else # define SvREFCNT_inc_void(sv) \ (void)((PL_Sv=(SV*)(sv)) ? ++(SvREFCNT(PL_Sv)) : 0) # endif #endif #ifndef SvREFCNT_inc_simple_void # define SvREFCNT_inc_simple_void(sv) STMT_START { if (sv) SvREFCNT(sv)++; } STMT_END #endif #ifndef SvREFCNT_inc_simple_NN # define SvREFCNT_inc_simple_NN(sv) (++SvREFCNT(sv), (SV*)(sv)) #endif #ifndef SvREFCNT_inc_void_NN # define SvREFCNT_inc_void_NN(sv) (void)(++SvREFCNT((SV*)(sv))) #endif #ifndef SvREFCNT_inc_simple_void_NN # define SvREFCNT_inc_simple_void_NN(sv) (void)(++SvREFCNT((SV*)(sv))) #endif #ifndef newSV_type #if defined(NEED_newSV_type) static SV* DPPP_(my_newSV_type)(pTHX_ svtype const t); static #else extern SV* DPPP_(my_newSV_type)(pTHX_ svtype const t); #endif #ifdef newSV_type # undef newSV_type #endif #define newSV_type(a) DPPP_(my_newSV_type)(aTHX_ a) #define Perl_newSV_type DPPP_(my_newSV_type) #if defined(NEED_newSV_type) || defined(NEED_newSV_type_GLOBAL) SV* DPPP_(my_newSV_type)(pTHX_ svtype const t) { SV* const sv = newSV(0); sv_upgrade(sv, t); return sv; } #endif #endif #if (PERL_BCDVERSION < 0x5006000) # define D_PPP_CONSTPV_ARG(x) ((char *) (x)) #else # define D_PPP_CONSTPV_ARG(x) (x) #endif #ifndef newSVpvn # define newSVpvn(data,len) ((data) \ ? ((len) ? newSVpv((data), (len)) : newSVpv("", 0)) \ : newSV(0)) #endif #ifndef newSVpvn_utf8 # define newSVpvn_utf8(s, len, u) newSVpvn_flags((s), (len), (u) ? SVf_UTF8 : 0) #endif #ifndef SVf_UTF8 # define SVf_UTF8 0 #endif #ifndef newSVpvn_flags #if defined(NEED_newSVpvn_flags) static SV * DPPP_(my_newSVpvn_flags)(pTHX_ const char *s, STRLEN len, U32 flags); static #else extern SV * DPPP_(my_newSVpvn_flags)(pTHX_ const char *s, STRLEN len, U32 flags); #endif #ifdef newSVpvn_flags # undef newSVpvn_flags #endif #define newSVpvn_flags(a,b,c) DPPP_(my_newSVpvn_flags)(aTHX_ a,b,c) #define Perl_newSVpvn_flags DPPP_(my_newSVpvn_flags) #if defined(NEED_newSVpvn_flags) || defined(NEED_newSVpvn_flags_GLOBAL) SV * DPPP_(my_newSVpvn_flags)(pTHX_ const char *s, STRLEN len, U32 flags) { SV *sv = newSVpvn(D_PPP_CONSTPV_ARG(s), len); SvFLAGS(sv) |= (flags & SVf_UTF8); return (flags & SVs_TEMP) ? sv_2mortal(sv) : sv; } #endif #endif /* Backwards compatibility stuff... :-( */ #if !defined(NEED_sv_2pv_flags) && defined(NEED_sv_2pv_nolen) # define NEED_sv_2pv_flags #endif #if !defined(NEED_sv_2pv_flags_GLOBAL) && defined(NEED_sv_2pv_nolen_GLOBAL) # define NEED_sv_2pv_flags_GLOBAL #endif /* Hint: sv_2pv_nolen * Use the SvPV_nolen() or SvPV_nolen_const() macros instead of sv_2pv_nolen(). */ #ifndef sv_2pv_nolen # define sv_2pv_nolen(sv) SvPV_nolen(sv) #endif #ifdef SvPVbyte /* Hint: SvPVbyte * Does not work in perl-5.6.1, ppport.h implements a version * borrowed from perl-5.7.3. 
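 *
 * Minimal usage sketch (assumes an existing SV *sv holding string data;
 * the variable names are illustrative):
 *
 *     STRLEN len;
 *     char  *bytes = SvPVbyte(sv, len);
 *
 * This downgrades any UTF-8 data in sv in place and returns the raw byte
 * string together with its length, which is what the hint below
 * recommends over calling sv_2pvbyte() directly.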
*/ #if (PERL_BCDVERSION < 0x5007000) #if defined(NEED_sv_2pvbyte) static char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp); static #else extern char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp); #endif #ifdef sv_2pvbyte # undef sv_2pvbyte #endif #define sv_2pvbyte(a,b) DPPP_(my_sv_2pvbyte)(aTHX_ a,b) #define Perl_sv_2pvbyte DPPP_(my_sv_2pvbyte) #if defined(NEED_sv_2pvbyte) || defined(NEED_sv_2pvbyte_GLOBAL) char * DPPP_(my_sv_2pvbyte)(pTHX_ SV *sv, STRLEN *lp) { sv_utf8_downgrade(sv,0); return SvPV(sv,*lp); } #endif /* Hint: sv_2pvbyte * Use the SvPVbyte() macro instead of sv_2pvbyte(). */ #undef SvPVbyte #define SvPVbyte(sv, lp) \ ((SvFLAGS(sv) & (SVf_POK|SVf_UTF8)) == (SVf_POK) \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_2pvbyte(sv, &lp)) #endif #else # define SvPVbyte SvPV # define sv_2pvbyte sv_2pv #endif #ifndef sv_2pvbyte_nolen # define sv_2pvbyte_nolen(sv) sv_2pv_nolen(sv) #endif /* Hint: sv_pvn * Always use the SvPV() macro instead of sv_pvn(). */ /* Hint: sv_pvn_force * Always use the SvPV_force() macro instead of sv_pvn_force(). */ /* If these are undefined, they're not handled by the core anyway */ #ifndef SV_IMMEDIATE_UNREF # define SV_IMMEDIATE_UNREF 0 #endif #ifndef SV_GMAGIC # define SV_GMAGIC 0 #endif #ifndef SV_COW_DROP_PV # define SV_COW_DROP_PV 0 #endif #ifndef SV_UTF8_NO_ENCODING # define SV_UTF8_NO_ENCODING 0 #endif #ifndef SV_NOSTEAL # define SV_NOSTEAL 0 #endif #ifndef SV_CONST_RETURN # define SV_CONST_RETURN 0 #endif #ifndef SV_MUTABLE_RETURN # define SV_MUTABLE_RETURN 0 #endif #ifndef SV_SMAGIC # define SV_SMAGIC 0 #endif #ifndef SV_HAS_TRAILING_NUL # define SV_HAS_TRAILING_NUL 0 #endif #ifndef SV_COW_SHARED_HASH_KEYS # define SV_COW_SHARED_HASH_KEYS 0 #endif #if (PERL_BCDVERSION < 0x5007002) #if defined(NEED_sv_2pv_flags) static char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); static #else extern char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); #endif #ifdef sv_2pv_flags # undef sv_2pv_flags #endif #define sv_2pv_flags(a,b,c) DPPP_(my_sv_2pv_flags)(aTHX_ a,b,c) #define Perl_sv_2pv_flags DPPP_(my_sv_2pv_flags) #if defined(NEED_sv_2pv_flags) || defined(NEED_sv_2pv_flags_GLOBAL) char * DPPP_(my_sv_2pv_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags) { STRLEN n_a = (STRLEN) flags; return sv_2pv(sv, lp ? lp : &n_a); } #endif #if defined(NEED_sv_pvn_force_flags) static char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); static #else extern char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags); #endif #ifdef sv_pvn_force_flags # undef sv_pvn_force_flags #endif #define sv_pvn_force_flags(a,b,c) DPPP_(my_sv_pvn_force_flags)(aTHX_ a,b,c) #define Perl_sv_pvn_force_flags DPPP_(my_sv_pvn_force_flags) #if defined(NEED_sv_pvn_force_flags) || defined(NEED_sv_pvn_force_flags_GLOBAL) char * DPPP_(my_sv_pvn_force_flags)(pTHX_ SV *sv, STRLEN *lp, I32 flags) { STRLEN n_a = (STRLEN) flags; return sv_pvn_force(sv, lp ? lp : &n_a); } #endif #endif #if (PERL_BCDVERSION < 0x5008008) || ( (PERL_BCDVERSION >= 0x5009000) && (PERL_BCDVERSION < 0x5009003) ) # define DPPP_SVPV_NOLEN_LP_ARG &PL_na #else # define DPPP_SVPV_NOLEN_LP_ARG 0 #endif #ifndef SvPV_const # define SvPV_const(sv, lp) SvPV_flags_const(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_mutable # define SvPV_mutable(sv, lp) SvPV_flags_mutable(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_flags # define SvPV_flags(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? 
((lp = SvCUR(sv)), SvPVX(sv)) : sv_2pv_flags(sv, &lp, flags)) #endif #ifndef SvPV_flags_const # define SvPV_flags_const(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_const(sv)) : \ (const char*) sv_2pv_flags(sv, &lp, flags|SV_CONST_RETURN)) #endif #ifndef SvPV_flags_const_nolen # define SvPV_flags_const_nolen(sv, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX_const(sv) : \ (const char*) sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, flags|SV_CONST_RETURN)) #endif #ifndef SvPV_flags_mutable # define SvPV_flags_mutable(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_mutable(sv)) : \ sv_2pv_flags(sv, &lp, flags|SV_MUTABLE_RETURN)) #endif #ifndef SvPV_force # define SvPV_force(sv, lp) SvPV_force_flags(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_force_nolen # define SvPV_force_nolen(sv) SvPV_force_flags_nolen(sv, SV_GMAGIC) #endif #ifndef SvPV_force_mutable # define SvPV_force_mutable(sv, lp) SvPV_force_flags_mutable(sv, lp, SV_GMAGIC) #endif #ifndef SvPV_force_nomg # define SvPV_force_nomg(sv, lp) SvPV_force_flags(sv, lp, 0) #endif #ifndef SvPV_force_nomg_nolen # define SvPV_force_nomg_nolen(sv) SvPV_force_flags_nolen(sv, 0) #endif #ifndef SvPV_force_flags # define SvPV_force_flags(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX(sv)) : sv_pvn_force_flags(sv, &lp, flags)) #endif #ifndef SvPV_force_flags_nolen # define SvPV_force_flags_nolen(sv, flags) \ ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \ ? SvPVX(sv) : sv_pvn_force_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, flags)) #endif #ifndef SvPV_force_flags_mutable # define SvPV_force_flags_mutable(sv, lp, flags) \ ((SvFLAGS(sv) & (SVf_POK|SVf_THINKFIRST)) == SVf_POK \ ? ((lp = SvCUR(sv)), SvPVX_mutable(sv)) \ : sv_pvn_force_flags(sv, &lp, flags|SV_MUTABLE_RETURN)) #endif #ifndef SvPV_nolen # define SvPV_nolen(sv) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX(sv) : sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, SV_GMAGIC)) #endif #ifndef SvPV_nolen_const # define SvPV_nolen_const(sv) \ ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? SvPVX_const(sv) : sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, SV_GMAGIC|SV_CONST_RETURN)) #endif #ifndef SvPV_nomg # define SvPV_nomg(sv, lp) SvPV_flags(sv, lp, 0) #endif #ifndef SvPV_nomg_const # define SvPV_nomg_const(sv, lp) SvPV_flags_const(sv, lp, 0) #endif #ifndef SvPV_nomg_const_nolen # define SvPV_nomg_const_nolen(sv) SvPV_flags_const_nolen(sv, 0) #endif #ifndef SvPV_nomg_nolen # define SvPV_nomg_nolen(sv) ((SvFLAGS(sv) & (SVf_POK)) == SVf_POK \ ? 
SvPVX(sv) : sv_2pv_flags(sv, DPPP_SVPV_NOLEN_LP_ARG, 0)) #endif #ifndef SvPV_renew # define SvPV_renew(sv,n) STMT_START { SvLEN_set(sv, n); \ SvPV_set((sv), (char *) saferealloc( \ (Malloc_t)SvPVX(sv), (MEM_SIZE)((n)))); \ } STMT_END #endif #ifndef SvMAGIC_set # define SvMAGIC_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_PVMG); \ (((XPVMG*) SvANY(sv))->xmg_magic = (val)); } STMT_END #endif #if (PERL_BCDVERSION < 0x5009003) #ifndef SvPVX_const # define SvPVX_const(sv) ((const char*) (0 + SvPVX(sv))) #endif #ifndef SvPVX_mutable # define SvPVX_mutable(sv) (0 + SvPVX(sv)) #endif #ifndef SvRV_set # define SvRV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_RV); \ (((XRV*) SvANY(sv))->xrv_rv = (val)); } STMT_END #endif #else #ifndef SvPVX_const # define SvPVX_const(sv) ((const char*)((sv)->sv_u.svu_pv)) #endif #ifndef SvPVX_mutable # define SvPVX_mutable(sv) ((sv)->sv_u.svu_pv) #endif #ifndef SvRV_set # define SvRV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_RV); \ ((sv)->sv_u.svu_rv = (val)); } STMT_END #endif #endif #ifndef SvSTASH_set # define SvSTASH_set(sv, val) \ STMT_START { assert(SvTYPE(sv) >= SVt_PVMG); \ (((XPVMG*) SvANY(sv))->xmg_stash = (val)); } STMT_END #endif #if (PERL_BCDVERSION < 0x5004000) #ifndef SvUV_set # define SvUV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) == SVt_IV || SvTYPE(sv) >= SVt_PVIV); \ (((XPVIV*) SvANY(sv))->xiv_iv = (IV) (val)); } STMT_END #endif #else #ifndef SvUV_set # define SvUV_set(sv, val) \ STMT_START { assert(SvTYPE(sv) == SVt_IV || SvTYPE(sv) >= SVt_PVIV); \ (((XPVUV*) SvANY(sv))->xuv_uv = (val)); } STMT_END #endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(vnewSVpvf) #if defined(NEED_vnewSVpvf) static SV * DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args); static #else extern SV * DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args); #endif #ifdef vnewSVpvf # undef vnewSVpvf #endif #define vnewSVpvf(a,b) DPPP_(my_vnewSVpvf)(aTHX_ a,b) #define Perl_vnewSVpvf DPPP_(my_vnewSVpvf) #if defined(NEED_vnewSVpvf) || defined(NEED_vnewSVpvf_GLOBAL) SV * DPPP_(my_vnewSVpvf)(pTHX_ const char *pat, va_list *args) { register SV *sv = newSV(0); sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); return sv; } #endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vcatpvf) # define sv_vcatpvf(sv, pat, args) sv_vcatpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)) #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vsetpvf) # define sv_vsetpvf(sv, pat, args) sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)) #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_catpvf_mg) #if defined(NEED_sv_catpvf_mg) static void DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...); static #else extern void DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...); #endif #define Perl_sv_catpvf_mg DPPP_(my_sv_catpvf_mg) #if defined(NEED_sv_catpvf_mg) || defined(NEED_sv_catpvf_mg_GLOBAL) void DPPP_(my_sv_catpvf_mg)(pTHX_ SV *sv, const char *pat, ...) 
{ va_list args; va_start(args, pat); sv_vcatpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #ifdef PERL_IMPLICIT_CONTEXT #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_catpvf_mg_nocontext) #if defined(NEED_sv_catpvf_mg_nocontext) static void DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...); static #else extern void DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...); #endif #define sv_catpvf_mg_nocontext DPPP_(my_sv_catpvf_mg_nocontext) #define Perl_sv_catpvf_mg_nocontext DPPP_(my_sv_catpvf_mg_nocontext) #if defined(NEED_sv_catpvf_mg_nocontext) || defined(NEED_sv_catpvf_mg_nocontext_GLOBAL) void DPPP_(my_sv_catpvf_mg_nocontext)(SV *sv, const char *pat, ...) { dTHX; va_list args; va_start(args, pat); sv_vcatpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #endif /* sv_catpvf_mg depends on sv_catpvf_mg_nocontext */ #ifndef sv_catpvf_mg # ifdef PERL_IMPLICIT_CONTEXT # define sv_catpvf_mg Perl_sv_catpvf_mg_nocontext # else # define sv_catpvf_mg Perl_sv_catpvf_mg # endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vcatpvf_mg) # define sv_vcatpvf_mg(sv, pat, args) \ STMT_START { \ sv_vcatpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); \ SvSETMAGIC(sv); \ } STMT_END #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_setpvf_mg) #if defined(NEED_sv_setpvf_mg) static void DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...); static #else extern void DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...); #endif #define Perl_sv_setpvf_mg DPPP_(my_sv_setpvf_mg) #if defined(NEED_sv_setpvf_mg) || defined(NEED_sv_setpvf_mg_GLOBAL) void DPPP_(my_sv_setpvf_mg)(pTHX_ SV *sv, const char *pat, ...) { va_list args; va_start(args, pat); sv_vsetpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #ifdef PERL_IMPLICIT_CONTEXT #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_setpvf_mg_nocontext) #if defined(NEED_sv_setpvf_mg_nocontext) static void DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...); static #else extern void DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...); #endif #define sv_setpvf_mg_nocontext DPPP_(my_sv_setpvf_mg_nocontext) #define Perl_sv_setpvf_mg_nocontext DPPP_(my_sv_setpvf_mg_nocontext) #if defined(NEED_sv_setpvf_mg_nocontext) || defined(NEED_sv_setpvf_mg_nocontext_GLOBAL) void DPPP_(my_sv_setpvf_mg_nocontext)(SV *sv, const char *pat, ...) { dTHX; va_list args; va_start(args, pat); sv_vsetpvfn(sv, pat, strlen(pat), &args, Null(SV**), 0, Null(bool*)); SvSETMAGIC(sv); va_end(args); } #endif #endif #endif /* sv_setpvf_mg depends on sv_setpvf_mg_nocontext */ #ifndef sv_setpvf_mg # ifdef PERL_IMPLICIT_CONTEXT # define sv_setpvf_mg Perl_sv_setpvf_mg_nocontext # else # define sv_setpvf_mg Perl_sv_setpvf_mg # endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(sv_vsetpvf_mg) # define sv_vsetpvf_mg(sv, pat, args) \ STMT_START { \ sv_vsetpvfn(sv, pat, strlen(pat), args, Null(SV**), 0, Null(bool*)); \ SvSETMAGIC(sv); \ } STMT_END #endif /* Hint: newSVpvn_share * The SVs created by this function only mimic the behaviour of * shared PVs without really being shared. Only use if you know * what you're doing. 
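 *
 * Minimal usage sketch (the literal and variable names are illustrative):
 *
 *     U32 hash = 0;
 *     SV *key  = newSVpvn_share("name", 4, hash);
 *
 * With a zero hash the fallback below computes one itself via PERL_HASH;
 * the resulting SV is read-only, marked as a PV, and the hash can be read
 * back with SvSHARED_HASH(key) -- but, as noted above, it is not shared.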
*/ #ifndef newSVpvn_share #if defined(NEED_newSVpvn_share) static SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash); static #else extern SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash); #endif #ifdef newSVpvn_share # undef newSVpvn_share #endif #define newSVpvn_share(a,b,c) DPPP_(my_newSVpvn_share)(aTHX_ a,b,c) #define Perl_newSVpvn_share DPPP_(my_newSVpvn_share) #if defined(NEED_newSVpvn_share) || defined(NEED_newSVpvn_share_GLOBAL) SV * DPPP_(my_newSVpvn_share)(pTHX_ const char *src, I32 len, U32 hash) { SV *sv; if (len < 0) len = -len; if (!hash) PERL_HASH(hash, (char*) src, len); sv = newSVpvn((char *) src, len); sv_upgrade(sv, SVt_PVIV); SvIVX(sv) = hash; SvREADONLY_on(sv); SvPOK_on(sv); return sv; } #endif #endif #ifndef SvSHARED_HASH # define SvSHARED_HASH(sv) (0 + SvUVX(sv)) #endif #ifndef HvNAME_get # define HvNAME_get(hv) HvNAME(hv) #endif #ifndef HvNAMELEN_get # define HvNAMELEN_get(hv) (HvNAME_get(hv) ? (I32)strlen(HvNAME_get(hv)) : 0) #endif #ifndef GvSVn # define GvSVn(gv) GvSV(gv) #endif #ifndef isGV_with_GP # define isGV_with_GP(gv) isGV(gv) #endif #ifndef gv_fetchpvn_flags # define gv_fetchpvn_flags(name, len, flags, svt) gv_fetchpv(name, flags, svt) #endif #ifndef gv_fetchsv # define gv_fetchsv(name, flags, svt) gv_fetchpv(SvPV_nolen_const(name), flags, svt) #endif #ifndef get_cvn_flags # define get_cvn_flags(name, namelen, flags) get_cv(name, flags) #endif #ifndef WARN_ALL # define WARN_ALL 0 #endif #ifndef WARN_CLOSURE # define WARN_CLOSURE 1 #endif #ifndef WARN_DEPRECATED # define WARN_DEPRECATED 2 #endif #ifndef WARN_EXITING # define WARN_EXITING 3 #endif #ifndef WARN_GLOB # define WARN_GLOB 4 #endif #ifndef WARN_IO # define WARN_IO 5 #endif #ifndef WARN_CLOSED # define WARN_CLOSED 6 #endif #ifndef WARN_EXEC # define WARN_EXEC 7 #endif #ifndef WARN_LAYER # define WARN_LAYER 8 #endif #ifndef WARN_NEWLINE # define WARN_NEWLINE 9 #endif #ifndef WARN_PIPE # define WARN_PIPE 10 #endif #ifndef WARN_UNOPENED # define WARN_UNOPENED 11 #endif #ifndef WARN_MISC # define WARN_MISC 12 #endif #ifndef WARN_NUMERIC # define WARN_NUMERIC 13 #endif #ifndef WARN_ONCE # define WARN_ONCE 14 #endif #ifndef WARN_OVERFLOW # define WARN_OVERFLOW 15 #endif #ifndef WARN_PACK # define WARN_PACK 16 #endif #ifndef WARN_PORTABLE # define WARN_PORTABLE 17 #endif #ifndef WARN_RECURSION # define WARN_RECURSION 18 #endif #ifndef WARN_REDEFINE # define WARN_REDEFINE 19 #endif #ifndef WARN_REGEXP # define WARN_REGEXP 20 #endif #ifndef WARN_SEVERE # define WARN_SEVERE 21 #endif #ifndef WARN_DEBUGGING # define WARN_DEBUGGING 22 #endif #ifndef WARN_INPLACE # define WARN_INPLACE 23 #endif #ifndef WARN_INTERNAL # define WARN_INTERNAL 24 #endif #ifndef WARN_MALLOC # define WARN_MALLOC 25 #endif #ifndef WARN_SIGNAL # define WARN_SIGNAL 26 #endif #ifndef WARN_SUBSTR # define WARN_SUBSTR 27 #endif #ifndef WARN_SYNTAX # define WARN_SYNTAX 28 #endif #ifndef WARN_AMBIGUOUS # define WARN_AMBIGUOUS 29 #endif #ifndef WARN_BAREWORD # define WARN_BAREWORD 30 #endif #ifndef WARN_DIGIT # define WARN_DIGIT 31 #endif #ifndef WARN_PARENTHESIS # define WARN_PARENTHESIS 32 #endif #ifndef WARN_PRECEDENCE # define WARN_PRECEDENCE 33 #endif #ifndef WARN_PRINTF # define WARN_PRINTF 34 #endif #ifndef WARN_PROTOTYPE # define WARN_PROTOTYPE 35 #endif #ifndef WARN_QW # define WARN_QW 36 #endif #ifndef WARN_RESERVED # define WARN_RESERVED 37 #endif #ifndef WARN_SEMICOLON # define WARN_SEMICOLON 38 #endif #ifndef WARN_TAINT # define WARN_TAINT 39 #endif #ifndef WARN_THREADS # define 
WARN_THREADS 40 #endif #ifndef WARN_UNINITIALIZED # define WARN_UNINITIALIZED 41 #endif #ifndef WARN_UNPACK # define WARN_UNPACK 42 #endif #ifndef WARN_UNTIE # define WARN_UNTIE 43 #endif #ifndef WARN_UTF8 # define WARN_UTF8 44 #endif #ifndef WARN_VOID # define WARN_VOID 45 #endif #ifndef WARN_ASSERTIONS # define WARN_ASSERTIONS 46 #endif #ifndef packWARN # define packWARN(a) (a) #endif #ifndef ckWARN # ifdef G_WARN_ON # define ckWARN(a) (PL_dowarn & G_WARN_ON) # else # define ckWARN(a) PL_dowarn # endif #endif #if (PERL_BCDVERSION >= 0x5004000) && !defined(warner) #if defined(NEED_warner) static void DPPP_(my_warner)(U32 err, const char *pat, ...); static #else extern void DPPP_(my_warner)(U32 err, const char *pat, ...); #endif #define Perl_warner DPPP_(my_warner) #if defined(NEED_warner) || defined(NEED_warner_GLOBAL) void DPPP_(my_warner)(U32 err, const char *pat, ...) { SV *sv; va_list args; PERL_UNUSED_ARG(err); va_start(args, pat); sv = vnewSVpvf(pat, &args); va_end(args); sv_2mortal(sv); warn("%s", SvPV_nolen(sv)); } #define warner Perl_warner #define Perl_warner_nocontext Perl_warner #endif #endif /* concatenating with "" ensures that only literal strings are accepted as argument * note that STR_WITH_LEN() can't be used as argument to macros or functions that * under some configurations might be macros */ #ifndef STR_WITH_LEN # define STR_WITH_LEN(s) (s ""), (sizeof(s)-1) #endif #ifndef newSVpvs # define newSVpvs(str) newSVpvn(str "", sizeof(str) - 1) #endif #ifndef newSVpvs_flags # define newSVpvs_flags(str, flags) newSVpvn_flags(str "", sizeof(str) - 1, flags) #endif #ifndef newSVpvs_share # define newSVpvs_share(str) newSVpvn_share(str "", sizeof(str) - 1, 0) #endif #ifndef sv_catpvs # define sv_catpvs(sv, str) sv_catpvn(sv, str "", sizeof(str) - 1) #endif #ifndef sv_setpvs # define sv_setpvs(sv, str) sv_setpvn(sv, str "", sizeof(str) - 1) #endif #ifndef hv_fetchs # define hv_fetchs(hv, key, lval) hv_fetch(hv, key "", sizeof(key) - 1, lval) #endif #ifndef hv_stores # define hv_stores(hv, key, val) hv_store(hv, key "", sizeof(key) - 1, val, 0) #endif #ifndef gv_fetchpvs # define gv_fetchpvs(name, flags, svt) gv_fetchpvn_flags(name "", sizeof(name) - 1, flags, svt) #endif #ifndef gv_stashpvs # define gv_stashpvs(name, flags) gv_stashpvn(name "", sizeof(name) - 1, flags) #endif #ifndef get_cvs # define get_cvs(name, flags) get_cvn_flags(name "", sizeof(name)-1, flags) #endif #ifndef SvGETMAGIC # define SvGETMAGIC(x) STMT_START { if (SvGMAGICAL(x)) mg_get(x); } STMT_END #endif /* Some random bits for sv_unmagicext. 
These should probably be pulled in for real and organized at some point */ #ifndef HEf_SVKEY # define HEf_SVKEY -2 #endif #if defined(__GNUC__) && !defined(PERL_GCC_BRACE_GROUPS_FORBIDDEN) # define MUTABLE_PTR(p) ({ void *_p = (p); _p; }) #else # define MUTABLE_PTR(p) ((void *) (p)) #endif #define MUTABLE_SV(p) ((SV *)MUTABLE_PTR(p)) /* end of random bits */ #ifndef PERL_MAGIC_sv # define PERL_MAGIC_sv '\0' #endif #ifndef PERL_MAGIC_overload # define PERL_MAGIC_overload 'A' #endif #ifndef PERL_MAGIC_overload_elem # define PERL_MAGIC_overload_elem 'a' #endif #ifndef PERL_MAGIC_overload_table # define PERL_MAGIC_overload_table 'c' #endif #ifndef PERL_MAGIC_bm # define PERL_MAGIC_bm 'B' #endif #ifndef PERL_MAGIC_regdata # define PERL_MAGIC_regdata 'D' #endif #ifndef PERL_MAGIC_regdatum # define PERL_MAGIC_regdatum 'd' #endif #ifndef PERL_MAGIC_env # define PERL_MAGIC_env 'E' #endif #ifndef PERL_MAGIC_envelem # define PERL_MAGIC_envelem 'e' #endif #ifndef PERL_MAGIC_fm # define PERL_MAGIC_fm 'f' #endif #ifndef PERL_MAGIC_regex_global # define PERL_MAGIC_regex_global 'g' #endif #ifndef PERL_MAGIC_isa # define PERL_MAGIC_isa 'I' #endif #ifndef PERL_MAGIC_isaelem # define PERL_MAGIC_isaelem 'i' #endif #ifndef PERL_MAGIC_nkeys # define PERL_MAGIC_nkeys 'k' #endif #ifndef PERL_MAGIC_dbfile # define PERL_MAGIC_dbfile 'L' #endif #ifndef PERL_MAGIC_dbline # define PERL_MAGIC_dbline 'l' #endif #ifndef PERL_MAGIC_mutex # define PERL_MAGIC_mutex 'm' #endif #ifndef PERL_MAGIC_shared # define PERL_MAGIC_shared 'N' #endif #ifndef PERL_MAGIC_shared_scalar # define PERL_MAGIC_shared_scalar 'n' #endif #ifndef PERL_MAGIC_collxfrm # define PERL_MAGIC_collxfrm 'o' #endif #ifndef PERL_MAGIC_tied # define PERL_MAGIC_tied 'P' #endif #ifndef PERL_MAGIC_tiedelem # define PERL_MAGIC_tiedelem 'p' #endif #ifndef PERL_MAGIC_tiedscalar # define PERL_MAGIC_tiedscalar 'q' #endif #ifndef PERL_MAGIC_qr # define PERL_MAGIC_qr 'r' #endif #ifndef PERL_MAGIC_sig # define PERL_MAGIC_sig 'S' #endif #ifndef PERL_MAGIC_sigelem # define PERL_MAGIC_sigelem 's' #endif #ifndef PERL_MAGIC_taint # define PERL_MAGIC_taint 't' #endif #ifndef PERL_MAGIC_uvar # define PERL_MAGIC_uvar 'U' #endif #ifndef PERL_MAGIC_uvar_elem # define PERL_MAGIC_uvar_elem 'u' #endif #ifndef PERL_MAGIC_vstring # define PERL_MAGIC_vstring 'V' #endif #ifndef PERL_MAGIC_vec # define PERL_MAGIC_vec 'v' #endif #ifndef PERL_MAGIC_utf8 # define PERL_MAGIC_utf8 'w' #endif #ifndef PERL_MAGIC_substr # define PERL_MAGIC_substr 'x' #endif #ifndef PERL_MAGIC_defelem # define PERL_MAGIC_defelem 'y' #endif #ifndef PERL_MAGIC_glob # define PERL_MAGIC_glob '*' #endif #ifndef PERL_MAGIC_arylen # define PERL_MAGIC_arylen '#' #endif #ifndef PERL_MAGIC_pos # define PERL_MAGIC_pos '.' #endif #ifndef PERL_MAGIC_backref # define PERL_MAGIC_backref '<' #endif #ifndef PERL_MAGIC_ext # define PERL_MAGIC_ext '~' #endif /* That's the best we can do... 
*/ #ifndef sv_catpvn_nomg # define sv_catpvn_nomg sv_catpvn #endif #ifndef sv_catsv_nomg # define sv_catsv_nomg sv_catsv #endif #ifndef sv_setsv_nomg # define sv_setsv_nomg sv_setsv #endif #ifndef sv_pvn_nomg # define sv_pvn_nomg sv_pvn #endif #ifndef SvIV_nomg # define SvIV_nomg SvIV #endif #ifndef SvUV_nomg # define SvUV_nomg SvUV #endif #ifndef sv_catpv_mg # define sv_catpv_mg(sv, ptr) \ STMT_START { \ SV *TeMpSv = sv; \ sv_catpv(TeMpSv,ptr); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_catpvn_mg # define sv_catpvn_mg(sv, ptr, len) \ STMT_START { \ SV *TeMpSv = sv; \ sv_catpvn(TeMpSv,ptr,len); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_catsv_mg # define sv_catsv_mg(dsv, ssv) \ STMT_START { \ SV *TeMpSv = dsv; \ sv_catsv(TeMpSv,ssv); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setiv_mg # define sv_setiv_mg(sv, i) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setiv(TeMpSv,i); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setnv_mg # define sv_setnv_mg(sv, num) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setnv(TeMpSv,num); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setpv_mg # define sv_setpv_mg(sv, ptr) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setpv(TeMpSv,ptr); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setpvn_mg # define sv_setpvn_mg(sv, ptr, len) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setpvn(TeMpSv,ptr,len); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setsv_mg # define sv_setsv_mg(dsv, ssv) \ STMT_START { \ SV *TeMpSv = dsv; \ sv_setsv(TeMpSv,ssv); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_setuv_mg # define sv_setuv_mg(sv, i) \ STMT_START { \ SV *TeMpSv = sv; \ sv_setuv(TeMpSv,i); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef sv_usepvn_mg # define sv_usepvn_mg(sv, ptr, len) \ STMT_START { \ SV *TeMpSv = sv; \ sv_usepvn(TeMpSv,ptr,len); \ SvSETMAGIC(TeMpSv); \ } STMT_END #endif #ifndef SvVSTRING_mg # define SvVSTRING_mg(sv) (SvMAGICAL(sv) ? mg_find(sv, PERL_MAGIC_vstring) : NULL) #endif /* Hint: sv_magic_portable * This is a compatibility function that is only available with * Devel::PPPort. It is NOT in the perl core. * Its purpose is to mimic the 5.8.0 behaviour of sv_magic() when * it is being passed a name pointer with namlen == 0. In that * case, perl 5.8.0 and later store the pointer, not a copy of it. * The compatibility can be provided back to perl 5.004. With * earlier versions, the code will not compile. 
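 *
 * Minimal usage sketch (the magic type, tag string and SV are
 * illustrative assumptions):
 *
 *     static const char my_tag[] = "my-static-tag";
 *     sv_magic_portable(sv, NULL, PERL_MAGIC_ext, my_tag, 0);
 *
 * Because namlen is 0, the resulting mg_ptr points at my_tag itself
 * rather than at a copy, so the buffer must outlive the magic -- hence
 * the static storage used here.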
*/ #if (PERL_BCDVERSION < 0x5004000) /* code that uses sv_magic_portable will not compile */ #elif (PERL_BCDVERSION < 0x5008000) # define sv_magic_portable(sv, obj, how, name, namlen) \ STMT_START { \ SV *SvMp_sv = (sv); \ char *SvMp_name = (char *) (name); \ I32 SvMp_namlen = (namlen); \ if (SvMp_name && SvMp_namlen == 0) \ { \ MAGIC *mg; \ sv_magic(SvMp_sv, obj, how, 0, 0); \ mg = SvMAGIC(SvMp_sv); \ mg->mg_len = -42; /* XXX: this is the tricky part */ \ mg->mg_ptr = SvMp_name; \ } \ else \ { \ sv_magic(SvMp_sv, obj, how, SvMp_name, SvMp_namlen); \ } \ } STMT_END #else # define sv_magic_portable(a, b, c, d, e) sv_magic(a, b, c, d, e) #endif #if !defined(mg_findext) #if defined(NEED_mg_findext) static MAGIC * DPPP_(my_mg_findext)(SV * sv, int type, const MGVTBL *vtbl); static #else extern MAGIC * DPPP_(my_mg_findext)(SV * sv, int type, const MGVTBL *vtbl); #endif #define mg_findext DPPP_(my_mg_findext) #define Perl_mg_findext DPPP_(my_mg_findext) #if defined(NEED_mg_findext) || defined(NEED_mg_findext_GLOBAL) MAGIC * DPPP_(my_mg_findext)(SV * sv, int type, const MGVTBL *vtbl) { if (sv) { MAGIC *mg; #ifdef AvPAD_NAMELIST assert(!(SvTYPE(sv) == SVt_PVAV && AvPAD_NAMELIST(sv))); #endif for (mg = SvMAGIC (sv); mg; mg = mg->mg_moremagic) { if (mg->mg_type == type && mg->mg_virtual == vtbl) return mg; } } return NULL; } #endif #endif #if !defined(sv_unmagicext) #if defined(NEED_sv_unmagicext) static int DPPP_(my_sv_unmagicext)(pTHX_ SV * const sv, const int type, MGVTBL * vtbl); static #else extern int DPPP_(my_sv_unmagicext)(pTHX_ SV * const sv, const int type, MGVTBL * vtbl); #endif #ifdef sv_unmagicext # undef sv_unmagicext #endif #define sv_unmagicext(a,b,c) DPPP_(my_sv_unmagicext)(aTHX_ a,b,c) #define Perl_sv_unmagicext DPPP_(my_sv_unmagicext) #if defined(NEED_sv_unmagicext) || defined(NEED_sv_unmagicext_GLOBAL) int DPPP_(my_sv_unmagicext)(pTHX_ SV *const sv, const int type, MGVTBL *vtbl) { MAGIC* mg; MAGIC** mgp; if (SvTYPE(sv) < SVt_PVMG || !SvMAGIC(sv)) return 0; mgp = &(SvMAGIC(sv)); for (mg = *mgp; mg; mg = *mgp) { const MGVTBL* const virt = mg->mg_virtual; if (mg->mg_type == type && virt == vtbl) { *mgp = mg->mg_moremagic; if (virt && virt->svt_free) virt->svt_free(aTHX_ sv, mg); if (mg->mg_ptr && mg->mg_type != PERL_MAGIC_regex_global) { if (mg->mg_len > 0) Safefree(mg->mg_ptr); else if (mg->mg_len == HEf_SVKEY) /* Questionable on older perls... */ SvREFCNT_dec(MUTABLE_SV(mg->mg_ptr)); else if (mg->mg_type == PERL_MAGIC_utf8) Safefree(mg->mg_ptr); } if (mg->mg_flags & MGf_REFCOUNTED) SvREFCNT_dec(mg->mg_obj); Safefree(mg); } else mgp = &mg->mg_moremagic; } if (SvMAGIC(sv)) { if (SvMAGICAL(sv)) /* if we're under save_magic, wait for restore_magic; */ mg_magical(sv); /* else fix the flags now */ } else { SvMAGICAL_off(sv); SvFLAGS(sv) |= (SvFLAGS(sv) & (SVp_IOK|SVp_NOK|SVp_POK)) >> PRIVSHIFT; } return 0; } #endif #endif #ifdef USE_ITHREADS #ifndef CopFILE # define CopFILE(c) ((c)->cop_file) #endif #ifndef CopFILEGV # define CopFILEGV(c) (CopFILE(c) ? gv_fetchfile(CopFILE(c)) : Nullgv) #endif #ifndef CopFILE_set # define CopFILE_set(c,pv) ((c)->cop_file = savepv(pv)) #endif #ifndef CopFILESV # define CopFILESV(c) (CopFILE(c) ? GvSV(gv_fetchfile(CopFILE(c))) : Nullsv) #endif #ifndef CopFILEAV # define CopFILEAV(c) (CopFILE(c) ? GvAV(gv_fetchfile(CopFILE(c))) : Nullav) #endif #ifndef CopSTASHPV # define CopSTASHPV(c) ((c)->cop_stashpv) #endif #ifndef CopSTASHPV_set # define CopSTASHPV_set(c,pv) ((c)->cop_stashpv = ((pv) ? 
savepv(pv) : Nullch)) #endif #ifndef CopSTASH # define CopSTASH(c) (CopSTASHPV(c) ? gv_stashpv(CopSTASHPV(c),GV_ADD) : Nullhv) #endif #ifndef CopSTASH_set # define CopSTASH_set(c,hv) CopSTASHPV_set(c, (hv) ? HvNAME(hv) : Nullch) #endif #ifndef CopSTASH_eq # define CopSTASH_eq(c,hv) ((hv) && (CopSTASHPV(c) == HvNAME(hv) \ || (CopSTASHPV(c) && HvNAME(hv) \ && strEQ(CopSTASHPV(c), HvNAME(hv))))) #endif #else #ifndef CopFILEGV # define CopFILEGV(c) ((c)->cop_filegv) #endif #ifndef CopFILEGV_set # define CopFILEGV_set(c,gv) ((c)->cop_filegv = (GV*)SvREFCNT_inc(gv)) #endif #ifndef CopFILE_set # define CopFILE_set(c,pv) CopFILEGV_set((c), gv_fetchfile(pv)) #endif #ifndef CopFILESV # define CopFILESV(c) (CopFILEGV(c) ? GvSV(CopFILEGV(c)) : Nullsv) #endif #ifndef CopFILEAV # define CopFILEAV(c) (CopFILEGV(c) ? GvAV(CopFILEGV(c)) : Nullav) #endif #ifndef CopFILE # define CopFILE(c) (CopFILESV(c) ? SvPVX(CopFILESV(c)) : Nullch) #endif #ifndef CopSTASH # define CopSTASH(c) ((c)->cop_stash) #endif #ifndef CopSTASH_set # define CopSTASH_set(c,hv) ((c)->cop_stash = (hv)) #endif #ifndef CopSTASHPV # define CopSTASHPV(c) (CopSTASH(c) ? HvNAME(CopSTASH(c)) : Nullch) #endif #ifndef CopSTASHPV_set # define CopSTASHPV_set(c,pv) CopSTASH_set((c), gv_stashpv(pv,GV_ADD)) #endif #ifndef CopSTASH_eq # define CopSTASH_eq(c,hv) (CopSTASH(c) == (hv)) #endif #endif /* USE_ITHREADS */ #if (PERL_BCDVERSION >= 0x5006000) #ifndef caller_cx # if defined(NEED_caller_cx) || defined(NEED_caller_cx_GLOBAL) static I32 DPPP_dopoptosub_at(const PERL_CONTEXT *cxstk, I32 startingblock) { I32 i; for (i = startingblock; i >= 0; i--) { register const PERL_CONTEXT * const cx = &cxstk[i]; switch (CxTYPE(cx)) { default: continue; case CXt_EVAL: case CXt_SUB: case CXt_FORMAT: return i; } } return i; } # endif # if defined(NEED_caller_cx) static const PERL_CONTEXT * DPPP_(my_caller_cx)(pTHX_ I32 count, const PERL_CONTEXT **dbcxp); static #else extern const PERL_CONTEXT * DPPP_(my_caller_cx)(pTHX_ I32 count, const PERL_CONTEXT **dbcxp); #endif #ifdef caller_cx # undef caller_cx #endif #define caller_cx(a,b) DPPP_(my_caller_cx)(aTHX_ a,b) #define Perl_caller_cx DPPP_(my_caller_cx) #if defined(NEED_caller_cx) || defined(NEED_caller_cx_GLOBAL) const PERL_CONTEXT * DPPP_(my_caller_cx)(pTHX_ I32 count, const PERL_CONTEXT **dbcxp) { register I32 cxix = DPPP_dopoptosub_at(cxstack, cxstack_ix); register const PERL_CONTEXT *cx; register const PERL_CONTEXT *ccstack = cxstack; const PERL_SI *top_si = PL_curstackinfo; for (;;) { /* we may be in a higher stacklevel, so dig down deeper */ while (cxix < 0 && top_si->si_type != PERLSI_MAIN) { top_si = top_si->si_prev; ccstack = top_si->si_cxstack; cxix = DPPP_dopoptosub_at(ccstack, top_si->si_cxix); } if (cxix < 0) return NULL; /* caller() should not report the automatic calls to &DB::sub */ if (PL_DBsub && GvCV(PL_DBsub) && cxix >= 0 && ccstack[cxix].blk_sub.cv == GvCV(PL_DBsub)) count++; if (!count--) break; cxix = DPPP_dopoptosub_at(ccstack, cxix - 1); } cx = &ccstack[cxix]; if (dbcxp) *dbcxp = cx; if (CxTYPE(cx) == CXt_SUB || CxTYPE(cx) == CXt_FORMAT) { const I32 dbcxix = DPPP_dopoptosub_at(ccstack, cxix - 1); /* We expect that ccstack[dbcxix] is CXt_SUB, anyway, the field below is defined for any cx. 
*/ /* caller() should not report the automatic calls to &DB::sub */ if (PL_DBsub && GvCV(PL_DBsub) && dbcxix >= 0 && ccstack[dbcxix].blk_sub.cv == GvCV(PL_DBsub)) cx = &ccstack[dbcxix]; } return cx; } # endif #endif /* caller_cx */ #endif /* 5.6.0 */ #ifndef IN_PERL_COMPILETIME # define IN_PERL_COMPILETIME (PL_curcop == &PL_compiling) #endif #ifndef IN_LOCALE_RUNTIME # define IN_LOCALE_RUNTIME (PL_curcop->op_private & HINT_LOCALE) #endif #ifndef IN_LOCALE_COMPILETIME # define IN_LOCALE_COMPILETIME (PL_hints & HINT_LOCALE) #endif #ifndef IN_LOCALE # define IN_LOCALE (IN_PERL_COMPILETIME ? IN_LOCALE_COMPILETIME : IN_LOCALE_RUNTIME) #endif #ifndef IS_NUMBER_IN_UV # define IS_NUMBER_IN_UV 0x01 #endif #ifndef IS_NUMBER_GREATER_THAN_UV_MAX # define IS_NUMBER_GREATER_THAN_UV_MAX 0x02 #endif #ifndef IS_NUMBER_NOT_INT # define IS_NUMBER_NOT_INT 0x04 #endif #ifndef IS_NUMBER_NEG # define IS_NUMBER_NEG 0x08 #endif #ifndef IS_NUMBER_INFINITY # define IS_NUMBER_INFINITY 0x10 #endif #ifndef IS_NUMBER_NAN # define IS_NUMBER_NAN 0x20 #endif #ifndef GROK_NUMERIC_RADIX # define GROK_NUMERIC_RADIX(sp, send) grok_numeric_radix(sp, send) #endif #ifndef PERL_SCAN_GREATER_THAN_UV_MAX # define PERL_SCAN_GREATER_THAN_UV_MAX 0x02 #endif #ifndef PERL_SCAN_SILENT_ILLDIGIT # define PERL_SCAN_SILENT_ILLDIGIT 0x04 #endif #ifndef PERL_SCAN_ALLOW_UNDERSCORES # define PERL_SCAN_ALLOW_UNDERSCORES 0x01 #endif #ifndef PERL_SCAN_DISALLOW_PREFIX # define PERL_SCAN_DISALLOW_PREFIX 0x02 #endif #ifndef grok_numeric_radix #if defined(NEED_grok_numeric_radix) static bool DPPP_(my_grok_numeric_radix)(pTHX_ const char ** sp, const char * send); static #else extern bool DPPP_(my_grok_numeric_radix)(pTHX_ const char ** sp, const char * send); #endif #ifdef grok_numeric_radix # undef grok_numeric_radix #endif #define grok_numeric_radix(a,b) DPPP_(my_grok_numeric_radix)(aTHX_ a,b) #define Perl_grok_numeric_radix DPPP_(my_grok_numeric_radix) #if defined(NEED_grok_numeric_radix) || defined(NEED_grok_numeric_radix_GLOBAL) bool DPPP_(my_grok_numeric_radix)(pTHX_ const char **sp, const char *send) { #ifdef USE_LOCALE_NUMERIC #ifdef PL_numeric_radix_sv if (PL_numeric_radix_sv && IN_LOCALE) { STRLEN len; char* radix = SvPV(PL_numeric_radix_sv, len); if (*sp + len <= send && memEQ(*sp, radix, len)) { *sp += len; return TRUE; } } #else /* older perls don't have PL_numeric_radix_sv so the radix * must manually be requested from locale.h */ #include dTHR; /* needed for older threaded perls */ struct lconv *lc = localeconv(); char *radix = lc->decimal_point; if (radix && IN_LOCALE) { STRLEN len = strlen(radix); if (*sp + len <= send && memEQ(*sp, radix, len)) { *sp += len; return TRUE; } } #endif #endif /* USE_LOCALE_NUMERIC */ /* always try "." 
if numeric radix didn't match because * we may have data from different locales mixed */ if (*sp < send && **sp == '.') { ++*sp; return TRUE; } return FALSE; } #endif #endif #ifndef grok_number #if defined(NEED_grok_number) static int DPPP_(my_grok_number)(pTHX_ const char * pv, STRLEN len, UV * valuep); static #else extern int DPPP_(my_grok_number)(pTHX_ const char * pv, STRLEN len, UV * valuep); #endif #ifdef grok_number # undef grok_number #endif #define grok_number(a,b,c) DPPP_(my_grok_number)(aTHX_ a,b,c) #define Perl_grok_number DPPP_(my_grok_number) #if defined(NEED_grok_number) || defined(NEED_grok_number_GLOBAL) int DPPP_(my_grok_number)(pTHX_ const char *pv, STRLEN len, UV *valuep) { const char *s = pv; const char *send = pv + len; const UV max_div_10 = UV_MAX / 10; const char max_mod_10 = UV_MAX % 10; int numtype = 0; int sawinf = 0; int sawnan = 0; while (s < send && isSPACE(*s)) s++; if (s == send) { return 0; } else if (*s == '-') { s++; numtype = IS_NUMBER_NEG; } else if (*s == '+') s++; if (s == send) return 0; /* next must be digit or the radix separator or beginning of infinity */ if (isDIGIT(*s)) { /* UVs are at least 32 bits, so the first 9 decimal digits cannot overflow. */ UV value = *s - '0'; /* This construction seems to be more optimiser friendly. (without it gcc does the isDIGIT test and the *s - '0' separately) With it gcc on arm is managing 6 instructions (6 cycles) per digit. In theory the optimiser could deduce how far to unroll the loop before checking for overflow. */ if (++s < send) { int digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { digit = *s - '0'; if (digit >= 0 && digit <= 9) { value = value * 10 + digit; if (++s < send) { /* Now got 9 digits, so need to check each time for overflow. */ digit = *s - '0'; while (digit >= 0 && digit <= 9 && (value < max_div_10 || (value == max_div_10 && digit <= max_mod_10))) { value = value * 10 + digit; if (++s < send) digit = *s - '0'; else break; } if (digit >= 0 && digit <= 9 && (s < send)) { /* value overflowed. skip the remaining digits, don't worry about setting *valuep. */ do { s++; } while (s < send && isDIGIT(*s)); numtype |= IS_NUMBER_GREATER_THAN_UV_MAX; goto skip_value; } } } } } } } } } } } } } } } } } } numtype |= IS_NUMBER_IN_UV; if (valuep) *valuep = value; skip_value: if (GROK_NUMERIC_RADIX(&s, send)) { numtype |= IS_NUMBER_NOT_INT; while (s < send && isDIGIT(*s)) /* optional digits after the radix */ s++; } } else if (GROK_NUMERIC_RADIX(&s, send)) { numtype |= IS_NUMBER_NOT_INT | IS_NUMBER_IN_UV; /* valuep assigned below */ /* no digits before the radix means we need digits after it */ if (s < send && isDIGIT(*s)) { do { s++; } while (s < send && isDIGIT(*s)); if (valuep) { /* integer approximation is valid - it's 0. 
*/ *valuep = 0; } } else return 0; } else if (*s == 'I' || *s == 'i') { s++; if (s == send || (*s != 'N' && *s != 'n')) return 0; s++; if (s == send || (*s != 'F' && *s != 'f')) return 0; s++; if (s < send && (*s == 'I' || *s == 'i')) { s++; if (s == send || (*s != 'N' && *s != 'n')) return 0; s++; if (s == send || (*s != 'I' && *s != 'i')) return 0; s++; if (s == send || (*s != 'T' && *s != 't')) return 0; s++; if (s == send || (*s != 'Y' && *s != 'y')) return 0; s++; } sawinf = 1; } else if (*s == 'N' || *s == 'n') { /* XXX TODO: There are signaling NaNs and quiet NaNs. */ s++; if (s == send || (*s != 'A' && *s != 'a')) return 0; s++; if (s == send || (*s != 'N' && *s != 'n')) return 0; s++; sawnan = 1; } else return 0; if (sawinf) { numtype &= IS_NUMBER_NEG; /* Keep track of sign */ numtype |= IS_NUMBER_INFINITY | IS_NUMBER_NOT_INT; } else if (sawnan) { numtype &= IS_NUMBER_NEG; /* Keep track of sign */ numtype |= IS_NUMBER_NAN | IS_NUMBER_NOT_INT; } else if (s < send) { /* we can have an optional exponent part */ if (*s == 'e' || *s == 'E') { /* The only flag we keep is sign. Blow away any "it's UV" */ numtype &= IS_NUMBER_NEG; numtype |= IS_NUMBER_NOT_INT; s++; if (s < send && (*s == '-' || *s == '+')) s++; if (s < send && isDIGIT(*s)) { do { s++; } while (s < send && isDIGIT(*s)); } else return 0; } } while (s < send && isSPACE(*s)) s++; if (s >= send) return numtype; if (len == 10 && memEQ(pv, "0 but true", 10)) { if (valuep) *valuep = 0; return IS_NUMBER_IN_UV; } return 0; } #endif #endif /* * The grok_* routines have been modified to use warn() instead of * Perl_warner(). Also, 'hexdigit' was the former name of PL_hexdigit, * which is why the stack variable has been renamed to 'xdigit'. */ #ifndef grok_bin #if defined(NEED_grok_bin) static UV DPPP_(my_grok_bin)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_bin)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_bin # undef grok_bin #endif #define grok_bin(a,b,c,d) DPPP_(my_grok_bin)(aTHX_ a,b,c,d) #define Perl_grok_bin DPPP_(my_grok_bin) #if defined(NEED_grok_bin) || defined(NEED_grok_bin_GLOBAL) UV DPPP_(my_grok_bin)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_2 = UV_MAX / 2; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; if (!(*flags & PERL_SCAN_DISALLOW_PREFIX)) { /* strip off leading b or 0b. for compatibility silently suffer "b" and "0b" as valid binary numbers. */ if (len >= 1) { if (s[0] == 'b') { s++; len--; } else if (len >= 2 && s[0] == '0' && s[1] == 'b') { s+=2; len-=2; } } } for (; len-- && *s; s++) { char bit = *s; if (bit == '0' || bit == '1') { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. With gcc seems to be much straighter code than old scan_bin. */ redo: if (!overflowed) { if (value <= max_div_2) { value = (value << 1) | (bit - '0'); continue; } /* Bah. We're just overflowed. */ warn("Integer overflow in binary number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 2.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount. 
*/ value_nv += (NV)(bit - '0'); continue; } if (bit == '_' && len && allow_underscores && (bit = s[1]) && (bit == '0' || bit == '1')) { --len; ++s; goto redo; } if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal binary digit '%c' ignored", *s); break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Binary number > 0b11111111111111111111111111111111 non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #ifndef grok_hex #if defined(NEED_grok_hex) static UV DPPP_(my_grok_hex)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_hex)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_hex # undef grok_hex #endif #define grok_hex(a,b,c,d) DPPP_(my_grok_hex)(aTHX_ a,b,c,d) #define Perl_grok_hex DPPP_(my_grok_hex) #if defined(NEED_grok_hex) || defined(NEED_grok_hex_GLOBAL) UV DPPP_(my_grok_hex)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_16 = UV_MAX / 16; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; const char *xdigit; if (!(*flags & PERL_SCAN_DISALLOW_PREFIX)) { /* strip off leading x or 0x. for compatibility silently suffer "x" and "0x" as valid hex numbers. */ if (len >= 1) { if (s[0] == 'x') { s++; len--; } else if (len >= 2 && s[0] == '0' && s[1] == 'x') { s+=2; len-=2; } } } for (; len-- && *s; s++) { xdigit = strchr((char *) PL_hexdigit, *s); if (xdigit) { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. With gcc seems to be much straighter code than old scan_hex. */ redo: if (!overflowed) { if (value <= max_div_16) { value = (value << 4) | ((xdigit - PL_hexdigit) & 15); continue; } warn("Integer overflow in hexadecimal number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 16.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount of 16-tuples. 
*/ value_nv += (NV)((xdigit - PL_hexdigit) & 15); continue; } if (*s == '_' && len && allow_underscores && s[1] && (xdigit = strchr((char *) PL_hexdigit, s[1]))) { --len; ++s; goto redo; } if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal hexadecimal digit '%c' ignored", *s); break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Hexadecimal number > 0xffffffff non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #ifndef grok_oct #if defined(NEED_grok_oct) static UV DPPP_(my_grok_oct)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); static #else extern UV DPPP_(my_grok_oct)(pTHX_ const char * start, STRLEN * len_p, I32 * flags, NV * result); #endif #ifdef grok_oct # undef grok_oct #endif #define grok_oct(a,b,c,d) DPPP_(my_grok_oct)(aTHX_ a,b,c,d) #define Perl_grok_oct DPPP_(my_grok_oct) #if defined(NEED_grok_oct) || defined(NEED_grok_oct_GLOBAL) UV DPPP_(my_grok_oct)(pTHX_ const char *start, STRLEN *len_p, I32 *flags, NV *result) { const char *s = start; STRLEN len = *len_p; UV value = 0; NV value_nv = 0; const UV max_div_8 = UV_MAX / 8; bool allow_underscores = *flags & PERL_SCAN_ALLOW_UNDERSCORES; bool overflowed = FALSE; for (; len-- && *s; s++) { /* gcc 2.95 optimiser not smart enough to figure that this subtraction out front allows slicker code. */ int digit = *s - '0'; if (digit >= 0 && digit <= 7) { /* Write it in this wonky order with a goto to attempt to get the compiler to make the common case integer-only loop pretty tight. */ redo: if (!overflowed) { if (value <= max_div_8) { value = (value << 3) | digit; continue; } /* Bah. We're just overflowed. */ warn("Integer overflow in octal number"); overflowed = TRUE; value_nv = (NV) value; } value_nv *= 8.0; /* If an NV has not enough bits in its mantissa to * represent a UV this summing of small low-order numbers * is a waste of time (because the NV cannot preserve * the low-order bits anyway): we could just remember when * did we overflow and in the end just multiply value_nv by the * right amount of 8-tuples. */ value_nv += (NV)digit; continue; } if (digit == ('_' - '0') && len && allow_underscores && (digit = s[1] - '0') && (digit >= 0 && digit <= 7)) { --len; ++s; goto redo; } /* Allow \octal to work the DWIM way (that is, stop scanning * as soon as non-octal characters are seen, complain only iff * someone seems to want to use the digits eight and nine). 
*/ if (digit == 8 || digit == 9) { if (!(*flags & PERL_SCAN_SILENT_ILLDIGIT)) warn("Illegal octal digit '%c' ignored", *s); } break; } if ( ( overflowed && value_nv > 4294967295.0) #if UVSIZE > 4 || (!overflowed && value > 0xffffffff ) #endif ) { warn("Octal number > 037777777777 non-portable"); } *len_p = s - start; if (!overflowed) { *flags = 0; return value; } *flags = PERL_SCAN_GREATER_THAN_UV_MAX; if (result) *result = value_nv; return UV_MAX; } #endif #endif #if !defined(my_snprintf) #if defined(NEED_my_snprintf) static int DPPP_(my_my_snprintf)(char * buffer, const Size_t len, const char * format, ...); static #else extern int DPPP_(my_my_snprintf)(char * buffer, const Size_t len, const char * format, ...); #endif #define my_snprintf DPPP_(my_my_snprintf) #define Perl_my_snprintf DPPP_(my_my_snprintf) #if defined(NEED_my_snprintf) || defined(NEED_my_snprintf_GLOBAL) int DPPP_(my_my_snprintf)(char *buffer, const Size_t len, const char *format, ...) { dTHX; int retval; va_list ap; va_start(ap, format); #ifdef HAS_VSNPRINTF retval = vsnprintf(buffer, len, format, ap); #else retval = vsprintf(buffer, format, ap); #endif va_end(ap); if (retval < 0 || (len > 0 && (Size_t)retval >= len)) Perl_croak(aTHX_ "panic: my_snprintf buffer overflow"); return retval; } #endif #endif #if !defined(my_sprintf) #if defined(NEED_my_sprintf) static int DPPP_(my_my_sprintf)(char * buffer, const char * pat, ...); static #else extern int DPPP_(my_my_sprintf)(char * buffer, const char * pat, ...); #endif #define my_sprintf DPPP_(my_my_sprintf) #define Perl_my_sprintf DPPP_(my_my_sprintf) #if defined(NEED_my_sprintf) || defined(NEED_my_sprintf_GLOBAL) int DPPP_(my_my_sprintf)(char *buffer, const char* pat, ...) { va_list args; va_start(args, pat); vsprintf(buffer, pat, args); va_end(args); return strlen(buffer); } #endif #endif #ifdef NO_XSLOCKS # ifdef dJMPENV # define dXCPT dJMPENV; int rEtV = 0 # define XCPT_TRY_START JMPENV_PUSH(rEtV); if (rEtV == 0) # define XCPT_TRY_END JMPENV_POP; # define XCPT_CATCH if (rEtV != 0) # define XCPT_RETHROW JMPENV_JUMP(rEtV) # else # define dXCPT Sigjmp_buf oldTOP; int rEtV = 0 # define XCPT_TRY_START Copy(top_env, oldTOP, 1, Sigjmp_buf); rEtV = Sigsetjmp(top_env, 1); if (rEtV == 0) # define XCPT_TRY_END Copy(oldTOP, top_env, 1, Sigjmp_buf); # define XCPT_CATCH if (rEtV != 0) # define XCPT_RETHROW Siglongjmp(top_env, rEtV) # endif #endif #if !defined(my_strlcat) #if defined(NEED_my_strlcat) static Size_t DPPP_(my_my_strlcat)(char * dst, const char * src, Size_t size); static #else extern Size_t DPPP_(my_my_strlcat)(char * dst, const char * src, Size_t size); #endif #define my_strlcat DPPP_(my_my_strlcat) #define Perl_my_strlcat DPPP_(my_my_strlcat) #if defined(NEED_my_strlcat) || defined(NEED_my_strlcat_GLOBAL) Size_t DPPP_(my_my_strlcat)(char *dst, const char *src, Size_t size) { Size_t used, length, copy; used = strlen(dst); length = strlen(src); if (size > 0 && used < size - 1) { copy = (length >= size - used) ? 
size - used - 1 : length; memcpy(dst + used, src, copy); dst[used + copy] = '\0'; } return used + length; } #endif #endif #if !defined(my_strlcpy) #if defined(NEED_my_strlcpy) static Size_t DPPP_(my_my_strlcpy)(char * dst, const char * src, Size_t size); static #else extern Size_t DPPP_(my_my_strlcpy)(char * dst, const char * src, Size_t size); #endif #define my_strlcpy DPPP_(my_my_strlcpy) #define Perl_my_strlcpy DPPP_(my_my_strlcpy) #if defined(NEED_my_strlcpy) || defined(NEED_my_strlcpy_GLOBAL) Size_t DPPP_(my_my_strlcpy)(char *dst, const char *src, Size_t size) { Size_t length, copy; length = strlen(src); if (size > 0) { copy = (length >= size) ? size - 1 : length; memcpy(dst, src, copy); dst[copy] = '\0'; } return length; } #endif #endif #ifndef PERL_PV_ESCAPE_QUOTE # define PERL_PV_ESCAPE_QUOTE 0x0001 #endif #ifndef PERL_PV_PRETTY_QUOTE # define PERL_PV_PRETTY_QUOTE PERL_PV_ESCAPE_QUOTE #endif #ifndef PERL_PV_PRETTY_ELLIPSES # define PERL_PV_PRETTY_ELLIPSES 0x0002 #endif #ifndef PERL_PV_PRETTY_LTGT # define PERL_PV_PRETTY_LTGT 0x0004 #endif #ifndef PERL_PV_ESCAPE_FIRSTCHAR # define PERL_PV_ESCAPE_FIRSTCHAR 0x0008 #endif #ifndef PERL_PV_ESCAPE_UNI # define PERL_PV_ESCAPE_UNI 0x0100 #endif #ifndef PERL_PV_ESCAPE_UNI_DETECT # define PERL_PV_ESCAPE_UNI_DETECT 0x0200 #endif #ifndef PERL_PV_ESCAPE_ALL # define PERL_PV_ESCAPE_ALL 0x1000 #endif #ifndef PERL_PV_ESCAPE_NOBACKSLASH # define PERL_PV_ESCAPE_NOBACKSLASH 0x2000 #endif #ifndef PERL_PV_ESCAPE_NOCLEAR # define PERL_PV_ESCAPE_NOCLEAR 0x4000 #endif #ifndef PERL_PV_ESCAPE_RE # define PERL_PV_ESCAPE_RE 0x8000 #endif #ifndef PERL_PV_PRETTY_NOCLEAR # define PERL_PV_PRETTY_NOCLEAR PERL_PV_ESCAPE_NOCLEAR #endif #ifndef PERL_PV_PRETTY_DUMP # define PERL_PV_PRETTY_DUMP PERL_PV_PRETTY_ELLIPSES|PERL_PV_PRETTY_QUOTE #endif #ifndef PERL_PV_PRETTY_REGPROP # define PERL_PV_PRETTY_REGPROP PERL_PV_PRETTY_ELLIPSES|PERL_PV_PRETTY_LTGT|PERL_PV_ESCAPE_RE #endif /* Hint: pv_escape * Note that unicode functionality is only backported to * those perl versions that support it. For older perl * versions, the implementation will fall back to bytes. */ #ifndef pv_escape #if defined(NEED_pv_escape) static char * DPPP_(my_pv_escape)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, STRLEN * const escaped, const U32 flags); static #else extern char * DPPP_(my_pv_escape)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, STRLEN * const escaped, const U32 flags); #endif #ifdef pv_escape # undef pv_escape #endif #define pv_escape(a,b,c,d,e,f) DPPP_(my_pv_escape)(aTHX_ a,b,c,d,e,f) #define Perl_pv_escape DPPP_(my_pv_escape) #if defined(NEED_pv_escape) || defined(NEED_pv_escape_GLOBAL) char * DPPP_(my_pv_escape)(pTHX_ SV *dsv, char const * const str, const STRLEN count, const STRLEN max, STRLEN * const escaped, const U32 flags) { const char esc = flags & PERL_PV_ESCAPE_RE ? '%' : '\\'; const char dq = flags & PERL_PV_ESCAPE_QUOTE ? '"' : esc; char octbuf[32] = "%123456789ABCDF"; STRLEN wrote = 0; STRLEN chsize = 0; STRLEN readsize = 1; #if defined(is_utf8_string) && defined(utf8_to_uvchr) bool isuni = flags & PERL_PV_ESCAPE_UNI ? 
1 : 0; #endif const char *pv = str; const char * const end = pv + count; octbuf[0] = esc; if (!(flags & PERL_PV_ESCAPE_NOCLEAR)) sv_setpvs(dsv, ""); #if defined(is_utf8_string) && defined(utf8_to_uvchr) if ((flags & PERL_PV_ESCAPE_UNI_DETECT) && is_utf8_string((U8*)pv, count)) isuni = 1; #endif for (; pv < end && (!max || wrote < max) ; pv += readsize) { const UV u = #if defined(is_utf8_string) && defined(utf8_to_uvchr) isuni ? utf8_to_uvchr((U8*)pv, &readsize) : #endif (U8)*pv; const U8 c = (U8)u & 0xFF; if (u > 255 || (flags & PERL_PV_ESCAPE_ALL)) { if (flags & PERL_PV_ESCAPE_FIRSTCHAR) chsize = my_snprintf(octbuf, sizeof octbuf, "%" UVxf, u); else chsize = my_snprintf(octbuf, sizeof octbuf, "%cx{%" UVxf "}", esc, u); } else if (flags & PERL_PV_ESCAPE_NOBACKSLASH) { chsize = 1; } else { if (c == dq || c == esc || !isPRINT(c)) { chsize = 2; switch (c) { case '\\' : /* fallthrough */ case '%' : if (c == esc) octbuf[1] = esc; else chsize = 1; break; case '\v' : octbuf[1] = 'v'; break; case '\t' : octbuf[1] = 't'; break; case '\r' : octbuf[1] = 'r'; break; case '\n' : octbuf[1] = 'n'; break; case '\f' : octbuf[1] = 'f'; break; case '"' : if (dq == '"') octbuf[1] = '"'; else chsize = 1; break; default: chsize = my_snprintf(octbuf, sizeof octbuf, pv < end && isDIGIT((U8)*(pv+readsize)) ? "%c%03o" : "%c%o", esc, c); } } else { chsize = 1; } } if (max && wrote + chsize > max) { break; } else if (chsize > 1) { sv_catpvn(dsv, octbuf, chsize); wrote += chsize; } else { char tmp[2]; my_snprintf(tmp, sizeof tmp, "%c", c); sv_catpvn(dsv, tmp, 1); wrote++; } if (flags & PERL_PV_ESCAPE_FIRSTCHAR) break; } if (escaped != NULL) *escaped= pv - str; return SvPVX(dsv); } #endif #endif #ifndef pv_pretty #if defined(NEED_pv_pretty) static char * DPPP_(my_pv_pretty)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, char const * const start_color, char const * const end_color, const U32 flags); static #else extern char * DPPP_(my_pv_pretty)(pTHX_ SV * dsv, char const * const str, const STRLEN count, const STRLEN max, char const * const start_color, char const * const end_color, const U32 flags); #endif #ifdef pv_pretty # undef pv_pretty #endif #define pv_pretty(a,b,c,d,e,f,g) DPPP_(my_pv_pretty)(aTHX_ a,b,c,d,e,f,g) #define Perl_pv_pretty DPPP_(my_pv_pretty) #if defined(NEED_pv_pretty) || defined(NEED_pv_pretty_GLOBAL) char * DPPP_(my_pv_pretty)(pTHX_ SV *dsv, char const * const str, const STRLEN count, const STRLEN max, char const * const start_color, char const * const end_color, const U32 flags) { const U8 dq = (flags & PERL_PV_PRETTY_QUOTE) ? 
'"' : '%'; STRLEN escaped; if (!(flags & PERL_PV_PRETTY_NOCLEAR)) sv_setpvs(dsv, ""); if (dq == '"') sv_catpvs(dsv, "\""); else if (flags & PERL_PV_PRETTY_LTGT) sv_catpvs(dsv, "<"); if (start_color != NULL) sv_catpv(dsv, D_PPP_CONSTPV_ARG(start_color)); pv_escape(dsv, str, count, max, &escaped, flags | PERL_PV_ESCAPE_NOCLEAR); if (end_color != NULL) sv_catpv(dsv, D_PPP_CONSTPV_ARG(end_color)); if (dq == '"') sv_catpvs(dsv, "\""); else if (flags & PERL_PV_PRETTY_LTGT) sv_catpvs(dsv, ">"); if ((flags & PERL_PV_PRETTY_ELLIPSES) && escaped < count) sv_catpvs(dsv, "..."); return SvPVX(dsv); } #endif #endif #ifndef pv_display #if defined(NEED_pv_display) static char * DPPP_(my_pv_display)(pTHX_ SV * dsv, const char * pv, STRLEN cur, STRLEN len, STRLEN pvlim); static #else extern char * DPPP_(my_pv_display)(pTHX_ SV * dsv, const char * pv, STRLEN cur, STRLEN len, STRLEN pvlim); #endif #ifdef pv_display # undef pv_display #endif #define pv_display(a,b,c,d,e) DPPP_(my_pv_display)(aTHX_ a,b,c,d,e) #define Perl_pv_display DPPP_(my_pv_display) #if defined(NEED_pv_display) || defined(NEED_pv_display_GLOBAL) char * DPPP_(my_pv_display)(pTHX_ SV *dsv, const char *pv, STRLEN cur, STRLEN len, STRLEN pvlim) { pv_pretty(dsv, pv, cur, pvlim, NULL, NULL, PERL_PV_PRETTY_DUMP); if (len > cur && pv[cur] == '\0') sv_catpvs(dsv, "\\0"); return SvPVX(dsv); } #endif #endif #endif /* _P_P_PORTABILITY_H_ */ /* End of File ppport.h */ MongoDB-v1.2.2/pstdint.h000644 000765 000024 00000063432 12651754051 015275 0ustar00davidstaff000000 000000 /* A portable stdint.h **************************************************************************** * BSD License: **************************************************************************** * * Copyright (c) 2005-2011 Paul Hsieh * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. The name of the author may not be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * **************************************************************************** * * Version 0.1.12 * * The ANSI C standard committee, for the C99 standard, specified the * inclusion of a new standard include file called stdint.h. 
This is * a very useful and long desired include file which contains several * very precise definitions for integer scalar types that are * critically important for making portable several classes of * applications including cryptography, hashing, variable length * integer libraries and so on. But for most developers it's likely * useful just for programming sanity. * * The problem is that most compiler vendors have decided not to * implement the C99 standard, and the next C++ language standard * (which has a lot more mindshare these days) will be a long time in * coming and it's unknown whether or not it will include stdint.h or * how much adoption it will have. Either way, it will be a long time * before all compilers come with a stdint.h and it also does nothing * for the extremely large number of compilers available today which * do not include this file, or anything comparable to it. * * So that's what this file is all about. It's an attempt to build a * single universal include file that works on as many platforms as * possible to deliver what stdint.h is supposed to. A few things * that should be noted about this file: * * 1) It is not guaranteed to be portable and/or present an identical * interface on all platforms. The extreme variability of the * ANSI C standard makes this an impossibility right from the * very get go. It's really only meant to be useful for the vast * majority of platforms that possess the capability of * implementing usefully and precisely defined, standard sized * integer scalars. Systems which are not intrinsically 2s * complement may produce invalid constants. * * 2) There is an unavoidable use of non-reserved symbols. * * 3) Other standard include files are invoked. * * 4) This file may come in conflict with future platforms that do * include stdint.h. The hope is that one or the other can be * used with no real difference. * * 5) In the current version, if your platform can't represent * int32_t, int16_t and int8_t, it just dumps out with a compiler * error. * * 6) 64 bit integers may or may not be defined. Test for their * presence with the test: #ifdef INT64_MAX or #ifdef UINT64_MAX. * Note that this is different from the C99 specification which * requires the existence of 64 bit support in the compiler. If * this is not defined for your platform, yet it is capable of * dealing with 64 bits then it is because this file has not yet * been extended to cover all of your system's capabilities. * * 7) (u)intptr_t may or may not be defined. Test for its presence * with the test: #ifdef PTRDIFF_MAX. If this is not defined * for your platform, then it is because this file has not yet * been extended to cover all of your system's capabilities, not * because it's optional. * * 8) The following might not be defined even if your platform is * capable of defining it: * * WCHAR_MIN * WCHAR_MAX * (u)int64_t * PTRDIFF_MIN * PTRDIFF_MAX * (u)intptr_t * * 9) The following have not been defined: * * WINT_MIN * WINT_MAX * * 10) The criteria for defining (u)int_least(*)_t isn't clear, * except for systems which don't have a type that precisely * defines 8, 16, or 32 bit types (which this include file does * not support anyway). Default definitions have been given. * * 11) The criteria for defining (u)int_fast(*)_t isn't something I * would trust to any particular compiler vendor or the ANSI C * committee.
It is well known that "compatible systems" are * commonly created that have very different performance * characteristics from the systems they are compatible with, * especially those whose vendors make both the compiler and the * system. Default definitions have been given, but it's strongly * recommended that users never use these definitions for any * reason (they do *NOT* deliver any serious guarantee of * improved performance -- not in this file, nor any vendor's * stdint.h). * * 12) The following macros: * * PRINTF_INTMAX_MODIFIER * PRINTF_INT64_MODIFIER * PRINTF_INT32_MODIFIER * PRINTF_INT16_MODIFIER * PRINTF_LEAST64_MODIFIER * PRINTF_LEAST32_MODIFIER * PRINTF_LEAST16_MODIFIER * PRINTF_INTPTR_MODIFIER * * are strings which have been defined as the modifiers required * for the "d", "u" and "x" printf formats to correctly output * (u)intmax_t, (u)int64_t, (u)int32_t, (u)int16_t, (u)least64_t, * (u)least32_t, (u)least16_t and (u)intptr_t types respectively. * PRINTF_INTPTR_MODIFIER is not defined for some systems which * provide their own stdint.h. PRINTF_INT64_MODIFIER is not * defined if INT64_MAX is not defined. These are an extension * beyond what C99 specifies must be in stdint.h. * * In addition, the following macros are defined: * * PRINTF_INTMAX_HEX_WIDTH * PRINTF_INT64_HEX_WIDTH * PRINTF_INT32_HEX_WIDTH * PRINTF_INT16_HEX_WIDTH * PRINTF_INT8_HEX_WIDTH * PRINTF_INTMAX_DEC_WIDTH * PRINTF_INT64_DEC_WIDTH * PRINTF_INT32_DEC_WIDTH * PRINTF_INT16_DEC_WIDTH * PRINTF_INT8_DEC_WIDTH * * These specify the maximum number of characters required to * print a number of that type in either hexadecimal or decimal. * These are an extension beyond what C99 specifies must be in * stdint.h. * * Compilers tested (all with 0 warnings at their highest respective * settings): Borland Turbo C 2.0, WATCOM C/C++ 11.0 (16 bits and 32 * bits), Microsoft Visual C++ 6.0 (32 bit), Microsoft Visual Studio * .net (VC7), Intel C++ 4.0, GNU gcc v3.3.3 * * This file should be considered a work in progress. Suggestions for * improvements, especially those which increase coverage, are strongly * encouraged. * * Acknowledgements * * The following people have made significant contributions to the * development and testing of this file: * * Chris Howie * John Steele Scott * Dave Thorup * John Dill * */ #include <stddef.h> #include <limits.h> #include <signal.h> /* * For gcc with _STDINT_H, fill in the PRINTF_INT*_MODIFIER macros, and * do nothing else. On the Mac OS X version of gcc this is _STDINT_H_.
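 *
 * As a rough illustration of how the PRINTF_* macros documented above are
 * meant to be used (an illustrative sketch only, assuming a 64 bit type was
 * detected so that INT64_MAX, UINT64_C and PRINTF_INT64_MODIFIER all end up
 * defined), the modifier and width strings are simply pasted into printf
 * format strings by string-literal concatenation:
 *
 *     #include <stdio.h>
 *     #include "pstdint.h"
 *
 *     int main (void)
 *     {
 *     #ifdef INT64_MAX
 *         uint64_t u = UINT64_C (1234567890);
 *         printf ("%" PRINTF_INT64_MODIFIER "u\n", u);    // decimal
 *         printf ("%0" PRINTF_INT64_HEX_WIDTH PRINTF_INT64_MODIFIER "x\n",
 *                 u);                                      // zero padded hex
 *     #endif
 *         return 0;
 *     }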
*/ #if ((defined(__STDC__) && __STDC__ && __STDC_VERSION__ >= 199901L) || (defined (__WATCOMC__) && (defined (_STDINT_H_INCLUDED) || __WATCOMC__ >= 1250)) || (defined(__GNUC__) && (defined(_STDINT_H) || defined(_STDINT_H_) || defined (__UINT_FAST64_TYPE__)) )) && !defined (_PSTDINT_H_INCLUDED) #include #define _PSTDINT_H_INCLUDED # ifndef PRINTF_INT64_MODIFIER # define PRINTF_INT64_MODIFIER "ll" # endif # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "l" # endif # ifndef PRINTF_INT16_MODIFIER # define PRINTF_INT16_MODIFIER "h" # endif # ifndef PRINTF_INTMAX_MODIFIER # define PRINTF_INTMAX_MODIFIER PRINTF_INT64_MODIFIER # endif # ifndef PRINTF_INT64_HEX_WIDTH # define PRINTF_INT64_HEX_WIDTH "16" # endif # ifndef PRINTF_INT32_HEX_WIDTH # define PRINTF_INT32_HEX_WIDTH "8" # endif # ifndef PRINTF_INT16_HEX_WIDTH # define PRINTF_INT16_HEX_WIDTH "4" # endif # ifndef PRINTF_INT8_HEX_WIDTH # define PRINTF_INT8_HEX_WIDTH "2" # endif # ifndef PRINTF_INT64_DEC_WIDTH # define PRINTF_INT64_DEC_WIDTH "20" # endif # ifndef PRINTF_INT32_DEC_WIDTH # define PRINTF_INT32_DEC_WIDTH "10" # endif # ifndef PRINTF_INT16_DEC_WIDTH # define PRINTF_INT16_DEC_WIDTH "5" # endif # ifndef PRINTF_INT8_DEC_WIDTH # define PRINTF_INT8_DEC_WIDTH "3" # endif # ifndef PRINTF_INTMAX_HEX_WIDTH # define PRINTF_INTMAX_HEX_WIDTH PRINTF_INT64_HEX_WIDTH # endif # ifndef PRINTF_INTMAX_DEC_WIDTH # define PRINTF_INTMAX_DEC_WIDTH PRINTF_INT64_DEC_WIDTH # endif /* * Something really weird is going on with Open Watcom. Just pull some of * these duplicated definitions from Open Watcom's stdint.h file for now. */ # if defined (__WATCOMC__) && __WATCOMC__ >= 1250 # if !defined (INT64_C) # define INT64_C(x) (x + (INT64_MAX - INT64_MAX)) # endif # if !defined (UINT64_C) # define UINT64_C(x) (x + (UINT64_MAX - UINT64_MAX)) # endif # if !defined (INT32_C) # define INT32_C(x) (x + (INT32_MAX - INT32_MAX)) # endif # if !defined (UINT32_C) # define UINT32_C(x) (x + (UINT32_MAX - UINT32_MAX)) # endif # if !defined (INT16_C) # define INT16_C(x) (x) # endif # if !defined (UINT16_C) # define UINT16_C(x) (x) # endif # if !defined (INT8_C) # define INT8_C(x) (x) # endif # if !defined (UINT8_C) # define UINT8_C(x) (x) # endif # if !defined (UINT64_MAX) # define UINT64_MAX 18446744073709551615ULL # endif # if !defined (INT64_MAX) # define INT64_MAX 9223372036854775807LL # endif # if !defined (UINT32_MAX) # define UINT32_MAX 4294967295UL # endif # if !defined (INT32_MAX) # define INT32_MAX 2147483647L # endif # if !defined (INTMAX_MAX) # define INTMAX_MAX INT64_MAX # endif # if !defined (INTMAX_MIN) # define INTMAX_MIN INT64_MIN # endif # endif #endif #ifndef _PSTDINT_H_INCLUDED #define _PSTDINT_H_INCLUDED #ifndef SIZE_MAX # define SIZE_MAX (~(size_t)0) #endif /* * Deduce the type assignments from limits.h under the assumption that * integer sizes in bits are powers of 2, and follow the ANSI * definitions. 
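 *
 * For example, on a typical platform where limits.h reports UCHAR_MAX == 0xff,
 * USHRT_MAX == 0xffff and an int wider than 16 bits, the blocks below settle
 * on unsigned char for uint8_t and unsigned short for uint16_t, while uint32_t
 * becomes whichever of unsigned long or unsigned int has a 0xffffffff maximum.
 * A purely illustrative compile time check that a new port could use to
 * confirm the deduced sizes (not part of the deduction itself; the typedef
 * names are arbitrary):
 *
 *     typedef char pstdint_check_uint16[ sizeof (uint16_t) == 2 ? 1 : -1 ];
 *     typedef char pstdint_check_uint32[ sizeof (uint32_t) == 4 ? 1 : -1 ];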
*/ #ifndef UINT8_MAX # define UINT8_MAX 0xff #endif #ifndef uint8_t # if (UCHAR_MAX == UINT8_MAX) || defined (S_SPLINT_S) typedef unsigned char uint8_t; # define UINT8_C(v) ((uint8_t) v) # else # error "Platform not supported" # endif #endif #ifndef INT8_MAX # define INT8_MAX 0x7f #endif #ifndef INT8_MIN # define INT8_MIN INT8_C(0x80) #endif #ifndef int8_t # if (SCHAR_MAX == INT8_MAX) || defined (S_SPLINT_S) typedef signed char int8_t; # define INT8_C(v) ((int8_t) v) # else # error "Platform not supported" # endif #endif #ifndef UINT16_MAX # define UINT16_MAX 0xffff #endif #ifndef uint16_t #if (UINT_MAX == UINT16_MAX) || defined (S_SPLINT_S) typedef unsigned int uint16_t; # ifndef PRINTF_INT16_MODIFIER # define PRINTF_INT16_MODIFIER "" # endif # define UINT16_C(v) ((uint16_t) (v)) #elif (USHRT_MAX == UINT16_MAX) typedef unsigned short uint16_t; # define UINT16_C(v) ((uint16_t) (v)) # ifndef PRINTF_INT16_MODIFIER # define PRINTF_INT16_MODIFIER "h" # endif #else #error "Platform not supported" #endif #endif #ifndef INT16_MAX # define INT16_MAX 0x7fff #endif #ifndef INT16_MIN # define INT16_MIN INT16_C(0x8000) #endif #ifndef int16_t #if (INT_MAX == INT16_MAX) || defined (S_SPLINT_S) typedef signed int int16_t; # define INT16_C(v) ((int16_t) (v)) # ifndef PRINTF_INT16_MODIFIER # define PRINTF_INT16_MODIFIER "" # endif #elif (SHRT_MAX == INT16_MAX) typedef signed short int16_t; # define INT16_C(v) ((int16_t) (v)) # ifndef PRINTF_INT16_MODIFIER # define PRINTF_INT16_MODIFIER "h" # endif #else #error "Platform not supported" #endif #endif #ifndef UINT32_MAX # define UINT32_MAX (0xffffffffUL) #endif #ifndef uint32_t #if (ULONG_MAX == UINT32_MAX) || defined (S_SPLINT_S) typedef unsigned long uint32_t; # define UINT32_C(v) v ## UL # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "l" # endif #elif (UINT_MAX == UINT32_MAX) typedef unsigned int uint32_t; # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "" # endif # define UINT32_C(v) v ## U #elif (USHRT_MAX == UINT32_MAX) typedef unsigned short uint32_t; # define UINT32_C(v) ((unsigned short) (v)) # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "" # endif #else #error "Platform not supported" #endif #endif #ifndef INT32_MAX # define INT32_MAX (0x7fffffffL) #endif #ifndef INT32_MIN # define INT32_MIN INT32_C(0x80000000) #endif #ifndef int32_t #if (LONG_MAX == INT32_MAX) || defined (S_SPLINT_S) typedef signed long int32_t; # define INT32_C(v) v ## L # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "l" # endif #elif (INT_MAX == INT32_MAX) typedef signed int int32_t; # define INT32_C(v) v # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "" # endif #elif (SHRT_MAX == INT32_MAX) typedef signed short int32_t; # define INT32_C(v) ((short) (v)) # ifndef PRINTF_INT32_MODIFIER # define PRINTF_INT32_MODIFIER "" # endif #else #error "Platform not supported" #endif #endif /* * The macro stdint_int64_defined is temporarily used to record * whether or not 64 integer support is available. It must be * defined for any 64 integer extensions for new platforms that are * added. 
*/ #undef stdint_int64_defined #if (defined(__STDC__) && defined(__STDC_VERSION__)) || defined (S_SPLINT_S) # if (__STDC__ && __STDC_VERSION__ >= 199901L) || defined (S_SPLINT_S) # define stdint_int64_defined typedef long long int64_t; typedef unsigned long long uint64_t; # define UINT64_C(v) v ## ULL # define INT64_C(v) v ## LL # ifndef PRINTF_INT64_MODIFIER # define PRINTF_INT64_MODIFIER "ll" # endif # endif #endif #if !defined (stdint_int64_defined) # if defined(__GNUC__) # define stdint_int64_defined __extension__ typedef long long int64_t; __extension__ typedef unsigned long long uint64_t; # define UINT64_C(v) v ## ULL # define INT64_C(v) v ## LL # ifndef PRINTF_INT64_MODIFIER # define PRINTF_INT64_MODIFIER "ll" # endif # elif defined(__MWERKS__) || defined (__SUNPRO_C) || defined (__SUNPRO_CC) || defined (__APPLE_CC__) || defined (_LONG_LONG) || defined (_CRAYC) || defined (S_SPLINT_S) # define stdint_int64_defined typedef long long int64_t; typedef unsigned long long uint64_t; # define UINT64_C(v) v ## ULL # define INT64_C(v) v ## LL # ifndef PRINTF_INT64_MODIFIER # define PRINTF_INT64_MODIFIER "ll" # endif # elif (defined(__WATCOMC__) && defined(__WATCOM_INT64__)) || (defined(_MSC_VER) && _INTEGRAL_MAX_BITS >= 64) || (defined (__BORLANDC__) && __BORLANDC__ > 0x460) || defined (__alpha) || defined (__DECC) # define stdint_int64_defined typedef __int64 int64_t; typedef unsigned __int64 uint64_t; # define UINT64_C(v) v ## UI64 # define INT64_C(v) v ## I64 # ifndef PRINTF_INT64_MODIFIER # define PRINTF_INT64_MODIFIER "I64" # endif # endif #endif #if !defined (LONG_LONG_MAX) && defined (INT64_C) # define LONG_LONG_MAX INT64_C (9223372036854775807) #endif #ifndef ULONG_LONG_MAX # define ULONG_LONG_MAX UINT64_C (18446744073709551615) #endif #if !defined (INT64_MAX) && defined (INT64_C) # define INT64_MAX INT64_C (9223372036854775807) #endif #if !defined (INT64_MIN) && defined (INT64_C) # define INT64_MIN INT64_C (-9223372036854775808) #endif #if !defined (UINT64_MAX) && defined (INT64_C) # define UINT64_MAX UINT64_C (18446744073709551615) #endif /* * Width of hexadecimal for number field. */ #ifndef PRINTF_INT64_HEX_WIDTH # define PRINTF_INT64_HEX_WIDTH "16" #endif #ifndef PRINTF_INT32_HEX_WIDTH # define PRINTF_INT32_HEX_WIDTH "8" #endif #ifndef PRINTF_INT16_HEX_WIDTH # define PRINTF_INT16_HEX_WIDTH "4" #endif #ifndef PRINTF_INT8_HEX_WIDTH # define PRINTF_INT8_HEX_WIDTH "2" #endif #ifndef PRINTF_INT64_DEC_WIDTH # define PRINTF_INT64_DEC_WIDTH "20" #endif #ifndef PRINTF_INT32_DEC_WIDTH # define PRINTF_INT32_DEC_WIDTH "10" #endif #ifndef PRINTF_INT16_DEC_WIDTH # define PRINTF_INT16_DEC_WIDTH "5" #endif #ifndef PRINTF_INT8_DEC_WIDTH # define PRINTF_INT8_DEC_WIDTH "3" #endif /* * Ok, lets not worry about 128 bit integers for now. Moore's law says * we don't need to worry about that until about 2040 at which point * we'll have bigger things to worry about. 
*/ #ifdef stdint_int64_defined typedef int64_t intmax_t; typedef uint64_t uintmax_t; # define INTMAX_MAX INT64_MAX # define INTMAX_MIN INT64_MIN # define UINTMAX_MAX UINT64_MAX # define UINTMAX_C(v) UINT64_C(v) # define INTMAX_C(v) INT64_C(v) # ifndef PRINTF_INTMAX_MODIFIER # define PRINTF_INTMAX_MODIFIER PRINTF_INT64_MODIFIER # endif # ifndef PRINTF_INTMAX_HEX_WIDTH # define PRINTF_INTMAX_HEX_WIDTH PRINTF_INT64_HEX_WIDTH # endif # ifndef PRINTF_INTMAX_DEC_WIDTH # define PRINTF_INTMAX_DEC_WIDTH PRINTF_INT64_DEC_WIDTH # endif #else typedef int32_t intmax_t; typedef uint32_t uintmax_t; # define INTMAX_MAX INT32_MAX # define UINTMAX_MAX UINT32_MAX # define UINTMAX_C(v) UINT32_C(v) # define INTMAX_C(v) INT32_C(v) # ifndef PRINTF_INTMAX_MODIFIER # define PRINTF_INTMAX_MODIFIER PRINTF_INT32_MODIFIER # endif # ifndef PRINTF_INTMAX_HEX_WIDTH # define PRINTF_INTMAX_HEX_WIDTH PRINTF_INT32_HEX_WIDTH # endif # ifndef PRINTF_INTMAX_DEC_WIDTH # define PRINTF_INTMAX_DEC_WIDTH PRINTF_INT32_DEC_WIDTH # endif #endif /* * Because this file currently only supports platforms which have * precise powers of 2 as bit sizes for the default integers, the * least definitions are all trivial. Its possible that a future * version of this file could have different definitions. */ #ifndef stdint_least_defined typedef int8_t int_least8_t; typedef uint8_t uint_least8_t; typedef int16_t int_least16_t; typedef uint16_t uint_least16_t; typedef int32_t int_least32_t; typedef uint32_t uint_least32_t; # define PRINTF_LEAST32_MODIFIER PRINTF_INT32_MODIFIER # define PRINTF_LEAST16_MODIFIER PRINTF_INT16_MODIFIER # define UINT_LEAST8_MAX UINT8_MAX # define INT_LEAST8_MAX INT8_MAX # define UINT_LEAST16_MAX UINT16_MAX # define INT_LEAST16_MAX INT16_MAX # define UINT_LEAST32_MAX UINT32_MAX # define INT_LEAST32_MAX INT32_MAX # define INT_LEAST8_MIN INT8_MIN # define INT_LEAST16_MIN INT16_MIN # define INT_LEAST32_MIN INT32_MIN # ifdef stdint_int64_defined typedef int64_t int_least64_t; typedef uint64_t uint_least64_t; # define PRINTF_LEAST64_MODIFIER PRINTF_INT64_MODIFIER # define UINT_LEAST64_MAX UINT64_MAX # define INT_LEAST64_MAX INT64_MAX # define INT_LEAST64_MIN INT64_MIN # endif #endif #undef stdint_least_defined /* * The ANSI C committee pretending to know or specify anything about * performance is the epitome of misguided arrogance. The mandate of * this file is to *ONLY* ever support that absolute minimum * definition of the fast integer types, for compatibility purposes. * No extensions, and no attempt to suggest what may or may not be a * faster integer type will ever be made in this file. Developers are * warned to stay away from these types when using this or any other * stdint.h. 
*/ typedef int_least8_t int_fast8_t; typedef uint_least8_t uint_fast8_t; typedef int_least16_t int_fast16_t; typedef uint_least16_t uint_fast16_t; typedef int_least32_t int_fast32_t; typedef uint_least32_t uint_fast32_t; #define UINT_FAST8_MAX UINT_LEAST8_MAX #define INT_FAST8_MAX INT_LEAST8_MAX #define UINT_FAST16_MAX UINT_LEAST16_MAX #define INT_FAST16_MAX INT_LEAST16_MAX #define UINT_FAST32_MAX UINT_LEAST32_MAX #define INT_FAST32_MAX INT_LEAST32_MAX #define INT_FAST8_MIN INT_LEAST8_MIN #define INT_FAST16_MIN INT_LEAST16_MIN #define INT_FAST32_MIN INT_LEAST32_MIN #ifdef stdint_int64_defined typedef int_least64_t int_fast64_t; typedef uint_least64_t uint_fast64_t; # define UINT_FAST64_MAX UINT_LEAST64_MAX # define INT_FAST64_MAX INT_LEAST64_MAX # define INT_FAST64_MIN INT_LEAST64_MIN #endif #undef stdint_int64_defined /* * Whatever piecemeal, per compiler thing we can do about the wchar_t * type limits. */ #if defined(__WATCOMC__) || defined(_MSC_VER) || defined (__GNUC__) # include # ifndef WCHAR_MIN # define WCHAR_MIN 0 # endif # ifndef WCHAR_MAX # define WCHAR_MAX ((wchar_t)-1) # endif #endif /* * Whatever piecemeal, per compiler/platform thing we can do about the * (u)intptr_t types and limits. */ #if defined (_MSC_VER) && defined (_UINTPTR_T_DEFINED) # define STDINT_H_UINTPTR_T_DEFINED #endif #ifndef STDINT_H_UINTPTR_T_DEFINED # if defined (__alpha__) || defined (__ia64__) || defined (__x86_64__) || defined (_WIN64) # define stdint_intptr_bits 64 # elif defined (__WATCOMC__) || defined (__TURBOC__) # if defined(__TINY__) || defined(__SMALL__) || defined(__MEDIUM__) # define stdint_intptr_bits 16 # else # define stdint_intptr_bits 32 # endif # elif defined (__i386__) || defined (_WIN32) || defined (WIN32) # define stdint_intptr_bits 32 # elif defined (__INTEL_COMPILER) /* TODO -- what did Intel do about x86-64? */ # endif # ifdef stdint_intptr_bits # define stdint_intptr_glue3_i(a,b,c) a##b##c # define stdint_intptr_glue3(a,b,c) stdint_intptr_glue3_i(a,b,c) # ifndef PRINTF_INTPTR_MODIFIER # define PRINTF_INTPTR_MODIFIER stdint_intptr_glue3(PRINTF_INT,stdint_intptr_bits,_MODIFIER) # endif # ifndef PTRDIFF_MAX # define PTRDIFF_MAX stdint_intptr_glue3(INT,stdint_intptr_bits,_MAX) # endif # ifndef PTRDIFF_MIN # define PTRDIFF_MIN stdint_intptr_glue3(INT,stdint_intptr_bits,_MIN) # endif # ifndef UINTPTR_MAX # define UINTPTR_MAX stdint_intptr_glue3(UINT,stdint_intptr_bits,_MAX) # endif # ifndef INTPTR_MAX # define INTPTR_MAX stdint_intptr_glue3(INT,stdint_intptr_bits,_MAX) # endif # ifndef INTPTR_MIN # define INTPTR_MIN stdint_intptr_glue3(INT,stdint_intptr_bits,_MIN) # endif # ifndef INTPTR_C # define INTPTR_C(x) stdint_intptr_glue3(INT,stdint_intptr_bits,_C)(x) # endif # ifndef UINTPTR_C # define UINTPTR_C(x) stdint_intptr_glue3(UINT,stdint_intptr_bits,_C)(x) # endif typedef stdint_intptr_glue3(uint,stdint_intptr_bits,_t) uintptr_t; typedef stdint_intptr_glue3( int,stdint_intptr_bits,_t) intptr_t; # else /* TODO -- This following is likely wrong for some platforms, and does nothing for the definition of uintptr_t. */ typedef ptrdiff_t intptr_t; # endif # define STDINT_H_UINTPTR_T_DEFINED #endif /* * Assumes sig_atomic_t is signed and we have a 2s complement machine. */ #ifndef SIG_ATOMIC_MAX # define SIG_ATOMIC_MAX ((((sig_atomic_t) 1) << (sizeof (sig_atomic_t)*CHAR_BIT-1)) - 1) #endif #endif #if defined (__TEST_PSTDINT_FOR_CORRECTNESS) /* * Please compile with the maximum warning settings to make sure macros are not * defined more than once. 
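 *
 * For example (just an illustration; the exact compiler and flags are up to
 * the porter), the self test below can be exercised by compiling this header
 * directly as a C translation unit with the test macro defined, e.g. with
 * gcc or clang:
 *
 *     gcc -Wall -Wextra -x c -D__TEST_PSTDINT_FOR_CORRECTNESS pstdint.h -o pstdint-test
 *     ./pstdint-test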
*/ #include #include #include #define glue3_aux(x,y,z) x ## y ## z #define glue3(x,y,z) glue3_aux(x,y,z) #define DECLU(bits) glue3(uint,bits,_t) glue3(u,bits,=) glue3(UINT,bits,_C) (0); #define DECLI(bits) glue3(int,bits,_t) glue3(i,bits,=) glue3(INT,bits,_C) (0); #define DECL(us,bits) glue3(DECL,us,) (bits) #define TESTUMAX(bits) glue3(u,bits,=) glue3(~,u,bits); if (glue3(UINT,bits,_MAX) glue3(!=,u,bits)) printf ("Something wrong with UINT%d_MAX\n", bits) int main () { DECL(I,8) DECL(U,8) DECL(I,16) DECL(U,16) DECL(I,32) DECL(U,32) #ifdef INT64_MAX DECL(I,64) DECL(U,64) #endif intmax_t imax = INTMAX_C(0); uintmax_t umax = UINTMAX_C(0); char str0[256], str1[256]; sprintf (str0, "%d %x\n", 0, ~0); sprintf (str1, "%d %x\n", i8, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with i8 : %s\n", str1); sprintf (str1, "%u %x\n", u8, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with u8 : %s\n", str1); sprintf (str1, "%d %x\n", i16, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with i16 : %s\n", str1); sprintf (str1, "%u %x\n", u16, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with u16 : %s\n", str1); sprintf (str1, "%" PRINTF_INT32_MODIFIER "d %x\n", i32, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with i32 : %s\n", str1); sprintf (str1, "%" PRINTF_INT32_MODIFIER "u %x\n", u32, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with u32 : %s\n", str1); #ifdef INT64_MAX sprintf (str1, "%" PRINTF_INT64_MODIFIER "d %x\n", i64, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with i64 : %s\n", str1); #endif sprintf (str1, "%" PRINTF_INTMAX_MODIFIER "d %x\n", imax, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with imax : %s\n", str1); sprintf (str1, "%" PRINTF_INTMAX_MODIFIER "u %x\n", umax, ~0); if (0 != strcmp (str0, str1)) printf ("Something wrong with umax : %s\n", str1); TESTUMAX(8); TESTUMAX(16); TESTUMAX(32); #ifdef INT64_MAX TESTUMAX(64); #endif return EXIT_SUCCESS; } #endif MongoDB-v1.2.2/README000644 000765 000024 00000000477 12651754051 014317 0ustar00davidstaff000000 000000 This archive contains the distribution MongoDB, version v1.2.2: Official MongoDB Driver for Perl This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 This README file was generated by Dist::Zilla::Plugin::Readme v5.043. MongoDB-v1.2.2/README.md000644 000765 000024 00000004032 12651754051 014705 0ustar00davidstaff000000 000000 # Contributing Guidelines ## Introduction `mongo-perl-driver` is the official client-side driver for talking to MongoDB with Perl. It is free software released under the Apache 2.0 license and available on CPAN under the distribution name `MongoDB`. ## Installation See [INSTALL.md](INSTALL.md) for more detailed installation instructions. ## How to Ask for Help If you are having difficulty building the driver after reading the below instructions, please email the [mongodb-user mailing list](https://groups.google.com/forum/#!forum/mongodb-user) to ask for help. Please include in your email **all** of the following information: - The version of the driver you are trying to build (branch or tag). - Examples: _maint-v0 branch_, _v0.704.2.0 tag_ - The output of _perl -V_ - How your version of perl was built or installed. - Examples: _plenv_, _perlbrew_, _built from source_ - The error you encountered. This may be compiler, Config::AutoConf, or other output. 
Failure to include the relevant information will result in additional round-trip communications to ascertain the necessary details, delaying a useful response. ## How to Contribute The code for `mongo-perl-driver` is hosted on GitHub at: https://github.com/mongodb/mongo-perl-driver/ If you would like to contribute code, documentation, tests, or bugfixes, follow these steps: 1. Fork the project on GitHub. 2. Clone the fork to your local machine. 3. Make your changes and push them back up to your GitHub account. 4. Send a "pull request" with a brief description of your changes, and a link to a JIRA ticket if there is one. If you are unfamiliar with GitHub, start with their excellent documentation here: https://help.github.com/articles/fork-a-repo ## Working with the Repository You will need to install Config::AutoConf and Path::Tiny to be able to run the Makefile.PL. While this distribution is shipped using Dist::Zilla, you do not need to install it or use it for testing. $ cpan Config::AutoConf Path::Tiny $ perl Makefile.PL $ make $ make test MongoDB-v1.2.2/t/000755 000765 000024 00000000000 12651754051 013672 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/xs/000755 000765 000024 00000000000 12651754051 014061 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/xt/000755 000765 000024 00000000000 12651754051 014062 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/xt/author/000755 000765 000024 00000000000 12651754051 015364 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/xt/release/000755 000765 000024 00000000000 12651754051 015502 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/xt/release/check-jira-in-changes.t000644 000765 000024 00000002127 12651754051 021703 0ustar00davidstaff000000 000000 #!perl use strict; use warnings; # This test was generated by inc::CheckJiraInChanges use Test::More tests => 1; my @commits = split /\n/, <<'EOC'; a8d1bcc PERL-604 Use setVersion and electionId in SDAM e07e210 PERL-602 Support legacy Cpanel::JSON::XS booleans EOC my %ticket_map; for my $commit ( @commits ) { for my $ticket ( $commit =~ /PERL-(\d+)/g ) { next if $ENV{CHECK_JIRA_SKIP} && grep { $ticket eq $_ } split " ", $ENV{CHECK_JIRA_SKIP}; $ticket_map{$ticket} ||= []; push @{$ticket_map{$ticket}}, $commit; } } # grab Changes lines from new version to next un-indented line open my $fh, "<:encoding(UTF-8)", "Changes"; my $changelog = do { local $/; <$fh> }; my @bad; for my $ticket ( keys %ticket_map ) { if ( index( $changelog, "PERL-$ticket" ) < 0 ) { push @bad, $ticket; } } if ( !@commits ) { pass("No commits with Jira tickets"); } else { ok( ! scalar @bad, "Jira tickets in Changes") or diag "Jira tickets missing:\n" . join("\n", map { " * $_" } map { @{$ticket_map{$_}} } sort { $a <=> $b } @bad ); } MongoDB-v1.2.2/xt/release/minimum-version.t000644 000765 000024 00000000271 12651754051 021025 0ustar00davidstaff000000 000000 #!perl use Test::More; eval "use Test::MinimumVersion"; plan skip_all => "Test::MinimumVersion required for testing minimum versions" if $@; all_minimum_version_ok( qq{5.008001} ); MongoDB-v1.2.2/xt/author/circular-refs.t000644 000765 000024 00000002146 12651754051 020315 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. use strict; use warnings; use Test::More; use MongoDB; use boolean; use lib "t/lib"; use MongoDBTest qw/build_client get_test_db/; plan skip_all => "Requires Test::Memory::Cycle" unless eval { require Test::Memory::Cycle; 1 }; my $client = build_client(); my $testdb = get_test_db($client); my $coll = $testdb->coll("testtesttest"); $coll->insert_one({ a => false }) for 1 .. 100; my @docs = $coll->find({})->all; Test::Memory::Cycle::memory_cycle_ok( $client ); done_testing; # COPYRIGHT # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/xt/author/pod-syntax.t000644 000765 000024 00000000252 12651754051 017656 0ustar00davidstaff000000 000000 #!perl # This file was automatically generated by Dist::Zilla::Plugin::PodSyntaxTests. use strict; use warnings; use Test::More; use Test::Pod 1.41; all_pod_files_ok(); MongoDB-v1.2.2/xt/author/test-version.t000644 000765 000024 00000000640 12651754051 020213 0ustar00davidstaff000000 000000 use strict; use warnings; use Test::More; # generated by Dist::Zilla::Plugin::Test::Version 1.05 use Test::Version; my @imports = qw( version_all_ok ); my $params = { is_strict => 0, has_version => 1, multiple => 0, }; push @imports, $params if version->parse( $Test::Version::VERSION ) >= version->parse('1.002'); Test::Version->import(@imports); version_all_ok; done_testing; MongoDB-v1.2.2/xs/BSON.xs000644 000765 000024 00000004511 12651754051 015177 0ustar00davidstaff000000 000000 /* * Copyright 2009-2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #include "perl_mongo.h" MODULE = MongoDB PACKAGE = MongoDB::BSON PROTOTYPES: DISABLE void _decode_bson(msg, options) SV *msg SV *options PREINIT: char * data; const bson_t * bson; bson_reader_t * reader; bool reached_eof; STRLEN length; HV *opts; PPCODE: data = SvPV_nolen(msg); length = SvCUR(msg); opts = NULL; if ( options ) { if ( SvROK(options) && SvTYPE(SvRV(options)) == SVt_PVHV ) { opts = (HV *) SvRV(options); } else { croak("options must be a reference to a hash"); } } reader = bson_reader_new_from_data((uint8_t *)data, length); while ((bson = bson_reader_read(reader, &reached_eof))) { XPUSHs(sv_2mortal(perl_mongo_bson_to_sv(bson, opts))); } bson_reader_destroy(reader); void _encode_bson(doc, options) SV *doc SV *options PREINIT: bson_t * bson; HV *opts; PPCODE: opts = NULL; bson = bson_new(); if ( options ) { if ( SvROK(options) && SvTYPE(SvRV(options)) == SVt_PVHV ) { opts = (HV *) SvRV(options); } else { croak("options must be a reference to a hash"); } } perl_mongo_sv_to_bson(bson, doc, opts); XPUSHs(sv_2mortal(newSVpvn((const char *)bson_get_data(bson), bson->len))); bson_destroy(bson); SV * generate_oid () PREINIT: bson_oid_t boid; char oid[25]; CODE: bson_oid_init(&boid, NULL); bson_oid_to_string(&boid, oid); RETVAL = newSVpvn(oid, 24); OUTPUT: RETVAL MongoDB-v1.2.2/t/00-report-mongod.t000644 000765 000024 00000002343 12651754051 017072 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use utf8; use Test::More 0.88; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $server_version = server_version($conn); my $server_type = server_type($conn); diag "Checking MongoDB test environment"; diag "\$ENV{MONGOD}=".$ENV{MONGOD} if $ENV{MONGOD}; diag "MongoDB version $server_version ($server_type)"; if ( -d ".git" or -d "../.git" ) { my $desc = qx/git describe --dirty/; unless ($?) 
{ chomp $desc; diag "git describe: $desc"; } } pass("checked MongoDB test environment"); done_testing; MongoDB-v1.2.2/t/00-report-prereqs.dd000644 000765 000024 00000011313 12651754051 017411 0ustar00davidstaff000000 000000 do { my $x = { 'configure' => { 'requires' => { 'Config::AutoConf' => '0.22', 'ExtUtils::MakeMaker' => '0', 'Path::Tiny' => '0.052' } }, 'develop' => { 'requires' => { 'Test::Memory::Cycle' => '0', 'Test::More' => '0', 'Test::Pod' => '1.41', 'Test::Version' => '1', 'lib' => '0' } }, 'runtime' => { 'recommends' => { 'IO::Socket::IP' => '0.25', 'IO::Socket::SSL' => '1.42', 'Mozilla::CA' => '20130114', 'Net::SSLeay' => '1.49' }, 'requires' => { 'Authen::SCRAM::Client' => '0.003', 'Carp' => '0', 'Class::XSAccessor' => '0', 'DateTime' => '0.78', 'Digest::MD5' => '0', 'Encode' => '0', 'Exporter' => '5.57', 'IO::File' => '0', 'IO::Socket' => '0', 'JSON::PP' => '2.27300', 'List::Util' => '0', 'MIME::Base64' => '0', 'Moo' => '2', 'Moo::Role' => '0', 'Safe::Isa' => '0', 'Scalar::Util' => '0', 'Socket' => '0', 'Sub::Quote' => '0', 'Tie::IxHash' => '0', 'Time::HiRes' => '0', 'Try::Tiny' => '0', 'Type::Library' => '0', 'Type::Tiny::XS' => '0', 'Type::Utils' => '0', 'Types::Standard' => '0', 'XSLoader' => '0', 'boolean' => '0.25', 'constant' => '0', 'if' => '0', 'namespace::clean' => '0', 'overload' => '0', 'perl' => 'v5.8.0', 're' => '0', 'strict' => '0', 'version' => '0', 'warnings' => '0' }, 'suggests' => { 'IO::Socket::SSL' => '1.56' } }, 'test' => { 'recommends' => { 'CPAN::Meta' => '2.120900', 'DateTime::Tiny' => '1', 'Test::Harness' => '3.31', 'Time::Moment' => '0.22' }, 'requires' => { 'Data::Dumper' => '0', 'ExtUtils::MakeMaker' => '0', 'File::Spec' => '0', 'File::Temp' => '0', 'FileHandle' => '0', 'JSON::MaybeXS' => '0', 'Math::BigInt' => '0', 'Path::Tiny' => '0.054', 'Test::Deep' => '0.111', 'Test::Fatal' => '0', 'Test::More' => '0.96', 'bigint' => '0', 'lib' => '0', 'threads::shared' => '0', 'utf8' => '0' } } }; $x; }MongoDB-v1.2.2/t/00-report-prereqs.t000644 000765 000024 00000012731 12651754051 017272 0ustar00davidstaff000000 000000 #!perl use strict; use warnings; # This test was generated by Dist::Zilla::Plugin::Test::ReportPrereqs 0.021 use Test::More tests => 1; use ExtUtils::MakeMaker; use File::Spec; # from $version::LAX my $lax_version_re = qr/(?: undef | (?: (?:[0-9]+) (?: \. | (?:\.[0-9]+) (?:_[0-9]+)? )? | (?:\.[0-9]+) (?:_[0-9]+)? ) | (?: v (?:[0-9]+) (?: (?:\.[0-9]+)+ (?:_[0-9]+)? )? | (?:[0-9]+)? (?:\.[0-9]+){2,} (?:_[0-9]+)? ) )/x; # hide optional CPAN::Meta modules from prereq scanner # and check if they are available my $cpan_meta = "CPAN::Meta"; my $cpan_meta_pre = "CPAN::Meta::Prereqs"; my $HAS_CPAN_META = eval "require $cpan_meta; $cpan_meta->VERSION('2.120900')" && eval "require $cpan_meta_pre"; ## no critic # Verify requirements? my $DO_VERIFY_PREREQS = 1; sub _max { my $max = shift; $max = ( $_ > $max ) ? 
$_ : $max for @_; return $max; } sub _merge_prereqs { my ($collector, $prereqs) = @_; # CPAN::Meta::Prereqs object if (ref $collector eq $cpan_meta_pre) { return $collector->with_merged_prereqs( CPAN::Meta::Prereqs->new( $prereqs ) ); } # Raw hashrefs for my $phase ( keys %$prereqs ) { for my $type ( keys %{ $prereqs->{$phase} } ) { for my $module ( keys %{ $prereqs->{$phase}{$type} } ) { $collector->{$phase}{$type}{$module} = $prereqs->{$phase}{$type}{$module}; } } } return $collector; } my @include = qw( ); my @exclude = qw( ); # Add static prereqs to the included modules list my $static_prereqs = do 't/00-report-prereqs.dd'; # Merge all prereqs (either with ::Prereqs or a hashref) my $full_prereqs = _merge_prereqs( ( $HAS_CPAN_META ? $cpan_meta_pre->new : {} ), $static_prereqs ); # Add dynamic prereqs to the included modules list (if we can) my ($source) = grep { -f } 'MYMETA.json', 'MYMETA.yml'; if ( $source && $HAS_CPAN_META ) { if ( my $meta = eval { CPAN::Meta->load_file($source) } ) { $full_prereqs = _merge_prereqs($full_prereqs, $meta->prereqs); } } else { $source = 'static metadata'; } my @full_reports; my @dep_errors; my $req_hash = $HAS_CPAN_META ? $full_prereqs->as_string_hash : $full_prereqs; # Add static includes into a fake section for my $mod (@include) { $req_hash->{other}{modules}{$mod} = 0; } for my $phase ( qw(configure build test runtime develop other) ) { next unless $req_hash->{$phase}; next if ($phase eq 'develop' and not $ENV{AUTHOR_TESTING}); for my $type ( qw(requires recommends suggests conflicts modules) ) { next unless $req_hash->{$phase}{$type}; my $title = ucfirst($phase).' '.ucfirst($type); my @reports = [qw/Module Want Have/]; for my $mod ( sort keys %{ $req_hash->{$phase}{$type} } ) { next if $mod eq 'perl'; next if grep { $_ eq $mod } @exclude; my $file = $mod; $file =~ s{::}{/}g; $file .= ".pm"; my ($prefix) = grep { -e File::Spec->catfile($_, $file) } @INC; my $want = $req_hash->{$phase}{$type}{$mod}; $want = "undef" unless defined $want; $want = "any" if !$want && $want == 0; my $req_string = $want eq 'any' ? 'any version required' : "version '$want' required"; if ($prefix) { my $have = MM->parse_version( File::Spec->catfile($prefix, $file) ); $have = "undef" unless defined $have; push @reports, [$mod, $want, $have]; if ( $DO_VERIFY_PREREQS && $HAS_CPAN_META && $type eq 'requires' ) { if ( $have !~ /\A$lax_version_re\z/ ) { push @dep_errors, "$mod version '$have' cannot be parsed ($req_string)"; } elsif ( ! 
$full_prereqs->requirements_for( $phase, $type )->accepts_module( $mod => $have ) ) { push @dep_errors, "$mod version '$have' is not in required range '$want'"; } } } else { push @reports, [$mod, $want, "missing"]; if ( $DO_VERIFY_PREREQS && $type eq 'requires' ) { push @dep_errors, "$mod is not installed ($req_string)"; } } } if ( @reports ) { push @full_reports, "=== $title ===\n\n"; my $ml = _max( map { length $_->[0] } @reports ); my $wl = _max( map { length $_->[1] } @reports ); my $hl = _max( map { length $_->[2] } @reports ); if ($type eq 'modules') { splice @reports, 1, 0, ["-" x $ml, "", "-" x $hl]; push @full_reports, map { sprintf(" %*s %*s\n", -$ml, $_->[0], $hl, $_->[2]) } @reports; } else { splice @reports, 1, 0, ["-" x $ml, "-" x $wl, "-" x $hl]; push @full_reports, map { sprintf(" %*s %*s %*s\n", -$ml, $_->[0], $wl, $_->[1], $hl, $_->[2]) } @reports; } push @full_reports, "\n"; } } } if ( @full_reports ) { diag "\nVersions for all modules listed in $source (including optional ones):\n\n", @full_reports; } if ( @dep_errors ) { diag join("\n", "\n*** WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING ***\n", "The following REQUIRED prerequisites were not satisfied:\n", @dep_errors, "\n" ); } pass; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/bson.t000644 000765 000024 00000030057 12651754051 015025 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# use strict; use warnings; use Test::More 0.88; use MongoDB; use MongoDB::OID; use boolean; use DateTime; use Encode; use Tie::IxHash; use Test::Fatal; use MongoDB::Timestamp; # needed if db is being run as master use MongoDB::BSON::Binary; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $testdb = get_test_db(build_client()); my $c = $testdb->get_collection('bar'); # relloc subtest "realloc" => sub { $c->drop; my $long_str = "y" x 8184; $c->insert_one({'text' => $long_str}); my $result = $c->find_one; is($result->{'text'}, $long_str, 'realloc'); }; # id realloc subtest "id realloc" => sub { $c->drop; my $med_str = "z" x 4014; $c->insert_one({'text' => $med_str, 'id2' => MongoDB::OID->new}); my $result = $c->find_one; is($result->{'text'}, $med_str, 'id realloc'); }; subtest "types" => sub { $c->drop; my $id = $c->insert_one({"n" => undef, "l" => 234234124, "d" => 23.23451452, "b" => true, "a" => {"foo" => "bar", "n" => undef, "x" => MongoDB::OID->new("49b6d9fb17330414a0c63102")}, "d2" => DateTime->from_epoch(epoch => 1271079861), "regex" => qr/xtz/, "_id" => MongoDB::OID->new("49b6d9fb17330414a0c63101"), "string" => "string"})->inserted_id; my $obj = $c->find_one; is($obj->{'n'}, undef); is($obj->{'l'}, 234234124); ok( abs( $obj->{'d'} - 23.23451452) < 1e-6 ); is($obj->{'b'}, true); is($obj->{'a'}->{'foo'}, 'bar'); is($obj->{'a'}->{'n'}, undef); isa_ok($obj->{'a'}->{'x'}, 'MongoDB::OID'); isa_ok($obj->{'d2'}, 'DateTime'); is($obj->{'d2'}->epoch, 1271079861); ok($obj->{'regex'}); isa_ok($obj->{'_id'}, 'MongoDB::OID'); is($obj->{'_id'}, $id); is($obj->{'string'}, 'string'); }; subtest "\$MongoDB::BSON::char" => sub { local $MongoDB::BSON::char = "="; my $alt_client = build_client(); my $alt_c=$alt_client->db($testdb->name)->coll($c->name); $alt_c->drop; $alt_c->update_one({x => 1}, {"=inc" => {x => 1}}, {upsert => true}); my $up = $c->find_one; is($up->{x}, 2); }; subtest "\$MongoDB::BSON::char ':'" => sub { local $MongoDB::BSON::char = ":"; my $alt_client = build_client(); my $alt_c=$alt_client->db($testdb->name)->coll($c->name); $alt_c->drop; $alt_c->insert_many([{x => 1}, {x => 2}, {x => 3}, {x => 4}, {x => 5}]); my $cursor = $alt_c->query({x => {":gt" => 2, ":lte" => 4}})->sort({x => 1}); my $result = $cursor->next; is($result->{x}, 3); $result = $cursor->next; is($result->{x}, 4); ok(!$cursor->has_next); }; # utf8 subtest "UTF-8 strings" => sub { $c->drop; # latin1 $c->insert_one({char => "\xFE"}); my $x =$c->find_one; is($x->{char}, "\xFE"); $c->remove; # non-latin1 my $valid = "\x{8D4B}\x{8BD5}"; $c->insert_one({char => $valid}); $x = $c->find_one; # make sure it's being returned as a utf8 string ok(utf8::is_utf8($x->{char})); is(length $x->{char}, 2); }; subtest "bad UTF8" => sub { my @bad = ( "\xC0\x80" , # Non-shortest form representation of U+0000 "\xC0\xAF" , # Non-shortest form representation of U+002F "\xE0\x80\x80" , # Non-shortest form representation of U+0000 "\xF0\x80\x80\x80" , # Non-shortest form representation of U+0000 "\xE0\x83\xBF" , # Non-shortest form representation of U+00FF "\xF0\x80\x83\xBF" , # Non-shortest form representation of U+00FF "\xF0\x80\xA3\x80" , # Non-shortest form representation of U+08C0 ); for my $bad_utf8 ( @bad ) { # invalid should throw my $label = "0x" . 
unpack("H*", $bad_utf8); Encode::_utf8_on($bad_utf8); # force on internal UTF8 flag like( exception { $c->insert_one({char => $bad_utf8}) }, qr/Invalid UTF-8 detected while encoding/, "invalid UTF-8 throws an error inserting $label" ); } }; subtest "undefined" => sub { my $err = $testdb->run_command([getLastError => 1]); ok(!defined $err->{err}, "undef"); }; subtest "circular references" => sub { my $q = {}; $q->{'q'} = $q; eval { $c->insert_one($q); }; ok($@ =~ /circular ref/); my %test; tie %test, 'Tie::IxHash'; $test{t} = \%test; eval { $c->insert_one(\%test); }; ok($@ =~ /circular ref/); my $tie = Tie::IxHash->new; $tie->Push("t" => $tie); eval { $c->insert_one($tie); }; ok($@ =~ /circular ref/); }; subtest "no . in key names" => sub { eval { $c->insert_one({"x.y" => "foo"}); }; like($@, qr/documents for storage cannot contain/, "insert"); eval { $c->insert_one({"x.y" => "foo", "bar" => "baz"}); }; like($@, qr/documents for storage cannot contain/, "insert"); eval { $c->insert_one({"bar" => "baz", "x.y" => "foo"}); }; like($@, qr/documents for storage cannot contain/, "insert"); eval { $c->insert_one({"bar" => {"x.y" => "foo"}}); }; like($@, qr/documents for storage cannot contain/, "insert"); TODO: { local $TODO = "insert_many doesn't check for nested keys"; eval { $c->insert_many([{"x" => "foo"}, {"x.y" => "foo"}, {"y" => "foo"}]); }; like($@, qr/documents for storage cannot contain/, "batch insert"); eval { $c->insert_many([{"x" => "foo"}, {"foo" => ["x", {"x.y" => "foo"}]}, {"y" => "foo"}]); }; like($@, qr/documents for storage cannot contain/, "batch insert" ); } }; subtest "empty key name" => sub { eval { $c->insert_one({"" => "foo"}); }; ok($@ =~ /empty key name/); }; # moose numbers package Person; use Moo; has 'name' => ( is=>'rw' ); has 'age' => ( is=>'rw' ); has 'size' => ( is=>'rw' ); package main; subtest "Person object" => sub { $c->drop; my $p = Person->new( name=>'jay', age=>22 ); $c->insert_one($p); my $person = $c->find_one; is($person->{'age'}, 22, "roundtrip number"); }; subtest "warn on floating timezone" => sub { my $warned = 0; local $SIG{__WARN__} = sub { if ($_[0] =~ /floating/) { $warned = 1; } else { warn(@_); } }; my $date = DateTime->new(year => 2010, time_zone => "floating"); $c->insert_one({"date" => $date}); is($warned, 1, "warn on floating timezone"); }; subtest "epoch time" => sub { my $date = DateTime->from_epoch( epoch => 0 ); is( exception { $c->insert_one( { "date" => $date } ) }, undef, "inserting DateTime at epoch succeeds" ); }; subtest "half-conversion to int type" => sub { $c->drop; my $var = 'zzz'; # don't actually change it to an int, but add pIOK flag { no warnings 'numeric'; $var = int($var) if (int($var) eq $var); } $c->insert_one({'key' => $var}); my $v = $c->find_one; # make sure it was saved as string is($v->{'key'}, 'zzz'); }; subtest "store a scalar with magic that's both a float and int (PVMG w/pIOK set)" => sub { $c->drop; # PVMG (NV is 11.5) my $size = Person->new( size => 11.5 )->size; # add pIOK flag (IV is 11) { no warnings 'void'; int($size); } $c->insert_one({'key' => $size}); my $v = $c->find_one; # make sure it was saved as float is(($v->{'key'}), $size); }; subtest "make sure _ids aren't double freed" => sub { $c->drop; my $insert1 = ['_id' => 1]; my $insert2 = Tie::IxHash->new('_id' => 2); my $id = $c->insert_one($insert1)->inserted_id; is($id, 1); $id = $c->insert_one($insert2)->inserted_id; is($id, 2); }; subtest "aggressively convert numbers" => sub { local $MongoDB::BSON::looks_like_number = 1; my $alt_client = 
build_client(); my $alt_c=$alt_client->db($testdb->name)->coll($c->name); $alt_c->drop; $alt_c->insert_one({num => "4"}); $alt_c->insert_one({num => "5"}); $alt_c->insert_one({num => "6"}); $alt_c->insert_one({num => 4}); $alt_c->insert_one({num => 5}); $alt_c->insert_one({num => 6}); is($alt_c->count({num => {'$gt' => 4}}), 4); is($alt_c->count({num => {'$gte' => "5"}}), 4); is($alt_c->count({num => {'$gte' => "4.1"}}), 4); }; subtest "MongoDB::BSON::String type" => sub { { local $MongoDB::BSON::looks_like_number = 1; my $alt_client = build_client(); my $alt_c=$alt_client->db($testdb->name)->coll($c->name); $c->drop; my $num = "001"; $alt_c->insert_one({num => $num} ); $alt_c->insert_one({num => bless(\$num, "MongoDB::BSON::String")}); } is($c->count({num => 1}), 1); is($c->count({num => "001"}), 1); is($c->count, 2); }; subtest "MongoDB::BSON::Binary type" => sub { $c->drop; my $str = "foo"; my $bin = {bindata => [ \$str, MongoDB::BSON::Binary->new(data => $str), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_GENERIC), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_FUNCTION), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_GENERIC_DEPRECATED), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_UUID_DEPRECATED), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_UUID), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_MD5), MongoDB::BSON::Binary->new(data => $str, subtype => MongoDB::BSON::Binary->SUBTYPE_USER_DEFINED)]}; $c->insert_one($bin); my $doc = $c->find_one; my $data = $doc->{'bindata'}; foreach (@$data) { is($_, "foo"); } $doc = $c->find_one; $data = $doc->{'bindata'}; my @arr = @$data; is($arr[0]->subtype, MongoDB::BSON::Binary->SUBTYPE_GENERIC); is($arr[0]->data, $str); for (my $i=1; $i<=$#arr; $i++ ) { is($arr[$i]->subtype, $bin->{'bindata'}->[$i]->subtype); is($arr[$i]->data, $bin->{'bindata'}->[$i]->data); } }; subtest "Checking hash key unicode support" => sub { use utf8; $c->drop; my $testkey = 'юникод'; my $hash = { $testkey => 1 }; my $oid; eval { $oid = $c->insert_one( $hash )->inserted_id; }; is ( $@, '' ); my $obj = $c->find_one( { _id => $oid } ); is ( $obj->{$testkey}, 1 ); }; subtest "PERL-489 ref to PVNV" => sub { my $value = 42.2; $value = "hello"; is( exception { $c->insert_one( { value => \$value } ) }, undef, "inserting ref to PVNV is not fatal", ); }; subtest "PERL-543 IxHash undef" => sub { $c->drop; my %h; tie(%h, 'Tie::IxHash', x => undef); $c->insert_one(\%h); my $doc = $c->find_one; is( $doc->{x}, undef, "round-trip undef with IxHash" ); $c->drop; my %doc = ( x => undef ); $c->insert_one(\%doc); $doc = $c->find_one; is( $doc->{x}, undef, "round-trip undef with regular hash" ); }; subtest "PERL-575 inflated boolean" => sub { $c->drop; $c->insert( { "okay" => false, "name" => "fred0" } ); $c->insert( { "okay" => false, "name" => "fred1" } ); my @docs = $c->find()->all; is( exception { $_->{okay} = $_->{okay}->TO_JSON for @docs }, undef, "replacing one boolean doesn't affect another" ); }; done_testing; MongoDB-v1.2.2/t/bson_codec/000755 000765 000024 00000000000 12651754051 015770 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/bulk.t000644 000765 000024 00000142517 12651754051 015026 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use utf8; use Test::More 0.88; use Test::Fatal; use Test::Deep 0.111 qw/!blessed/; use Scalar::Util qw/refaddr/; use Tie::IxHash; use boolean; use MongoDB; use MongoDB::Error; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $coll = $testdb->get_collection("test_collection"); my $ismaster = $testdb->run_command( { ismaster => 1 } ); my $server_status = $testdb->run_command( { serverStatus => 1 } ); # Standalone in "--master" mode will have serverStatus.repl, but ordinary # standalone won't my $is_standalone = $conn->topology_type eq 'Single' && ! exists $server_status->{repl}; my $server_does_bulk = server_version($conn) >= v2.5.5; sub _truncate { return( length($_[0]) > 1600 ? (substr($_[0],0,1600)."...") : $_[0] ); } sub _bulk_write_result { return MongoDB::BulkWriteResult->new( acknowledged => 1, write_errors => [], write_concern_errors => [], modified_count => 0, inserted_count => 0, upserted_count => 0, matched_count => 0, deleted_count => 0, upserted => [], inserted => [], batch_count => 0, op_count => 0, @_, ); } subtest "constructors" => sub { my @constructors = qw( initialize_ordered_bulk_op initialize_unordered_bulk_op ordered_bulk unordered_bulk ); for my $method (@constructors) { my $bulk = $coll->$method; isa_ok( $bulk, 'MongoDB::BulkWrite', $method ); if ( $method =~ /unordered/ ) { ok( !$bulk->ordered, "ordered attr is false" ); } else { ok( $bulk->ordered, "ordered attr is true" ); } is( refaddr $bulk->collection, refaddr $coll, "MongoDB::BulkWrite holds ref to originating Collection" ); } }; note("QA-477 INSERT"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: insert errors" => sub { my $bulk = $coll->$method; # raise errors on wrong arg types my %bad_args = ( LIST => [ {}, {} ], EMPTY => [], ); for my $k ( sort keys %bad_args ) { like( exception { $bulk->insert_one( @{ $bad_args{$k} } ) }, qr/reference/, "insert( $k ) throws an error" ); } like( exception { $bulk->insert_one( 'foo' ) }, qr/reference/, "insert( 'foo' ) throws an error" ); like( exception { $bulk->insert_one( ['foo'] ) }, qr{must have key/value pairs}, "insert( ['foo'] ) throws an error", ); like( exception { $bulk->find( {} )->insert_one( {} ) }, qr/^Can't locate object method "insert_one"/, "find({})->insert_one({}) throws an error", ); is( exception { $bulk->insert_one( { '$key' => 1 } ) }, undef, "queuing insertion of document with \$key is allowed" ); my $err = exception { $bulk->execute }; isa_ok( $err, 'MongoDB::WriteError', "executing insertion with \$key" ); }; subtest "$method: successful insert" => sub { $coll->drop; my $bulk = $coll->$method; is( $coll->count, 0, "no docs in collection" ); $bulk->insert_one( { _id => 1 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag _truncate explain 
$err; is( $coll->count, 1, "one doc in collection" ); # test empty superclass isa_ok( $result, 'MongoDB::WriteResult', "result object" ); isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( inserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, inserted => [ { index => 0, _id => 1 } ], ), "result object correct" ) or diag _truncate explain $result; }; subtest "$method insert without _id" => sub { $coll->drop; my $bulk = $coll->$method; is( $coll->count, 0, "no docs in collection" ); my $doc = {}; $bulk->insert_one( $doc ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag _truncate explain $err; is( $coll->count, 1, "one doc in collection" ); isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( inserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, inserted => [ { index => 0, _id => obj_isa("MongoDB::OID") } ], ), "result object correct" ); my $id = $coll->find_one()->{_id}; # OID PIDs are the low 16 bits is( $id->_get_pid, $$ & 0xffff, "generated ID has our PID" ) or diag sprintf( "got OID: %s but our PID is %x", $id->value, $$ ); }; } note("QA-477 FIND"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: find" => sub { my $bulk = $coll->$method; like( exception { $bulk->find }, qr/find requires a criteria document/, "find without doc selector throws exception" ); }; } note("QA-477 UPDATE and UPDATE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "update and update_one errors with $method" => sub { my $bulk; # raise errors on wrong arg types my %bad_args = ( SCALAR => ['foo'], EMPTY => [], # not in QA test ); for my $update (qw/update_many update_one/) { $bulk = $coll->$method; for my $k ( sort keys %bad_args ) { like( exception { $bulk->find( {} )->$update( @{ $bad_args{$k} } ) }, qr/argument to $update must be a single hashref, arrayref or Tie::IxHash/, "$update( $k ) throws an error" ); } $bulk = $coll->$method; like( exception { $bulk->$update( { '$set' => { x => 1 } } ) }, qr/^Can't locate object method "$update"/, "$update on bulk object (without find) throws an error", ); $bulk = $coll->$method; $bulk->find( {} )->$update( { key => 1 } ); like( exception { $bulk->execute }, qr/update document must only contain update operators/, "single non-op key in $update doc throws exception" ); $bulk = $coll->$method; $bulk->find( {} )->$update( [ key => 1, '$key' => 1 ]); like( exception { $bulk->execute }, qr/update document must only contain update operators/, "first non-op key in $update doc throws exception" ); } }; subtest "update all docs with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one($_) for map { { key => $_ } } 1, 2; my @docs = $coll->find( {} )->all; $bulk->find( {} )->update_many( { '$set' => { x => 3 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 2, modified_count => ( $server_does_bulk ? 
2 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; if ( $server_does_bulk ) { ok( $result->has_modified_count, "newer server has_modified_count" ); } else { ok( ! $result->has_modified_count, "older server has_modified_count" ); } # check expected values $_->{x} = 3 for @docs; cmp_deeply( [ $coll->find( {} )->all ], \@docs, "all documents updated" ); }; subtest "update only matching docs with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one($_) for map { { key => $_ } } 1, 2; my @docs = $coll->find( {} )->all; $bulk->find( { key => 1 } )->update_many( { '$set' => { x => 1 } } ); $bulk->find( { key => 2 } )->update_many( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); is_deeply( $result, _bulk_write_result( matched_count => 2, modified_count => ( $server_does_bulk ? 2 : undef ), op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ); # check expected values $_->{x} = $_->{key} for @docs; cmp_deeply( [ $coll->find( {} )->all ], \@docs, "all documents updated" ); }; subtest "update_one with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one($_) for map { { key => $_ } } 1, 2; $bulk->find( {} )->update_one( { '$set' => { key => 3 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); is_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ); # check expected values is( $coll->find( { key => 3 } )->count, 1, "one document updated" ); }; } note("QA-477 REPLACE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "replace_one errors with $method" => sub { my $bulk; # raise errors on wrong arg types my %bad_args = ( SCALAR => ['foo'], EMPTY => [], # not in QA test ); $bulk = $coll->$method; for my $k ( sort keys %bad_args ) { like( exception { $bulk->find( {} )->replace_one( @{ $bad_args{$k} } ) }, qr/argument to replace_one must be a single hashref, arrayref or Tie::IxHash/, "replace_one( $k ) throws an error" ); } like( exception { $bulk->replace_one( { '$set' => { x => 1 } } ) }, qr/^Can't locate object method "replace_one"/, "replace_one on bulk object (without find) throws an error", ); $bulk = $coll->$method; $bulk->find( {} )->replace_one( { '$key' => 1 } ); like( exception { $bulk->execute }, qr/replacement document must not contain update operators/, "single op key in replace_one doc throws exception" ); $bulk = $coll->$method; $bulk->find( {} )->replace_one( [ '$key' => 1, key => 1 ] ); like( exception { $bulk->execute }, qr/replacement document must not contain update operators/, "mixed op and non-op key in replace_one doc throws exception" ); }; subtest "replace_one with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one( { key => 1 } ) for 1 .. 
2; $bulk->find( {} )->replace_one( { key => 3 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); is_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ); # check expected values my $distinct = [ $coll->distinct("key")->all ]; cmp_deeply( $distinct, bag( 1, 3 ), "only one document replaced" ); }; } note("QA-477 UPSERT-UPDATE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "upsert errors with $method" => sub { my $bulk = $coll->$method; like( exception { $bulk->upsert() }, qr/^Can't locate object method "upsert"/, "upsert on bulk object (without find) throws an error", ); like( exception { $bulk->find( {} )->upsert( {} ) }, qr/the upsert method takes no arguments/, "upsert( NONEMPTY ) throws an error" ); }; subtest "upsert-update insertion with $method" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->update_many( { '$set' => { x => 1 } } ); $bulk->find( { key => 2 } )->upsert->update_many( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 1, _id => ignore() } ], op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag _truncate explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), key => 2, x => 2 } ], "upserted document correct" ); $bulk = $coll->$method; $bulk->find( { key => 1 } )->update_many( { '$set' => { x => 1 } } ); $bulk->find( { key => 2 } )->upsert->update_many( { '$set' => { x => 2 } } ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on second upsert-update" ) or diag _truncate explain $err; cmp_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag _truncate explain $result; }; subtest "upsert-update updates with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my @docs = $coll->find( {} )->all; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->upsert->update_many( { '$set' => { x => 1 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 2, modified_count => ( $server_does_bulk ? 2 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; $_->{x} = 1 for @docs; cmp_deeply( [ $coll->find( {} )->all ], \@docs, "all documents updated" ); }; subtest "upsert-update large doc with $method" => sub { $coll->drop; # QA test says big_string should be 16MiB - 31 long, but { _id => $oid, # key => 1, x => $big_string } exceeds 16MiB when BSON encoded unless # the bigstring is 16MiB - 41. This may be a peculiarity of Perl's # BSON type encoding. 
# # Using legacy API, the bigstring must be 16MiB - 97 for some reason. my $big_string = "a" x ( 16 * 1024 * 1024 - ( $server_does_bulk ? 41 : 97 ) ); my $bulk = $coll->$method; $bulk->find( { key => "1" } )->upsert->update_many( { '$set' => { x => $big_string } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 0, _id => ignore() } ], op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; }; } note("QA-477 UPSERT-UPDATE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "upsert-update_one insertion with $method" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->update_one( { '$set' => { x => 1 } } ); # not upsert $bulk->find( { key => 2 } )->upsert->update_one( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update_one" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 1, _id => ignore() } ], op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag _truncate explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), key => 2, x => 2 } ], "upserted document correct" ); }; subtest "upsert-update_one (no insert) with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my @docs = $coll->find( {} )->all; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->upsert->update_one( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update_one" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; # add expected key to one document only $docs[0]{x} = 2; my @got = $coll->find( {} )->all; cmp_deeply( \@got, bag(@docs), "updated document correct" ) or diag _truncate explain \@got; }; } note("QA-477 UPSERT-REPLACE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "upsert-replace_one insertion with $method" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->replace_one( { x => 1 } ); # not upsert $bulk->find( { key => 2 } )->upsert->replace_one( { x => 2 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-replace_one" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 1, _id => ignore() } ], op_count => 2, batch_count => $server_does_bulk ?
1 : 2, ), "result object correct" ) or diag _truncate explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), x => 2 } ], "upserted document correct" ); }; subtest "upsert-replace_one (no insert) with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my @docs = $coll->find( {} )->all; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->upsert->replace_one( { x => 2 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-replace_one" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; # change one expected doc only $docs[0]{x} = 2; delete $docs[0]{key}; my @got = $coll->find( {} )->all; cmp_deeply( \@got, bag(@docs), "updated document correct" ) or diag _truncate explain \@got; }; } note("QA-477 delete_many"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "delete_many errors with $method" => sub { my $bulk = $coll->$method; like( exception { $bulk->delete_many() }, qr/^Can't locate object method "delete_many"/, "delete_many on bulk object (without find) throws an error", ); }; subtest "delete_many all with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my $bulk = $coll->$method; $bulk->find( {} )->delete_many; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on delete_many" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( deleted_count => 2, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; is( $coll->count, 0, "all documents removed" ); }; subtest "delete_many matching with $method" => sub { $coll->drop; $coll->insert_one( { key => $_ } ) for 1 .. 2; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->delete_many; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on delete_many" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( deleted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), key => 2 } ], "correct object remains" ); }; } note("QA-477 delete_one"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "delete_one errors with $method" => sub { my $bulk = $coll->$method; like( exception { $bulk->delete_one() }, qr/^Can't locate object method "delete_one"/, "delete_one on bulk object (without find) throws an error", ); }; subtest "delete_one with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my $bulk = $coll->$method; $bulk->find( {} )->delete_one; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on delete_one" ) or diag _truncate explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( deleted_count => 1, modified_count => ( $server_does_bulk ? 
0 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag _truncate explain $result; is( $coll->count, 1, "only one doc removed" ); }; } note("QA-477 MIXED OPERATIONS, UNORDERED"); subtest "mixed operations, unordered" => sub { $coll->drop; $coll->insert_one( { a => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_unordered_bulk_op; $bulk->find( { a => 1 } )->update_many( { '$set' => { b => 1 } } ); $bulk->find( { a => 2 } )->delete_many; $bulk->insert_one( { a => 3 } ); $bulk->find( { a => 4 } )->upsert->update_one( { '$set' => { b => 4 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on mixed operations" ) or diag _truncate explain $err; cmp_deeply( $result, _bulk_write_result( inserted_count => 1, matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), upserted_count => 1, deleted_count => 1, op_count => 4, batch_count => $server_does_bulk ? 3 : 4, # XXX QA Test says index should be 3, but with unordered, that's # not guaranteed, so we ignore the value upserted => [ { index => ignore(), _id => obj_isa("MongoDB::OID") } ], inserted => [ { index => ignore(), _id => obj_isa("MongoDB::OID") } ], ), "result object correct" ) or diag _truncate explain $result; }; note("QA-477 MIXED OPERATIONS, ORDERED"); subtest "mixed operations, ordered" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->insert_one( { a => 1 } ); $bulk->find( { a => 1 } )->update_one( { '$set' => { b => 1 } } ); $bulk->find( { a => 2 } )->upsert->update_one( { '$set' => { b => 2 } } ); $bulk->insert_one( { a => 3 } ); $bulk->find( { a => 3 } )->delete_many; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on mixed operations" ) or diag _truncate explain $err; cmp_deeply( $result, _bulk_write_result( inserted_count => 2, upserted_count => 1, matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), deleted_count => 1, op_count => 5, batch_count => $server_does_bulk ? 4 : 5, upserted => [ { index => 2, _id => obj_isa("MongoDB::OID") } ], inserted => [ { index => 0, _id => obj_isa("MongoDB::OID") }, { index => 3, _id => obj_isa("MongoDB::OID") }, ], ), "result object correct" ) or diag _truncate explain $result; }; note("QA-477 UNORDERED BATCH WITH ERRORS"); subtest "unordered batch with errors" => sub { $coll->drop; $coll->indexes->create_one( [ a => 1 ], { unique => 1 } ); my $bulk = $coll->initialize_unordered_bulk_op; $bulk->insert_one( { b => 1, a => 1 } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); $bulk->find( { b => 3 } )->upsert->update_one( { '$set' => { a => 2 } } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); $bulk->insert_one( { b => 4, a => 3 } ); $bulk->insert_one( { b => 5, a => 1 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag _truncate explain $err; my $details = $err->result; # Check if all ops ran in two batches (unless we're on a legacy server) is( $details->op_count, 6, "op_count" ); is( $details->batch_count, $server_does_bulk ? 2 : 6, "batch_count" ); # XXX QA 477 doesn't cover *both* possible orders. Either the inserts go # first or the upsert/update_ones goes first and different result states # are possible for each case. 
if ( $details->inserted_count == 2 ) { note("inserts went first"); is( $details->inserted_count, 2, "inserted_count" ); is( $details->upserted_count, 1, "upserted_count" ); is( $details->deleted_count, 0, "deleted_count" ); is( $details->matched_count, 0, "matched_count" ); is( $details->modified_count, ( $server_does_bulk ? 0 : undef ), "modified_count" ); is( $details->count_write_errors, 3, "writeError count" ) or diag _truncate explain $details; cmp_deeply( $details->upserted, [ { index => 4, _id => obj_isa("MongoDB::OID") }, ], "upsert list" ); } else { note("updates went first"); is( $details->inserted_count, 1, "inserted_count" ); is( $details->upserted_count, 2, "upserted_count" ); is( $details->deleted_count, 0, "deleted_count" ); is( $details->matched_count, 1, "matched_count" ); is( $details->modified_count, ( $server_does_bulk ? 0 : undef ), "modified_count" ); is( $details->count_write_errors, 2, "writeError count" ) or diag _truncate explain $details; cmp_deeply( $details->upserted, [ { index => 0, _id => obj_isa("MongoDB::OID") }, { index => 1, _id => obj_isa("MongoDB::OID") }, ], "upsert list" ); } my $distinct = [ $coll->distinct("a")->all ]; cmp_deeply( $distinct, bag( 1 .. 3 ), "distinct keys" ); }; note("QA-477 ORDERED BATCH WITH ERRORS"); subtest "ordered batch with errors" => sub { $coll->drop; $coll->indexes->create_one( [ a => 1 ], { unique => 1 } ); my $bulk = $coll->initialize_ordered_bulk_op; $bulk->insert_one( { b => 1, a => 1 } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); $bulk->find( { b => 3 } )->upsert->update_one( { '$set' => { a => 2 } } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); # fail $bulk->insert_one( { b => 4, a => 3 } ); $bulk->insert_one( { b => 5, a => 1 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ); my $details = $err->result; is( $details->upserted_count, 0, "upserted_count" ); is( $details->matched_count, 0, "matched_count" ); is( $details->deleted_count, 0, "deleted_count" ); is( $details->modified_count, ( $server_does_bulk ? 0 : undef ), "modified_count" ); is( $details->inserted_count, 1, "inserted_count" ); # on 2.6+, 4 ops run in two batches; but on legacy, we get an error on # the first update_one, so we only have two ops, still in two batches is( $details->op_count, $server_does_bulk ? 4 : 2, "op_count" ); is( $details->batch_count, 2, "op_count" ); is( $details->count_write_errors, 1, "writeError count" ); is( $details->write_errors->[0]{code}, 11000, "error code" ); is( $details->write_errors->[0]{index}, 1, "error index" ); ok( length $details->write_errors->[0]{errmsg}, "error string" ); cmp_deeply( $details->write_errors->[0]{op}, { q => Tie::IxHash->new( b => 2 ), u => obj_isa( $server_does_bulk ? 'MongoDB::BSON::_EncodedDoc' : 'Tie::IxHash' ), multi => false, upsert => true, }, "error op" ) or diag _truncate explain $details->write_errors->[0]{op}; is( $coll->count, 1, "subsequent inserts did not run" ); }; note("QA-477 BATCH SPLITTING: maxBsonObjectSize"); subtest "ordered batch split on size" => sub { local $TODO = "pending topology monitoring"; $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; my $big_string = "a" x ( 4 * 1024 * 1024 ); $bulk->insert_one( { _id => $_, a => $big_string } ) for 0 .. 
5; $bulk->insert_one( { _id => 0 } ); # will fail $bulk->insert_one( { _id => 100 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag "CAUGHT ERROR: $err"; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 6, "inserted_count" ); cmp_deeply( $details->inserted_ids, { map { $_ => $_ } 0 .. 5 }, "inserted_ids correct" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ) or diag _truncate explain $errdoc; is( $errdoc->{index}, 6, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 6, "collection count" ); }; subtest "unordered batch split on size" => sub { local $TODO = "pending topology monitoring"; $coll->drop; my $bulk = $coll->initialize_unordered_bulk_op; my $big_string = "a" x ( 4 * 1024 * 1024 ); $bulk->insert_one( { _id => $_, a => $big_string } ) for 0 .. 5; $bulk->insert_one( { _id => 0 } ); # will fail $bulk->insert_one( { _id => 100 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag $err; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 7, "inserted_count" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ) or diag _truncate explain $errdoc; is( $errdoc->{index}, 6, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 7, "collection count" ); }; note("QA-477 BATCH SPLITTING: maxWriteBatchSize"); subtest "ordered batch split on number of ops" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->insert_one( { _id => $_ } ) for 0 .. 1999; $bulk->insert_one( { _id => 0 } ); # will fail $bulk->insert_one( { _id => 10000 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag $err; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 2000, "inserted_count" ); cmp_deeply( $details->inserted_ids, { map { $_ => $_ } 0 .. 1999 }, "inserted_ids correct" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ); is( $errdoc->{index}, 2000, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 2000, "collection count" ); }; subtest "unordered batch split on number of ops" => sub { $coll->drop; my $bulk = $coll->initialize_unordered_bulk_op; $bulk->insert_one( { _id => $_ } ) for 0 .. 
1999; $bulk->insert_one( { _id => 0 } ); # will fail $bulk->insert_one( { _id => 10000 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag $err; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 2001, "inserted_count" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ); is( $errdoc->{index}, 2000, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 2001, "collection count" ); }; note("QA-477 RE-RUNNING A BATCH"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: rerun a bulk operation" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->insert_one( {} ); my $err = exception { $bulk->execute }; is( $err, undef, "first execute succeeds" ); $err = exception { $bulk->execute }; isa_ok( $err, 'MongoDB::Error', "re-running a bulk op throws exception" ); like( $err->message, qr/bulk op execute called more than once/, "error message" ) or diag _truncate explain $err; }; } note("QA-477 EMPTY BATCH"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: empty bulk operation" => sub { my $bulk = $coll->$method; my $err = exception { $bulk->execute }; isa_ok( $err, 'MongoDB::Error', "empty bulk op throws exception" ); like( $err->message, qr/no bulk ops to execute/, "error message" ) or diag _truncate explain $err; }; } note("QA-477 W>1 AGAINST STANDALONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: w > 1 against standalone (explicit)" => sub { plan skip_all => 'needs a standalone server' unless $is_standalone; $coll->drop; my $bulk = $coll->$method; $bulk->insert_one( {} ); my $err = exception { $bulk->execute( { w => 2 } ) }; isa_ok( $err, 'MongoDB::DatabaseError', "executing write concern w > 1 throws error" ); like( $err->message, qr/replica/, "error message mentions replication" ); }; subtest "$method: w > 1 against standalone (implicit)" => sub { plan skip_all => 'needs a standalone server' unless $is_standalone; $coll->drop; my $coll2 = $coll->clone( write_concern => { w => 2 } ); my $bulk = $coll2->$method; $bulk->insert_one( {} ); my $err = exception { $bulk->execute() }; isa_ok( $err, 'MongoDB::DatabaseError', "executing write concern w > 1 throws error" ); like( $err->message, qr/replica/, "error message mentions replication" ); }; } note("QA-477 WTIMEOUT PLUS DUPLICATE KEY ERROR"); subtest "initialize_unordered_bulk_op: wtimeout plus duplicate keys" => sub { plan skip_all => 'needs a replica set' unless $ismaster->{hosts}; # asking for w more than N hosts will trigger the error we need my $W = @{ $ismaster->{hosts} } + 1; $coll->drop; my $bulk = $coll->initialize_unordered_bulk_op; $bulk->insert_one( { _id => 1 } ); $bulk->insert_one( { _id => 1 } ); my $err = exception { $bulk->execute( { w => $W, wtimeout => 100 } ) }; isa_ok( $err, 'MongoDB::DuplicateKeyError', "executing throws error" ); my $details = $err->result; is( $details->inserted_count, 1, "inserted_count == 1" ); is( $details->count_write_errors, 1, "one write error" ); is( $details->count_write_concern_errors, 1, "one write concern error" ); }; note("QA-477 W = 0"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: w = 0" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->insert_one( { _id => 1 } ); 
$bulk->insert_one( { _id => 1 } ); $bulk->insert_one( { _id => 2 } ); # ensure success after failure my ( $result, $err ); $err = exception { $result = $bulk->execute( { w => 0 } ) }; is( $err, undef, "execute with w = 0 doesn't throw error" ) or diag _truncate explain $err; my $expect = $method eq 'initialize_ordered_bulk_op' ? 1 : 2; is( $coll->count, $expect, "document count ($expect)" ); }; } # This test was not included in the QA-477 test plan; it ensures that # write concerns are applied only after all operations finish note("WRITE CONCERN ERRORS"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: write concern errors" => sub { plan skip_all => 'needs a replica set' unless $ismaster->{hosts}; # asking for w more than N hosts will trigger the error we need my $W = @{ $ismaster->{hosts} } + 1; $coll->drop; my $bulk = $coll->$method; $bulk->insert_one( { _id => 1 } ); $bulk->insert_one( { _id => 2 } ); $bulk->find( { id => 3 } )->upsert->update_many( { '$set' => { x => 2 } } ); $bulk->insert_one( { _id => 4 } ); my $err = exception { $bulk->execute( { w => $W, wtimeout => 100 } ) }; isa_ok( $err, 'MongoDB::WriteConcernError', "executing throws error" ); my $details = $err->result; is( $details->inserted_count, 3, "inserted_count" ); is( $details->upserted_count, 1, "upserted_count" ); is( $details->count_write_errors, 0, "no write errors" ); ok( $details->count_write_concern_errors, "got write concern errors" ); }; } # Not in QA-477 -- Many methods take hashrefs, arrayrefs or Tie::IxHash # objects. The following tests check that arrayrefs and Tie::IxHash are legal # arguments to find, insert, update, update_one and replace_one. The # delete_many and delete_one methods take no arguments and don't need tests note("ARRAY REFS"); # Not in QA-477 -- this is perl driver specific subtest "insert (ARRAY)" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; is( $coll->count, 0, "no docs in collection" ); $bulk->insert_one( [ _id => 1 ] ); $bulk->insert_one( [] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag _truncate explain $err; is( $coll->count, 2, "doc count" ); }; subtest "update (ARRAY)" => sub { $coll->drop; $coll->insert_one( { _id => 1 } ); my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( [] )->update_many( [ '$set' => { x => 2 } ] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; is( $coll->find_one( {} )->{x}, 2, "document updated" ); }; subtest "update_one (ARRAY)" => sub { $coll->drop; $coll->insert_one( { _id => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( [] )->update_one( [ '$set' => { x => 2 } ] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update_one" ) or diag _truncate explain $err; is( $coll->count( { x => 2 } ), 1, "only one doc updated" ); }; subtest "replace_one (ARRAY)" => sub { $coll->drop; $coll->insert_one( { key => $_ } ) for 1 .. 
2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( [] )->replace_one( [ key => 3 ] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on replace" ) or diag _truncate explain $err; is( $coll->count( { key => 3 } ), 1, "only one doc replaced" ); }; note("Tie::IxHash"); subtest "insert (Tie::IxHash)" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; is( $coll->count, 0, "no docs in collection" ); $bulk->insert_one( Tie::IxHash->new( _id => 1 ) ); my $doc = Tie::IxHash->new(); $bulk->insert_one( $doc ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag _truncate explain $err; is( $coll->count, 2, "doc count" ); }; subtest "update (Tie::IxHash)" => sub { $coll->drop; $coll->insert_one( { _id => 1 } ); my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( Tie::IxHash->new() ) ->update_many( Tie::IxHash->new( '$set' => { x => 2 } ) ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; is( $coll->find_one( {} )->{x}, 2, "document updated" ); }; subtest "update_one (Tie::IxHash)" => sub { $coll->drop; $coll->insert_one( { _id => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( Tie::IxHash->new() ) ->update_one( Tie::IxHash->new( '$set' => { x => 2 } ) ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag _truncate explain $err; is( $coll->count( { x => 2 } ), 1, "only one doc updated" ); }; subtest "replace_one (Tie::IxHash)" => sub { $coll->drop; $coll->insert_one( { key => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( Tie::IxHash->new() )->replace_one( Tie::IxHash->new( key => 3 ) ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on replace" ) or diag _truncate explain $err; is( $coll->count( { key => 3 } ), 1, "only one doc replaced" ); }; # not in QA-477 note("W = 0 IGNORES ERRORS"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: w = 0" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->insert_one( { _id => 1 } ); $bulk->insert_one( { _id => 3, '$bad' => 1 } ); $bulk->insert_one( { _id => 4 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute( { w => 0 } ) }; is( $err, undef, "execute with w = 0 doesn't throw error" ) or diag _truncate explain $err; my $expect = $method eq 'initialize_ordered_bulk_op' ? 1 : 2; is( $coll->count, $expect, "document count ($expect)" ); }; } # DRIVERS-151 Handle edge case for pre-2.6 when upserted _id not returned note("UPSERT _ID NOT RETURNED"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: upsert with non OID _ids" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { _id => 0 } )->upsert->update_one( { '$set' => { a => 0 } } ); $bulk->find( { a => 1 } )->upsert->replace_one( { _id => 1 } ); # 2.6 doesn't allow changing _id, but previously that's OK, so we try it both ways # to ensure we use the right _id from the replace doc on older servers $bulk->find( { _id => $server_does_bulk ? 
2 : 3 } )->upsert->replace_one( { _id => 2 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "execute doesn't throw error" ) or diag _truncate explain $err; cmp_deeply( $result, _bulk_write_result( upserted_count => 3, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 0, _id => 0 }, { index => 1, _id => 1 }, { index => 2, _id => 2 }, ], op_count => 3, batch_count => $server_does_bulk ? 1 : 3, ), "result object correct" ) or diag _truncate explain $result; }; } subtest "replace with custom op_char" => sub { $coll->drop; my $coll2 = $coll->with_codec( op_char => '-' ); my $bulk = $coll2->ordered_bulk; $bulk->insert_one( { _id => 0 } ); $bulk->find( { _id => 0 } )->replace_one( { '-set' => { key => 1} } ); like( exception { $bulk->execute }, qr/replacement document must not contain update operators/, "single non-op key in update doc throws exception" ); }; # XXX QA-477 tests not covered herein: # MIXED OPERATIONS, AUTH # FAILOVER WITH MIXED VERSIONS done_testing; MongoDB-v1.2.2/t/bypass_doc_validation.t000644 000765 000024 00000020166 12651754051 020424 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More 0.96; use Test::Fatal; use utf8; use boolean; use MongoDB; use MongoDB::Error; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type get_capped/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_validating'); my $res; my $does_validation = $server_version >= v3.1.3; # only set up validation on servers that support it sub _drop_coll { $coll->drop; $testdb->run_command( [ create => $coll->name ] ); if ($does_validation) { $testdb->run_command( [ collMod => $coll->name, validator => { x => { '$exists' => 1 } } ] ); } pass("reset collection"); } subtest "insert_one" => sub { _drop_coll(); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->insert_one( {} ) }, qr/failed validation/, "invalid insert_one throws error" ); } is( exception { $coll->insert_one( {}, { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest "replace_one" => sub { _drop_coll(); my $id = $coll->insert_one( { x => 1 } )->inserted_id; SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->replace_one( { _id => $id }, { y => 1 } ) }, qr/failed validation/, "invalid replace_one throws error" ); } is( exception { $coll->replace_one( { _id => $id }, { y => 1 }, { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest "update_one" => sub { _drop_coll(); $coll->insert_one( { x => 1 } ); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->update_one( { x => 1 }, { '$unset' => { x => 1 } } ) }, qr/failed validation/, 
"invalid update_one throws error" ); } is( exception { $coll->update_one( { x => 1 }, { '$unset' => { x => 1 } }, { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest "update_many" => sub { _drop_coll(); $coll->insert_many( [ { x => 1 }, { x => 2 } ] ); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->update_many( {}, { '$unset' => { x => 1 } } ) }, qr/failed validation/, "invalid update_many throws error" ); } is( exception { $coll->update_many( {}, { '$unset' => { x => 1 } }, { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest 'bulk_write (unordered)' => sub { _drop_coll(); my $err; SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; $err = exception { $coll->bulk_write( [ [ insert_one => [ { x => 1 } ] ], [ insert_many => [ {}, { x => 8 } ] ], ], { ordered => 0 } ); }; like( $err, qr/failed validation/, "invalid bulk_write throws error" ); } $err = exception { $coll->bulk_write( [ [ insert_one => [ { x => 1 } ] ], [ insert_many => [ {}, { x => 8 } ] ], ], { bypassDocumentValidation => 1, ordered => 0 }, ); }; is( $err, undef, "validation bypassed" ); }; subtest 'bulk_write (ordered)' => sub { _drop_coll(); my $err; SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; $err = exception { $coll->bulk_write( [ [ insert_one => [ { x => 1 } ] ], [ insert_many => [ {}, { x => 8 } ] ], ], { ordered => 1 } ); }; like( $err, qr/failed validation/, "invalid bulk_write throws error" ); } $err = exception { $coll->bulk_write( [ [ insert_one => [ { x => 1 } ] ], [ insert_many => [ {}, { x => 8 } ] ], ], { bypassDocumentValidation => 1, ordered => 1 }, ); }; is( $err, undef, "validation bypassed" ); }; # insert_many uses bulk_write internally subtest "insert_many" => sub { _drop_coll(); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->insert_many( [ {}, {} ] ) }, qr/failed validation/, "invalid insert_many throws error" ); } is( exception { $coll->insert_many( [ {}, {} ], { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest "find_one_and_replace" => sub { _drop_coll(); $coll->insert_one( { x => 1 } ); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->find_one_and_replace( { x => 1 }, { y => 1 } ) }, qr/failed validation/, "invalid find_one_and_replace throws error" ); } is( exception { $coll->find_one_and_replace( { x => 1 }, { y => 1 }, { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest "find_one_and_update" => sub { _drop_coll(); $coll->insert_one( { x => 1 } ); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $coll->find_one_and_update( { x => 1 }, { '$unset' => { x => 1 } } ) }, qr/failed validation/, "invalid find_one_and_update throws error" ); } is( exception { $coll->find_one_and_update( { x => 1 }, { '$unset' => { x => 1 } }, { bypassDocumentValidation => 1 } ) }, undef, "validation bypassed" ); }; subtest "aggregate with \$out" => sub { _drop_coll(); plan skip_all => "Aggregation with \$out requires MongoDB 2.6+" unless $server_version >= v2.6.0; my $source = $testdb->get_collection('test_source'); $source->insert_many( [ map { { count => $_ } } 1 .. 
20 ] ); SKIP: { skip "without MongoDB 3.2+", 1 unless $does_validation; like( exception { $source->aggregate( [ { '$match' => { count => { '$gt' => 10 } } }, { '$out' => $coll->name } ] ); }, qr/failed validation/, "invalid aggregate output throws error" ); is( $coll->count, 0, "no docs in \$out collection" ); } is( exception { $source->aggregate( [ { '$match' => { count => { '$gt' => 10 } } }, { '$out' => $coll->name } ], { bypassDocumentValidation => 1 } ); }, undef, "validation bypassed" ); is( $coll->count, 10, "correct doc count in \$out collection" ); is( exception { $source->aggregate( [ { '$match' => { count => { '$gt' => 10 } } } ], { bypassDocumentValidation => 1 } ); }, undef, "bypassDocumentValidation without \$out", ); }; done_testing; MongoDB-v1.2.2/t/collection.t000644 000765 000024 00000052403 12651754051 016216 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Test::Deep qw/!blessed/; use utf8; use Tie::IxHash; use Encode qw(encode decode); use MongoDB::Timestamp; # needed if db is being run as master use MongoDB::Error; use MongoDB::Code; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); my $id; my $obj; my $ok; my $cursor; my $tied; # get_collection subtest get_collection => sub { my ( $db, $c ); ok( $c = $testdb->get_collection('foo'), "get_collection(NAME)" ); isa_ok( $c, 'MongoDB::Collection' ); is( $c->name, 'foo', 'get name' ); my $wc = MongoDB::WriteConcern->new( w => 2 ); ok( $c = $testdb->get_collection( 'foo', { write_concern => $wc } ), "get_collection(NAME, OPTION) (wc)" ); is( $c->write_concern->w, 2, "coll-level write concern as expected" ); ok( $c = $testdb->get_collection( 'foo', { write_concern => { w => 3 } } ), "get_collection(NAME, OPTION) (wc)" ); is( $c->write_concern->w, 3, "coll-level write concern coerces" ); my $rp = MongoDB::ReadPreference->new( mode => 'secondary' ); ok( $c = $testdb->get_collection( 'foo', { read_preference => $rp } ), "get_collection(NAME, OPTION) (rp)" ); is( $c->read_preference->mode, 'secondary', "coll-level read pref as expected" ); ok( $c = $testdb->get_collection( 'foo', { read_preference => { mode => 'nearest' } } ), "get_collection(NAME, OPTION) (rp)" ); is( $c->read_preference->mode, 'nearest', "coll-level read pref coerces" ); }; subtest get_namespace => sub { my $dbname = $testdb->name; my ( $db, $c ); ok( $c = $conn->get_namespace("$dbname.foo"), "get_namespace(NAME)" ); isa_ok( $c, 'MongoDB::Collection' ); is( $c->name, 'foo', 'get name' ); my $wc = MongoDB::WriteConcern->new( w => 2 ); ok( $c = $conn->get_namespace( "$dbname.foo", { write_concern => $wc } ), "get_collection(NAME, OPTION) (wc)" ); is( 
$c->write_concern->w, 2, "coll-level write concern as expected" ); ok( $c = $conn->ns("$dbname.foo"), "ns(NAME)" ); isa_ok( $c, 'MongoDB::Collection' ); is( $c->name, 'foo', 'get name' ); }; # very small insert { $id = $coll->insert_one({_id => 1})->inserted_id; is($id, 1); my $tiny = $coll->find_one; is($tiny->{'_id'}, 1); $coll->drop; $id = $coll->insert_one({})->inserted_id; isa_ok($id, 'MongoDB::OID'); $tiny = $coll->find_one; is($tiny->{'_id'}, $id); $coll->drop; } subtest write_concern => sub { my $c; ok( $c = $testdb->get_collection( 'foo', { write_concern => { w => 999 } } ), "get collection with w=999" ); my $err = exception { $c->insert_one( { _id => 1 } ) }; ok(ref $err && $err->isa('MongoDB::DatabaseError'), "collection-level write concern applies to insert_one" ) or diag "got:", explain $err; }; # insert { $id = $coll->insert_one({ just => 'another', perl => 'hacker' })->inserted_id; is($coll->count, 1, 'count'); $coll->replace_one({ _id => $id }, { just => "an\xE4oth\0er", mongo => 'hacker', with => { a => 'reference' }, and => [qw/an array reference/], }); is($coll->count, 1); } # inserting an _id subdoc with $ keys should be an error; only on 2.4+ if ( $server_version >= v2.4.0 ) { like( exception { $coll->insert_one( { '_id' => { '$oid' => "52d0b971b3ba219fdeb4170e" } } ) }, qr/WriteError/, "inserting an _id subdoc with \$ keys should error" ); } # rename { my $newcoll = $coll->rename('test_collection.rename'); is($newcoll->name, 'test_collection.rename', 'rename'); is($coll->count, 0, 'rename'); is($newcoll->count, 1, 'rename'); $coll = $newcoll->rename('test_collection'); is($coll->name, 'test_collection', 'rename'); is($coll->count, 1, 'rename'); is($newcoll->count, 0, 'rename'); } # count { is($coll->count({ mongo => 'programmer' }), 0, 'count = 0'); is($coll->count({ mongo => 'hacker' }), 1, 'count = 1'); is($coll->count({ 'with.a' => 'reference' }), 1, 'inner obj count'); # missing collection my $coll2 = $testdb->coll("aadfkasfa"); my $count; is( exception { $count = $coll2->count({}) }, undef, "count on missing collection lives" ); is( $count, 0, "count is correct" ); } # find_one { $obj = $coll->find_one; is($obj->{mongo} => 'hacker', 'find_one'); is(ref $obj->{with}, 'HASH', 'find_one type'); is($obj->{with}->{a}, 'reference'); is(ref $obj->{and}, 'ARRAY'); is_deeply($obj->{and}, [qw/an array reference/]); ok(!exists $obj->{perl}); is($obj->{just}, "an\xE4oth\0er"); } # find_id { my $doc = { a => 1, b => 2, c => 3 }; my $id = $coll->insert_one($doc)->inserted_id; my $result = $coll->find_id($id); is($result->{_id}, $id, 'find_id'); $result = $coll->find_id($id, { c => 3 }); cmp_deeply( $result, { _id => $id, c => 3 }, "find_id projection" ); $coll->delete_one($result); } # remove { $coll->delete_one($obj); is($coll->count, 0, 'remove() deleted everything (won\'t work on an old version of Mongo)'); } # doubles { my $pi = 3.14159265; ok($id = $coll->insert_one({ data => 'pi', pi => $pi })->inserted_id, "inserting float number value"); ok($obj = $coll->find_one({ data => 'pi' })); # can't test exactly because floating point nums are weird ok(abs($obj->{pi} - $pi) < .000000001); $coll->drop; my $object = {}; $object->{'autoPartNum'} = '123456'; $object->{'price'} = 123.19; $coll->insert_one($object); my $auto = $coll->find_one; like($auto->{'price'}, qr/^123\.\d+/, "round trip float looks like float"); ok(abs($auto->{'price'} - $object->{'price'}) < .000000001); } # undefined values { ok($id = $coll->insert_one({ data => 'null', none => undef })->inserted_id, 
'inserting undefined data'); ok($obj = $coll->find_one({ data => 'null' }), 'finding undefined row'); ok(exists $obj->{none}, 'got null field'); ok(!defined $obj->{none}, 'null field is undefined'); $coll->drop; } # utf8 { my ($down, $up, $non_latin) = ("\xE5", "\xE6", "\x{2603}"); utf8::upgrade($up); utf8::downgrade($down); my $insert = { down => $down, up => $up, non_latin => $non_latin }; my $copy = +{ %{$insert} }; $coll->insert_one($insert); my $utfblah = $coll->find_one; delete $utfblah->{_id}; is_deeply($utfblah, $copy, 'non-ascii values'); $coll->drop; $insert = { $down => "down", $up => "up", $non_latin => "non_latin" }; $copy = +{ %{$insert} }; $coll->insert_one($insert); $utfblah = $coll->find_one; delete $utfblah->{_id}; is_deeply($utfblah, $copy, 'non-ascii keys'); } # more utf8 { $coll->drop; $coll->insert_one({"\xe9" => "hi"}); my $utfblah = $coll->find_one; is($utfblah->{"\xe9"}, "hi", 'byte key'); } { $coll->drop; $coll->insert_one({x => 1, y => 2, z => 3, w => 4}); $cursor = $coll->query->fields({'y' => 1}); $obj = $cursor->next; is(exists $obj->{'y'}, 1, 'y exists'); is(exists $obj->{'_id'}, 1, '_id exists'); is(exists $obj->{'x'}, '', 'x doesn\'t exist'); is(exists $obj->{'z'}, '', 'z doesn\'t exist'); is(exists $obj->{'w'}, '', 'w doesn\'t exist'); } # batch insert { $coll->drop; my $ids = $coll->insert_many([{'x' => 1}, {'x' => 2}, {'x' => 3}])->inserted_ids; is($coll->count, 3, 'insert_many'); } # sort { $cursor = $coll->query->sort({'x' => 1}); my $i = 1; while ($obj = $cursor->next) { is($obj->{'x'}, $i++); } } # find_one fields { $coll->drop; $coll->insert_one({'x' => 1, 'y' => 2, 'z' => 3})->inserted_id; my $yer = $coll->find_one({}, {'y' => 1}); cmp_deeply( $yer, { _id => ignore(), y => 2 }, "projection fields correct" ); $coll->drop; $coll->insert_many([{"x" => 1}, {"x" => 1}, {"x" => 1}]); $coll->delete_one( { "x" => 1 } ); is ($coll->count, 2, 'remove just one'); } # tie::ixhash for update/insert { $coll->drop; my $hash = Tie::IxHash->new("f" => 1, "s" => 2, "fo" => 4, "t" => 3); $id = $coll->insert_one($hash)->inserted_id; isa_ok($id, 'MongoDB::OID'); $tied = $coll->find_one; is($tied->{'_id'}."", "$id"); is($tied->{'f'}, 1); is($tied->{'s'}, 2); is($tied->{'fo'}, 4); is($tied->{'t'}, 3); my $criteria = Tie::IxHash->new("_id" => $id); $hash->Push("something" => "else"); $coll->replace_one($criteria, $hash); $tied = $coll->find_one; is($tied->{'f'}, 1); is($tied->{'something'}, 'else'); } # () update/insert { $coll->drop; my @h = ("f" => 1, "s" => 2, "fo" => 4, "t" => 3); $id = $coll->insert_one(\@h)->inserted_id; isa_ok($id, 'MongoDB::OID'); $tied = $coll->find_one; is($tied->{'_id'}."", "$id"); is($tied->{'f'}, 1); is($tied->{'s'}, 2); is($tied->{'fo'}, 4); is($tied->{'t'}, 3); my @criteria = ("_id" => $id); my @newobj = ('$inc' => {"f" => 1}); $coll->update_one(\@criteria, \@newobj); $tied = $coll->find_one; is($tied->{'f'}, 2); } # multiple update { $coll->drop; $coll->insert_one({"x" => 1}); $coll->insert_one({"x" => 1}); $coll->insert_one({"x" => 2, "y" => 3}); $coll->insert_one({"x" => 2, "y" => 4}); $coll->update_one({"x" => 1}, {'$set' => {'x' => "hi"}}); # make sure one is set, one is not ok($coll->find_one({"x" => "hi"})); ok($coll->find_one({"x" => 1})); my $res = $coll->update_many({"x" => 2}, {'$set' => {'x' => 4}}); is($coll->count({"x" => 4}), 2) or diag explain $res; $cursor = $coll->query({"x" => 4})->sort({"y" => 1}); $obj = $cursor->next(); is($obj->{'y'}, 3); $obj = $cursor->next(); is($obj->{'y'}, 4); } # check with upsert if there are 
matches subtest "multiple update" => sub { plan skip_all => "multiple update won't work with db version $server_version" unless $server_version >= v1.3.0; $coll->update_many({"x" => 4}, {'$set' => {"x" => 3}}, {'upsert' => 1}); is($coll->count({"x" => 3}), 2, 'count'); $cursor = $coll->query({"x" => 3})->sort({"y" => 1}); $obj = $cursor->next(); is($obj->{'y'}, 3, 'y == 3'); $obj = $cursor->next(); is($obj->{'y'}, 4, 'y == 4'); }; # uninitialised array elements { $coll->drop; my @g = (); $g[1] = 'foo'; ok($id = $coll->insert_one({ data => \@g })->inserted_id); ok($obj = $coll->find_one()); is_deeply($obj->{data}, [undef, 'foo']); } # was float, now string { $coll->drop; my $val = 1.5; $val = 'foo'; ok($id = $coll->insert_one({ data => $val })->inserted_id); ok($obj = $coll->find_one({ data => $val })); is($obj->{data}, 'foo'); } # was string, now float { my $f = 'abc'; $f = 3.3; ok($id = $coll->insert_one({ data => $f })->inserted_id, 'insert float'); ok($obj = $coll->find_one({ data => $f })); ok(abs($obj->{data} - 3.3) < .000000001); } # timeout SKIP: { skip "buildbot is stupid", 1 if 1; my $timeout = $conn->query_timeout; $conn->query_timeout(0); for (0 .. 10000) { $coll->insert_one({"field1" => "foo", "field2" => "bar", 'x' => $_}); } eval { # XXX eval is deprecated, but we'll leave this test for now my $num = $testdb->eval('for (i=0;i<1000;i++) { print(.);}'); }; ok($@ && $@ =~ /recv timed out/, 'count timeout'); $conn->query_timeout($timeout); } # safe insert { $coll->drop; $coll->insert_one({_id => 1}); my $err = exception { $coll->insert_one({_id => 1}) }; ok( $err, "got error" ); isa_ok( $err, 'MongoDB::DatabaseError', "duplicate insert error" ); like( $err->message, qr/duplicate key/, 'error was duplicate key exception') } # find { $coll->drop; $coll->insert_one({x => 1}); $coll->insert_one({x => 4}); $coll->insert_one({x => 5}); $coll->insert_one({x => 1, y => 2}); $cursor = $coll->find({x=>4}); my $result = $cursor->next; is($result->{'x'}, 4, 'find'); $cursor = $coll->find({x=>{'$gt' => 1}})->sort({x => -1}); $result = $cursor->next; is($result->{'x'}, 5); $result = $cursor->next; is($result->{'x'}, 4); $cursor = $coll->find({y=>2})->fields({y => 1, _id => 0}); $result = $cursor->next; is(keys %$result, 1, 'find fields'); } # batch { $coll->drop; for (0..14) { $coll->insert_one({ x => $_ }) }; $cursor = $coll->find({} , { batchSize => 5 }); my @batch = $cursor->batch; is(scalar @batch, 5, 'batch'); $cursor->next; $cursor->next; @batch = $cursor->batch; is(scalar @batch, 3, 'batch with next'); @batch = $cursor->batch; is(scalar @batch, 5, 'batch after next'); @batch = $cursor->batch; ok(!@batch, 'empty batch'); } # ns hack # check insert utf8 { my $coll = $testdb->get_collection('test_collection'); $coll->drop; my $utf8 = "\x{4e2d}\x{56fd}"; $coll->insert_one({ foo => $utf8}); my $utfblah = $coll->find_one; is($utfblah->{foo}, $utf8,'round trip UTF-8'); $coll->drop; } # utf8 test, croak when null key is inserted { $ok = 0; my $kanji = "漢\0字"; utf8::encode($kanji); eval{ $ok = $coll->insert_one({ $kanji => 1}); }; is($ok,0,"Insert key with Null Char Operation Failed"); is($coll->count, 0, "Insert key with Null Char in Key Failed"); $coll->drop; $ok = 0; my $kanji_a = "漢\0字"; my $kanji_b = "漢\0字中"; my $kanji_c = "漢\0字国"; utf8::encode($kanji_a); utf8::encode($kanji_b); utf8::encode($kanji_c); eval { $ok = $coll->insert_many([{ $kanji_a => "some data"} , { $kanji_b => "some more data"}, { $kanji_c => "even more data"}]); }; is($ok,0, "insert_many key with Null Char in Key Operation 
Failed"); is($coll->count, 0, "insert_many key with Null Char in Key Failed"); $coll->drop; #test ixhash my $hash = Tie::IxHash->new("f\0f" => 1); eval { $ok = $coll->insert_one($hash); }; is($ok,0, "ixHash Insert key with Null Char in Key Operation Failed"); is($coll->count, 0, "ixHash key with Null Char in Key Operation Failed"); $tied = $coll->find_one; $coll->drop; } # aggregate subtest "aggregation" => sub { plan skip_all => "Aggregation framework unsupported on MongoDB $server_version" unless $server_version >= v2.2.0; $coll->insert_many( [ { wanted => 1, score => 56 }, { wanted => 1, score => 72 }, { wanted => 1, score => 96 }, { wanted => 1, score => 32 }, { wanted => 1, score => 61 }, { wanted => 1, score => 33 }, { wanted => 0, score => 1000 } ] ); my $cursor = $coll->aggregate( [ { '$match' => { wanted => 1 } }, { '$group' => { _id => 1, 'avgScore' => { '$avg' => '$score' } } } ] ); isa_ok( $cursor, 'MongoDB::QueryResult' ); my $res = [ $cursor->all ]; ok $res->[0]{avgScore} < 59; ok $res->[0]{avgScore} > 57; if ( $server_version < v2.5.0 ) { is( exception { $coll->aggregate( [ {'$match' => { count => {'$gt' => 0} } } ], { cursor => {} } ) }, undef, "asking for cursor when unsupported does not throw error" ); } }; # aggregation cursors subtest "aggregation cursors" => sub { plan skip_all => "Aggregation cursors unsupported on MongoDB $server_version" unless $server_version >= v2.5.0; for( 1..20 ) { $coll->insert_one( { count => $_ } ); } $cursor = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } } ], { cursor => 1 } ); isa_ok $cursor, 'MongoDB::QueryResult'; is $cursor->started_iterating, 1; is( ref( $cursor->_docs ), ref [ ] ); is $cursor->_doc_count, 20, "document count cached in cursor"; for( 1..20 ) { my $doc = $cursor->next; is( ref( $doc ), ref { } ); is $doc->{count}, $_; is $cursor->_doc_count, ( 20 - $_ ); } # make sure we can transition to a "real" cursor $cursor = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } } ], { cursor => { batchSize => 10 } } ); isa_ok $cursor, 'MongoDB::QueryResult'; is $cursor->started_iterating, 1; is( ref( $cursor->_docs), ref [ ] ); is $cursor->_doc_count, 10, "doc count correct"; for( 1..20 ) { my $doc = $cursor->next; isa_ok( $doc, 'HASH' ); is $doc->{count}, $_, "doc count field is $_"; } $coll->drop; }; # aggregation $out subtest "aggregation \$out" => sub { plan skip_all => "Aggregation result collections unsupported on MongoDB $server_version" unless $server_version >= v2.5.0; for( 1..20 ) { $coll->insert_one( { count => $_ } ); } my $result = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } }, { '$out' => 'test_out' } ] ); ok $result; my $res_coll = $testdb->get_collection( 'test_out' ); my $cursor = $res_coll->find; for( 1..20 ) { my $doc = $cursor->next; is( ref( $doc ), ref { } ); is $doc->{count}, $_; } $res_coll->drop; $coll->drop; }; # aggregation explain subtest "aggregation explain" => sub { plan skip_all => "Aggregation explain unsupported on MongoDB $server_version" unless $server_version >= v2.4.0; for ( 1..20 ) { $coll->insert_one( { count => $_ } ); } my $cursor = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } }, { '$sort' => { count => 1 } } ], { explain => 1 } ); my $result = $cursor->next; is( ref( $result ), 'HASH', "aggregate with explain returns a hashref" ); my $expected = $server_version >= v2.6.0 ? 
'stages' : 'serverPipeline'; ok( exists $result->{$expected}, "result had '$expected' field" ) or diag explain $result; $coll->drop; }; subtest "deep update" => sub { $coll->drop; $coll->insert_one( { _id => 1 } ); $coll->update_one( { _id => 1 }, { '$set' => { 'x.y' => 42 } } ); my $doc = $coll->find_one( { _id => 1 } ); is( $doc->{x}{y}, 42, "deep update worked" ); like( exception { $coll->replace_one( { _id => 1 }, { 'p.q' => 23 } ) }, qr/documents for storage cannot contain/, "replace with dots in field dies" ); }; subtest "count w/ hint" => sub { $coll->drop; $coll->insert_one( { i => 1 } ); $coll->insert_one( { i => 2 } ); is ($coll->count(), 2, 'count = 2'); $coll->indexes->create_one( { i => 1 } ); is( $coll->count( { i => 1 }, { hint => '_id_' } ), 1, 'count w/ hint & spec'); is( $coll->count( {}, { hint => '_id_' } ), 2, 'count w/ hint'); my $current_version = version->parse($server_version); my $version_2_6 = version->parse('v2.6'); if ( $current_version > $version_2_6 ) { eval { $coll->count( { i => 1 } , { hint => 'BAD HINT' } ) }; like($@, ($server_type eq "Mongos" ? qr/failed/ : qr/bad hint/ ), 'check bad hint error'); } else { is( $coll->count( { i => 1 } , { hint => 'BAD HINT' } ), 1, 'bad hint and spec'); } $coll->indexes->create_one( { x => 1 }, { sparse => 1 } ); if ($current_version > $version_2_6 ) { is( $coll->count( { i => 1 } , { hint => 'x_1' } ), 0, 'spec & hint on empty sparse index'); } else { is( $coll->count( { i => 1 } , { hint => 'x_1' } ), 1, 'spec & hint on empty sparse index'); } is( $coll->count( {}, { hint => 'x_1' } ), 2, 'hint on empty sparse index'); }; my $js_str = 'function() { return this.a > this.b }'; my $js_obj = MongoDB::Code->new( code => $js_str ); for my $criteria ( $js_str, $js_obj ) { my $type = ref($criteria) || 'string'; subtest "query with \$where as $type" => sub { $coll->drop; $coll->insert_one( { a => 1, b => 1, n => 1 } ); $coll->insert_one( { a => 2, b => 1, n => 2 } ); $coll->insert_one( { a => 3, b => 1, n => 3 } ); $coll->insert_one( { a => 0, b => 1, n => 4 } ); $coll->insert_one( { a => 1, b => 2, n => 5 } ); $coll->insert_one( { a => 2, b => 3, n => 6 } ); my @docs = $coll->find( { '$where' => $criteria } )->sort( { n => 1 } )->all; is( scalar @docs, 2, "correct count a > b" ) or diag explain @docs; cmp_deeply( \@docs, [ { _id => ignore(), a => 2, b => 1, n => 2 }, { _id => ignore(), a => 3, b => 1, n => 3 } ], "javascript query correct" ); }; } done_testing; MongoDB-v1.2.2/t/connection.t000644 000765 000024 00000006530 12651754051 016222 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
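#
# Scope of t/connection.t: MongoDB::MongoClient connection behavior --
# construction against a bad seedlist, get_database and database_names,
# wire protocol version checks, reconnect, topology_status, and the
# cooldown period for unreachable servers.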
# use strict; use warnings; use Test::More; use Test::Fatal; use MongoDB::Timestamp; # needed if db is being run as master use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); ok( $conn->connected, "client is connected" ); isa_ok( $conn, 'MongoDB::MongoClient' ); subtest "bad seedlist" => sub { my $conn2; is( exception { $conn2 = build_client( host => 'localhost', port => 1, connect_timeout_ms => 1000, server_selection_timeout => 1, ); }, undef, 'no exception on construction for bad port' ); ok( !$conn2->connected, "bad port reports not connected" ); }; subtest "get_database and check names" => sub { my $db = $conn->get_database( $testdb->name ); isa_ok( $db, 'MongoDB::Database', 'get_database' ); $db->get_collection('test_collection')->insert_one( { foo => 42 } ); ok( ( grep { /testdb/ } $conn->database_names ), 'database_names' ); my $result = $db->drop; is( $result->{'ok'}, 1, 'db was dropped' ); }; subtest "wire protocol versions" => sub { is $conn->_topology->{min_wire_version}, 0, 'default min wire version'; is $conn->_topology->{max_wire_version}, 3, 'default max wire version'; # monkey patch wire versions my $conn2 = build_client(); $conn2->_topology->{min_wire_version} = 100; $conn2->_topology->{max_wire_version} = 101; like( exception { $conn2->send_admin_command( [ is_master => 1 ] ) }, qr/Incompatible wire protocol/i, 'exception on wire protocol' ); }; subtest "reconnect" => sub { ok( $testdb->_client->reconnect, "ran reconnect" ); my $db = $conn->get_database( $testdb->name ); ok( $db->get_collection('test_collection')->insert_one( { foo => 42 } ), "inserted a doc after reconnection" ); }; subtest "topology status" => sub { my $res = $conn->topology_status( ); is( ref($res), 'HASH', "topology_status returns a hash reference" ); my $last = $res->{last_scan_time}; sleep 1; $res = $conn->topology_status( refresh => 1 ); ok( $res->{last_scan_time} > $last, "scan time refreshed" ); }; subtest "cooldown" => sub { my $conn = build_client( host => "mongodb://localhost:9" ); my $topo = $conn->_topology; $topo->scan_all_servers; my $orig_update = $topo->status_struct->{servers}[0]{last_update_time}; $topo->scan_all_servers; my $next_update = $topo->status_struct->{servers}[0]{last_update_time}; is( $next_update, $orig_update, "Unknown server not scanned again during cooldown" ); }; done_testing; MongoDB-v1.2.2/t/crud.t000644 000765 000024 00000057154 12651754051 015030 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
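#
# Scope of t/crud.t: the collection CRUD API -- insert_one, insert_many,
# delete_one, delete_many, replace_one, update_one, update_many, bulk_write,
# the find_one_and_* methods, and write concern errors on replica sets.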
# use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Test::Deep qw/!blessed/; use utf8; use Tie::IxHash; use MongoDB; use MongoDB::Error; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type get_capped/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); my $res; subtest "insert_one" => sub { # insert doc with _id $coll->drop; $res = $coll->insert_one( { _id => "foo", value => "bar" } ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => "foo", value => "bar" } ), "insert with _id: doc inserted" ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::InsertOneResult", "result" ); is( $res->inserted_id, "foo", "res->inserted_id" ); # insert doc without _id $coll->drop; my $orig = { value => "bar" }; my $doc = { %$orig }; $res = $coll->insert_one( $doc ); my @got = $coll->find( {} )->all; cmp_deeply( \@got, bag( { _id => ignore(), value => "bar" } ), "insert without _id: hash doc inserted" ); ok( $res->acknowledged, "result acknowledged" ); is( $got[0]{_id}, $res->inserted_id, "doc has expected inserted _id" ); cmp_deeply( $doc, $orig, "original unmodified" ); # insert arrayref $coll->drop; $res = $coll->insert_one( [ value => "bar" ] ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), value => "bar" } ), "insert without _id: array doc inserted" ); # insert Tie::Ixhash $coll->drop; $res = $coll->insert_one( Tie::IxHash->new( value => "bar" ) ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), value => "bar" } ), "insert without _id: Tie::IxHash doc inserted" ); }; subtest "insert_many" => sub { # insert docs with mixed _id and not and mixed types $coll->drop; my $doc = { value => "baz" }; $res = $coll->insert_many( [ [ _id => "foo", value => "bar" ], $doc, ] ); my @got = $coll->find( {} )->all; cmp_deeply( \@got, bag( { _id => "foo", value => "bar" }, { _id => ignore(), value => "baz" }, ), "insert many: docs inserted" ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::InsertManyResult", "result" ); cmp_deeply( $res->inserted, [ { index => 0, _id => 'foo' }, { index => 1, _id => obj_isa("MongoDB::OID") } ], "inserted contains correct hashrefs" ); cmp_deeply( $res->inserted_ids, { 0 => "foo", 1 => $res->inserted->[1]{_id}, }, "inserted_ids contains correct keys/values" ); is($res->inserted_count, 2, "Two docs inserted."); # ordered insert should halt on error $coll->drop; my $err = exception { $coll->insert_many( [ { _id => 0 }, { _id => 1 }, { _id => 2 }, { _id => 1 }, ] ) }; ok( $err, "ordered insert got an error" ); isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag explain $err; $res = $err->result; is( $res->inserted_count, 3, "only first three inserted" ); # unordered insert should not halt on error $coll->drop; $err = exception { $coll->insert_many( [ { _id => 0 }, { _id => 1 }, { _id => 1 }, { _id => 2 }, ], { ordered => 0 } ) }; ok( $err, "unordered insert got an error" ); isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag explain $err; $res = $err->result; is( $res->inserted_count, 3, "all valid docs inserted" ); # insert bad type $err = exception { $coll->insert_many( { x => 1 } ) }; like( $err, qr/must be an array reference/, "exception inserting bad type" ); }; subtest "delete_one" => sub { $coll->drop; $coll->insert_many( [ map { { 
_id => $_, x => "foo" } } 1 .. 3 ] ); is( $coll->count( { x => 'foo' } ), 3, "inserted three docs" ); $res = $coll->delete_one( { x => 'foo' } ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::DeleteResult", "result" ); is( $res->deleted_count, 1, "delete one document" ); is( $coll->count( { x => 'foo' } ), 2, "two documents left" ); $res = $coll->delete_one( { x => 'bar' } ); is( $res->deleted_count, 0, "delete non existent document does nothing" ); is( $coll->count( { x => 'foo' } ), 2, "two documents left" ); # test errors -- deletion invalid on capped collection my $cap = get_capped($testdb); $cap->insert_many( [ map { { _id => $_ } } 1..10 ] ); my $err = exception { $cap->delete_one( { _id => 4 } ) }; ok( $err, "deleting from capped collection throws error" ); isa_ok( $err, 'MongoDB::WriteError' ); like( $err->result->last_errmsg, qr/capped/, "error had string 'capped'" ); }; subtest "delete_many" => sub { $coll->drop; $coll->insert_many( [ map { { _id => $_, x => $_ } } 1 .. 3 ] ); is( $coll->count( {} ), 3, "inserted three docs" ); $res = $coll->delete_many( { x => { '$gt', 1 } } ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::DeleteResult", "result" ); is( $res->deleted_count, 2, "deleted two documents" ); is( $coll->count( {} ), 1, "one documents left" ); $res = $coll->delete_many( { y => 'bar' } ); is( $res->deleted_count, 0, "delete non existent document does nothing" ); is( $coll->count( {} ), 1, "one documents left" ); # test errors -- deletion invalid on capped collection my $cap = get_capped($testdb); $cap->insert_many( [ map { { _id => $_ } } 1..10 ] ); my $err = exception { $cap->delete_many( {} ) }; ok( $err, "deleting from capped collection throws error" ); isa_ok( $err, 'MongoDB::WriteError' ); like( $err->result->last_errmsg, qr/capped/, "error had string 'capped'" ); }; subtest "replace_one" => sub { $coll->drop; # replace missing doc without upsert $res = $coll->replace_one( { x => 1 }, { x => 2 } ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::UpdateResult", "result" ); is( $res->matched_count, 0, "matched count is zero" ); is( $coll->count( {} ), 0, "collection still empty" ); # replace missing with upsert $res = $coll->replace_one( { x => 1 }, { x => 2 }, { upsert => 1 } ); is( $res->matched_count, 0, "matched count is zero" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 0 : undef ), "modified count correct based on server version" ); ok( $server_version >= 2.6.0 ? $res->has_modified_count : !$res->has_modified_count, "has_modified_count correct" ); isa_ok( $res->upserted_id, "MongoDB::OID", "got upserted id" ); is( $coll->count( {} ), 1, "one doc in database" ); my $got = $coll->find_one( { _id => $res->upserted_id } ); is( $got->{x}, 2, "document contents correct" ); # replace existing with upsert -- add duplicate to confirm only one $coll->insert( { x => 2 } ); $res = $coll->replace_one( { x => 2 }, { x => 3 }, { upsert => 1 } ); is( $coll->count( {} ), 2, "replace existing with upsert" ); is( $res->matched_count, 1, "matched_count 1" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 
1 : undef ), "modified count correct based on server version" ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), x => 2 }, { _id => ignore, x => 3 } ), "collection docs correct" ); # replace existing without upsert $res = $coll->replace_one( { x => 3 }, { x => 4 } ); is( $coll->count( {} ), 2, "replace existing with upsert" ); is( $res->matched_count, 1, "matched_count 1" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 1 : undef ), "modified count correct based on server version" ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), x => 2 }, { _id => ignore, x => 4 } ), "collection docs correct" ); # replace doc with $op is an error my $err = exception { $coll->replace_one( { x => 3} , { '$set' => { x => 4 } } ) }; ok( $err, "replace with update operators is an error" ); like( $err, qr/must not contain update operators/, "correct error message" ); # replace doc with custom op_char is an error $err = exception { my $coll2 = $coll->with_codec( op_char => '-' ); $coll2->replace_one( { x => 3} , { -set => { x => 4 } } ) }; ok( $err, "replace with op_char update operators is an error" ); like( $err, qr/must not contain update operators/, "correct error message" ); }; subtest "update_one" => sub { $coll->drop; # update missing doc without upsert $res = $coll->update_one( { x => 1 }, { '$set' => { x => 2 } } ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::UpdateResult", "result" ); is( $res->matched_count, 0, "matched count is zero" ); is( $coll->count( {} ), 0, "collection still empty" ); # update missing with upsert $res = $coll->update_one( { x => 1 }, { '$set' => { x => 2 } }, { upsert => 1 } ); is( $res->matched_count, 0, "matched count is zero" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 0 : undef ), "modified count correct based on server version" ); isa_ok( $res->upserted_id, "MongoDB::OID", "got upserted id" ); is( $coll->count( {} ), 1, "one doc in database" ); my $got = $coll->find_one( { _id => $res->upserted_id } ); is( $got->{x}, 2, "document contents correct" ); # update existing with upsert -- add duplicate to confirm only one $coll->insert( { x => 2 } ); $res = $coll->update_one( { x => 2 }, { '$set' => { x => 3 } }, { upsert => 1 } ); is( $coll->count( {} ), 2, "update existing with upsert" ); is( $res->matched_count, 1, "matched_count 1" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 1 : undef ), "modified count correct based on server version" ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), x => 2 }, { _id => ignore, x => 3 } ), "collection docs correct" ); # update existing without upsert $res = $coll->update_one( { x => 3 }, { '$set' => { x => 4 } } ); is( $coll->count( {} ), 2, "update existing with upsert" ); is( $res->matched_count, 1, "matched_count 1" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 
1 : undef ), "modified count correct based on server version" ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), x => 2 }, { _id => ignore, x => 4 } ), "collection docs correct" ); # update doc without $op is an error my $err = exception { $coll->update_one( { x => 3} , { x => 4 } ) }; ok( $err, "update without update operators is an error" ); like( $err, qr/must only contain update operators/, "correct error message" ); }; subtest "update_many" => sub { $coll->drop; # update missing doc without upsert $res = $coll->update_many( { x => 1 }, { '$set' => { x => 2 } } ); ok( $res->acknowledged, "result acknowledged" ); isa_ok( $res, "MongoDB::UpdateResult", "result" ); is( $res->matched_count, 0, "matched count is zero" ); is( $coll->count( {} ), 0, "collection still empty" ); # update missing with upsert $res = $coll->update_many( { x => 1 }, { '$set' => { x => 2 } }, { upsert => 1 } ); is( $res->matched_count, 0, "matched count is zero" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 0 : undef ), "modified count correct based on server version" ); isa_ok( $res->upserted_id, "MongoDB::OID", "got upserted id" ); is( $coll->count( {} ), 1, "one doc in database" ); my $got = $coll->find_one( { _id => $res->upserted_id } ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), x => 2 } ), "collection docs correct" ); # update existing with upsert -- add duplicate to confirm multiple $coll->insert( { x => 2 } ); $res = $coll->update_many( { x => 2 }, { '$set' => { x => 3 } }, { upsert => 1 } ); is( $coll->count( {} ), 2, "update existing with upsert" ); is( $res->matched_count, 2, "matched_count 2" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 2 : undef ), "modified count correct based on server version" ); cmp_deeply( [ $coll->find( {} )->all ], bag( { _id => ignore(), x => 3 }, { _id => ignore, x => 3 } ), "collection docs correct" ); # update existing without upsert $res = $coll->update_many( { x => 3 }, { '$set' => { x => 4 } } ); is( $coll->count( {} ), 2, "update existing with upsert" ); is( $res->matched_count, 2, "matched_count 1" ); is( $res->modified_count, ( $server_version >= v2.6.0 ? 
2 : undef ),
        "modified count correct based on server version" );
    cmp_deeply(
        [ $coll->find( {} )->all ],
        bag( { _id => ignore(), x => 4 }, { _id => ignore, x => 4 } ),
        "collection docs correct"
    );

    # update doc without $op is an error
    my $err = exception { $coll->update_one( { x => 3 } , { x => 4 } ) };
    ok( $err, "update without update operators is an error" );
    like( $err, qr/must only contain update operators/, "correct error message" );
};

subtest 'bulk_write' => sub {
    $coll->drop;

    # test mixed-form write models, array/hash refs or pairs
    $res = $coll->bulk_write(
        [
            [ insert_one => [ { x => 1 } ] ],
            { insert_many => [ { x => 2 }, { x => 3 } ] },
            replace_one => [ { x => 1 }, { x => 4 } ],
            update_one => [ { x => 7 }, { '$set' => { x => 5 } }, { upsert => 1 } ],
            [ insert_one => [ { x => 6 } ] ],
            { insert_many => [ { x => 7 }, { x => 8 } ] },
            delete_one => [ { x => 4 } ],
            delete_many => [ { x => { '$lt' => 3 } } ],
            update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ],
        ],
    );
    ok( $res->acknowledged, "result acknowledged" );
    isa_ok( $res, "MongoDB::BulkWriteResult", "result" );
    is( $res->op_count, 11, "op count correct" );
    my @got = $coll->find( {} )->all;
    cmp_deeply(
        \@got,
        bag( map { { _id => ignore, x => $_ } } 3, 5, 7, 8, 9 ),
        "collection docs correct",
    ) or diag explain \@got;

    # test ordered error
    # ordered insert should halt on error
    $coll->drop;
    my $err = exception {
        $coll->bulk_write(
            [
                insert_one => [ { _id => 1 } ],
                insert_one => [ { _id => 2 } ],
                insert_one => [ { _id => 1 } ],
            ],
            { ordered => 1, },
        );
    };
    ok( $err, "ordered bulk got an error" );
    isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' )
      or diag explain $err;
    $res = $err->result;
    is( $res->inserted_count, 2, "only first two inserted" );

    # test unordered error
    # unordered insert should not halt on error
    $coll->drop;
    $err = exception {
        $coll->bulk_write(
            [
                insert_one => [ { _id => 1 } ],
                insert_one => [ { _id => 2 } ],
                insert_one => [ { _id => 1 } ],
                insert_one => [ { _id => 3 } ],
            ],
            { ordered => 0, },
        );
    };
    ok( $err, "unordered bulk got an error" );
    isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' )
      or diag explain $err;
    $res = $err->result;
    is( $res->inserted_count, 3, "three valid docs inserted" );
};

subtest "find_one_and_delete" => sub {
    $coll->drop;
    $coll->insert_one( { x => 1, y => 'a' } );
    $coll->insert_one( { x => 1, y => 'b' } );
    is( $coll->count( {} ), 2, "inserted 2 docs" );

    my $doc;

    # find non-existent doc
    $doc = $coll->find_one_and_delete( { x => 2 } );
    is( $doc, undef, "find_one_and_delete on nonexistent doc returns undef" );
    is( $coll->count( {} ), 2, "still 2 docs" );

    # find/remove existing doc (testing sort and projection, too)
    $doc = $coll->find_one_and_delete( { x => 1 },
        { sort => [ y => 1 ], projection => { y => 1 } } );
    cmp_deeply( $doc, { _id => ignore(), y => 'a' }, "expected doc returned" );
    is( $coll->count( {} ), 1, "only 1 doc left" );

    # XXX how to test max_time_ms?
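    # One possible approach (illustrative sketch only, not exercised here):
    # on a server started with enableTestCommands, turn on the
    # 'maxTimeAlwaysTimeOut' failpoint and expect the operation to fail.
    # The 'maxTimeMS' option name below is an assumption.
    #
    #   $conn->send_admin_command(
    #       [ configureFailPoint => 'maxTimeAlwaysTimeOut', mode => 'alwaysOn' ] );
    #   my $mt_err = exception {
    #       $coll->find_one_and_delete( { x => 1 }, { maxTimeMS => 10 } )
    #   };
    #   ok( $mt_err, "find_one_and_delete errors when maxTimeMS is exceeded" );
    #   $conn->send_admin_command(
    #       [ configureFailPoint => 'maxTimeAlwaysTimeOut', mode => 'off' ] );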
}; subtest "find_one_and_replace" => sub { $coll->drop; $coll->insert_one( { x => 1, y => 'a' } ); $coll->insert_one( { x => 1, y => 'b' } ); is( $coll->count( {} ), 2, "inserted 2 docs" ); my $doc; # find and replace non-existent doc, without upsert $doc = $coll->find_one_and_replace( { x => 2 }, { x => 3, y => 'c' } ); is( $doc, undef, "find_one_and_replace on nonexistent doc returns undef" ); is( $coll->count( {} ), 2, "still 2 docs" ); is( $coll->count( { x => 3 } ), 0, "no docs matching replacment" ); # find and replace non-existent doc, with upsert $doc = $coll->find_one_and_replace( { x => 2 }, { x => 3, y => 'c' }, { upsert => 1 } ); if ( $server_version >= v2.2.0 ) { is( $doc, undef, "find_one_and_replace upsert on nonexistent doc returns undef" ); } is( $coll->count( {} ), 3, "doc has been upserted" ); is( $coll->count( { x => 3 } ), 1, "1 doc matching replacment" ); # find and replace existing doc, with upsert $doc = $coll->find_one_and_replace( { x => 3 }, { x => 4, y => 'c' }, { upsert => 1 }); cmp_deeply( $doc, { _id => ignore(), x => 3, y => 'c' }, "find_one_and_replace on existing doc returned old doc", ); is( $coll->count( {} ), 3, "no new doc added" ); is( $coll->count( { x => 4 } ), 1, "1 doc matching replacment" ); # find and replace existing doc, with after doc $doc = $coll->find_one_and_replace( { x => 4 }, { x => 5, y => 'c' }, { returnDocument => 'after' }); cmp_deeply( $doc, { _id => ignore(), x => 5, y => 'c' }, "find_one_and_replace on existing doc returned new doc", ); is( $coll->count( {} ), 3, "no new doc added" ); is( $coll->count( { x => 5 } ), 1, "1 doc matching replacment" ); # test project and sort $doc = $coll->find_one_and_replace( { x => 1 }, { x => 2, y => 'z' }, { sort => [ y => -1 ], projection => { y => 1 } } ); cmp_deeply( $doc, { _id => ignore(), y => 'b' }, "find_one_and_replace on existing doc returned new doc", ); is( $coll->count( { x => 2 } ), 1, "1 doc matching replacment" ); is( $coll->count( { x => 1, y => 'a' } ), 1, "correct doc untouched" ); # test duplicate key error $coll->drop; $coll->insert_many( [ map { { _id => $_ } } 0 .. 
2 ] ); my $err = exception { $coll->find_one_and_replace( { x => 1 }, { _id => 0 }, { upsert => 1 } ); }; ok( $err, "upsert dup key got an error" ); isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag explain $err; }; subtest "find_one_and_update" => sub { $coll->drop; $coll->insert_one( { x => 1, y => 'a' } ); $coll->insert_one( { x => 1, y => 'b' } ); is( $coll->count( {} ), 2, "inserted 2 docs" ); my $doc; # find and update non-existent doc, without upsert $doc = $coll->find_one_and_update( { x => 2 }, { '$inc' => { x => 1 } } ); is( $doc, undef, "find_one_and_update on nonexistent doc returns undef" ); is( $coll->count( {} ), 2, "still 2 docs" ); is( $coll->count( { x => 3 } ), 0, "no docs matching update" ); # find and update non-existent doc, with upsert $doc = $coll->find_one_and_update( { x => 2 }, { '$inc' => { x => 1 }, '$set' => { y => 'c' } }, { upsert => 1 } ); if ( $server_version >= v2.2.0 ) { is( $doc, undef, "find_one_and_update upsert on nonexistent doc returns undef" ); } is( $coll->count( {} ), 3, "doc has been upserted" ); is( $coll->count( { x => 3 } ), 1, "1 doc matching upsert" ); # find and update existing doc, with upsert $doc = $coll->find_one_and_update( { x => 3 }, { '$inc' => { x => 1 } }, { upsert => 1 }); cmp_deeply( $doc, { _id => ignore(), x => 3, y => 'c' }, "find_one_and_update on existing doc returned old doc", ); is( $coll->count( {} ), 3, "no new doc added" ); is( $coll->count( { x => 4 } ), 1, "1 doc matching replacment" ); # find and update existing doc, with after doc $doc = $coll->find_one_and_update( { x => 4 }, { '$inc' => { x => 1 } }, { returnDocument => 'after' }); cmp_deeply( $doc, { _id => ignore(), x => 5, y => 'c' }, "find_one_and_update on existing doc returned new doc", ); is( $coll->count( {} ), 3, "no new doc added" ); is( $coll->count( { x => 5 } ), 1, "1 doc matching replacment" ); # test project and sort $doc = $coll->find_one_and_update( { x => 1 }, { '$inc' => { x => 1 }, '$set' => { y => 'z' } }, { sort => [ y => -1 ], projection => { y => 1 } } ); cmp_deeply( $doc, { _id => ignore(), y => 'b' }, "find_one_and_update on existing doc returned new doc", ); is( $coll->count( { x => 2 } ), 1, "1 doc matching replacment" ); is( $coll->count( { x => 1, y => 'a' } ), 1, "correct doc untouched" ); # test duplicate key error $coll->drop; $coll->indexes->create_one([x => 1], {unique => 1}); $coll->insert_many( [ map { { _id => $_, x => $_ } } 1 .. 
3 ] ); my $err = exception { $coll->find_one_and_update( { x => 0 }, { '$set' => { x => 1 } }, { upsert => 1 } ); }; ok( $err, "update dup key got an error" ); isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag explain $err; }; subtest "write concern errors" => sub { plan skip_all => "not a replica set" unless $server_type eq 'RSPrimary'; $coll->drop; my $coll2 = $coll->clone( write_concern => { w => 99 } ); my @cases = ( [ insert_one => [ { x => 1 } ] ], [ insert_many => [ [ { x => 2 }, { x => 3 } ] ] ], [ delete_one => [ { x => 1 } ] ], [ delete_one => [ {} ] ], [ replace_one => [ { x => 0 }, { x => 1 }, { upsert => 1 } ] ], [ update_one => [ { x => 1 }, { '$inc' => { x => 1 } } ] ], ); # findAndModify doesn't take write concern until MongoDB 3.2 if ( $server_version >= v3.2.0 ) { push @cases, ( [ find_one_and_replace => [ { x => 2 }, { x => 1 } ] ], [ find_one_and_update => [ { x => 1 }, { '$inc' => { x => 1 } } ] ], [ find_one_and_delete => [ { x => 2 } ] ], ); } for my $c ( @cases ) { my ($method, $args) = @$c; my $res; my $err = exception { $res = $coll2->$method( @$args ) }; ok( $err, "caught error for $method" ) or diag explain $res; isa_ok( $err, 'MongoDB::WriteConcernError', "$method error" ) or diag explain $err; } }; done_testing; MongoDB-v1.2.2/t/crud_spec.t000644 000765 000024 00000020674 12651754051 016037 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
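#
# Scope of t/crud_spec.t: data-driven CRUD spec tests. Each JSON file under
# t/data/CRUD/read and t/data/CRUD/write supplies initial collection data and
# a list of tests; each test names an operation (e.g. "updateOne"), its
# arguments, and the expected outcome (a result and/or the final collection
# contents). Operation names are converted from camelCase to the driver's
# snake_case method names and dispatched to the test_* helpers below.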
# use strict; use warnings; use Test::More 0.96; use JSON::MaybeXS; use Test::Deep; use Path::Tiny; use Try::Tiny; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type get_capped/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); for my $dir ( map { path("t/data/CRUD/$_") } qw/read write/ ) { my $iterator = $dir->iterator( { recurse => 1 } ); while ( my $path = $iterator->() ) { next unless -f $path && $path =~ /\.json$/; my $plan = eval { decode_json( $path->slurp_utf8 ) }; if ($@) { die "Error decoding $path: $@"; } my $name = $path->relative($dir)->basename(".json"); subtest $name => sub { for my $test ( @{ $plan->{tests} } ) { $coll->drop; $coll->insert_many( $plan->{data} ); my $op = $test->{operation}; my $meth = $op->{name}; $meth =~ s{([A-Z])}{_\L$1}g; my $test_meth = "test_$meth"; my $res = main->$test_meth( $test->{description}, $meth, $op->{arguments}, $test->{outcome} ); } }; } } #--------------------------------------------------------------------------# # generic tests #--------------------------------------------------------------------------# sub test_read_w_filter { my ( $class, $label, $method, $args, $outcome ) = @_; my $filter = delete $args->{filter}; my $res = $coll->$method( grep { defined } $filter, $args ); check_read_outcome( $label, $res, $outcome ); } sub test_write_w_filter { my ( $class, $label, $method, $args, $outcome ) = @_; my $filter = delete $args->{filter}; my $res = $coll->$method( $filter, ( scalar %$args ? $args : () ) ); if ( $method =~ /^find_one/ ) { check_find_one_outcome( $label, $res, $outcome ); } else { check_write_outcome( $label, $res, $outcome ); } } sub test_insert { my ( $class, $label, $method, $args, $outcome ) = @_; $args = delete $args->{document} || delete $args->{documents}; my $res = $coll->$method($args); check_insert_outcome( $label, $res, $outcome ); } sub test_modify { my ( $class, $label, $method, $args, $outcome ) = @_; my $filter = delete $args->{filter}; # SERVER-5289 -- _id not taken from filter before 2.6 if ( $server_version < v2.6.0 && !$coll->find_one($filter) && $args->{upsert} && exists( $args->{replacement} ) ) { $outcome->{collection}{data}[-1]{_id} = ignore(); } my $doc = delete $args->{replacement} || delete $args->{update}; my $res = $coll->$method( $filter, $doc, ( scalar %$args ? $args : () ) ); check_write_outcome( $label, $res, $outcome ); } sub test_find_and_modify { my ( $class, $label, $method, $args, $outcome ) = @_; my $filter = delete $args->{filter}; my $doc = delete $args->{replacement} || delete $args->{update}; $args->{returnDocument} = lc( $args->{returnDocument} ) if exists $args->{returnDocument}; # SERVER-17650 -- before 3.0, this case returned empty doc if ( $server_version < v3.0.0 && !$coll->find_one($filter) && ( !$args->{returnDocument} || $args->{returnDocument} eq 'before' ) && $args->{upsert} && $args->{sort} ) { $outcome->{result} = {}; } # SERVER-5289 -- _id not taken from filter before 2.6 if ( $server_version < v2.6.0 ) { if ( $outcome->{result} && ( !exists $args->{projection}{_id} || $args->{projection}{_id} ) ) { $outcome->{result}{_id} = ignore(); } if ( $args->{upsert} && !$coll->find_one($filter) ) { $outcome->{collection}{data}[-1]{_id} = ignore(); } } my $res = $coll->$method( $filter, $doc, ( scalar %$args ? 
$args : () ) ); check_find_one_outcome( $label, $res, $outcome ); } BEGIN { *test_find = \&test_read_w_filter; *test_count = \&test_read_w_filter; *test_delete_many = \&test_write_w_filter; *test_delete_one = \&test_write_w_filter; *test_insert_many = \&test_insert; *test_insert_one = \&test_insert; *test_replace_one = \&test_modify; *test_update_one = \&test_modify; *test_update_many = \&test_modify; *test_find_one_and_delete = \&test_write_w_filter; *test_find_one_and_replace = \&test_find_and_modify; *test_find_one_and_update = \&test_find_and_modify; } #--------------------------------------------------------------------------# # method-specific tests #--------------------------------------------------------------------------# sub test_aggregate { my ( $class, $label, $method, $args, $outcome ) = @_; plan skip_all => "aggregate not available until MongoDB v2.2" unless $server_version > v2.2.0; my $pipeline = delete $args->{pipeline}; # $out not supported until 2.6 my $is_out = exists $pipeline->[-1]{'$out'}; return if $is_out && $server_version < v2.6.0; # Perl driver returns empty result if $out $outcome->{result} = [] if $is_out; my $res = $coll->aggregate( grep { defined } $pipeline, $args ); check_read_outcome( $label, $res, $outcome ); } sub test_distinct { my ( $class, $label, $method, $args, $outcome ) = @_; my $fieldname = delete $args->{fieldName}; my $filter = delete $args->{filter}; my $res = $coll->distinct( grep { defined } $fieldname, $filter, $args ); check_read_outcome( $label, $res, $outcome ); } #--------------------------------------------------------------------------# # outcome checkers #--------------------------------------------------------------------------# sub check_read_outcome { my ( $label, $res, $outcome ) = @_; if ( ref $outcome->{result} ) { my $all = [ $res->all ]; cmp_deeply( $all, $outcome->{result}, "$label: result documents" ) or diag explain $all; } else { is( $res, $outcome->{result}, "$label: result scalar" ); } check_collection( $label, $outcome ); } sub check_write_outcome { my ( $label, $res, $outcome ) = @_; for my $k ( keys %{ $outcome->{result} } ) { ( my $attr = $k ) =~ s{([A-Z])}{_\L$1}g; if ( $server_version < v2.6.0 ) { $outcome->{result}{$k} = undef if $k eq 'modifiedCount'; $outcome->{result}{$k} = ignore() if $k eq 'upsertedId'; } cmp_deeply( $res->$attr, $outcome->{result}{$k}, "$label: $k" ); } check_collection( $label, $outcome ); } sub check_find_one_outcome { my ( $label, $res, $outcome ) = @_; cmp_deeply( $res, $outcome->{result}, "$label: result doc" ) or diag explain $res; check_collection( $label, $outcome ); } sub check_insert_outcome { my ( $label, $res, $outcome ) = @_; if ( exists $outcome->{result}{insertedId} ) { return check_write_outcome( $label, $res, $outcome ); } my $ids = [ map { $res->inserted_ids->{$_} } sort { $a <=> $b } keys %{ $res->inserted_ids } ]; cmp_deeply( $ids, $outcome->{result}{insertedIds}, "$label: result doc" ); check_collection( $label, $outcome ); } sub check_collection { my ( $label, $outcome ) = @_; return unless exists $outcome->{collection}; my $out_coll = exists( $outcome->{collection}{name} ) ? 
$testdb->coll( $outcome->{collection}{name} ) : $coll; my $data = [ $out_coll->find( {} )->all ]; cmp_deeply( $data, $outcome->{collection}{data}, "$label: collection data" ) or diag "GOT:\n", explain($data), "EXPECTED:\n", explain( $outcome->{collection}{data} ); } done_testing; MongoDB-v1.2.2/t/cursor.t000644 000765 000024 00000030170 12651754051 015375 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More; use Test::Fatal; use Tie::IxHash; use version; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); my $coll2 = $testdb->get_collection("cap_collection"); # after dropping coll2, must run command below to make it capped my $create_capped_cmd = [ create => "cap_collection", capped => 1, size => 10000 ]; my $cursor; my @values; # test setup { $coll->drop; $coll->insert_one({ foo => 9, bar => 3, shazbot => 1 }); $coll->insert_one({ foo => 2, bar => 5 }); $coll->insert_one({ foo => -3, bar => 4 }); $coll->insert_one({ foo => 4, bar => 9, shazbot => 1 }); } # $coll->query { @values = $coll->query({}, { sort_by => { foo => 1 } })->all; is(scalar @values, 4); is ($values[0]->{foo}, -3); is ($values[1]->{foo}, 2); is ($values[2]->{foo}, 4); is ($values[3]->{foo}, 9); @values = $coll->query({}, { sort_by => { bar => -1 } })->all; is(scalar @values, 4); is($values[0]->{bar}, 9); is($values[1]->{bar}, 5); is($values[2]->{bar}, 4); is($values[3]->{bar}, 3); } # criteria { @values = $coll->query({ shazbot => 1 }, { sort_by => { foo => -1 } })->all; is(scalar @values, 2); is($values[0]->{foo}, 9); is($values[1]->{foo}, 4); } # limit { @values = $coll->query({}, { limit => 3, sort_by => { foo => 1 } })->all; is(scalar @values, 3) or diag explain \@values; is ($values[0]->{foo}, -3); is ($values[1]->{foo}, 2); is ($values[2]->{foo}, 4); } # skip { @values = $coll->query({}, { limit => 3, skip => 1, sort_by => { foo => 1 } })->all; is(scalar @values, 3); is ($values[0]->{foo}, 2); is ($values[1]->{foo}, 4); is ($values[2]->{foo}, 9); } $coll->drop; # next and all { is($coll->query->next, undef, 'test undef'); is_deeply([$coll->query->all], []); my $id1 = $coll->insert_one({x => 1})->inserted_id; my $id2 = $coll->insert_one({x => 5})->inserted_id; is($coll->count, 2); $cursor = $coll->query; is($cursor->next->{'x'}, 1); is($cursor->next->{'x'}, 5); is($cursor->next, undef); my $cursor2 = $coll->query({x => 5}); is_deeply([$cursor2->all], [{_id => $id2, x => 5}]); is_deeply([$coll->query->all], [{_id => $id1, x => 1}, {_id => $id2, x => 5}]); } # sort, and sort by tie::ixhash { my $cursor_sort = $coll->query->sort({'x' => -1}); is($cursor_sort->has_next, 1); is($cursor_sort->next->{'x'}, 5, 'Cursor->sort'); 
is($cursor_sort->next->{'x'}, 1); $cursor_sort = $coll->query->sort({'x' => 1}); is($cursor_sort->next->{'x'}, 1); is($cursor_sort->next->{'x'}, 5); my $hash = Tie::IxHash->new("x" => -1); $cursor_sort = $coll->query->sort($hash); is($cursor_sort->has_next, 1); is($cursor_sort->next->{'x'}, 5, 'Tie::IxHash cursor->sort'); is($cursor_sort->next->{'x'}, 1); } # snapshot # XXX tests don't fail if snapshot is turned off ?!? { my $cursor3 = $coll->query->snapshot(1); is($cursor3->has_next, 1, 'check has_next'); my $r1 = $cursor3->next; is($cursor3->has_next, 1, 'if this failed, the database you\'re running is old and snapshot won\'t work'); $cursor3->next; is(int $cursor3->has_next, 0, 'check has_next is false'); like( exception { $coll->query->snapshot }, qr/requires a defined, boolean argument/, "snapshot exception without argument" ); } # paging { $coll->insert_one({x => 2}); $coll->insert_one({x => 3}); $coll->insert_one({x => 4}); my $paging = $coll->query->skip(1)->limit(2); is($paging->has_next, 1, 'check skip/limit'); $paging->next; is($paging->has_next, 1); $paging->next; is(int $paging->has_next, 0); } # bigger test, with index { $coll = $testdb->get_collection('test'); $coll->drop; $coll->indexes->create_one({'sn'=>1}); my $bulk = $coll->unordered_bulk; $bulk->insert_one({sn => $_}) for 0 .. 5000; $bulk->execute; $cursor = $coll->query; my $count = 0; while (my $doc = $cursor->next()) { $count++; } is(5001, $count); my @all = $coll->find->limit(3999)->all; is( 0+@all, 3999, "got limited documents" ); } # reset { my ( $r1, $r2 ); ok( $cursor->reset, "first reset" ); ok( ( $r1 = $cursor->next ), "first doc after first reset" ); ok( $cursor->reset, "second reset" ); ok( ( $r2 = $cursor->next ), "first doc after second reset" ); is($r1->{'sn'}, $r2->{'sn'}, 'reset'); } # explain { my $exp = $cursor->explain; if ( $server_version >= v2.7.3 ) { is ($exp->{executionStats}{nReturned}, 5001, "count of items" ); $cursor->reset; $exp = $cursor->limit(20)->explain; is ($exp->{executionStats}{nReturned}, 20, "explain with limit" ); $cursor->reset; $exp = $cursor->limit(-20)->explain; is ($exp->{executionStats}{nReturned}, 20, "explain with negative limit" ); } else { is($exp->{'n'}, 5001, 'explain'); is($exp->{'cursor'}, 'BasicCursor'); $cursor->reset; $exp = $cursor->limit(20)->explain; is(20, $exp->{'n'}, 'explain limit'); $cursor->reset; $exp = $cursor->limit(-20)->explain; is(20, $exp->{'n'}); } } # hint { $cursor->reset; my $hinted = $cursor->hint({'x' => 1}); is($hinted, $cursor, "hint returns self"); $coll->drop; $coll->insert_one({'num' => 1, 'foo' => 1}); # "Command Op::_Explain will throw a MongoDB::Error, while the legacy # code will throw a MongoDB::DatabaseError, so test must check both. 
like( exception { $coll->query->hint( { 'num' => 1 } )->explain }, qr/MongoDB::(Database)?Error/, "check error on hint with explain" ); } # count { $coll->drop; is ($coll->count, 0, "empty" ); $coll->insert_many([{'x' => 1}, {'x' => 1}, {'y' => 1}, {'x' => 1, 'z' => 1}]); is($coll->query->count, 4, 'count'); is($coll->query({'x' => 1})->count, 3, 'count query'); is($coll->query->limit(1)->count(1), 1, 'count limit'); is($coll->query->skip(1)->count(1), 3, 'count skip'); is($coll->query->limit(1)->skip(1)->count(1), 1, 'count limit & skip'); } # cursor opts # not a functional test, just make sure they don't blow up { $cursor = $coll->find(); $cursor = $cursor->tailable(1); is($cursor->query->cursorType, 'tailable', "set tailable"); $cursor = $cursor->tailable(0); is($cursor->query->cursorType, 'non_tailable', "clear tailable"); $cursor = $cursor->tailable_await(1); is($cursor->query->cursorType, 'tailable_await', "set tailable_await"); $cursor = $cursor->tailable_await(0); is($cursor->query->cursorType, 'non_tailable', "clear tailable_await"); $cursor = $cursor->tailable(1); is($cursor->query->cursorType, 'tailable', "set tailable"); $cursor = $cursor->tailable_await(0); is($cursor->query->cursorType, 'non_tailable', "clear tailable_await"); $cursor = $cursor->tailable_await(1); is($cursor->query->cursorType, 'tailable_await', "set tailable_await"); $cursor = $cursor->tailable(0); is($cursor->query->cursorType, 'non_tailable', "clear tailable"); #test is actual cursor $coll->drop; $coll->insert_one({"x" => 1}); $cursor = $coll->find()->tailable(0); my $doc = $cursor->next; is($doc->{'x'}, 1); $cursor = $coll->find(); $cursor->immortal(1); ok($cursor->query->noCursorTimeout, "set immortal"); $cursor->immortal(0); ok(! $cursor->query->noCursorTimeout, "clear immortal"); $cursor->slave_okay(1); is($cursor->query->read_preference->mode, 'secondaryPreferred', "set slave_ok"); $cursor->slave_okay(0); is($cursor->query->read_preference->mode, 'primary', "clear slave_ok"); } # explain { $coll->drop; $coll->insert_one({"x" => 1}); $cursor = $coll->find; my $doc = $cursor->next; is($doc->{'x'}, 1); my $exp = $cursor->explain; # cursor should not be reset $doc = $cursor->next; is($doc, undef) or diag explain $doc; } # info { $coll->drop; $coll->insert_one( { x => $_ } ) for 1 .. 1000; $cursor = $coll->find; my $info = $cursor->info; is_deeply( $info, {num => 0}, "before execution, info only has num field"); ok( $cursor->has_next, "cursor executed and has results" ); $info = $cursor->info; ok($info->{'num'} > 0, "cursor reports more than zero results"); is($info->{'at'}, 0, "cursor still not iterated"); is($info->{'start'}, 0); ok($info->{'cursor_id'}, "cursor_id non-zero"); $cursor->next; $info = $cursor->info; is($info->{'at'}, 1); $cursor->all; $info = $cursor->info; is($info->{at}, 1000); } # sort_by { $coll->drop; for (my $i=0; $i < 5; $i++) { $coll->insert_one({x => $i}); } $cursor = $coll->query({}, { limit => 10, skip => 0, sort_by => {created => 1 }}); is($cursor->count(), 5); } # delayed tailable cursor subtest "delayed tailable cursor" => sub { $coll2->drop; $testdb->run_command($create_capped_cmd); $coll2->insert_one( { x => $_ } ) for 0 .. 9; # Get last doc my $cursor = $coll2->find()->sort({x => -1})->limit(1); my $last_doc = $cursor->next(); $cursor = $coll2->find({_id => {'$gt' => $last_doc->{_id}}})->tailable(1); # We won't get anything yet $cursor->next(); for (my $i=10; $i < 20; $i++) { $coll2->insert_one({x => $i}); } # We should retrieve documents here since we are tailable. 
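    # ( "=()=" assigns the right-hand list to an empty list first, so the
    #   scalar assignment below yields the number of documents returned. )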
my $count =()= $cursor->all; is($count, 10); }; # tailable_await subtest "await data" => sub { $coll2->drop; $testdb->run_command($create_capped_cmd); $coll2->insert_one( { x => $_ } ) for 0 .. 9; # Get last doc my $cursor = $coll2->find()->sort( { x => -1 } )->limit(1); my $last_doc = $cursor->next(); my $start = time; $cursor = $coll2->find( { _id => { '$gt' => $last_doc->{_id} } } )->tailable_await(1) ->max_await_time_ms(1000); # We won't get anything yet $cursor->next(); my $end = time; # did it actually block for a bit? ok( $end >= $start + 1, "cursor blocked to await data" ) or diag "START: $start; END: $end"; }; subtest "count w/ hint" => sub { $coll->drop; $coll->insert_one( { i => 1 } ); $coll->insert_one( { i => 2 } ); is ($coll->find()->count(), 2, 'count = 2'); $coll->indexes->create_one( { i => 1 } ); is( $coll->find( { i => 1 } )->hint( '_id_' )->count(), 1, 'count w/ hint & spec'); is( $coll->find()->hint( '_id_' )->count(), 2, 'count w/ hint'); my $current_version = version->parse($server_version); my $version_2_6 = version->parse('v2.6'); if ( $current_version > $version_2_6 ) { eval { $coll->find( { i => 1 } )->hint( 'BAD HINT')->count() }; like($@, ($server_type eq "Mongos" ? qr/failed/ : qr/bad hint/ ), 'check bad hint error'); } else { is( $coll->find( { i => 1 } )->hint( 'BAD HINT' )->count(), 1, 'bad hint and spec'); } $coll->indexes->create_one( { x => 1 }, { sparse => 1 } ); if ($current_version > $version_2_6 ) { is( $coll->find( { i => 1 } )->hint( 'x_1' )->count(), 0, 'spec & hint on empty sparse index'); } else { is( $coll->find( { i => 1 } )->hint( 'x_1' )->count(), 1, 'spec & hint on empty sparse index'); } is( $coll->find()->hint( 'x_1' )->count(), 2, 'hint on empty sparse index'); }; done_testing; MongoDB-v1.2.2/t/data/000755 000765 000024 00000000000 12651754051 014603 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/database.t000644 000765 000024 00000013125 12651754051 015625 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
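#
# Scope of t/database.t: MongoDB::Database behavior -- get_database options
# (write_concern and bson_codec coercion), run_command with array ref, hash
# ref, and Tie::IxHash arguments plus an explicit read preference,
# collection_names and list_collections, last_error, reseterror/forceerror,
# and the deprecated eval helper.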
# use strict; use warnings; use Test::More; use Test::Fatal; use Test::Deep; use Tie::IxHash; use boolean; use MongoDB::Timestamp; # needed if db is being run as master use MongoDB; use MongoDB::_Constants; use MongoDB::Error; use MongoDB::WriteConcern; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $db_name = $testdb->name; my $server_version = server_version($conn); my $server_type = server_type($conn);; subtest 'get_database' => sub { isa_ok( $conn, 'MongoDB::MongoClient' ); my $db; ok( $db = $conn->get_database($db_name), "get_database(NAME)" ); isa_ok( $db, 'MongoDB::Database' ); my $wc = MongoDB::WriteConcern->new( w => 2 ); ok( $db = $conn->get_database( $db_name, { write_concern => $wc } ), "get_database(NAME, OPTIONS)" ); is( $db->write_concern->w, 2, "DB-level write concern as expected" ); ok( $db = $conn->get_database( $db_name, { write_concern => { w => 3 } } ), "get_database(NAME, OPTIONS)" ); is( $db->write_concern->w, 3, "DB-level write concern coerces" ); ok( $db = $conn->get_database( $db_name, { bson_codec => { op_char => '-' } } ), "get_database(NAME, OPTIONS)" ); is( $db->bson_codec->op_char, '-', "DB-level bson_codec coerces" ); }; subtest 'run_command' => sub { is( ref $testdb->run_command( [ ismaster => 1 ] ), 'HASH', "run_command(ARRAYREF) gives HASH" ); is( ref $testdb->run_command( { ismaster => 1 } ), 'HASH', "run_command(HASHREF) gives HASH" ); is( ref $testdb->run_command( Tie::IxHash->new( ismaster => 1 ) ), 'HASH', "run_command(IxHash) gives HASH" ); if ( $server_type eq 'RSPrimary' && $conn->_topology->all_servers > 1 ) { my $primary = $testdb->run_command( [ ismaster => 1 ] ); my $secondary = $testdb->run_command( [ ismaster => 1 ], { mode => 'secondary' } ); isnt( $primary->{me}, $secondary->{me}, "run_command respects explicit read preference" ) or do { diag explain $primary; diag explain $secondary }; } my $err = exception { $testdb->run_command( { foo => 'bar' } ) }; if ( $err->code == COMMAND_NOT_FOUND ) { pass("error from non-existent command"); } else { like( $err->message, qr/no such cmd|unrecognized command/, "error from non-existent command" ); } $err = exception { $testdb->run_command( [ x => "a" x MAX_BSON_WIRE_SIZE ] ) }; like( $err, qr/command too large/, "error on too large command" ); }; # collection_names subtest "collection names" => sub { is(scalar $testdb->collection_names, 0, 'no collections'); my $res = $testdb->list_collections; cmp_deeply( [ $res->all ], [], "list_collections has empty cursor" ); my $coll = $testdb->get_collection('test'); my $cmd = [ create => "test_capped", capped => 1, size => 10000 ]; $testdb->run_command($cmd); my $cap = $testdb->get_collection("test_capped"); $coll->indexes->create_one([ name => 1]); $cap->indexes->create_one([ name => 1]); ok($coll->insert_one({name => 'Alice'}), "create test collection"); ok($cap->insert_one({name => 'Bob'}), "create capped collection"); my %names = map {; $_ => 1 } $testdb->collection_names; my %got = map { $_->{name} => $_ } $testdb->list_collections( { name => qr/^test/ } )->all; for my $k ( qw/test test_capped/ ) { ok( exists $names{$k}, "collection_names included $k" ); ok( exists $got{$k}, "list_collections included $k" ); } }; # getlasterror subtest 'getlasterror' => sub { plan skip_all => "MongoDB 1.5+ needed" unless $server_version >= v1.5.0; $testdb->run_command([ismaster => 1]); my $result = $testdb->last_error({fsync => 
1}); is($result->{ok}, 1); is($result->{err}, undef); $result = $testdb->last_error; is($result->{ok}, 1, 'last_error: ok'); is($result->{err}, undef, 'last_error: err'); # mongos never returns 'n' is($result->{n}, $server_type eq 'Mongos' ? undef : 0, 'last_error: n'); }; # reseterror { my $result = $testdb->run_command({reseterror => 1}); is($result->{ok}, 1, 'reset error'); } # forceerror { my $err = exception{ $testdb->run_command({forceerror => 1}) }; isa_ok( $err, "MongoDB::DatabaseError" ); } # XXX eval is deprecated, but we'll leave this test for now subtest "eval (deprecated)" => sub { plan skip_all => "eval not available under auth" if $conn->password; my $hello = $testdb->eval('function(x) { return "hello, "+x; }', ["world"]); is('hello, world', $hello, 'db eval'); like( exception { $testdb->eval('function(x) { xreturn "hello, "+x; }', ["world"]) }, qr/SyntaxError/, 'js err' ); }; # tie { my $admin = $conn->get_database('admin'); my %cmd; tie( %cmd, 'Tie::IxHash', buildinfo => 1); my $result = $admin->run_command(\%cmd); is($result->{ok}, 1); } done_testing; MongoDB-v1.2.2/t/dbref.t000644 000765 000024 00000010456 12651754051 015147 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More; use Test::Fatal; use MongoDB; use MongoDB::BSON; use Scalar::Util 'blessed', 'reftype'; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); { my $ref = MongoDB::DBRef->new( db => 'test', ref => 'test_coll', id => 123 ); ok $ref; isa_ok $ref, 'MongoDB::DBRef'; } # test type coercions { my $coll = $testdb->get_collection( 'test_collection' ); my $ref = MongoDB::DBRef->new( db => $testdb, ref => $coll, id => 123 ); ok $ref; ok not blessed $ref->db; ok not blessed $ref->ref; is $ref->db, $testdb->name; is $ref->ref, 'test_collection'; is $ref->id, 123; $ref = MongoDB::DBRef->new( ref => $coll, id => 123 ); is( $ref->db, undef, "no db in new gives undef db" ); $ref = MongoDB::DBRef->new( ref => $coll, id => 123, db => undef ); is( $ref->db, undef, "explicit undef db in new gives undef db" ); } # test roundtrip { my $dbref = MongoDB::DBRef->new( db => 'some_db', ref => 'some_coll', id => 123 ); my $coll = $testdb->get_collection( 'test_coll' ); $coll->insert_one( { _id => 'wut wut wut', thing => $dbref } ); my $doc = $coll->find_one( { _id => 'wut wut wut' } ); ok exists $doc->{thing}; my $thing = $doc->{thing}; isa_ok $thing, 'MongoDB::DBRef'; is $thing->ref, 'some_coll'; is $thing->id, 123; is $thing->db, 'some_db'; $dbref = MongoDB::DBRef->new( ref => 'some_coll', id => 123 ); $coll->insert_one( { _id => 123, thing => $dbref } ); $doc = $coll->find_one( { _id => 123 } ); $thing = $doc->{thing}; isa_ok( $thing, 'MongoDB::DBRef' ); is( $thing->ref, 'some_coll', '$ref' ); is( $thing->id, 123, '$id' ); is( $thing->db, undef, '$db undefined' ); $coll->drop; } # test changing dbref_callback on bson_codec { 
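    # Passing "bson_codec => {}" builds a fresh codec without the client's
    # dbref_callback, so the DBRef fields below decode as plain hash refs
    # with '$ref'/'$id'/'$db' keys rather than MongoDB::DBRef objects.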
my $coll = $testdb->get_collection( 'test_coll', { bson_codec => {} } ); my $dbref = MongoDB::DBRef->new( db => $testdb->name, ref => 'some_coll', id => 123 ); $coll->insert_one( { _id => 'wut wut wut', thing => $dbref } ); my $doc = $coll->find_one( { _id => 'wut wut wut' } ); ok( exists $doc->{thing}, "got inserted doc from db" ); is( ref $doc->{thing}, 'HASH', "doc is hash, not object" );; is( $doc->{thing}{'$id'}, 123, '$id' ); is( $doc->{thing}{'$ref'}, 'some_coll', '$ref' ); is( $doc->{thing}{'$db'}, $testdb->name, '$db' ); $dbref = MongoDB::DBRef->new( ref => 'some_coll', id => 123 ); $coll->insert_one( { _id => 123, thing => $dbref } ); $doc = $coll->find_one( { _id => 123 } ); ok( exists $doc->{thing}, "got inserted doc from db" ); is( $doc->{thing}{'$id'}, 123, '$id' ); is( $doc->{thing}{'$ref'}, 'some_coll', '$ref' ); ok( !exists($doc->{thing}{'$db'}), '$db not inserted' ); $coll->drop; } # test round-tripping extra fields subtest "round-trip fields" => sub { my $coll = $testdb->get_collection( 'test_coll' ); $coll->drop; my $ixhash = Tie::IxHash->new( '$ref' => 'some_coll', '$id' => 456, foo => 'bar', baz => 'bam', id => '123', # should be OK, since $id is taken first ); $coll->insert_one( { _id => 123, thing => $ixhash } ); my $doc = $coll->find_one( { _id => 123 } ); my $dbref = $doc->{thing}; isa_ok( $dbref, "MongoDB::DBRef" ); $coll->insert_one( { _id => 124, thing => $dbref } ); $doc = $coll->find_one( { _id => 124 } ); $dbref = $doc->{thing}; for my $k ( $ixhash->Keys ) { next if $k =~ /^\$/; is( $dbref->extra->{$k}, $ixhash->FETCH($k), "$k" ); } }; done_testing; # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/t/deprecated/000755 000765 000024 00000000000 12651754051 015772 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/dt_types.t000644 000765 000024 00000007440 12651754051 015717 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More; use Test::Fatal; use MongoDB::Timestamp; # needed if db is being run as master use MongoDB; use DateTime; use constant HAS_DATETIME_TINY => eval { require DateTime::Tiny; 1 }; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $base_coll = $testdb->get_collection( 'test_collection' ); my $now = DateTime->now; { $base_coll->insert_one( { date => $now } ); my $date1 = $base_coll->find_one->{date}; isa_ok $date1, 'DateTime'; is $date1->epoch, $now->epoch; $base_coll->drop; } { my $coll = $base_coll->with_codec( dt_type => undef ); $coll->insert_one( { date => $now } ); my $date3 = $coll->find_one->{date}; ok( ! 
ref $date3, "dt_type undef returns unblessed value" ); is( $date3, $now->epoch, "returned value is epoch secs without fractions" ); $coll->drop; } if ( HAS_DATETIME_TINY ) { my $coll = $base_coll->with_codec( dt_type => "DateTime::Tiny" ); $coll->insert_one( { date => $now } ); my $date2 = $coll->find_one->{date}; isa_ok( $date2, 'DateTime::Tiny' ); is $date2->DateTime->epoch, $now->epoch; $coll->drop; } { my $coll = $base_coll->with_codec( dt_type => "DateTime::Bad" ); $coll->insert_one( { date => $now } ); like( exception { my $date4 = $coll->find_one->{date}; }, qr/Invalid dt_type "DateTime::Bad"/i, "invalid dt_type throws" ); $coll->drop; } # roundtrips { $base_coll->insert_one( { date => $now } ); my $doc = $base_coll->find_one; $doc->{date}->add( seconds => 60 ); $base_coll->replace_one( { _id => $doc->{_id} }, { date => $doc->{date} } ); my $doc2 = $base_coll->find_one; is( $doc2->{date}->epoch, ( $now->epoch + 60 ) ); $base_coll->drop; } if ( HAS_DATETIME_TINY ) { my $coll = $base_coll->with_codec( dt_type => "DateTime::Tiny" ); my $dtt_now = DateTime::Tiny->now; $coll->insert_one( { date => $dtt_now } ); my $doc = $coll->find_one; is $doc->{date}->year, $dtt_now->year; is $doc->{date}->month, $dtt_now->month; is $doc->{date}->day, $dtt_now->day; is $doc->{date}->hour, $dtt_now->hour; is $doc->{date}->minute, $dtt_now->minute; is $doc->{date}->second, $dtt_now->second; $doc->{date} = DateTime::Tiny->from_string( $doc->{date}->DateTime->add( seconds => 30 )->iso8601 ); $coll->replace_one( { _id => $doc->{_id} }, $doc ); my $doc2 = $coll->find_one( { _id => $doc->{_id} } ); is( $doc2->{date}->DateTime->epoch, $dtt_now->DateTime->epoch + 30 ); $coll->drop; } { # test fractional second roundtrip my $now = DateTime->now; $now->add( nanoseconds => 500_000_000 ); $base_coll->insert_one( { date => $now } ); my $doc = $base_coll->find_one; is $doc->{date}->year, $now->year; is $doc->{date}->month, $now->month; is $doc->{date}->day, $now->day; is $doc->{date}->hour, $now->hour; is $doc->{date}->minute, $now->minute; is $doc->{date}->second, $now->second; is $doc->{date}->nanosecond, $now->nanosecond; $base_coll->drop; } done_testing; MongoDB-v1.2.2/t/errors.t000644 000765 000024 00000002516 12651754051 015377 0ustar00davidstaff000000 000000 use strict; use warnings; use Test::More 0.88; use Test::Fatal; use MongoDB::Error; use MongoDB::BulkWriteResult; # check if FIRST->throw give object that isa SECOND my @isa_checks = qw( MongoDB::Error MongoDB::Error MongoDB::ConnectionError MongoDB::Error ); while (@isa_checks) { my ( $error, $isa ) = splice( @isa_checks, 0, 2 ); isa_ok( exception { $error->throw }, $isa ); } my $result = MongoDB::BulkWriteResult->new( acknowledged => 1, write_errors => [], write_concern_errors => [], modified_count => 0, inserted_count => 0, upserted_count => 0, matched_count => 0, deleted_count => 0, upserted => [], inserted => [], batch_count => 0, op_count => 0, ); my $error = exception { MongoDB::WriteError->throw( message => "whoops", result => $result, ); }; isa_ok( $error, 'MongoDB::DatabaseError', "MongoDB::WriteError" ); isa_ok( $error, 'MongoDB::Error', "MongoDB::WriteError" ); is( $error->message, "whoops", "object message captured" ); is_deeply( $error->result, $result, "object details captured" ); is( "$error", "MongoDB::WriteError: whoops", "object stringifies to class plus error message" ); done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/fsync.t000644 000765 000024 00000006046 12651754051 015207 0ustar00davidstaff000000 000000 # # Copyright 
2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Data::Dumper; use MongoDB::Timestamp; # needed if db is being run as master use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client server_type server_version/; skip_unless_mongod(); my $conn = build_client(); my $server_type = server_type( $conn ); my $server_version = server_version( $conn ); my $server_status_res = $conn->send_admin_command([serverStatus => 1]); my $storage_engine = $server_status_res->{output}{storageEngine}{name} || ''; plan skip_all => "fsync not supported for inMemory storage engine" if $storage_engine =~ qr/inMemory/; my $ret; # Test normal fsync. subtest "normal fsync" => sub { $ret = $conn->fsync(); is($ret->{ok}, 1, "fsync returned 'ok' => 1"); is(exists $ret->{numFiles}, 1, "fsync returned 'numFiles'"); }; # Test async fsync. subtest "async fsync" => sub { my $err = exception { $ret = $conn->fsync({async => 1}) }; plan skip_all => 'async not supported' if $err && $err =~ /exception:.*not supported/; is( $err, undef, "fsync command ran without error" ) or diag $err; if ( ref $ret eq 'HASH' ) { is($ret->{ok}, 1, "fsync + async returned 'ok' => 1"); is(exists $ret->{numFiles}, 1, "fsync + async returned 'numFiles'"); } }; # Test fsync with lock. subtest "fsync with lock" => sub { plan skip_all => "lock not supported through mongos" if $server_type eq 'Mongos'; # Lock $ret = $conn->fsync({lock => 1}); is($ret->{ok}, 1, "fsync + lock returned 'ok' => 1"); is(exists $ret->{seeAlso}, 1, "fsync + lock returned a link to fsync+lock documentation."); is($ret->{info}, "now locked against writes, use db.fsyncUnlock() to unlock", "Successfully locked mongodb."); # Check the lock. if ($server_version <= v3.1.0) { $ret = $conn->get_database('admin')->get_collection('$cmd.sys.inprog')->find_one(); } else { $ret = $conn->send_admin_command([currentOp => 1]); $ret = $ret->{output}; } is($ret->{fsyncLock}, 1, "MongoDB is still locked."); is($ret->{info}, "use db.fsyncUnlock() to terminate the fsync write/snapshot lock", "Got docs on how to unlock (via shell)."); # Unlock $ret = $conn->fsync_unlock(); Dumper($ret); is($ret->{ok}, 1, "Got 'ok' => 1 from unlock command."); is($ret->{info}, "unlock completed", "Got a successful unlock."); }; done_testing; MongoDB-v1.2.2/t/gridfs.t000644 000765 000024 00000022412 12651754051 015336 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
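# A hedged sketch of the fsync/lock/unlock cycle exercised above, assuming a
# standalone mongod on localhost; never run this against a busy production
# server, since the lock blocks all writes until fsync_unlock is called.
{
    use MongoDB;

    my $example_client = MongoDB::MongoClient->new( host => 'mongodb://localhost:27017' );

    my $locked = $example_client->fsync( { lock => 1 } );   # flush to disk and block writes
    die "lock failed" unless $locked->{ok};

    # ... take a filesystem-level backup or snapshot here ...

    my $unlocked = $example_client->fsync_unlock();          # allow writes again
    die "unlock failed" unless $unlocked->{ok};
}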
# use strict; use warnings; use Test::More; use Test::Fatal; use IO::File; use File::Temp; use MongoDB::Timestamp; # needed if db is being run as master use MongoDB; use MongoDB::GridFS; use MongoDB::GridFS::File; use DateTime; use FileHandle; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $testdb = get_test_db(build_client()); my $txtfile = "t/data/gridfs/input.txt"; my $pngfile = "t/data/gridfs/img.png"; my $dumb_str; my $now; my $file; my $save_id; # XXX work around SERVER-18062; create collection to initialize DB for # sharded collection so gridfs index creation doesn't fail $testdb->coll("testtesttest")->insert({}); # DB initialized, so now get gridfs object my $grid = $testdb->get_gridfs; $grid->drop; # test ctor prefix { is($testdb->name . '.fs.files', $grid->files->full_name, "no prefix"); is($testdb->name . '.fs.chunks', $grid->chunks->full_name); my $fancy_grid = $testdb->get_gridfs("bar"); is($testdb->name . '.bar.files', $fancy_grid->files->full_name, "prefix"); is($testdb->name . '.bar.chunks', $fancy_grid->chunks->full_name); } # test text insert { $dumb_str = "abc\n\nzyw\n"; my $text_doc = new IO::File("$txtfile", "r") or die $!; my $ts = DateTime->now; ok( my $id = $grid->put($text_doc), "put" ); # safe mode so we can check MD5 $text_doc->close; my $chunk = $grid->chunks->find_one(); is(0, $chunk->{'n'}); is("$id", $chunk->{'files_id'}."", "compare returned id"); is($dumb_str, $chunk->{'data'}, "compare file content"); my $md5 = $testdb->run_command(["filemd5" => $chunk->{'files_id'}, "root" => "fs"]); $file = $grid->files->find_one(); ok($file->{'md5'} ne 'd41d8cd98f00b204e9800998ecf8427e', $file->{'md5'}); is($file->{'md5'}, $md5->{'md5'}, $md5->{'md5'}); ok($file->{'uploadDate'}->epoch - $ts->epoch < 10); is($file->{'chunkSize'}, $MongoDB::GridFS::chunk_size); is($file->{'length'}, length $dumb_str, "compare file len"); is($chunk->{'files_id'}, $file->{'_id'}, "compare ids"); } # test bin insert { my $img = new IO::File($pngfile, "r") or die $!; # Windows is dumb binmode($img); my $id = $grid->insert($img); $save_id = $id; $img->read($dumb_str, 4000000); $img->close; my $meta = $grid->files->find_one({'_id' => $save_id}); is($meta->{'length'}, 1292706); my $chunk = $grid->chunks->find_one({'files_id' => $id}); is(0, $chunk->{'n'}); is("$id", $chunk->{'files_id'}.""); my $len = $MongoDB::GridFS::chunk_size; ok(substr($dumb_str, 0, $len) eq substr($chunk->{'data'}, 0, $len), "compare first chunk with file"); $file = $grid->files->find_one({'_id' => $id}); is($file->{'length'}, length $dumb_str, "compare file length"); is($chunk->{'files_id'}, $file->{'_id'}, "compare ids"); } # test inserting metadata { my $text_doc = new IO::File("$txtfile", "r") or die $!; $now = time; my $id = $grid->insert($text_doc, {"filename" => "$txtfile", "uploaded" => time, "_id" => 1}); $text_doc->close; is($id, 1); } # $grid->files->find_one (NOT $grid->find_one) { $file = $grid->files->find_one({"_id" => 1}); ok($file, "found file"); is($file->{"uploaded"}, $now, "compare ts"); is($file->{"filename"}, "$txtfile", "compare filename"); } # $grid->find_one { $file = $grid->find_one({"_id" => 1}); isa_ok($file, 'MongoDB::GridFS::File'); is($file->info->{"uploaded"}, $now, "compare ts"); is($file->info->{"filename"}, "$txtfile", "compare filename"); } #write { my $wfh = IO::File->new("t/output.txt", "+>") or die $!; my $written = $file->print($wfh); is($written, length "abc\n\nzyw\n"); $wfh->close(); } # slurp { 
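# Illustrative GridFS usage mirroring the put/find_one tests above; the file
# path and database name are assumptions, not values used by this test.
{
    use MongoDB;
    use IO::File;

    my $example_grid = MongoDB::MongoClient->new->get_database('example_db')->get_gridfs;

    my $in = IO::File->new( 'report.pdf', 'r' ) or die $!;
    $in->binmode;
    my $stored_id = $example_grid->put( $in, { filename => 'report.pdf' } );
    $in->close;

    my $stored = $example_grid->get($stored_id);   # a MongoDB::GridFS::File
    my $bytes  = $stored->slurp;                   # entire contents as one string
}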
is($file->slurp,"abc\n\nzyw\n",'slurp'); } { my $buf; my $wfh = IO::File->new("t/output.txt", "<") or die $!; $wfh->read($buf, 1000); #$wfh->read($buf, length( "abc\n\nzyw\n")); is($buf, "abc\n\nzyw\n", "read chars from tmpfile"); my $wh = IO::File->new("t/outsub.txt", "+>") or die $!; my $written = $file->print($wh, 3, 2); is($written, 3); } # write bindata { $file = $grid->find_one({'_id' => $save_id}); my $wfh = IO::File->new('t/output.png', '+>') or die $!; $wfh->binmode; my $written = $file->print($wfh); is($written, $file->info->{'length'}, 'bin file length'); } #all { my @list = $grid->all; is(@list, 3, "three files"); for (my $i=0; $i<3; $i++) { isa_ok($list[$i], 'MongoDB::GridFS::File'); } is($list[0]->info->{'length'}, 9, 'checking lens'); is($list[1]->info->{'length'}, 1292706); is($list[2]->info->{'length'}, 9); } # remove { is($grid->files->query({"_id" => 1})->has_next, 1, 'pre-remove'); is($grid->chunks->query({"files_id" => 1})->has_next, 1); $file = $grid->remove({"_id" => 1}); is(int($grid->files->query({"_id" => 1})->has_next), 0, 'post-remove'); is(int($grid->chunks->query({"files_id" => 1})->has_next), 0); } # remove just_one { $grid->drop; my $img = new IO::File($pngfile, "r") or die $!; $grid->insert($img, {"filename" => "garbage.png"}); $grid->insert($img, {"filename" => "garbage.png"}); is($grid->files->count, 2); $grid->remove({'filename' => 'garbage.png'}, {just_one => 1}); is($grid->files->count, 1, 'remove just one'); unlink 't/output.txt', 't/output.png', 't/outsub.txt'; } # multi-chunk { $grid->drop; foreach (1..3) { my $txt = "HELLO" x 1_000_000; # 5MB my $fh = File::Temp->new; $fh->printflush( $txt ) or die $!; $fh->seek(0, 0); $grid->insert( $fh, { filename => $fh->filename } ); $fh->close() || die $!; #file is unlinked by dtor # now, spot check that we can retrieve the file my $gridfile = $grid->find_one( { filename => $fh->filename } ); my $info = $gridfile->info(); is($info->{length}, 5000000, 'length: '.$info->{'length'}); is($info->{filename}, $fh->filename, $info->{'filename'}); } } # reading from a big string { $grid->drop; my $txt = "HELLO"; my $basicfh; open($basicfh, '<', \$txt); my $fh = FileHandle->new; $fh->fdopen($basicfh, 'r'); $grid->insert($fh, {filename => 'hello.txt'}); $file = $grid->find_one; is($file->info->{filename}, 'hello.txt'); is($file->info->{length}, 5); } # safe insert { $grid->drop; my $img = new IO::File($pngfile, "r") or die $!; $img->binmode; $grid->insert($img, {filename => 'img.png'}, {safe => boolean::true}); $file = $grid->find_one; is($file->info->{filename}, 'img.png', 'safe insert'); is($file->info->{length}, 1292706); ok($file->info->{md5} ne 'd41d8cd98f00b204e9800998ecf8427e', $file->info->{'md5'}); } # get, put, delete { $grid->drop; my $img = new IO::File($pngfile, "r") or die $!; $img->binmode; my $id = $grid->put($img, {_id => 'img.png', filename => 'img.png'}); is($id, 'img.png', "put _id"); $img->seek(0,0); $id = $grid->put($img); isa_ok($id, 'MongoDB::OID'); $img->seek(0,0); eval { $id = $grid->put($img, {_id => 'img.png', filename => 'img.png'}); }; like($@->result->last_errmsg, qr/E11000/, 'duplicate key exception'); $file = $grid->get('img.png'); is($file->info->{filename}, 'img.png'); ok($file->info->{md5} ne 'd41d8cd98f00b204e9800998ecf8427e', $file->info->{'md5'}); $grid->delete('img.png'); my $coll = $testdb->get_collection('fs.files'); $file = $coll->find_one({_id => 1}); is($file, undef); $coll = $testdb->get_collection('fs.chunks'); $file = $coll->find_one({files_id => 1}); is($file, undef); } 
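# Sketch of streaming a stored GridFS file back out through a filehandle with
# print(), as the write tests above do; the paths and filename are illustrative.
{
    use MongoDB;
    use IO::File;

    my $example_grid = MongoDB::MongoClient->new->get_database('example_db')->get_gridfs;

    my $stored = $example_grid->find_one( { filename => 'report.pdf' } );
    my $out    = IO::File->new( '/tmp/report-copy.pdf', '+>' ) or die $!;
    $out->binmode;
    my $written = $stored->print($out);   # returns the number of bytes written
    $out->close;
}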
subtest "empty file" => sub { $grid->drop; is( $grid->chunks->count, 0, "0 chunks exist" ); my $txt = ""; my $basicfh; open( $basicfh, '<', \$txt ); my $fh = FileHandle->new; $fh->fdopen( $basicfh, 'r' ); ok( $grid->insert( $fh, { filename => 'hello.txt' } ), "inserted" ); is( $grid->chunks->count, 0, "0 chunks still" ); $file = $grid->find_one; is( $file->info->{filename}, 'hello.txt', "filename" ); is( $file->info->{length}, 0, "length" ); is( $file->slurp, "", "slurp" ); }; # no chunks for file with length asserts { $grid->drop; my $img = new IO::File($pngfile, "r") or die $!; $img->binmode; ok( my $id = $grid->put($img), "inserted file" ); # delete chunks to simulate corruption my $res = $grid->chunks->delete_many({"files_id" => $id}); ok( $res->deleted_count, "deleted chunks" ); like( exception { $file = $grid->get($id)->slurp }, qr/GridFS file corrupt.*\Q$id\E/, "invalid file throws error" ); } done_testing; MongoDB-v1.2.2/t/indexview.t000644 000765 000024 00000023236 12651754051 016067 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Test::Deep qw/!blessed/; use Tie::IxHash; use MongoDB; use MongoDB::Error; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type get_capped/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); my ($iv); # XXX work around SERVER-18062; create collection to initialize DB for # sharded collection so gridfs index creation doesn't fail $testdb->coll("testtesttest")->insert({}); subtest "collection API" => sub { $iv = $coll->indexes; isa_ok( $iv, "MongoDB::IndexView", "coll->indexes" ); }; subtest "create_many" => sub { $coll->drop; my @names = $iv->create_many( { keys => [ x => 1 ] }, { keys => [ y => -1 ] } ); ok( scalar @names, "got non-empty result" ); is_deeply( [ sort @names ], [ sort qw/x_1 y_-1/ ], "returned list of names" ); # exception on index creation SKIP: { skip "bad index type won't fail before 2.4", 1 if $server_version <= v2.4.0; like( exception { $iv->create_many( { keys => [ x => '4d' ] } ); }, qr/MongoDB::(?:Database|Write)Error/, "exception creating impossible index", ); } like( exception { $iv->create_many( { keys => { x => 1, y => 1 } } ) }, qr/index models/, "exception giving unordered docs for keys" ); is( exception { $iv->create_many( { keys => { y => 1 } } ) }, undef, "no exception on single-key hashref" ); }; subtest "list indexes" => sub { $coll->drop; $coll->insert( {} ); my $res = $iv->list(); isa_ok( $res, "MongoDB::QueryResult", "indexes->list" ); is_deeply( [ sort map { $_->{name} } $res->all ], ['_id_'], "list only gives _id_ index" ); ok( $iv->create_many( { keys => [ x => 1 ] } ), "added index" ); is_deeply( [ sort map { $_->{name} } $iv->list->all ], [ sort qw/_id_ x_1/ ], "list finds both 
indexes" ); }; subtest "create_one" => sub { $coll->drop; my $name = $iv->create_one( [ x => 1 ] ); my $found = grep { $_->{name} eq 'x_1' } $iv->list->all; ok( $found, "created one index on x" ); ok( $iv->create_one( [ y => -1 ], { unique => 1 } ), "created unique index on y" ); ($found) = grep { $_->{name} eq 'y_-1' } $iv->list->all; ok( $found->{unique}, "saw unique property in index info for y" ); like( exception { $iv->create_one( [ x => 1 ], { keys => [ y => 1 ] } ) }, qr/MongoDB::UsageError/, "exception putting 'keys' in options" ); like( exception { $iv->create_one( [ x => 1 ], { key => [ y => 1 ] } ) }, qr/MongoDB::UsageError/, "exception putting 'key' in options" ); like( exception { $iv->create_one( { x => 1, y => 1 } ) }, qr/ordered document/, "exception giving unordered docs for keys" ); is( exception { $iv->create_one( { y => 1 } ) }, undef, "no exception on single-key hashref" ); # exception on index creation SKIP: { skip "bad index type won't fail before 2.4", 1 if $server_version <= v2.4.0; like( exception { $iv->create_one( [ x => '4d' ] ); }, qr/MongoDB::(?:Database|Write)Error/, "exception creating impossible index", ); } }; subtest "drop_one" => sub { $coll->drop; ok( my $name = $iv->create_one( [ x => 1 ] ), "created index on x" ); my $res = $iv->drop_one($name); ok( $res->{ok}, "result of drop_one is a database result document" ); my $found = grep { $_->{name} eq 'x_1' } $iv->list->all; ok( !$found, "dropped index on x" ); # exception on index drop like( exception { $iv->drop_one("*") }, qr/MongoDB::UsageError/, "exception calling drop_one on '*'" ); like( exception { $iv->drop_one('_id_'); }, qr/MongoDB::(?:Database|Write)Error/, "exception dropping _id_", ); like( exception { $iv->drop_one( { keys => [ x => 1 ] } ); }, qr/must be a string/, "exception dropping hashref" ); }; subtest "drop_all" => sub { $coll->drop; $iv->create_many( map { { keys => $_ } }[ x => 1 ], [ y => 1 ], [ z => 1 ] ); is_deeply( [ sort map $_->{name}, $iv->list->all ], [ sort qw/_id_ x_1 y_1 z_1/ ], "created three indexes" ); my $res = $iv->drop_all; ok( $res->{ok}, "result of drop_all is a database result document" ); is_deeply( [ sort map $_->{name}, $iv->list->all ], [qw/_id_/], "dropped all but _id index" ); }; subtest 'handling duplicates' => sub { $coll->drop; my $doc = { foo => 1, bar => 1, baz => 1, boo => 1 }; $coll->insert_one($doc) for 1 .. 
2; is( $coll->count, 2, "two identical docs inserted" ); like( exception { $iv->create_one( [ foo => 1 ], { unique => 1 } ) }, qr/E11000/, "got expected error creating unique index with dups" ); # prior to 2.7.5, drop_dups was respected if ( $server_version < v2.7.5 ) { ok( $iv->create_one( [ foo => 1 ], { unique => 1, dropDups => 1 } ), "create unique with dropDups" ); is( $coll->count, 1, "one doc dropped" ); } }; subtest '2d index with options' => sub { $coll->drop; $iv->create_one( [ loc => '2d' ], { bits => 32, sparse => 1 } ); my ($index) = grep { $_->{name} eq 'loc_2d' } $iv->list->all; ok( $index, "created 2d index" ); ok( $index->{sparse}, "sparse option set on index" ); is( $index->{bits}, 32, "bits option set on index" ); }; subtest 'ensure index arbitrary options' => sub { $coll->drop; $iv->create_one( { wibble => 1 }, { notReallyAnOption => { foo => 1 } } ); my ($index) = grep { $_->{name} eq 'wibble_1' } $iv->list->all; ok( $index, "created index" ); cmp_deeply( $index->{notReallyAnOption}, { foo => 1 }, "arbitrary option set on index" ); }; # test index names with "."s subtest "index with dots" => sub { $coll->drop; $iv->create_one( { "x.y" => 1 }, { name => "foo" } ); my ($index) = grep { $_->{name} eq 'foo' } $iv->list->all; ok( $index, "got index" ); ok( $index->{key}, "has key field" ); ok( $index->{key}->{'x.y'}, "has dotted field in key" ); $coll->drop; }; # sparse indexes subtest "sparse indexes" => sub { $coll->drop; for ( 1 .. 10 ) { $coll->insert_one( { x => $_, y => $_ } ); $coll->insert_one( { x => $_ } ); } is( $coll->count, 20, "inserted 20 docs" ); like( exception { $iv->create_one( { y => 1 }, { unique => 1, name => "foo" } ) }, qr/MongoDB::DuplicateKeyError/, "error creating non-sparse index" ); my ($index) = grep { $_->{name} eq 'foo' } $iv->list->all; ok( !$index, "index not found" ); $iv->create_one( { y => 1 }, { unique => 1, sparse => 1, name => "foo" } ); ($index) = grep { $_->{name} eq 'foo' } $iv->list->all; ok( $index, "sparse index created" ); }; # text indices subtest 'text indices' => sub { plan skip_all => "text indices won't work with db version $server_version" unless $server_version >= v2.4.0; my $res = $conn->get_database('admin') ->run_command( [ 'getParameter' => 1, 'textSearchEnabled' => 1 ] ); plan skip_all => "text search not enabled" if !$res->{'textSearchEnabled'}; my $coll2 = $testdb->get_collection('test_text'); $coll2->drop; $coll2->insert_one( { language => 'english', w1 => 'hello', w2 => 'world' } ) foreach ( 1 .. 10 ); is( $coll2->count, 10, "inserted 10 documents" ); $res = $coll2->indexes->create_one( { '$**' => 'text' }, { name => 'testTextIndex', default_language => 'spanish', language_override => 'language', weights => { w1 => 5, w2 => 10 } } ); ok( $res, "created text index" ); my ($text_index) = grep { $_->{name} eq 'testTextIndex' } $coll2->get_indexes; is( $text_index->{'default_language'}, 'spanish', 'default_language option works' ); is( $text_index->{'language_override'}, 'language', 'language_override option works' ); is( $text_index->{'weights'}->{'w1'}, 5, 'weights option works 1' ); is( $text_index->{'weights'}->{'w2'}, 10, 'weights option works 2' ); # 2.6 deprecated 'text' command and added '$text' operator; also the # result format changed. 
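# Sketch of querying a '$**' text index with the $text operator on MongoDB 2.6+,
# as the version branch below does; the collection and search terms are assumed.
{
    use MongoDB;

    my $articles = MongoDB::MongoClient->new->get_database('example_db')->get_collection('articles');
    $articles->indexes->create_one( { '$**' => 'text' }, { name => 'allTextIndex' } );

    my @hits = $articles->find( { '$text' => { '$search' => 'replica set' } } )->all;
    print scalar(@hits), " matching documents\n";
}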
if ( $server_version >= v2.6.0 ) { my $n_found =()= $coll2->find( { '$text' => { '$search' => 'world' } } )->all; is( $n_found, 10, "correct number of results found" ); } else { my $results = $testdb->run_command( [ 'text' => 'test_text', 'search' => 'world' ] )->{results}; is( scalar(@$results), 10, "correct number of results found" ); } }; done_testing; # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/t/lib/000755 000765 000024 00000000000 12651754051 014440 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/max_time_ms.t000644 000765 000024 00000026224 12651754051 016367 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. use strict; use warnings; use Test::More 0.96; use Test::Fatal; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_type server_version/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_type = server_type($conn); my $server_version = server_version($conn); # This test sets failpoints, which will make the tested server unusable # for ordinary purposes. As this is risky, the test requires the user # to opt-in unless ( $ENV{FAILPOINT_TESTING} ) { plan skip_all => "\$ENV{FAILPOINT_TESTING} is false"; } # Test::Harness 3.31 supports the t/testrules.yml file to ensure that # this test file won't be run in parallel other tests, since turning on # a fail point will interfere with other tests. if ( $ENV{HARNESS_VERSION} < 3.31 ) { plan skip_all => "not safe to run fail points before Test::Harness 3.31"; } my $param = eval { $conn->get_database('admin') ->run_command( [ getParameter => 1, enableTestCommands => 1 ] ); }; my $coll; my $admin = $conn->get_database("admin"); note "CAP-401 test plan"; can_ok( 'MongoDB::Cursor', 'max_time_ms' ); $coll = $testdb->get_collection("test_collection"); my $bulk = $coll->ordered_bulk; $bulk->insert_one( { _id => $_ } ) for 1 .. 
20; my $err = exception { $bulk->execute }; is( $err, undef, "inserted 20 documents for testing" ); subtest "expected behaviors" => sub { is( exception { $coll->find->max_time_ms()->next }, undef, "find->max_time_ms()" ); is( exception { $coll->find->max_time_ms(0)->next }, undef, "find->max_time_ms(0)" ); is( exception { $coll->find->max_time_ms(5000)->next }, undef, "find->max_time_ms(5000)" ); like( exception { $coll->find->max_time_ms(-1)->next }, qr/non-negative/, "find->max_time_ms(-1) throws exception" ); is( exception { $coll->find( {}, { maxTimeMS => 5000 } ) }, undef, "find with maxTimeMS" ); is( exception { my $doc = $coll->find_one( { _id => 1 }, undef, { maxTimeMS => 5000 } ); }, undef, "find_one with maxTimeMS works" ); SKIP: { skip "aggregate not available until MongoDB v2.2", 1 unless $server_version > v2.2.0; is( exception { my $doc = $coll->aggregate( [ { '$project' => { name => 1, count => 1 } } ], { maxTimeMS => 5000 }, ); }, undef, "aggregate helper with maxTimeMS works" ); } is( exception { my $doc = $coll->count( {}, { maxTimeMS => 5000 } ); }, undef, "count helper with maxTimeMS works" ); is( exception { my $doc = $coll->distinct( 'a', {}, { maxTimeMS => 5000 } ); }, undef, "distinct helper with maxTimeMS works" ); is( exception { my $doc = $coll->find_one_and_replace( { _id => 22 }, { x => 1 }, { upsert => 1, maxTimeMS => 5000 } ); }, undef, "find_one_and_replace helper with maxTimeMS works" ); is( exception { my $doc = $coll->find_one_and_update( { _id => 23 }, { '$set' => { x => 1 } }, { upsert => 1, maxTimeMS => 5000 } ); }, undef, "find_one_and_update helper with maxTimeMS works" ); is( exception { my $doc = $coll->find_one_and_delete( { _id => 23 }, { maxTimeMS => 5000 } ); }, undef, "find_one_and_delete helper with maxTimeMS works" ); is( exception { my $cursor = $coll->database->list_collections( {}, { maxTimeMS => 5000 } ); }, undef, "list_collections command with maxTimeMS works" ); }; subtest "force maxTimeMS failures" => sub { plan skip_all => "maxTimeMS not available before 2.6" unless $server_version >= v2.6.0; plan skip_all => "enableTestCommands is off" unless $param && $param->{enableTestCommands}; plan skip_all => "fail points not supported via mongos" if $server_type eq 'Mongos'; # low batchSize to force multiple batches to get all docs my $cursor = $coll->find( {}, { batchSize => 5, maxTimeMS => 5000 } )->result; $cursor->next; # before turning on fail point is( exception { $admin->run_command( [ configureFailPoint => 'maxTimeAlwaysTimeOut', mode => 'alwaysOn' ] ); }, undef, "turned on maxTimeAlwaysTimeOut fail point" ); my @foo; like( exception { @foo = $cursor->all }, qr/exceeded time limit/, "existing cursor with max_time_ms times out" ) or diag explain \@foo; like( exception { $coll->find()->max_time_ms(10)->next }, qr/exceeded time limit/, "new cursor with max_time_ms times out" ); like( exception { $coll->find( {}, { maxTimeMS => 10 } )->next }, qr/exceeded time limit/, , "find with maxTimeMS times out" ); like( exception { my $doc = $coll->find_one( { _id => 1 }, undef, { maxTimeMS => 10 } ); }, qr/exceeded time limit/, "find_one with maxTimeMS times out" ); like( exception { my $doc = $coll->count( {}, { maxTimeMS => 10 } ); }, qr/exceeded time limit/, "count command with maxTimeMS times out" ); SKIP: { skip "aggregate not available until MongoDB v2.2", 1 unless $server_version > v2.2.0; like( exception { my $doc = $coll->aggregate( [ { '$project' => { name => 1, count => 1 } } ], { maxTimeMS => 10 }, ); }, qr/exceeded time limit/, 
"aggregate helper with maxTimeMS times out" ); } like( exception { my $doc = $coll->count( {}, { maxTimeMS => 10 } ); }, qr/exceeded time limit/, "count helper with maxTimeMS times out" ); like( exception { my $doc = $coll->distinct( 'a', {}, { maxTimeMS => 10 } ); }, qr/exceeded time limit/, "distinct helper with maxTimeMS times out" ); like( exception { my $doc = $coll->find_one_and_replace( { _id => 22 }, { x => 1 }, { upsert => 1, maxTimeMS => 10 } ); }, qr/exceeded time limit/, "find_one_and_replace helper with maxTimeMS times out" ); like( exception { my $doc = $coll->find_one_and_update( { _id => 23 }, { '$set' => { x => 1 } }, { upsert => 1, maxTimeMS => 10 } ); }, qr/exceeded time limit/, "find_one_and_update helper with maxTimeMS times out" ); like( exception { my $doc = $coll->find_one_and_delete( { _id => 23 }, { maxTimeMS => 10 } ); }, qr/exceeded time limit/, "find_one_and_delete helper with maxTimeMS times out" ); like( exception { my $cursor = $coll->database->list_collections( {}, { maxTimeMS => 10 } ); }, qr/exceeded time limit/, "list_collections command times out" ); subtest "max_time_ms via constructor" => sub { is( exception { my $doc = $coll->count( {} ) }, undef, "count helper with default maxTimeMS 0 from client works" ); my $conn2 = build_client( max_time_ms => 10 ); my $testdb2 = get_test_db($conn2); my $coll2 = $testdb2->get_collection("test_collection"); like( exception { my $doc = $coll2->count( {} ); }, qr/exceeded time limit/, "count helper with configured maxTimeMS times out" ); }; subtest "zero disables maxTimeMS" => sub { is( exception { $coll->find->max_time_ms(0)->next }, undef, "find->max_time_ms(0)" ); is( exception { $coll->find( {}, { maxTimeMS => 5000 } ) }, undef, "find with MaxTimeMS 5000 works" ); is( exception { my $doc = $coll->find_one( { _id => 1 }, undef, { maxTimeMS => 0 } ); }, undef, "find_one with MaxTimeMS zero works" ); SKIP: { skip "aggregate not available until MongoDB v2.2", 1 unless $server_version > v2.2.0; is( exception { my $doc = $coll->aggregate( [ { '$project' => { name => 1, count => 1 } } ], { maxTimeMS => 0 }, ); }, undef, "aggregate helper with MaxTimeMS zero works" ); } is( exception { my $doc = $coll->count( {}, { maxTimeMS => 0 } ); }, undef, "count helper with MaxTimeMS zero works" ); is( exception { my $doc = $coll->distinct( 'a', {}, { maxTimeMS => 0 } ); }, undef, "distinct helper with MaxTimeMS zero works" ); is( exception { my $doc = $coll->find_one_and_replace( { _id => 22 }, { x => 1 }, { upsert => 1, maxTimeMS => 0 } ); }, undef, "find_one_and_replace helper with MaxTimeMS zero works" ); is( exception { my $doc = $coll->find_one_and_update( { _id => 23 }, { '$set' => { x => 1 } }, { upsert => 1, maxTimeMS => 0 } ); }, undef, "find_one_and_update helper with MaxTimeMS zero works" ); is( exception { my $doc = $coll->find_one_and_delete( { _id => 23 }, { maxTimeMS => 0 } ); }, undef, "find_one_and_delete helper with MaxTimeMS zero works" ); }; is( exception { $admin->run_command( [ configureFailPoint => 'maxTimeAlwaysTimeOut', mode => 'off' ] ); }, undef, "turned off maxTimeAlwaysTimeOut fail point" ); }; done_testing; MongoDB-v1.2.2/t/parallel_scan.t000644 000765 000024 00000006640 12651754051 016665 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use utf8; use Test::More 0.96; use Test::Fatal; use Test::Deep qw/!blessed/; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); # parallel_scan subtest "parallel scan" => sub { plan skip_all => "Parallel scan not supported before MongoDB 2.6" unless $server_version >= v2.6.0; plan skip_all => "Parallel scan not supported on mongos" if $server_type eq 'Mongos'; my $num_docs = 2000; for ( 1 .. $num_docs ) { $coll->insert_one( { _id => $_ } ); } my $err_re = qr/must be a positive integer between 1 and 10000/; eval { $coll->parallel_scan }; like( $@, $err_re, "parallel_scan() throws error" ); for my $i ( 0, -1, 10001 ) { eval { $coll->parallel_scan($i) }; like( $@, $err_re, "parallel_scan($i) throws error" ); } my $max = 3; my @cursors = $coll->parallel_scan($max); ok( scalar @cursors <= $max, "parallel_scan($max) returned <= $max cursors" ); for my $method (qw/reset count explain/) { eval { $cursors[0]->$method }; like( $@, qr/Can't locate object method/, "$method on parallel scan cursor throws error" ); } _check_parallel_results( $num_docs, @cursors ); # read preference subtest "replica set" => sub { plan skip_all => 'needs a replicaset' unless $server_type eq 'RSPrimary'; my $conn2 = MongoDBTest::build_client( read_preference => 'secondaryPreferred' ); my @cursors = $coll->parallel_scan($max); _check_parallel_results( $num_docs, @cursors ); }; # empty collection subtest "empty collection" => sub { $coll->delete_many({}); my @cursors = $coll->parallel_scan($max); _check_parallel_results( 0, @cursors ); } }; sub _check_parallel_results { my ( $num_docs, @cursors ) = @_; local $Test::Builder::Level = $Test::Builder::Level + 1; my %seen; my $count = 0; for my $i ( 0 .. $#cursors ) { my @chunk = $cursors[$i]->all; if ($num_docs) { ok( @chunk > 0, "cursor $i had some results" ); } else { is( scalar @chunk, 0, "cursor $i had no results" ); } $seen{$_}++ for map { $_->{_id} } @chunk; $count += @chunk; } is( $count, $num_docs, "cursors returned right number of docs" ); is_deeply( [ sort { $a <=> $b } keys %seen ], [ 1 .. $num_docs ], "cursors returned all results" ); } done_testing; MongoDB-v1.2.2/t/readpref.t000644 000765 000024 00000007600 12651754051 015652 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
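# Sketch of draining a collection with parallel_scan (MongoDB 2.6+, not mongos),
# mirroring the checks above; the collection name and cursor count are assumed.
{
    use MongoDB;

    my $big = MongoDB::MongoClient->new->get_database('example_db')->get_collection('big_collection');

    my @cursors = $big->parallel_scan(4);   # returns at most 4 cursors
    my $total   = 0;
    for my $c (@cursors) {
        my @chunk = $c->all;                # each document appears on exactly one cursor
        $total += @chunk;
    }
    print "scanned $total documents\n";
}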
# use strict; use warnings; use Test::More 0.96; use Test::Fatal; use MongoDB; use Tie::IxHash; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection("test_coll"); my @modes = qw/primary secondary primaryPreferred secondaryPreferred nearest/; subtest "read preference connection string" => sub { my $conn2 = build_client( host => "mongodb://localhost/?readPreference=primaryPreferred&readPreferenceTags=dc:ny,rack:1&readPreferenceTags=dc:ny&readPreferenceTags=", ); my $rp = $conn2->read_preference; is( $rp->mode, 'primaryPreferred', "mode from" ); is_deeply( $rp->tag_sets, [ { dc => 'ny', rack => 1 }, { dc => 'ny'}, {} ], "tag set list" ); }; subtest "read preference propagation" => sub { for my $m (@modes) { my $conn2 = build_client( read_pref_mode => $m ); my $db2 = $conn2->get_database( $testdb->name ); my $coll2 = $db2->get_collection("test_coll"); my $cur = $coll2->find( {} ); for my $thing ( $conn2, $db2, $coll2 ) { is( $thing->read_preference->mode, $m, "$m set on " . ref($thing) ); } is( $cur->query->read_preference->mode, $m, "$m set on " . ref($cur) ); } }; subtest "read preference on cursor" => sub { for my $m ( @modes ) { my $cur = $coll->find()->read_preference($m); is( $cur->query->read_preference->mode, $m, "$m set on " . ref($cur) ); } }; subtest "error cases" => sub { like( exception { $conn->read_preference( MongoDB::ReadPreference->new ) }, qr/read-only/, "read_preference on client is read-only" ); like( exception { build_client( read_pref_mode => 'primary', read_pref_tag_sets => [ { use => 'production' } ], ) }, qr/A tag set list is not allowed with read preference mode 'primary'/, 'primary cannot be combined with a tag set list' ); }; subtest 'commands' => sub { ok( my $conn2 = build_client( read_preference => 'secondary' ), "read pref set to secondary without error" ); my $admin = $conn2->get_database('admin'); my $testdb_name = $testdb->name; my $db = $conn2->get_database( $testdb_name ); my $temp_coll = $db->get_collection("foo"); $temp_coll->insert_one({}); is( exception { $admin->run_command( [ renameCollection => "$testdb_name\.foo", to => "$testdb_name\.foofoo" ] ); }, undef, "generic helper ran with primary read pref" ); }; subtest "direct connection" => sub { my $N = 20; $coll->drop; $coll->insert({'a' => $_}) for 1..$N; for my $s ( $conn->_topology->all_servers ) { next unless $s->is_readable; my $addr = $s->address; my $type = $s->type; my $conn2 = build_client( host => $addr ); my $coll2 = $conn2->get_database( $testdb->name )->get_collection( $coll->name ); my $count; is( exception { $count = $coll2->count }, undef, "count on $addr ($type) succeeds" ) or diag explain $s; } }; done_testing; MongoDB-v1.2.2/t/regexp_obj.t000644 000765 000024 00000003305 12651754051 016204 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
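# Sketch of configuring a read preference at connection time, mirroring the
# propagation test above; the replica-set hosts and tag sets are assumptions.
{
    use MongoDB;

    my $example_client = MongoDB::MongoClient->new(
        host               => 'mongodb://rs0-a:27017,rs0-b:27017/?replicaSet=rs0',
        read_pref_mode     => 'secondaryPreferred',
        read_pref_tag_sets => [ { dc => 'ny' }, {} ],
    );

    # The mode propagates to databases, collections and cursors created from it.
    my $events = $example_client->get_database('example_db')->get_collection('events');
    print $events->read_preference->mode, "\n";   # secondaryPreferred
}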
# use strict; use warnings; use Test::More; use Test::Fatal; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $conn = build_client(); { my $regexp = MongoDB::BSON::Regexp->new( pattern => 'foo*bar' ); is $regexp->pattern, 'foo*bar'; } { my $regexp = MongoDB::BSON::Regexp->new( pattern => 'bar?baz', flags => 'msi' ); is $regexp->pattern, 'bar?baz'; is $regexp->flags, 'ims'; } like( exception { my $regexp = MongoDB::BSON::Regexp->new( pattern => 'narf', flags => 'xyz' ); }, qr/Regexp flag \w is not supported/, 'exception on invalid flag' ); { my $testdb = get_test_db($conn); my $coll = $testdb->get_collection("test_collection"); $coll->insert_one( { _id => 'spl0rt', foo => MongoDB::BSON::Regexp->new( pattern => 'foo.+bar', flags => 'ims' ) } ); my $doc = $coll->find_one( { _id => 'spl0rt' } ); ok $doc->{foo}; ok ref $doc->{foo}; isa_ok $doc->{foo}, 'MongoDB::BSON::Regexp'; is $doc->{foo}->pattern, 'foo.+bar'; is $doc->{foo}->flags, 'ims'; } done_testing; MongoDB-v1.2.2/t/sdam_spec.t000644 000765 000024 00000007274 12651754051 016027 0ustar00davidstaff000000 000000 # # Copyright 2009-2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More 0.96; use JSON::MaybeXS; use Path::Tiny; use Time::HiRes qw/time/; use Try::Tiny; use MongoDB; use MongoDB::_Credential; my $iterator = path('t/data/SDAM')->iterator({recurse => 1}); while ( my $path = $iterator->() ) { next unless -f $path && $path =~ /\.json$/; my $plan = eval { decode_json( $path->slurp_utf8 ) }; if ( $@ ) { die "Error decoding $path: $@"; } run_test($path->relative('t/data/SDAM'), $plan); } sub create_mock_topology { my ($name, $string) = @_; my $uri = MongoDB::_URI->new( uri => $string ); my $seed_count = scalar @{ $uri->hostpairs }; # XXX this is a hack because the direct tests are written to # assume Single, even though this is not implied by the spec my $type = ( $name =~ /^single/ && $seed_count == 1 ) ? 
'Single' : "Unknown"; return MongoDB::_Topology->new( uri => $uri, type => $type, replica_set_name => $uri->options->{replicaset} || '', max_wire_version => 2, min_wire_version => 0, credential => MongoDB::_Credential->new( mechanism => 'NONE' ), ); } sub run_test { my ($name, $plan) = @_; $name =~ s/\.json$//; subtest "$name" => sub { my $topology = create_mock_topology( $name, $plan->{'uri'} ); for my $phase (@{$plan->{'phases'}}) { for my $response (@{$phase->{'responses'}}) { my ($addr, $is_master) = @$response; if ( defined $is_master->{electionId} ){ $is_master->{electionId} = MongoDB::OID->new( value => $is_master->{electionId}->{'$oid'} ); } # Process response my $desc = MongoDB::_Server->new( address => $addr, is_master => $is_master, last_update_time => time, ); $topology->_update_topology_from_server_desc( @$response[0], $desc); } # Process outcome check_outcome($topology, $phase->{'outcome'}); } }; } sub check_outcome { my ($topology, $outcome, $start_type) = @_; my %expected_servers = %{$outcome->{'servers'}}; my %actual_servers = %{$topology->servers}; is( scalar keys %actual_servers, scalar keys %expected_servers, 'correct amount of servers'); while (my ($key, $value) = each %expected_servers) { if ( ok( (exists $actual_servers{$key}), "$key exists in outcome") ) { my $actual_server = $actual_servers{$key}; is($actual_server->type, $value->{'type'}, 'correct server type'); my $expected_set_name = defined $value->{'setName'} ? $value->{'setName'} : ""; is($actual_server->set_name, $expected_set_name, 'correct setName for server'); } } my $expected_set_name = defined $outcome->{'setName'} ? $outcome->{'setName'} : ""; is($topology->replica_set_name, $expected_set_name, 'correct setName for topology'); is($topology->type, $outcome->{'topologyType'}, 'correct topology type'); } done_testing; MongoDB-v1.2.2/t/ss_spec.t000644 000765 000024 00000013052 12651754051 015517 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# use strict; use warnings; use Test::More 0.96; use Test::Fatal; use JSON::MaybeXS; use Path::Tiny 0.054; # basename with suffix use Try::Tiny; use MongoDB; use MongoDB::ReadPreference; use MongoDB::_Credential; use MongoDB::_Server; use MongoDB::_Topology; use MongoDB::_URI; subtest "rtt tests" => sub { my $iterator = path('t/data/SS/rtt')->iterator( { recurse => 1 } ); for my $path ( exhaust_sort($iterator) ) { next unless -f $path && $path =~ /\.json$/; my $plan = eval { decode_json( $path->slurp_utf8 ) }; if ($@) { die "Error decoding $path: $@"; } run_rtt_test( $path->basename(".json"), $plan ); } like( exception { create_mock_server( "localhost:27017", -1 ) }, qr/non-negative number/, "negative RTT times throw an excepton" ); }; subtest "server selection tests" => sub { my $source = path('t/data/SS/server_selection'); my $iterator = $source->iterator( { recurse => 1 } ); for my $path ( exhaust_sort($iterator) ) { next unless -f $path && $path =~ /\.json$/; my $plan = eval { decode_json( $path->slurp_utf8 ) }; if ($@) { die "Error decoding $path: $@"; } run_ss_test( $path->relative($source), $plan ); } }; subtest "random selection" => sub { my $topo = create_mock_topology( "mongodb://localhost", 'Sharded' ); $topo->_remove_address("localhost:27017"); for my $n ( "A" .. "Z" ) { my $address = "$n:27017"; my $server = create_mock_server( $address, 10, type => 'Mongos' ); $topo->servers->{$server->address} = $server; $topo->_update_ewma( $server->address, $server ); } # try up to 20 my $first = $topo->_find_any_server; my $different = 0; for ( 1 .. 20 ) { my $another = $topo->_find_any_server; if ( $first->address ne $another->address ) { $different = 1; last; } } ok( $different, "servers randomly selected" ); }; sub exhaust_sort { my $iter = shift; my @result; while ( defined( my $i = $iter->() ) ) { push @result, $i; } return sort @result; } sub create_mock_topology { my ( $uri, $type ) = @_; $type ||= 'Single'; return MongoDB::_Topology->new( uri => MongoDB::_URI->new( uri => $uri ), type => $type, max_wire_version => 3, min_wire_version => 0, heartbeat_frequency_ms => 3600000, last_scan_time => time + 60, credential => MongoDB::_Credential->new( mechanism => 'NONE' ), ); } sub create_mock_server { my ( $address, $rtt, @args ) = @_; return MongoDB::_Server->new( address => $address, last_update_time => 0, rtt_sec => $rtt, is_master => { ismaster => 1, ok => 1 }, @args, ); } sub run_rtt_test { my ( $name, $plan ) = @_; my $topo = create_mock_topology("mongodb://localhost"); if ( $plan->{avg_rtt_ms} ne 'NULL' ) { $topo->rtt_ewma_sec->{"localhost:27017"} = $plan->{avg_rtt_ms}/1000; } my $server = create_mock_server( "localhost:2707", $plan->{new_rtt_ms}/1000 ); $topo->_update_topology_from_server_desc( 'localhost:27017', $server ); is( $topo->rtt_ewma_sec->{"localhost:27017"}, $plan->{new_avg_rtt}/1000, $name ); } sub run_ss_test { my ( $name, $plan ) = @_; $name =~ s{\.json$}{}; my $topo_desc = $plan->{topology_description}; my $topo = create_mock_topology( "mongodb://localhost", $topo_desc->{type} ); $topo->_remove_address("localhost:27017"); for my $s ( @{ $topo_desc->{servers} } ) { my $address = $s->{address}; my %tags = map { %$_ } @{ $s->{tags} }; my $server = create_mock_server( $address, $s->{avg_rtt_ms}/1000, type => $s->{type}, tags => \%tags, ); $topo->servers->{$server->address} = $server; $topo->_update_ewma( $server->address, $server ); } my $got; if ( $plan->{operation} eq 'read' ) { my $read_pref = MongoDB::ReadPreference->new( mode => $plan->{read_preference}{mode}, 
tag_sets => $plan->{read_preference}{tags}, ); my $mode = $read_pref ? lc $read_pref->mode : 'primary'; my $method = $topo->type eq 'Single' || $topo->type eq 'Sharded' ? '_find_any_server' : "_find_${mode}_server"; $got = $topo->$method($read_pref); } else { my $method = $topo->type eq 'Single' || $topo->type eq 'Sharded' ? '_find_any_server' : "_find_primary_server"; $got = $topo->$method; } if ( my @expect = @{ $plan->{in_latency_window} } ) { my $got_address = $got->address; my $found = grep { $got_address eq $_->{address} } @expect; ok( $found, $name ); } else { ok( !defined($got), $name ); } } done_testing; MongoDB-v1.2.2/t/testrules.yml000644 000765 000024 00000000122 12651754051 016442 0ustar00davidstaff000000 000000 --- seq: - seq: t/00-report-mongod.t - seq: t/max_time_ms.t - par: ** MongoDB-v1.2.2/t/threads/000755 000765 000024 00000000000 12651754051 015324 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/types.t000644 000765 000024 00000017357 12651754051 015240 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More 0.96; use MongoDB; use MongoDB::OID; use MongoDB::Code; use MongoDB::Timestamp; use DateTime; use JSON::MaybeXS; use Test::Fatal; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $coll = $testdb->get_collection('y'); $coll->drop; my $id = MongoDB::OID->new; isa_ok($id, 'MongoDB::OID'); is($id."", $id->value); # OIDs created in time-ascending order { my $ids = []; for (0..9) { push @$ids, new MongoDB::OID; select undef, undef, undef, 0.1; # Sleep 0.1 seconds } for (0..8) { ok((@$ids[$_]."") lt (@$ids[$_+1]."")); } my $now = DateTime->now; $id = MongoDB::OID->new; ok($id->get_time >= $now->epoch, "OID time >= epoch" ); } # creating ids from an existing value { my $value = "012345678901234567890abc"; my $id = MongoDB::OID->new(value => $value); is($id->value, $value); my $id_orig = MongoDB::OID->new; foreach my $args ( [value => $id_orig->value], [value => uc $id_orig->value], [$id_orig->value], [$id_orig], ) { my $id_copy = MongoDB::OID->new(@{$args}); is($id_orig->value, $id_copy->value); } } # invalid ids from an existing value { my $value = "506b37b1a7e2037c1f0004"; like( exception { MongoDB::OID->new(value => $value) }, qr/not a valid OID/i, "Invalid OID throws exception" ); } #regexes { $coll->insert_one({'x' => 'FRED', 'y' => 1}); $coll->insert_one({'x' => 'bob'}); $coll->insert_one({'x' => 'fRed', 'y' => 2}); my $freds = $coll->query({'x' => qr/fred/i})->sort({'y' => 1}); is($freds->next->{'x'}, 'FRED', 'case insensitive'); is($freds->next->{'x'}, 'fRed', 'case insensitive'); ok(!$freds->has_next, 'bob doesn\'t match'); my $fred = $coll->find_one({'x' => qr/^F/}); is($fred->{'x'}, 'FRED', 'starts with'); # saving/getting regexes $coll->drop; $coll->insert_one({"r" => qr/foo/i}); my $obj = $coll->find_one; my $qr = $obj->{r}->try_compile; like("foo", $qr, 
'matches'); like("FOO", $qr, "flag i works"); unlike("bar", $qr, 'not a match'); } # date { $coll->drop; my $now = DateTime->now; $coll->insert_one({'date' => $now}); my $date = $coll->find_one; is($date->{'date'}->epoch, $now->epoch); is($date->{'date'}->day_of_week, $now->day_of_week); my $past = DateTime->from_epoch('epoch' => 1234567890); $coll->insert_one({'date' => $past}); $date = $coll->find_one({'date' => $past}); is($date->{'date'}->epoch, 1234567890); } # minkey/maxkey { $coll->drop; my $min = bless {}, "MongoDB::MinKey"; my $max = bless {}, "MongoDB::MaxKey"; $coll->insert_one({min => $min, max => $max}); my $x = $coll->find_one; isa_ok($x->{min}, 'MongoDB::MinKey'); isa_ok($x->{max}, 'MongoDB::MaxKey'); } # tie::ixhash { $coll->drop; my %test; tie %test, 'Tie::IxHash'; $test{one} = "on"; $test{two} = 2; ok( $coll->insert_one(\%test), "inserted IxHash") ; my $doc = $coll->find_one; is($doc->{'one'}, 'on', "field one"); is($doc->{'two'}, 2, "field two"); } # binary { $coll->drop; my $invalid = "\xFE"; ok( $coll->insert_one({"bin" => \$invalid}), "inserted binary data" ); my $one = $coll->find_one; isa_ok($one->{bin}, "MongoDB::BSON::Binary", "binary data"); is($one->{'bin'}, "\xFE", "read binary data"); } # 64-bit ints { use bigint; $coll->drop; my $x = 2 ** 34; $coll->insert_one({x => $x}); my $result = $coll->find_one; is($result->{'x'}, 17179869184) or diag explain $result; $coll->drop; $x = (2 ** 34) * -1; $coll->insert_one({x => $x}); $result = $coll->find_one; is($result->{'x'}, -17179869184) or diag explain $result; $coll->drop; $coll->insert_one({x => 2712631400}); $result = $coll->find_one; is($result->{'x'}, 2712631400) or diag explain $result; eval { $coll->insert_one({x => 9834590149023841902384137418571984503}); }; like($@, qr/can't fit/, "big int too large error message"); $coll->drop; } # code { my $str = "function() { return 5; }"; my $code = MongoDB::Code->new("code" => $str); my $scope = $code->scope; is(keys %$scope, 0); $coll->insert_one({"code" => $code}); my $ret = $coll->find_one; my $ret_code = $ret->{code}; $scope = $ret_code->scope; is(keys %$scope, 0); is($ret_code->code, $str); my $x; if ( ! $conn->password ) { $x = $testdb->eval($code); is($x, 5); } $str = "function() { return name; }"; $code = MongoDB::Code->new("code" => $str, "scope" => {"name" => "Fred"}); if ( ! 
$conn->password ) { # XXX eval is deprecated, but we'll leave this test for now $x = $testdb->eval($code); is($x, "Fred"); } $coll->drop; $coll->insert_one({"x" => "foo", "y" => $code, "z" => 1}); $x = $coll->find_one; is($x->{x}, "foo"); is($x->{y}->code, $str); is($x->{y}->scope->{"name"}, "Fred"); is($x->{z}, 1); $coll->drop; } SKIP: { use Config; skip "Skipping 64 bit native SV", 1 if ( !$Config{use64bitint} ); $coll->update_one({ x => 1 }, { '$inc' => { 'y' => 19401194714 } }, { 'upsert' => 1 }); my $result = $coll->find_one; is($result->{'y'},19401194714,'64 bit ints without Math::BigInt'); } # oid json { my $doc = {"foo" => MongoDB::OID->new}; my $j = JSON->new; $j->allow_blessed; $j->convert_blessed; my $json = $j->encode($doc); is($json, '{"foo":{"$oid":"'.$doc->{'foo'}->value.'"}}'); } # timestamp { $coll->drop; my $t = MongoDB::Timestamp->new("sec" => 12345678, "inc" => 9876543); $coll->insert_one({"ts" => $t}); my $x = $coll->find_one; is($x->{'ts'}->sec, $t->sec); is($x->{'ts'}->inc, $t->inc); } # boolean objects { $coll->drop; $coll->insert_one({"x" => boolean::true, "y" => boolean::false}); my $x = $coll->find_one; is( ref $x->{x}, 'boolean', "roundtrip boolean field x"); is( ref $x->{y}, 'boolean', "roundtrip boolean field y"); ok( $x->{x}, "x is true"); ok( ! $x->{y}, "y is false"); } # unrecognized obj { eval { $coll->insert_one({"x" => $coll}); }; ok($@ =~ m/type \(MongoDB::Collection\) unhandled/, "can't insert a non-recognized obj"); } # forcing types { $coll->drop; my $x = 1.0; my ($double_type, $int_type) = ({x => {'$type' => 1}}, {'$or' => [{x => {'$type' => 16}}, {x => {'$type' => 18}}]}); MongoDB::force_double($x); $coll->insert_one({x => $x}); my $result = $coll->find_one($double_type); is($result->{x}, 1); $result = $coll->find_one($int_type); is($result, undef); $coll->drop; MongoDB::force_int($x); $coll->insert_one({x => $x}); $result = $coll->find_one($double_type); is($result, undef); $result = $coll->find_one($int_type); is($result->{x}, 1); } done_testing; MongoDB-v1.2.2/t/unit/000755 000765 000024 00000000000 12651754051 014651 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/unit/configuration.t000644 000765 000024 00000026006 12651754051 017711 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# use strict; use warnings; use Test::More; use Test::Fatal; use MongoDB; use MongoDB::MongoClient; use MongoDB::BSON; use constant HAS_DATETIME_TINY => eval { require DateTime::Tiny; 1 }; sub _mc { return MongoDB::MongoClient->new(@_); } subtest "host and port" => sub { my $mc = _mc(); is( $mc->host, "mongodb://localhost:27017", "default host is URI" ); is( $mc->port, 27017, "port" ); is( $mc->_uri->uri, $mc->host, "uri matches host" ); $mc = _mc( host => "example.com" ); is( $mc->host, "example.com", "host as hostname is preserved" ); is( $mc->_uri->uri, "mongodb://example.com:27017", "uri gets host" ); $mc = _mc( host => "example.com", port => 99 ); is( $mc->host, "example.com", "host as hostname is preserved" ); is( $mc->port, 99, "default port changed" ); is( $mc->_uri->uri, "mongodb://example.com:99", "uri gets both host and port" ); $mc = _mc( host => "localhost:27018" ); is( $mc->_uri->uri, "mongodb://localhost:27018", "host as localhost:27018" ); $mc = _mc( host => "mongodb://example.com", port => 99 ); is( $mc->host, "mongodb://example.com", "host as URI is preserved" ); is( $mc->port, 99, "port changed" ); is( $mc->_uri->uri, $mc->host, "uri matches host" ); is_deeply( $mc->_uri->hostpairs, ["example.com:27017"], "host pairs ignores changed port" ); }; subtest "auth mechanism and properties" => sub { my $mc = _mc(); is( $mc->auth_mechanism, 'NONE', "default auth_mechanism" ); is_deeply( $mc->auth_mechanism_properties, {}, "default auth_mechanism_properties" ); $mc = _mc( auth_mechanism => 'MONGODB-CR', auth_mechanism_properties => { foo => 1 } ); is( $mc->auth_mechanism, 'MONGODB-CR', "custom auth_mechanism" ); is_deeply( $mc->auth_mechanism_properties, { foo => 1 }, "custom auth_mechanism_properties" ); $mc = _mc( host => 'mongodb://localhost/?authMechanism=PLAIN&authMechanismProperties=bar:2', auth_mechanism => 'MONGODB-CR', auth_mechanism_properties => { foo => 1 }, ); is( $mc->auth_mechanism, 'PLAIN', "authMechanism supersedes auth_mechanism" ); is_deeply( $mc->auth_mechanism_properties, { bar => 2 }, "authMechanismProperties supersedes auth_mechanism_properties" ); $mc = _mc( sasl => 1, sasl_mechanism => 'PLAIN', ); is( $mc->auth_mechanism, 'PLAIN', "sasl+sasl_mechanism is auth_mechanism default" ); $mc = _mc( auth_mechanism => 'MONGODB-CR', sasl => 1, sasl_mechanism => 'PLAIN', ); is( $mc->auth_mechanism, 'MONGODB-CR', "auth_mechanism dominates sasl+sasl_mechanism" ); }; subtest bson_codec => sub { my $codec = MongoDB::BSON->new( op_char => '-' ); my $mc = _mc(); ok( !$mc->bson_codec->prefer_numeric, "default bson_codec object" ); $mc = _mc( bson_codec => $codec ); is( $mc->bson_codec->op_char, '-', "bson_codec object" ); $mc = _mc( bson_codec => { prefer_numeric => 1 } ); isa_ok( $mc->bson_codec, 'MongoDB::BSON' ); ok( $mc->bson_codec->prefer_numeric, "bson_codec coerced from hashref" ); if ( HAS_DATETIME_TINY ) { $mc = _mc( dt_type => 'DateTime::Tiny' ); isa_ok( $mc->bson_codec, 'MongoDB::BSON' ); ok( $mc->bson_codec->dt_type, "legacy dt_type influences default codec" ); } }; subtest connect_timeout_ms => sub { my $mc = _mc(); is( $mc->connect_timeout_ms, 10000, "default connect_timeout_ms" ); $mc = _mc( timeout => 60000, ); is( $mc->connect_timeout_ms, 60000, "legacy 'timeout' as fallback" ); $mc = _mc( timeout => 60000, connect_timeout_ms => 30000, ); is( $mc->connect_timeout_ms, 30000, "connect_timeout_ms" ); $mc = _mc( host => 'mongodb://localhost/?connectTimeoutMS=20000', connect_timeout_ms => 30000, ); is( $mc->connect_timeout_ms, 20000, "connectTimeoutMS" ); }; 
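# The connect_timeout_ms subtest above, like the other option subtests in this
# file, exercises a three-level precedence: connection-string options (e.g.
# "connectTimeoutMS") supersede constructor arguments (e.g. connect_timeout_ms),
# which in turn supersede legacy fallbacks (e.g. "timeout") and the built-in
# defaults. A minimal sketch combining all three levels at once -- illustrative
# only; this combined case is inferred from the individual assertions above and
# is not itself asserted anywhere in this file:
#
#   my $mc = MongoDB::MongoClient->new(
#       host               => 'mongodb://localhost/?connectTimeoutMS=20000',
#       connect_timeout_ms => 30000,    # superseded by the URI option
#       timeout            => 60000,    # legacy fallback, lowest precedence
#   );
#   # expected: $mc->connect_timeout_ms == 20000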
subtest db_name => sub { my $mc = _mc(); is( $mc->db_name, "", "default db_name" ); $mc = _mc( db_name => "testdb", ); is( $mc->db_name, "testdb", "db_name" ); $mc = _mc( host => 'mongodb://localhost/admin', db_name => "testdb", ); is( $mc->db_name, "admin", "database in URI" ); }; my %simple_time_options = ( heartbeat_frequency_ms => 60000, local_threshold_ms => 15, max_time_ms => 0, server_selection_timeout_ms => 30000, socket_check_interval_ms => 5000, ); for my $key ( sort keys %simple_time_options ) { subtest $key => sub { my $mc = _mc(); is( $mc->$key, $simple_time_options{$key}, "default $key" ); $mc = _mc( $key => 99999, ); is( $mc->$key, 99999, "$key" ); ( my $cs_key = $key ) =~ s/_//g; $mc = _mc( host => "mongodb://localhost/?$cs_key=88888", $key => 99999, ); is( $mc->$key, 88888, "$cs_key" ); }; } subtest journal => sub { my $mc = _mc(); ok( !$mc->j, "default j (false)" ); $mc = _mc( j => 1 ); ok( $mc->j, "j (true)" ); $mc = _mc( host => 'mongodb://localhost/?journal=false', j => 1, ); ok( !$mc->j, "journal supersedes j" ); }; subtest "read_pref_mode and read_pref_tag_sets" => sub { my $mc = _mc(); is( $mc->read_pref_mode, 'primary', "default read_pref_mode" ); is_deeply( $mc->read_pref_tag_sets, [ {} ], "default read_pref_tag_sets" ); my $tag_set_list = [ { dc => 'nyc', rack => 1 }, { dc => 'nyc' } ]; $mc = _mc( read_pref_mode => 'secondary', read_pref_tag_sets => $tag_set_list, ); is( $mc->read_pref_mode, 'secondary', "read_pref_mode" ); is_deeply( $mc->read_pref_tag_sets, $tag_set_list, "read_pref_tag_sets" ); $mc = _mc( host => 'mongodb://localhost/?readPreference=nearest&readPreferenceTags=dc:sf', read_pref_mode => 'secondary', read_pref_tag_sets => $tag_set_list, ); is( $mc->read_pref_mode, 'nearest', "readPreference" ); is_deeply( $mc->read_pref_tag_sets, [ { dc => 'sf' } ], "readPreferenceTags" ); }; subtest replica_set_name => sub { my $mc = _mc(); is( $mc->replica_set_name, "", "default replica_set_name" ); is( $mc->_topology->replica_set_name, '', "topology object matches" ); $mc = _mc( replica_set_name => "repl1" ); is( $mc->replica_set_name, "repl1", "replica_set_name" ); is( $mc->_topology->replica_set_name, "repl1", "topology object matches" ); $mc = _mc( host => 'mongodb://localhost/?replicaSet=repl2', replica_set_name => "repl1", ); is( $mc->replica_set_name, "repl2", "replicaSet" ); is( $mc->_topology->replica_set_name, "repl2", "topology object matches" ); }; subtest server_selection_try_once => sub { my $mc = _mc(); ok( $mc->server_selection_try_once, "default server_selection_try_once true" ); $mc = _mc( server_selection_try_once => 0 ); ok( !$mc->server_selection_try_once, "server_selection_try_once (false)" ); $mc = _mc( host => 'mongodb://localhost/?serverSelectionTryOnce=false', server_selection_try_once => 1, ); ok( !$mc->server_selection_try_once, "URI supersedes argument" ) or diag explain $mc->_uri; }; subtest socket_timeout_ms => sub { my $mc = _mc(); is( $mc->socket_timeout_ms, 30000, "default socket_timeout_ms" ); $mc = _mc( query_timeout => 60000, ); is( $mc->socket_timeout_ms, 60000, "explicit 'query_timeout' as fallback" ); $mc = _mc( query_timeout => 60000, socket_timeout_ms => 40000, ); is( $mc->socket_timeout_ms, 40000, "socket_timeout_ms" ); $mc = _mc( host => 'mongodb://localhost/?socketTimeoutMS=10000', socket_timeout_ms => 40000, ); is( $mc->socket_timeout_ms, 10000, "socketTimeoutMS" ); }; subtest ssl => sub { my $mc = _mc(); ok( !$mc->ssl, "default ssl (false)" ); $mc = _mc( ssl => 1 ); ok( $mc->ssl, "ssl (true)" ); $mc = _mc( ssl => 
{} ); ok( $mc->ssl, "ssl (hashref)" ); $mc = _mc( host => 'mongodb://localhost/?ssl=false', ssl => 1, ); ok( !$mc->ssl, "connection string supersedes" ); }; subtest "username and password" => sub { my $mc = _mc(); is( $mc->username, "", "default username" ); is( $mc->password, "", "default password" ); $mc = _mc( username => "mulder", password => "trustno1" ); is( $mc->username, "mulder", "username" ); is( $mc->password, "trustno1", "password" ); $mc = _mc( host => 'mongodb://scully:skeptic@localhost/', username => "mulder", password => "trustno1" ); is( $mc->username, "scully", "username from URI" ); is( $mc->password, "skeptic", "password from URI" ); $mc = _mc( host => 'mongodb://:@localhost/', username => "mulder", password => "trustno1" ); is( $mc->username, "", "username from URI" ); is( $mc->password, "", "password from URI" ); }; subtest w => sub { my $mc = _mc(); is( $mc->w, undef, "default w" ); $mc = _mc( w => 2 ); is( $mc->w, 2, "w:2" ); $mc = _mc( w => 'majority' ); is( $mc->w, 'majority', "w:majority" ); $mc = _mc( host => 'mongodb://localhost/?w=0', w => 'majority', ); is( $mc->w, 0, "w from connection string" ); isnt( exception { _mc( w => {} ) }, undef, "Setting w to anything but a string or int dies." ); }; subtest wtimeout => sub { my $mc = _mc(); is( $mc->wtimeout, 1000, "default wtimeout" ); $mc = _mc( wtimeout => 40000, ); is( $mc->wtimeout, 40000, "wtimeout" ); $mc = _mc( host => 'mongodb://localhost/?wtimeoutMS=10000', wtimeout => 40000, ); is( $mc->wtimeout, 10000, "wtimeoutMS" ); }; subtest "warnings and exceptions" => sub { my $warning; local $SIG{__WARN__} = sub { $warning = shift }; my $mc = _mc( host => "mongodb://localhost/?notArealOption=42" ); like( $warning, qr/Unsupported option 'notArealOption' in URI/, "unknown option warns with original case" ); like( exception { _mc( host => "mongodb://localhost/?ssl=" ) }, qr/expected boolean/, 'ssl key with invalid value' ); }; done_testing; MongoDB-v1.2.2/t/unit/link.t000644 000765 000024 00000001610 12651754051 015771 0ustar00davidstaff000000 000000 use strict; use warnings; use Test::More 0.88; use Test::Fatal; use MongoDB::_Server; use Time::HiRes qw/time/; my $class = "MongoDB::_Link"; require_ok( $class ); my $obj = new_ok( $class, [ address => 'localhost:27017'] ); my $dummy_server = MongoDB::_Server->new( address => 'localhost:27017', last_update_time => time, ); $obj->set_metadata( $dummy_server ); is( $obj->max_bson_object_size, 4*1024*1024, "default max bson object size" ); is( $obj->max_message_size_bytes, 2*4*1024*1024, "default max message size" ); { # monkeypatch to let length check fire no warnings 'redefine', 'once'; local *MongoDB::_Link::assert_valid_connection = sub { 1 }; like( exception { $obj->write( "a" x ($obj->max_message_size_bytes + 1) ) }, qr/Message.*?exceeds maximum/, "over long message throws error", ); } done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/unit/read_preference.t000644 000765 000024 00000002264 12651754051 020153 0ustar00davidstaff000000 000000 use strict; use warnings; use Test::More 0.88; use Test::Fatal; my $class = "MongoDB::ReadPreference"; require_ok( $class ); is( exception { $class->new }, undef, "new without args has default" ); my @modes = qw( primary PRIMARY PrImArY secondary secondary_preferred primary_preferred nearest secondarypreferred primarypreferred ); for my $mode (@modes) { new_ok( $class, [ mode => $mode ], "new( mode => '$mode' )" ); } like( exception { $class->new( mode => 'primary', tag_sets => [ { dc => 'us' } ] ) }, qr/not allowed/, "tag 
set list not allowed with primary" ); subtest "stringification" => sub { my $rp; my @cases = ( [ {} => 'primary' ], [ { mode => 'secondary_preferred' }, 'secondaryPreferred' ], [ { mode => 'secondary_preferred', tag_sets => [ { dc => 'ny', rack => 1 }, { dc => 'ny' }, {} ] }, 'secondaryPreferred ({dc:ny,rack:1},{dc:ny},{})' ], ); for my $case (@cases) { my $rp = $class->new( $case->[0] ); is( $rp->as_string, $case->[1], $case->[1] ); } }; done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/unit/uri.t000644 000765 000024 00000013244 12651754051 015641 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Test::More; use Test::Fatal; use MongoDB::_URI; subtest "localhost" => sub { my @hostpairs = ('localhost:27017'); my $uri = MongoDB::_URI->new( uri => 'mongodb://localhost'); is_deeply($uri->hostpairs, \@hostpairs); $uri = MongoDB::_URI->new( uri => 'mongodb://localhost,'); is_deeply($uri->hostpairs, \@hostpairs, "trailing comma"); }; subtest "db_name" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://localhost/example_db'); is($uri->db_name, "example_db", "parse db_name"); $uri = MongoDB::_URI->new( uri => 'mongodb://localhost,/example_db'); is($uri->db_name, "example_db", "parse db_name with trailing comma on host"); $uri = MongoDB::_URI->new( uri => 'mongodb://localhost/example_db?'); is($uri->db_name, "example_db", "parse db_name with trailing ?"); $uri = MongoDB::_URI->new( uri => 'mongodb://localhost,localhost:27020,localhost:27021/example_db'); is($uri->db_name, "example_db", "parse db_name, many hosts"); $uri = MongoDB::_URI->new( uri => 'mongodb://localhost/?'); is($uri->db_name, "", "no db_name with trailing ?"); }; subtest "localhost with username/password" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://fred:foobar@localhost'); my @hostpairs = ('localhost:27017'); is_deeply($uri->hostpairs, \@hostpairs); is($uri->username, 'fred'); is($uri->password, 'foobar'); }; # XXX this should really be illegal, I think, but the regex allows it subtest "localhost with username only" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://fred@localhost'); my @hostpairs = ('localhost:27017'); is_deeply($uri->hostpairs, \@hostpairs); is($uri->username, 'fred'); is($uri->password, undef); }; subtest "localhost with username/password and db" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://fred:foobar@localhost/baz'); my @hostpairs = ('localhost:27017'); is_deeply($uri->hostpairs, \@hostpairs); is($uri->username, 'fred'); is($uri->password, 'foobar'); is($uri->db_name, 'baz'); }; subtest "localhost with username/password and db (trailing comma)" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://fred:foobar@localhost,/baz'); my @hostpairs = ('localhost:27017'); is_deeply($uri->hostpairs, \@hostpairs); is($uri->username, 'fred'); is($uri->password, 'foobar'); is($uri->db_name, 'baz'); }; subtest "localhost with username/password and db (trailing question)" => sub { my $uri = 
MongoDB::_URI->new( uri => 'mongodb://fred:foobar@localhost/baz?'); my @hostpairs = ('localhost:27017'); is_deeply($uri->hostpairs, \@hostpairs); is($uri->username, 'fred'); is($uri->password, 'foobar'); is($uri->db_name, 'baz'); }; subtest "localhost with empty extras" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://:@localhost/?'); my @hostpairs = ('localhost:27017'); is_deeply($uri->hostpairs, \@hostpairs); }; subtest "multiple hostnames" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://example1.com:27017,example2.com:27017'); my @hostpairs = ('example1.com:27017', 'example2.com:27017'); is_deeply($uri->hostpairs, \@hostpairs); }; subtest "multiple hostnames at localhost" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://localhost,localhost:27018,localhost:27019'); my @hostpairs = ('localhost:27017', 'localhost:27018', 'localhost:27019'); is_deeply($uri->hostpairs, \@hostpairs); }; subtest "multiple hostnames (localhost/domain)" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://localhost,example1.com:27018,localhost:27019'); my @hostpairs = ('localhost:27017', 'example1.com:27018', 'localhost:27019'); is_deeply($uri->hostpairs, \@hostpairs); }; subtest "multiple hostnames (localhost/domain)" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://localhost,example1.com:27018,localhost:27019'); my @hostpairs = ('localhost:27017', 'example1.com:27018', 'localhost:27019'); is_deeply($uri->hostpairs, \@hostpairs); }; subtest "percent encoded username and password" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://dog%3Adogston:p%40ssword@localhost'); my @hostpairs = ('localhost:27017'); is($uri->username, 'dog:dogston'); is($uri->password, 'p@ssword'); is_deeply($uri->hostpairs, \@hostpairs); }; subtest "empty username and password" => sub { my $uri = MongoDB::_URI->new( uri => 'mongodb://:@localhost'); is($uri->username, '', "empty username"); is($uri->password, '', "empty password"); }; subtest "case normalization" => sub { my $uri; $uri = MongoDB::_URI->new( uri => 'mongodb://eXaMpLe1.cOm:27017,eXAMPLe2.com:27017'); my @hostpairs = ('example1.com:27017', 'example2.com:27017'); is_deeply($uri->hostpairs, \@hostpairs, "hostname normalized"); $uri = MongoDB::_URI->new( uri => 'mongodb://localhost/?ReAdPrEfErEnCe=Primary&wTimeoutMS=1000' ); is( $uri->options->{readpreference}, 'Primary', "readPreference key normalized" ); is( $uri->options->{wtimeoutms}, 1000, "wTimeoutMS key normalized" ); }; done_testing; MongoDB-v1.2.2/t/unit/write_concern.t000644 000765 000024 00000000606 12651754051 017701 0ustar00davidstaff000000 000000 use strict; use warnings; use Test::More 0.88; use Test::Fatal; my $class = "MongoDB::WriteConcern"; require_ok( $class ); is( exception { $class->new }, undef, "new without args has default" ); like( exception { $class->new( w => 0, j => 1 ) }, qr/can't use write concern w=0 with j=1/, "j=1 not allowed with w=0", ); done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/threads/basic.t000644 000765 000024 00000004312 12651754051 016572 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Config; use if $Config{usethreads}, 'threads'; use Test::More; BEGIN { plan skip_all => 'requires threads' unless $Config{usethreads} } use MongoDB; use Try::Tiny; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $col = $testdb->get_collection('kooh'); $col->drop; { my $ret = try { threads->create(sub { $conn->reconnect; $col->insert_one({ foo => 42 })->inserted_id; })->join->value; } catch { diag $_; }; ok $ret, 'we survived destruction of a cloned connection'; my $o = $col->find_one({ foo => 42 }); is $ret, $o->{_id}, 'we inserted and joined the OID back'; } { my ($n_threads, $n_inserts) = $ENV{AUTOMATED_TESTING} ? (10,1000) : (5, 100); note "inserting $n_inserts items each in $n_threads threads"; my @threads = map { threads->create(sub { $conn->reconnect; my $col = $conn->get_database($testdb->name)->get_collection('kooh'); map { $col->insert_one({ foo => threads->self->tid })->inserted_id } 1 .. $n_inserts; }) } 1 .. $n_threads; my @vals = map { ( $_->tid ) x $n_inserts } @threads; my @ids = map { $_->join } @threads; my $expected = scalar @ids; is scalar keys %{ { map { ($_ => undef) } @ids } }, $expected, "we got $expected unique OIDs"; is_deeply( [map { $col->find_one({ _id => $_ })->{foo} } @ids], [@vals], 'right values inserted from threads', ); } done_testing(); MongoDB-v1.2.2/t/threads/bson.t000644 000765 000024 00000004754 12651754051 016464 0ustar00davidstaff000000 000000 # # Copyright 2015 # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# use strict; use warnings; use Config; use if $Config{usethreads}, 'threads'; use Test::More; BEGIN { plan skip_all => 'requires threads' unless $Config{usethreads}; plan skip_all => 'needs Perl 5.10.1' unless $] ge '5.010001'; } use MongoDB; use Try::Tiny; use threads::shared; use lib "t/lib"; my $class = "MongoDB::BSON"; require_ok($class); my $codec = $class->new; my $var = { a => 0.1 +0 }; my $clone = shared_clone $var; my $enc_var = $codec->encode_one($var); my $enc_clone = $codec->encode_one($clone); _bson_is( $enc_var, $enc_clone, "encoded top level hash and encoded top level shared hash" ); _bson_is( $codec->encode_one( { data => $var } ), $codec->encode_one( { data => $clone } ), "encoded hash and encoded shared hash" ); _bson_is( $codec->encode_one( { data => $var->{a} } ), $codec->encode_one( { data => $clone->{a} } ), "encoded double and encoded shared clone of double" ); threads->create( sub { _bson_is( $codec->encode_one($var), $codec->encode_one($clone), "(in thread) encoded top level hash and encoded top level shared hash" ); _bson_is( $codec->encode_one( { data => $var } ), $codec->encode_one( { data => $clone } ), "(in thread) encoded hash and encoded shared hash" ); _bson_is( $codec->encode_one( { data => $var->{a} } ), $codec->encode_one( { data => $clone->{a} } ), "(in thread) encoded double and encoded shared clone of double" ); } )->join; sub _bson_is { my ( $got, $exp, $label ) = @_; local $Test::Builder::Level = $Test::Builder::Level + 1; ok( $got eq $exp, $label ) or diag " Got:", _hexdump($got), "\nExpected:", _hexdump($exp), "\n"; } sub _hexdump { my $str = shift; $str =~ s{([^[:graph:]])}{sprintf("\\x{%02x}",ord($1))}ge; return $str; } done_testing(); MongoDB-v1.2.2/t/threads/cursor.t000644 000765 000024 00000005662 12651754051 017037 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# use strict; use warnings; use Config; use if $Config{usethreads}, 'threads'; use Test::More; BEGIN { plan skip_all => 'requires threads' unless $Config{usethreads} } use MongoDB; use Try::Tiny; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db/; skip_unless_mongod(); my $testdb = get_test_db(build_client()); my $col = $testdb->get_collection('tiger'); $col->drop; $col->insert_one({ foo => 9, bar => 3, shazbot => 1 }); $col->insert_one({ foo => 2, bar => 5 }); $col->insert_one({ foo => -3, bar => 4 }); $col->insert_one({ foo => 4, bar => 9, shazbot => 1 }); { my $cursor = $col->query; # force start of retrieval before creating threads $cursor->next; my $ret = threads->create(sub { $testdb->_client->reconnect; $cursor->next; })->join; is_deeply $ret, $cursor->next, 'cursors retain their position on thread cloning'; } { my $cursor = threads->create(sub { $testdb->_client->reconnect; my $cursor = $col->query; # force start of retrieval before returning the cursor $cursor->next; return $cursor; })->join; # cursor for comparison my $comp_cursor = $col->query; # seek as far ahead as we did within the thread $comp_cursor->next; is_deeply $cursor->next, $comp_cursor->next, 'joining back cursors works'; } { my $cursor = $col->query; # force start of retrieval before creating threads $cursor->next; my @threads = map { threads->create(sub { $testdb->_client->reconnect; $cursor->next; }); } 0 .. 9; my @ret = map { $_->join } @threads; is_deeply [@ret], [($cursor->next) x 10], 'cursors retain their position on thread cloning'; } { my @threads = map { threads->create(sub { $testdb->_client->reconnect; my $cursor = $col->query; # force start of retrieval before returning the cursor $cursor->next; return $cursor; }) } 0 .. 9; my @cursors = map { $_->join } @threads; # cursor for comparison my $comp_cursor = $col->query; # seek as far ahead as we did within the thread $comp_cursor->next; is_deeply [map { $_->next } @cursors], [($comp_cursor->next) x 10], 'joining back cursors works'; } done_testing(); MongoDB-v1.2.2/t/threads/oid.t000644 000765 000024 00000002167 12651754051 016272 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use Config; use if $Config{usethreads}, 'threads'; use Test::More; BEGIN { plan skip_all => 'requires threads' unless $Config{usethreads} } use MongoDB; use MongoDB::OID; my @threads = map { threads->create(sub { [map { MongoDB::OID->new } 0 .. 3] }); } 0 .. 9; my @oids = map { @{ $_->join } } @threads; my @inc = sort { $a <=> $b } map { unpack 'v', (pack('H*', $_) . '\0') } map { substr $_->value, 20 } @oids; my $prev = -1; for (@inc) { ok($prev < $_); $prev = $_; } done_testing(); MongoDB-v1.2.2/t/lib/MongoDBTest.pm000644 000765 000024 00000007520 12651754051 017127 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDBTest; use strict; use warnings; use Exporter 'import'; use MongoDB; use Test::More; use boolean; use version; our @EXPORT_OK = qw( build_client get_test_db server_version server_type clear_testdbs get_capped skip_unless_mongod ); my @testdbs; sub _check_local_rs { } # abstract building a connection sub build_client { my %args = @_; my $host = exists $args{host} ? delete $args{host} : exists $ENV{MONGOD} ? $ENV{MONGOD} : 'localhost'; # long query timeout may help spurious failures on heavily loaded CI machines return MongoDB->connect( $host, { ssl => $ENV{MONGO_SSL}, socket_timeout_ms => 60000, server_selection_timeout_ms => 2000, %args, } ); } sub get_test_db { my $conn = shift; my $testdb = 'testdb' . int(rand(2**31)); my $db = $conn->get_database($testdb) or die "Can't get database\n"; push(@testdbs, $db); return $db; } sub get_capped { my ($db, $name, %args) = @_; $name ||= 'capped' . int(rand(2**31)); $args{size} ||= 500_000; $db->run_command([ create => $name, capped => true, %args ]); return $db->get_collection($name); } sub skip_unless_mongod { eval { my $conn = build_client( server_selection_timeout_ms => 1000 ); my $topo = $conn->_topology; $topo->scan_all_servers; my $link; eval { $link = $topo->get_writable_link } or die "couldn't connect"; $conn->get_database("admin")->run_command( { serverStatus => 1 } ) or die "Database has auth enabled\n"; my $server = $link->server; if ( !$ENV{MONGOD} && $topo->type eq 'Single' && $server->type =~ /^RS/ ) { # direct connection to RS member on default, so add set name # via MONGOD environment variable for subsequent use $ENV{MONGOD} = "mongodb://localhost/?replicaSet=" . $server->set_name; } ## $conn->_topology->_dump; }; if ($@) { ( my $err = $@ ) =~ s/\n//g; if ( $err =~ /couldn't connect|connection refused/i ) { $err = "no mongod on " . ( $ENV{MONGOD} || "localhost:27017" ); $err .= ' and $ENV{MONGOD} not set' unless $ENV{MONGOD}; } plan skip_all => "$err"; } } sub server_version { my $conn = shift; my $build = $conn->send_admin_command( [ buildInfo => 1 ] )->output; my ($version_str) = $build->{version} =~ m{^([0-9.]+)}; return version->parse("v$version_str"); } sub server_type { my $conn = shift; my $server_type; # check database type my $ismaster = $conn->get_database('admin')->run_command({ismaster => 1}); if (exists $ismaster->{msg} && $ismaster->{msg} eq 'isdbgrid') { $server_type = 'Mongos'; } elsif ( $ismaster->{ismaster} && exists $ismaster->{setName} ) { $server_type = 'RSPrimary' } elsif ( ! exists $ismaster->{setName} && ! 
$ismaster->{isreplicaset} ) { $server_type = 'Standalone' } else { $server_type = 'Unknown'; } return $server_type; } sub clear_testdbs { @testdbs = () } # cleanup test dbs END { for my $db (@testdbs) { $db->drop; } } 1; MongoDB-v1.2.2/t/lib/TestBSON.pm000644 000765 000024 00000006525 12651754051 016407 0ustar00davidstaff000000 000000 use 5.008001; use strict; use warnings; package TestBSON; use Config; use Exporter 'import'; use Test::More; our @EXPORT = qw( BSON_DATETIME BSON_DOC BSON_DOUBLE BSON_INT32 BSON_INT64 BSON_NULL BSON_OID BSON_BOOL BSON_REGEXP BSON_STRING HAS_INT64 MAX_LONG MIN_LONG _cstring _datetime _dbref _doc _double _ename _hexdump _int32 _int64 _pack_bigint _regexp _string is_bin ); use constant { PERL58 => $] lt '5.010', HAS_INT64 => $Config{use64bitint} }; use constant { P_INT32 => PERL58 ? "l" : "l<", P_INT64 => PERL58 ? "q" : "q<", MAX_LONG => 2147483647, MIN_LONG => -2147483647, BSON_DOUBLE => "\x01", BSON_STRING => "\x02", BSON_DOC => "\x03", BSON_OID => "\x07", BSON_BOOL => "\x08", BSON_DATETIME => "\x09", BSON_NULL => "\x0A", BSON_REGEXP => "\x0B", BSON_INT32 => "\x10", BSON_INT64 => "\x12", }; sub _hexdump { my ($str) = @_; $str =~ s{([^[:graph:]])}{sprintf("\\x{%02x}",ord($1))}ge; return $str; } sub is_bin { my ( $got, $exp, $label ) = @_; $label ||= ''; $got = _hexdump($got); $exp = _hexdump($exp); local $Test::Builder::Level = $Test::Builder::Level + 1; is( $got, $exp, $label ); } sub _doc { my ($string) = shift; return pack( P_INT32, 5 + length($string) ) . $string . "\x00"; } sub _cstring { return $_[0] . "\x00" } BEGIN { *_ename = \&_cstring } sub _double { return pack( PERL58 ? "d" : "d<", shift ) } sub _int32 { return pack( P_INT32, shift ) } sub _int64 { my $val = shift; if ( ref($val) && eval { $val->isa("Math::BigInt") } ) { return _pack_bigint($val); } elsif (HAS_INT64) { return pack( P_INT64, $val ); } else { my $big = Math::BigInt->new( $val ); return _pack_bigint($big); } } sub _string { my ($string) = shift; return pack( P_INT32, 1 + length($string) ) . $string . "\x00"; } sub _datetime { my $dt = shift; if (HAS_INT64) { return pack( P_INT64, 1000 * $dt->epoch + $dt->millisecond ); } else { my $big = Math::BigInt->new( $dt->epoch ); $big->bmul(1000); $big->badd( $dt->millisecond ); return _pack_bigint($big); } } sub _regexp { my ( $pattern, $flags ) = @_; return _cstring($pattern) . _cstring($flags); } sub _dbref { my $dbref = shift; #<<< No perltidy return _doc( BSON_STRING . _ename('$ref') . _string($dbref->ref) . BSON_STRING . _ename('$id' ) . _string($dbref->id) . BSON_STRING . _ename('$db' ) . _string($dbref->db) ); #>>> } # pack to int64_t sub _pack_bigint { my $bi = shift; my $binary = $bi->as_bin; $binary =~ s{^-?0b}{}; $binary = "0"x(64-length($binary)) . $binary if length($binary) < 64; if ( $bi->sign eq '+' ) { return pack("b*", scalar reverse $binary); } else { my @lendian = split //, reverse $binary; my $saw_first_one = 0; for (@lendian) { if ( ! $saw_first_one ) { $saw_first_one = $_ == '1'; next; } else { tr[01][10]; } } return pack("b*", join("", @lendian)); } } 1; # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/t/deprecated/bulk.t000644 000765 000024 00000140523 12651754051 017121 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use strict; use warnings; use utf8; use Test::More 0.88; use Test::Fatal; use Test::Deep 0.111 qw/!blessed/; use Scalar::Util qw/refaddr/; use Tie::IxHash; use boolean; use MongoDB; use MongoDB::Error; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $coll = $testdb->get_collection("test_collection"); my $ismaster = $testdb->run_command( { ismaster => 1 } ); my $server_status = $testdb->run_command( { serverStatus => 1 } ); # Standalone in "--master" mode will have serverStatus.repl, but ordinary # standalone won't my $is_standalone = $conn->topology_type eq 'Single' && ! exists $server_status->{repl}; my $server_does_bulk = server_version($conn) >= v2.5.5; sub _bulk_write_result { return MongoDB::BulkWriteResult->new( acknowledged => 1, write_errors => [], write_concern_errors => [], modified_count => 0, inserted_count => 0, upserted_count => 0, matched_count => 0, deleted_count => 0, upserted => [], inserted => [], batch_count => 0, op_count => 0, @_, ); } subtest "constructors" => sub { my @constructors = qw( initialize_ordered_bulk_op initialize_unordered_bulk_op ordered_bulk unordered_bulk ); for my $method (@constructors) { my $bulk = $coll->$method; isa_ok( $bulk, 'MongoDB::BulkWrite', $method ); if ( $method =~ /unordered/ ) { ok( !$bulk->ordered, "ordered attr is false" ); } else { ok( $bulk->ordered, "ordered attr is true" ); } is( refaddr $bulk->collection, refaddr $coll, "MongoDB::BulkWrite holds ref to originating Collection" ); } }; note("QA-477 INSERT"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: insert errors" => sub { my $bulk = $coll->$method; # raise errors on wrong arg types my %bad_args = ( LIST => [ {}, {} ], EMPTY => [], ); for my $k ( sort keys %bad_args ) { like( exception { $bulk->insert( @{ $bad_args{$k} } ) }, qr/reference/, "insert( $k ) throws an error" ); } like( exception { $bulk->insert( 'foo' ) }, qr/reference/, "insert( 'foo' ) throws an error" ); like( exception { $bulk->insert( ['foo'] ) }, qr{reference}, "insert( ['foo'] ) throws an error", ); like( exception { $bulk->find( {} )->insert( {} ) }, qr/^Can't locate object method "insert"/, "find({})->insert({}) throws an error", ); is( exception { $bulk->insert( { '$key' => 1 } ) }, undef, "queuing insertion of document with \$key is allowed" ); my $err = exception { $bulk->execute }; isa_ok( $err, 'MongoDB::WriteError', "executing insertion with \$key" ); }; subtest "$method: successful insert" => sub { $coll->drop; my $bulk = $coll->$method; is( $coll->count, 0, "no docs in collection" ); $bulk->insert( { _id => 1 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag explain $err; is( $coll->count, 1, "one doc in collection" ); # test empty superclass isa_ok( $result, 'MongoDB::WriteResult', "result object" ); isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( inserted_count => 1, modified_count => 
( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, inserted => [ { index => 0, _id => 1 } ], ), "result object correct" ) or diag explain $result; }; subtest "$method insert without _id" => sub { $coll->drop; my $bulk = $coll->$method; is( $coll->count, 0, "no docs in collection" ); my $doc = {}; $bulk->insert( $doc ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag explain $err; is( $coll->count, 1, "one doc in collection" ); isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( inserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, inserted => [ { index => 0, _id => obj_isa("MongoDB::OID") } ], ), "result object correct" ); my $id = $coll->find_one()->{_id}; # OID PIDs are the low 16 bits is( $id->_get_pid, $$ & 0xffff, "generated ID has our PID" ) or diag sprintf( "got OID: %s but our PID is %x", $id->value, $$ ); }; } note("QA-477 FIND"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: find" => sub { my $bulk = $coll->$method; like( exception { $bulk->find }, qr/find requires a criteria document/, "find without doc selector throws exception" ); }; } note("QA-477 UPDATE and UPDATE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "update and update_one errors with $method" => sub { my $bulk; # raise errors on wrong arg types my %bad_args = ( SCALAR => ['foo'], EMPTY => [], # not in QA test ); for my $update (qw/update update_one/) { $bulk = $coll->$method; for my $k ( sort keys %bad_args ) { like( exception { $bulk->find( {} )->$update( @{ $bad_args{$k} } ) }, qr/argument to .* must be a single hashref, arrayref or Tie::IxHash/, "$update( $k ) throws an error" ); } $bulk = $coll->$method; like( exception { $bulk->$update( { '$set' => { x => 1 } } ) }, qr/^Can't locate object method "$update"/, "$update on bulk object (without find) throws an error", ); $bulk = $coll->$method; $bulk->find( {} )->$update( { key => 1 } ); like( exception { $bulk->execute }, qr/update document must only contain update operators/, "single non-op key in $update doc throws exception" ); $bulk = $coll->$method; $bulk->find( {} )->$update( [ key => 1, '$key' => 1 ]); like( exception { $bulk->execute }, qr/update document must only contain update operators/, "first non-op key in $update doc throws exception" ); } }; subtest "update all docs with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one($_) for map { { key => $_ } } 1, 2; my @docs = $coll->find( {} )->all; $bulk->find( {} )->update( { '$set' => { x => 3 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 2, modified_count => ( $server_does_bulk ? 2 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; if ( $server_does_bulk ) { ok( $result->has_modified_count, "newer server has_modified_count" ); } else { ok( ! 
$result->has_modified_count, "older server has_modified_count" ); } # check expected values $_->{x} = 3 for @docs; cmp_deeply( [ $coll->find( {} )->all ], \@docs, "all documents updated" ); }; subtest "update only matching docs with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one($_) for map { { key => $_ } } 1, 2; my @docs = $coll->find( {} )->all; $bulk->find( { key => 1 } )->update( { '$set' => { x => 1 } } ); $bulk->find( { key => 2 } )->update( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); is_deeply( $result, _bulk_write_result( matched_count => 2, modified_count => ( $server_does_bulk ? 2 : undef ), op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ); # check expected values $_->{x} = $_->{key} for @docs; cmp_deeply( [ $coll->find( {} )->all ], \@docs, "all documents updated" ); }; subtest "update_one with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one($_) for map { { key => $_ } } 1, 2; $bulk->find( {} )->update_one( { '$set' => { key => 3 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); is_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ); # check expected values is( $coll->find( { key => 3 } )->count, 1, "one document updated" ); }; } note("QA-477 REPLACE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "replace_one errors with $method" => sub { my $bulk; # raise errors on wrong arg types my %bad_args = ( SCALAR => ['foo'], EMPTY => [], # not in QA test ); $bulk = $coll->$method; for my $k ( sort keys %bad_args ) { like( exception { $bulk->find( {} )->replace_one( @{ $bad_args{$k} } ) }, qr/argument to .* must be a single hashref, arrayref or Tie::IxHash/, "replace_one( $k ) throws an error" ); } like( exception { $bulk->replace_one( { '$set' => { x => 1 } } ) }, qr/^Can't locate object method "replace_one"/, "replace_one on bulk object (without find) throws an error", ); $bulk = $coll->$method; $bulk->find( {} )->replace_one( { '$key' => 1 } ); like( exception { $bulk->execute }, qr/replacement document must not contain update operators/, "single op key in replace_one doc throws exception" ); $bulk = $coll->$method; $bulk->find( {} )->replace_one( [ '$key' => 1, key => 1 ] ); like( exception { $bulk->execute }, qr/replacement document must not contain update operators/, "mixed op and non-op key in replace_one doc throws exception" ); }; subtest "replace_one with $method" => sub { $coll->drop; my $bulk = $coll->$method; $coll->insert_one( { key => 1 } ) for 1 .. 2; $bulk->find( {} )->replace_one( { key => 3 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); is_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 
1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ); # check expected values my $distinct = [ $coll->distinct("key")->all ]; cmp_deeply( $distinct, bag( 1, 3 ), "only one document replaced" ); }; } note("QA-477 UPSERT-UPDATE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "upsert errors with $method" => sub { my $bulk = $coll->$method; like( exception { $bulk->upsert() }, qr/^Can't locate object method "upsert"/, "upsert on bulk object (without find) throws an error", ); like( exception { $bulk->find( {} )->upsert( {} ) }, qr/the upsert method takes no arguments/, "upsert( NONEMPTY ) throws an error" ); }; subtest "upsert-update insertion with $method" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->update( { '$set' => { x => 1 } } ); $bulk->find( { key => 2 } )->upsert->update( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 1, _id => ignore() } ], op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), key => 2, x => 2 } ], "upserted document correct" ); $bulk = $coll->$method; $bulk->find( { key => 1 } )->update( { '$set' => { x => 1 } } ); $bulk->find( { key => 2 } )->upsert->update( { '$set' => { x => 2 } } ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on second upsert-update" ) or diag explain $err; cmp_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag explain $result; }; subtest "upsert-update updates with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my @docs = $coll->find( {} )->all; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->upsert->update( { '$set' => { x => 1 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 2, modified_count => ( $server_does_bulk ? 2 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; $_->{x} = 1 for @docs; cmp_deeply( [ $coll->find( {} )->all ], \@docs, "all documents updated" ); }; subtest "upsert-update large doc with $method" => sub { $coll->drop; # QA test says big_string should be 16MiB - 31 long, but { _id => $oid, # key => 1, x => $big_string } exceeds 16MiB when BSON encoded unless # the bigstring is 16MiB - 41. This may be a peculiarity of Perl's # BSON type encoding. # # Using legacy API, the bigstring must be 16MiB - 97 for some reason. my $big_string = "a" x ( 16 * 1024 * 1024 - $server_does_bulk ? 
41 : 97 ); my $bulk = $coll->$method; $bulk->find( { key => "1" } )->upsert->update( { '$set' => { x => $big_string } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 0, _id => ignore() } ], op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; }; } note("QA-477 UPSERT-UPDATE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "upsert-update_one insertion with $method" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->update_one( { '$set' => { x => 1 } } ); # not upsert $bulk->find( { key => 2 } )->upsert->update_one( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update_one" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 1, _id => ignore() } ], op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), key => 2, x => 2 } ], "upserted document correct" ); }; subtest "upsert-update_one (no insert) with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my @docs = $coll->find( {} )->all; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->upsert->update_one( { '$set' => { x => 2 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-update_one" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; # add expected key to one document only $docs[0]{x} = 2; my @got = $coll->find( {} )->all; cmp_deeply( \@got, bag(@docs), "updated document correct" ) or diag explain \@got; }; } note("QA-477 UPSERT-REPLACE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "upsert-replace_one insertion with $method" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->replace_one( { x => 1 } ); # not upsert $bulk->find( { key => 2 } )->upsert->replace_one( { x => 2 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-replace_one" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( upserted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), upserted => [ { index => 1, _id => ignore() } ], op_count => 2, batch_count => $server_does_bulk ? 1 : 2, ), "result object correct" ) or diag explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), x => 2 } ], "upserted document correct" ); }; subtest "upsert-replace_one (no insert) with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 
2; my @docs = $coll->find( {} )->all; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->upsert->replace_one( { x => 2 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on upsert-replace_one" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; # change one expected doc only $docs[0]{x} = 2; delete $docs[0]{key}; my @got = $coll->find( {} )->all; cmp_deeply( \@got, bag(@docs), "updated document correct" ) or diag explain \@got; }; } note("QA-477 REMOVE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "remove errors with $method" => sub { my $bulk = $coll->$method; like( exception { $bulk->remove() }, qr/^Can't locate object method "remove"/, "remove on bulk object (without find) throws an error", ); }; subtest "remove all with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my $bulk = $coll->$method; $bulk->find( {} )->remove; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on remove" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( deleted_count => 2, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; is( $coll->count, 0, "all documents removed" ); }; subtest "remove matching with $method" => sub { $coll->drop; $coll->insert_one( { key => $_ } ) for 1 .. 2; my $bulk = $coll->$method; $bulk->find( { key => 1 } )->remove; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on remove" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( deleted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; cmp_deeply( [ $coll->find( {} )->all ], [ { _id => ignore(), key => 2 } ], "correct object remains" ); }; } note("QA-477 REMOVE_ONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "remove_one errors with $method" => sub { my $bulk = $coll->$method; like( exception { $bulk->remove_one() }, qr/^Can't locate object method "remove_one"/, "remove_one on bulk object (without find) throws an error", ); }; subtest "remove_one with $method" => sub { $coll->drop; $coll->insert_one( { key => 1 } ) for 1 .. 2; my $bulk = $coll->$method; $bulk->find( {} )->remove_one; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on remove_one" ) or diag explain $err; isa_ok( $result, 'MongoDB::BulkWriteResult', "result object" ); cmp_deeply( $result, _bulk_write_result( deleted_count => 1, modified_count => ( $server_does_bulk ? 0 : undef ), op_count => 1, batch_count => 1, ), "result object correct" ) or diag explain $result; is( $coll->count, 1, "only one doc removed" ); }; } note("QA-477 MIXED OPERATIONS, UNORDERED"); subtest "mixed operations, unordered" => sub { $coll->drop; $coll->insert_one( { a => $_ } ) for 1 .. 
2; my $bulk = $coll->initialize_unordered_bulk_op; $bulk->find( { a => 1 } )->update( { '$set' => { b => 1 } } ); $bulk->find( { a => 2 } )->remove; $bulk->insert( { a => 3 } ); $bulk->find( { a => 4 } )->upsert->update_one( { '$set' => { b => 4 } } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on mixed operations" ) or diag explain $err; cmp_deeply( $result, _bulk_write_result( inserted_count => 1, matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), upserted_count => 1, deleted_count => 1, op_count => 4, batch_count => $server_does_bulk ? 3 : 4, # XXX QA Test says index should be 3, but with unordered, that's # not guaranteed, so we ignore the value upserted => [ { index => ignore(), _id => obj_isa("MongoDB::OID") } ], inserted => [ { index => ignore(), _id => obj_isa("MongoDB::OID") } ], ), "result object correct" ) or diag explain $result; }; note("QA-477 MIXED OPERATIONS, ORDERED"); subtest "mixed operations, ordered" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->insert( { a => 1 } ); $bulk->find( { a => 1 } )->update_one( { '$set' => { b => 1 } } ); $bulk->find( { a => 2 } )->upsert->update_one( { '$set' => { b => 2 } } ); $bulk->insert( { a => 3 } ); $bulk->find( { a => 3 } )->remove; my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on mixed operations" ) or diag explain $err; cmp_deeply( $result, _bulk_write_result( inserted_count => 2, upserted_count => 1, matched_count => 1, modified_count => ( $server_does_bulk ? 1 : undef ), deleted_count => 1, op_count => 5, batch_count => $server_does_bulk ? 4 : 5, upserted => [ { index => 2, _id => obj_isa("MongoDB::OID") } ], inserted => [ { index => 0, _id => obj_isa("MongoDB::OID") }, { index => 3, _id => obj_isa("MongoDB::OID") }, ], ), "result object correct" ) or diag explain $result; }; note("QA-477 UNORDERED BATCH WITH ERRORS"); subtest "unordered batch with errors" => sub { $coll->drop; $coll->indexes->create_one( [ a => 1 ], { unique => 1 } ); my $bulk = $coll->initialize_unordered_bulk_op; $bulk->insert( { b => 1, a => 1 } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); $bulk->find( { b => 3 } )->upsert->update_one( { '$set' => { a => 2 } } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); $bulk->insert( { b => 4, a => 3 } ); $bulk->insert( { b => 5, a => 1 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag explain $err; my $details = $err->result; # Check if all ops ran in two batches (unless we're on a legacy server) is( $details->op_count, 6, "op_count" ); is( $details->batch_count, $server_does_bulk ? 2 : 6, "batch_count" ); # XXX QA 477 doesn't cover *both* possible orders. Either the inserts go # first or the upsert/update_ones goes first and different result states # are possible for each case. if ( $details->inserted_count == 2 ) { note("inserts went first"); is( $details->inserted_count, 2, "inserted_count" ); is( $details->upserted_count, 1, "upserted_count" ); is( $details->deleted_count, 0, "deleted_count" ); is( $details->matched_count, 0, "matched_count" ); is( $details->modified_count, ( $server_does_bulk ? 
0 : undef ), "modified_count" ); is( $details->count_write_errors, 3, "writeError count" ) or diag explain $details; cmp_deeply( $details->upserted, [ { index => 4, _id => obj_isa("MongoDB::OID") }, ], "upsert list" ); } else { note("updates went first"); is( $details->inserted_count, 1, "inserted_count" ); is( $details->upserted_count, 2, "upserted_count" ); is( $details->deleted_count, 0, "deleted_count" ); is( $details->matched_count, 1, "matched_count" ); is( $details->modified_count, ( $server_does_bulk ? 0 : undef ), "modified_count" ); is( $details->count_write_errors, 2, "writeError count" ) or diag explain $details; cmp_deeply( $details->upserted, [ { index => 0, _id => obj_isa("MongoDB::OID") }, { index => 1, _id => obj_isa("MongoDB::OID") }, ], "upsert list" ); } my $distinct = [ $coll->distinct("a")->all ]; cmp_deeply( $distinct, bag( 1 .. 3 ), "distinct keys" ); }; note("QA-477 ORDERED BATCH WITH ERRORS"); subtest "ordered batch with errors" => sub { $coll->drop; $coll->indexes->create_one( [ a => 1 ], { unique => 1 } ); my $bulk = $coll->initialize_ordered_bulk_op; $bulk->insert( { b => 1, a => 1 } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); $bulk->find( { b => 3 } )->upsert->update_one( { '$set' => { a => 2 } } ); $bulk->find( { b => 2 } )->upsert->update_one( { '$set' => { a => 1 } } ); # fail $bulk->insert( { b => 4, a => 3 } ); $bulk->insert( { b => 5, a => 1 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ); my $details = $err->result; is( $details->upserted_count, 0, "upserted_count" ); is( $details->matched_count, 0, "matched_count" ); is( $details->deleted_count, 0, "deleted_count" ); is( $details->modified_count, ( $server_does_bulk ? 0 : undef ), "modified_count" ); is( $details->inserted_count, 1, "inserted_count" ); # on 2.6+, 4 ops run in two batches; but on legacy, we get an error on # the first update_one, so we only have two ops, still in two batches is( $details->op_count, $server_does_bulk ? 4 : 2, "op_count" ); is( $details->batch_count, 2, "op_count" ); is( $details->count_write_errors, 1, "writeError count" ); is( $details->write_errors->[0]{code}, 11000, "error code" ); is( $details->write_errors->[0]{index}, 1, "error index" ); ok( length $details->write_errors->[0]{errmsg}, "error string" ); cmp_deeply( $details->write_errors->[0]{op}, { q => Tie::IxHash->new( b => 2 ), u => obj_isa( $server_does_bulk ? 'MongoDB::BSON::_EncodedDoc' : 'Tie::IxHash' ), multi => false, upsert => true, }, "error op" ) or diag explain $details->write_errors->[0]{op}; is( $coll->count, 1, "subsequent inserts did not run" ); }; note("QA-477 BATCH SPLITTING: maxBsonObjectSize"); subtest "ordered batch split on size" => sub { local $TODO = "pending topology monitoring"; $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; my $big_string = "a" x ( 4 * 1024 * 1024 ); $bulk->insert( { _id => $_, a => $big_string } ) for 0 .. 5; $bulk->insert( { _id => 0 } ); # will fail $bulk->insert( { _id => 100 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag "CAUGHT ERROR: $err"; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 6, "inserted_count" ); cmp_deeply( $details->inserted_ids, { map { $_ => $_ } 0 .. 
5 }, "inserted_ids correct" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ) or diag explain $errdoc; is( $errdoc->{index}, 6, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 6, "collection count" ); }; subtest "unordered batch split on size" => sub { local $TODO = "pending topology monitoring"; $coll->drop; my $bulk = $coll->initialize_unordered_bulk_op; my $big_string = "a" x ( 4 * 1024 * 1024 ); $bulk->insert( { _id => $_, a => $big_string } ) for 0 .. 5; $bulk->insert( { _id => 0 } ); # will fail $bulk->insert( { _id => 100 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag $err; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 7, "inserted_count" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ) or diag explain $errdoc; is( $errdoc->{index}, 6, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 7, "collection count" ); }; note("QA-477 BATCH SPLITTING: maxWriteBatchSize"); subtest "ordered batch split on number of ops" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->insert( { _id => $_ } ) for 0 .. 1999; $bulk->insert( { _id => 0 } ); # will fail $bulk->insert( { _id => 10000 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag $err; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 2000, "inserted_count" ); cmp_deeply( $details->inserted_ids, { map { $_ => $_ } 0 .. 1999 }, "inserted_ids correct" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ); is( $errdoc->{index}, 2000, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 2000, "collection count" ); }; subtest "unordered batch split on number of ops" => sub { $coll->drop; my $bulk = $coll->initialize_unordered_bulk_op; $bulk->insert( { _id => $_ } ) for 0 .. 
1999; $bulk->insert( { _id => 0 } ); # will fail $bulk->insert( { _id => 10000 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; isa_ok( $err, 'MongoDB::DuplicateKeyError', 'caught error' ) or diag $err; my $details = $err->result; my $errdoc = $details->write_errors->[0]; is( $details->inserted_count, 2001, "inserted_count" ); is( $details->count_write_errors, 1, "count_write_errors" ); is( $errdoc->{code}, 11000, "error code" ); is( $errdoc->{index}, 2000, "error index" ); ok( length( $errdoc->{errmsg} ), "error message" ); is( $coll->count, 2001, "collection count" ); }; note("QA-477 RE-RUNNING A BATCH"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: rerun a bulk operation" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->insert( {} ); my $err = exception { $bulk->execute }; is( $err, undef, "first execute succeeds" ); $err = exception { $bulk->execute }; isa_ok( $err, 'MongoDB::Error', "re-running a bulk op throws exception" ); like( $err->message, qr/bulk op execute called more than once/, "error message" ) or diag explain $err; }; } note("QA-477 EMPTY BATCH"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: empty bulk operation" => sub { my $bulk = $coll->$method; my $err = exception { $bulk->execute }; isa_ok( $err, 'MongoDB::Error', "empty bulk op throws exception" ); like( $err->message, qr/no bulk ops to execute/, "error message" ) or diag explain $err; }; } note("QA-477 W>1 AGAINST STANDALONE"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: w > 1 against standalone (explicit)" => sub { plan skip_all => 'needs a standalone server' unless $is_standalone; $coll->drop; my $bulk = $coll->$method; $bulk->insert( {} ); my $err = exception { $bulk->execute( { w => 2 } ) }; isa_ok( $err, 'MongoDB::DatabaseError', "executing write concern w > 1 throws error" ); like( $err->message, qr/replica/, "error message mentions replication" ); }; subtest "$method: w > 1 against standalone (implicit)" => sub { plan skip_all => 'needs a standalone server' unless $is_standalone; $coll->drop; my $coll2 = $coll->clone( write_concern => { w => 2 } ); my $bulk = $coll2->$method; $bulk->insert( {} ); my $err = exception { $bulk->execute() }; isa_ok( $err, 'MongoDB::DatabaseError', "executing write concern w > 1 throws error" ); like( $err->message, qr/replica/, "error message mentions replication" ); }; } note("QA-477 WTIMEOUT PLUS DUPLICATE KEY ERROR"); subtest "initialize_unordered_bulk_op: wtimeout plus duplicate keys" => sub { plan skip_all => 'needs a replica set' unless $ismaster->{hosts}; # asking for w more than N hosts will trigger the error we need my $W = @{ $ismaster->{hosts} } + 1; $coll->drop; my $bulk = $coll->initialize_unordered_bulk_op; $bulk->insert( { _id => 1 } ); $bulk->insert( { _id => 1 } ); my $err = exception { $bulk->execute( { w => $W, wtimeout => 100 } ) }; isa_ok( $err, 'MongoDB::DuplicateKeyError', "executing throws error" ); my $details = $err->result; is( $details->inserted_count, 1, "inserted_count == 1" ); is( $details->count_write_errors, 1, "one write error" ); is( $details->count_write_concern_errors, 1, "one write concern error" ); }; note("QA-477 W = 0"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: w = 0" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->insert( { _id => 1 } ); $bulk->insert( { _id => 1 } ); $bulk->insert( { _id 
=> 2 } ); # ensure success after failure my ( $result, $err ); $err = exception { $result = $bulk->execute( { w => 0 } ) }; is( $err, undef, "execute with w = 0 doesn't throw error" ) or diag explain $err; my $expect = $method eq 'initialize_ordered_bulk_op' ? 1 : 2; is( $coll->count, $expect, "document count ($expect)" ); }; } # This test was not included in the QA-477 test plan; it ensures that # write concerns are applied only after all operations finish note("WRITE CONCERN ERRORS"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: write concern errors" => sub { plan skip_all => 'needs a replica set' unless $ismaster->{hosts}; # asking for w more than N hosts will trigger the error we need my $W = @{ $ismaster->{hosts} } + 1; $coll->drop; my $bulk = $coll->$method; $bulk->insert( { _id => 1 } ); $bulk->insert( { _id => 2 } ); $bulk->find( { id => 3 } )->upsert->update( { '$set' => { x => 2 } } ); $bulk->insert( { _id => 4 } ); my $err = exception { $bulk->execute( { w => $W, wtimeout => 100 } ) }; isa_ok( $err, 'MongoDB::WriteConcernError', "executing throws error" ); my $details = $err->result; is( $details->inserted_count, 3, "inserted_count" ); is( $details->upserted_count, 1, "upserted_count" ); is( $details->count_write_errors, 0, "no write errors" ); ok( $details->count_write_concern_errors, "got write concern errors" ); }; } # Not in QA-477 -- Many methods take hashrefs, arrayrefs or Tie::IxHash # objects. The following tests check that arrayrefs and Tie::IxHash are legal # arguments to find, insert, update, update_one and replace_one. The # remove and remove_one methods take no arguments and don't need tests note("ARRAY REFS"); # Not in QA-477 -- this is perl driver specific subtest "insert (ARRAY)" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; is( $coll->count, 0, "no docs in collection" ); $bulk->insert( [ _id => 1 ] ); $bulk->insert( [] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag explain $err; is( $coll->count, 2, "doc count" ); }; subtest "update (ARRAY)" => sub { $coll->drop; $coll->insert_one( { _id => 1 } ); my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( [] )->update( [ '$set' => { x => 2 } ] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; is( $coll->find_one( {} )->{x}, 2, "document updated" ); }; subtest "update_one (ARRAY)" => sub { $coll->drop; $coll->insert_one( { _id => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( [] )->update_one( [ '$set' => { x => 2 } ] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update_one" ) or diag explain $err; is( $coll->count( { x => 2 } ), 1, "only one doc updated" ); }; subtest "replace_one (ARRAY)" => sub { $coll->drop; $coll->insert_one( { key => $_ } ) for 1 .. 
2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( [] )->replace_one( [ key => 3 ] ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on replace" ) or diag explain $err; is( $coll->count( { key => 3 } ), 1, "only one doc replaced" ); }; note("Tie::IxHash"); subtest "insert (Tie::IxHash)" => sub { $coll->drop; my $bulk = $coll->initialize_ordered_bulk_op; is( $coll->count, 0, "no docs in collection" ); $bulk->insert( Tie::IxHash->new( _id => 1 ) ); my $doc = Tie::IxHash->new(); $bulk->insert( $doc ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on insert" ) or diag explain $err; is( $coll->count, 2, "doc count" ); }; subtest "update (Tie::IxHash)" => sub { $coll->drop; $coll->insert_one( { _id => 1 } ); my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( Tie::IxHash->new() ) ->update( Tie::IxHash->new( '$set' => { x => 2 } ) ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; is( $coll->find_one( {} )->{x}, 2, "document updated" ); }; subtest "update_one (Tie::IxHash)" => sub { $coll->drop; $coll->insert_one( { _id => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( Tie::IxHash->new() ) ->update_one( Tie::IxHash->new( '$set' => { x => 2 } ) ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on update" ) or diag explain $err; is( $coll->count( { x => 2 } ), 1, "only one doc updated" ); }; subtest "replace_one (Tie::IxHash)" => sub { $coll->drop; $coll->insert_one( { key => $_ } ) for 1 .. 2; my $bulk = $coll->initialize_ordered_bulk_op; $bulk->find( Tie::IxHash->new() )->replace_one( Tie::IxHash->new( key => 3 ) ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "no error on replace" ) or diag explain $err; is( $coll->count( { key => 3 } ), 1, "only one doc replaced" ); }; # not in QA-477 note("W = 0 IGNORES ERRORS"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: w = 0" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->insert( { _id => 1 } ); $bulk->insert( { _id => 3, '$bad' => 1 } ); $bulk->insert( { _id => 4 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute( { w => 0 } ) }; is( $err, undef, "execute with w = 0 doesn't throw error" ) or diag explain $err; my $expect = $method eq 'initialize_ordered_bulk_op' ? 1 : 2; is( $coll->count, $expect, "document count ($expect)" ); }; } # DRIVERS-151 Handle edge case for pre-2.6 when upserted _id not returned note("UPSERT _ID NOT RETURNED"); for my $method (qw/initialize_ordered_bulk_op initialize_unordered_bulk_op/) { subtest "$method: upsert with non OID _ids" => sub { $coll->drop; my $bulk = $coll->$method; $bulk->find( { _id => 0 } )->upsert->update_one( { '$set' => { a => 0 } } ); $bulk->find( { a => 1 } )->upsert->replace_one( { _id => 1 } ); # 2.6 doesn't allow changing _id, but previously that's OK, so we try it both ways # to ensure we use the right _id from the replace doc on older servers $bulk->find( { _id => $server_does_bulk ? 2 : 3 } )->upsert->replace_one( { _id => 2 } ); my ( $result, $err ); $err = exception { $result = $bulk->execute }; is( $err, undef, "execute doesn't throw error" ) or diag explain $err; cmp_deeply( $result, _bulk_write_result( upserted_count => 3, modified_count => ( $server_does_bulk ? 
0 : undef ), upserted => [ { index => 0, _id => 0 }, { index => 1, _id => 1 }, { index => 2, _id => 2 }, ], op_count => 3, batch_count => $server_does_bulk ? 1 : 3, ), "result object correct" ) or diag explain $result; }; } subtest "replace with custom op_char" => sub { $coll->drop; my $coll2 = $coll->with_codec( op_char => '-' ); my $bulk = $coll2->ordered_bulk; $bulk->insert( { _id => 0 } ); $bulk->find( { _id => 0 } )->replace_one( { '-set' => { key => 1} } ); like( exception { $bulk->execute }, qr/replacement document must not contain update operators/, "single non-op key in update doc throws exception" ); }; # XXX QA-477 tests not covered herein: # MIXED OPERATIONS, AUTH # FAILOVER WITH MIXED VERSIONS done_testing; MongoDB-v1.2.2/t/deprecated/collection.t000644 000765 000024 00000044202 12651754051 020314 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # Adapted from t/collection.t to keep testing deprecated APIs use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Test::Deep qw/!blessed/; use utf8; use Tie::IxHash; use Encode qw(encode decode); use MongoDB::Timestamp; # needed if db is being run as master use MongoDB::Error; use MongoDB::Code; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection('test_collection'); my $id; my $obj; my $ok; my $cursor; my $tied; # get_collection subtest get_collection => sub { my ( $db, $c ); ok( $c = $testdb->get_collection('foo'), "get_collection(NAME)" ); isa_ok( $c, 'MongoDB::Collection' ); is( $c->name, 'foo', 'get name' ); my $wc = MongoDB::WriteConcern->new( w => 2 ); ok( $c = $testdb->get_collection( 'foo', { write_concern => $wc } ), "get_collection(NAME, OPTION) (wc)" ); is( $c->write_concern->w, 2, "coll-level write concern as expected" ); ok( $c = $testdb->get_collection( 'foo', { write_concern => { w => 3 } } ), "get_collection(NAME, OPTION) (wc)" ); is( $c->write_concern->w, 3, "coll-level write concern coerces" ); my $rp = MongoDB::ReadPreference->new( mode => 'secondary' ); ok( $c = $testdb->get_collection( 'foo', { read_preference => $rp } ), "get_collection(NAME, OPTION) (rp)" ); is( $c->read_preference->mode, 'secondary', "coll-level read pref as expected" ); ok( $c = $testdb->get_collection( 'foo', { read_preference => { mode => 'nearest' } } ), "get_collection(NAME, OPTION) (rp)" ); is( $c->read_preference->mode, 'nearest', "coll-level read pref coerces" ); }; subtest get_namespace => sub { my $dbname = $testdb->name; my ( $db, $c ); ok( $c = $conn->get_namespace("$dbname.foo"), "get_namespace(NAME)" ); isa_ok( $c, 'MongoDB::Collection' ); is( $c->name, 'foo', 'get name' ); my $wc = MongoDB::WriteConcern->new( w => 2 ); ok( $c = $conn->get_namespace( "$dbname.foo", { write_concern => $wc } ), 
"get_collection(NAME, OPTION) (wc)" ); is( $c->write_concern->w, 2, "coll-level write concern as expected" ); ok( $c = $conn->ns("$dbname.foo"), "ns(NAME)" ); isa_ok( $c, 'MongoDB::Collection' ); is( $c->name, 'foo', 'get name' ); }; # very small insert { $id = $coll->insert({_id => 1}); is($id, 1); my $tiny = $coll->find_one; is($tiny->{'_id'}, 1); $coll->remove; $id = $coll->insert({}); isa_ok($id, 'MongoDB::OID'); $tiny = $coll->find_one; is($tiny->{'_id'}, $id); $coll->remove; } # insert { my $doc = { just => 'another', perl => 'hacker' }; my $orig = { %$doc }; $id = $coll->insert($doc); is($coll->count, 1, 'count'); cmp_deeply( $doc, $orig, "doc not modified by insert" ); $coll->update({ _id => $id }, { just => "an\xE4oth\0er", mongo => 'hacker', with => { a => 'reference' }, and => [qw/an array reference/], }); is($coll->count, 1); } # rename { my $newcoll = $coll->rename('test_collection.rename'); is($newcoll->name, 'test_collection.rename', 'rename'); is($coll->count, 0, 'rename'); is($newcoll->count, 1, 'rename'); $coll = $newcoll->rename('test_collection'); is($coll->name, 'test_collection', 'rename'); is($coll->count, 1, 'rename'); is($newcoll->count, 0, 'rename'); } # count { is($coll->count({ mongo => 'programmer' }), 0, 'count = 0'); is($coll->count({ mongo => 'hacker' }), 1, 'count = 1'); is($coll->count({ 'with.a' => 'reference' }), 1, 'inner obj count'); } # find_one { $obj = $coll->find_one; is($obj->{mongo} => 'hacker', 'find_one'); is(ref $obj->{with}, 'HASH', 'find_one type'); is($obj->{with}->{a}, 'reference'); is(ref $obj->{and}, 'ARRAY'); is_deeply($obj->{and}, [qw/an array reference/]); ok(!exists $obj->{perl}); is($obj->{just}, "an\xE4oth\0er"); } # validate and remove { is( exception { $coll->validate }, undef, 'validate' ); $coll->remove($obj); is($coll->count, 0, 'remove() deleted everything (won\'t work on an old version of Mongo)'); } { $coll->drop; $coll->insert({x => 1, y => 2, z => 3, w => 4}); $cursor = $coll->query->fields({'y' => 1}); $obj = $cursor->next; is(exists $obj->{'y'}, 1, 'y exists'); is(exists $obj->{'_id'}, 1, '_id exists'); is(exists $obj->{'x'}, '', 'x doesn\'t exist'); is(exists $obj->{'z'}, '', 'z doesn\'t exist'); is(exists $obj->{'w'}, '', 'w doesn\'t exist'); } # batch insert { $coll->drop; my $ids = $coll->batch_insert([{'x' => 1}, {'x' => 2}, {'x' => 3}]); is($coll->count, 3, 'batch_insert'); } # sort { $cursor = $coll->query->sort({'x' => 1}); my $i = 1; while ($obj = $cursor->next) { is($obj->{'x'}, $i++); } } # find_one fields { $coll->drop; $coll->insert({'x' => 1, 'y' => 2, 'z' => 3}); my $yer = $coll->find_one({}, {'y' => 1}); cmp_deeply( $yer, { _id => ignore(), y => 2 }, "projection fields correct" ); $coll->drop; $coll->batch_insert([{"x" => 1}, {"x" => 1}, {"x" => 1}]); $coll->remove( { "x" => 1 }, { just_one => 1 } ); is ($coll->count, 2, 'remove just one'); } # tie::ixhash for update/insert { $coll->drop; my $hash = Tie::IxHash->new("f" => 1, "s" => 2, "fo" => 4, "t" => 3); $id = $coll->insert($hash); isa_ok($id, 'MongoDB::OID'); $tied = $coll->find_one; is($tied->{'_id'}."", "$id"); is($tied->{'f'}, 1); is($tied->{'s'}, 2); is($tied->{'fo'}, 4); is($tied->{'t'}, 3); my $criteria = Tie::IxHash->new("_id" => $id); $hash->Push("something" => "else"); $coll->update($criteria, $hash); $tied = $coll->find_one; is($tied->{'f'}, 1); is($tied->{'something'}, 'else'); } # () update/insert { $coll->drop; my @h = ("f" => 1, "s" => 2, "fo" => 4, "t" => 3); $id = $coll->insert(\@h); isa_ok($id, 'MongoDB::OID'); $tied = 
$coll->find_one; is($tied->{'_id'}."", "$id"); is($tied->{'f'}, 1); is($tied->{'s'}, 2); is($tied->{'fo'}, 4); is($tied->{'t'}, 3); my @criteria = ("_id" => $id); my @newobj = ('$inc' => {"f" => 1}); $coll->update(\@criteria, \@newobj); $tied = $coll->find_one; is($tied->{'f'}, 2); } # multiple update { $coll->drop; $coll->insert({"x" => 1}); $coll->insert({"x" => 1}); $coll->insert({"x" => 2, "y" => 3}); $coll->insert({"x" => 2, "y" => 4}); $coll->update({"x" => 1}, {'$set' => {'x' => "hi"}}); # make sure one is set, one is not ok($coll->find_one({"x" => "hi"})); ok($coll->find_one({"x" => 1})); my $res = $coll->update({"x" => 2}, {'$set' => {'x' => 4}}, {'multiple' => 1}); is($coll->count({"x" => 4}), 2) or diag explain $res; $cursor = $coll->query({"x" => 4})->sort({"y" => 1}); $obj = $cursor->next(); is($obj->{'y'}, 3); $obj = $cursor->next(); is($obj->{'y'}, 4); } # check with upsert if there are matches subtest "multiple update" => sub { plan skip_all => "multiple update won't work with db version $server_version" unless $server_version >= v1.3.0; $coll->update({"x" => 4}, {'$set' => {"x" => 3}}, {'multiple' => 1, 'upsert' => 1}); is($coll->count({"x" => 3}), 2, 'count'); $cursor = $coll->query({"x" => 3})->sort({"y" => 1}); $obj = $cursor->next(); is($obj->{'y'}, 3, 'y == 3'); $obj = $cursor->next(); is($obj->{'y'}, 4, 'y == 4'); # check with upsert if there are no matches # also check that 'multi' is allowed my $res = $coll->update({"x" => 15}, {'$set' => {"z" => 4}}, {'upsert' => 1, 'multi' => 1}); ok( $res->{ok}, "update succeeded" ); is( $res->{n}, 0, "update match count" ); isa_ok( $res->{upserted}, "MongoDB::OID" ); ok($coll->find_one({"z" => 4})); # check that 'multi' and 'multiple' conflicting is an error like( exception { $coll->update( { "x" => 15 }, { '$set' => { "z" => 4 } }, { 'multi' => 1, 'multiple' => undef } ) }, qr/can't use conflicting values/, "multi and multiple conflicting is an error" ); is($coll->count(), 5); }; # safe insert { $coll->drop; $coll->insert({_id => 1}, {safe => 1}); my $err = exception { $coll->insert({_id => 1}, {safe => 1}) }; ok( $err, "got error" ); isa_ok( $err, 'MongoDB::DatabaseError', "duplicate insert error" ); like( $err->message, qr/duplicate key/, 'error was duplicate key exception') } # safe update { $coll->drop; $coll->ensure_index({name => 1}, {unique => 1}); $coll->insert( {name => 'Alice'} ); $coll->insert( {name => 'Bob'} ); my $err = exception { $coll->update( { name => 'Alice'}, { '$set' => { name => 'Bob' } }, { safe => 0 } ) }; is($err, undef, "bad update with safe => 0: no error"); for my $h ( {}, { safe => 1 } ) { my $res; $err = exception { $res = $coll->update( { name => 'Alice'}, { '$set' => { name => 'Bob' } }, $h ) }; my $case = $h ? 
"explicit" : "default"; ok( $err, "bad update with $case safe gives error" ) or diag explain $res; like( $err->message, qr/duplicate key/, 'error was duplicate key exception'); ok( my $ok = $coll->update( { name => 'Alice' }, { '$set' => { age => 23 } }, $h ), "did legal update" ); isa_ok( $ok, "HASH" ); is( $ok->{n}, 1, "n set to 1" ); ok( $ok->{ok}, "legal update with $case safe had no error" ); } } # save { $coll->drop; my $x = {"hello" => "world"}; $coll->save($x); is($coll->count, 1, 'save'); my $y = $coll->find_one; $y->{"hello"} = 3; $coll->save($y); is($coll->count, 1); my $z = $coll->find_one; is($z->{"hello"}, 3); } # find { $coll->drop; $coll->insert({x => 1}); $coll->insert({x => 4}); $coll->insert({x => 5}); $coll->insert({x => 1, y => 2}); $cursor = $coll->find({x=>4}); my $result = $cursor->next; is($result->{'x'}, 4, 'find'); $cursor = $coll->find({x=>{'$gt' => 1}})->sort({x => -1}); $result = $cursor->next; is($result->{'x'}, 5); $result = $cursor->next; is($result->{'x'}, 4); $cursor = $coll->find({y=>2})->fields({y => 1, _id => 0}); $result = $cursor->next; is(keys %$result, 1, 'find fields'); } # findAndModify { $coll->insert( { name => "find_and_modify_test", value => 42 } ); $coll->find_and_modify( { query => { name => "find_and_modify_test" }, update => { '$set' => { value => 43 } } } ); my $doc = $coll->find_one( { name => "find_and_modify_test" } ); is( $doc->{value}, 43 ); $coll->drop; $coll->insert( { name => "find_and_modify_test", value => 46 } ); my $new = $coll->find_and_modify( { query => { name => "find_and_modify_test" }, update => { '$set' => { value => 57 } }, new => 1 } ); is ( $new->{value}, 57 ); $coll->drop; my $nothing = $coll->find_and_modify( { query => { name => "does not exist" }, update => { name => "barf" } } ); is ( $nothing, undef ); $coll->drop; } # aggregate subtest "aggregation" => sub { plan skip_all => "Aggregation framework unsupported on MongoDB $server_version" unless $server_version >= v2.2.0; $coll->batch_insert( [ { wanted => 1, score => 56 }, { wanted => 1, score => 72 }, { wanted => 1, score => 96 }, { wanted => 1, score => 32 }, { wanted => 1, score => 61 }, { wanted => 1, score => 33 }, { wanted => 0, score => 1000 } ] ); my $cursor = $coll->aggregate( [ { '$match' => { wanted => 1 } }, { '$group' => { _id => 1, 'avgScore' => { '$avg' => '$score' } } } ] ); isa_ok( $cursor, 'MongoDB::QueryResult' ); my $res = [ $cursor->all ]; ok $res->[0]{avgScore} < 59; ok $res->[0]{avgScore} > 57; if ( $server_version < v2.5.0 ) { is( exception { $coll->aggregate( [ {'$match' => { count => {'$gt' => 0} } } ], { cursor => {} } ) }, undef, "asking for cursor when unsupported does not throw error" ); } }; # aggregation cursors subtest "aggregation cursors" => sub { plan skip_all => "Aggregation cursors unsupported on MongoDB $server_version" unless $server_version >= v2.5.0; for( 1..20 ) { $coll->insert( { count => $_ } ); } $cursor = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } } ], { cursor => 1 } ); isa_ok $cursor, 'MongoDB::QueryResult'; is $cursor->started_iterating, 1; is( ref( $cursor->_docs ), ref [ ] ); is $cursor->_doc_count, 20, "document count cached in cursor"; for( 1..20 ) { my $doc = $cursor->next; is( ref( $doc ), ref { } ); is $doc->{count}, $_; is $cursor->_doc_count, ( 20 - $_ ); } # make sure we can transition to a "real" cursor $cursor = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } } ], { cursor => { batchSize => 10 } } ); isa_ok $cursor, 'MongoDB::QueryResult'; is 
$cursor->started_iterating, 1; is( ref( $cursor->_docs), ref [ ] ); is $cursor->_doc_count, 10, "doc count correct"; for( 1..20 ) { my $doc = $cursor->next; isa_ok( $doc, 'HASH' ); is $doc->{count}, $_, "doc count field is $_"; } $coll->drop; }; # aggregation $out subtest "aggregation \$out" => sub { plan skip_all => "Aggregation result collections unsupported on MongoDB $server_version" unless $server_version >= v2.5.0; for( 1..20 ) { $coll->insert( { count => $_ } ); } my $result = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } }, { '$out' => 'test_out' } ] ); ok $result; my $res_coll = $testdb->get_collection( 'test_out' ); my $cursor = $res_coll->find; for( 1..20 ) { my $doc = $cursor->next; is( ref( $doc ), ref { } ); is $doc->{count}, $_; } $res_coll->drop; $coll->drop; }; # aggregation explain subtest "aggregation explain" => sub { plan skip_all => "Aggregation explain unsupported on MongoDB $server_version" unless $server_version >= v2.4.0; for ( 1..20 ) { $coll->insert( { count => $_ } ); } my $cursor = $coll->aggregate( [ { '$match' => { count => { '$gt' => 0 } } }, { '$sort' => { count => 1 } } ], { explain => 1 } ); my $result = $cursor->next; is( ref( $result ), 'HASH', "aggregate with explain returns a hashref" ); my $expected = $server_version >= v2.6.0 ? 'stages' : 'serverPipeline'; ok( exists $result->{$expected}, "result had '$expected' field" ) or diag explain $result; $coll->drop; }; subtest "deep update" => sub { $coll->drop; $coll->insert( { _id => 1 } ); $coll->update( { _id => 1 }, { '$set' => { 'x.y' => 42 } } ); my $doc = $coll->find_one( { _id => 1 } ); is( $doc->{x}{y}, 42, "deep update worked" ); like( exception { $coll->update( { _id => 1 }, { 'p.q' => 23 } ) }, qr/cannot contain the '\.' character/, "replace with dots in field dies" ); }; subtest "count w/ hint" => sub { $coll->drop; $coll->insert( { i => 1 } ); $coll->insert( { i => 2 } ); is ($coll->count(), 2, 'count = 2'); $coll->ensure_index( { i => 1 } ); is( $coll->count( { i => 1 }, { hint => '_id_' } ), 1, 'count w/ hint & spec'); is( $coll->count( {}, { hint => '_id_' } ), 2, 'count w/ hint'); my $current_version = version->parse($server_version); my $version_2_6 = version->parse('v2.6'); if ( $current_version > $version_2_6 ) { eval { $coll->count( { i => 1 } , { hint => 'BAD HINT' } ) }; like($@, ($server_type eq "Mongos" ? 
qr/failed/ : qr/bad hint/ ), 'check bad hint error'); } else { is( $coll->count( { i => 1 } , { hint => 'BAD HINT' } ), 1, 'bad hint and spec'); } $coll->ensure_index( { x => 1 }, { sparse => 1 } ); if ($current_version > $version_2_6 ) { is( $coll->count( { i => 1 } , { hint => 'x_1' } ), 0, 'spec & hint on empty sparse index'); } else { is( $coll->count( { i => 1 } , { hint => 'x_1' } ), 1, 'spec & hint on empty sparse index'); } is( $coll->count( {}, { hint => 'x_1' } ), 2, 'hint on empty sparse index'); }; my $js_str = 'function() { return this.a > this.b }'; my $js_obj = MongoDB::Code->new( code => $js_str ); for my $criteria ( $js_str, $js_obj ) { my $type = ref($criteria) || 'string'; subtest "query with \$where as $type" => sub { $coll->drop; $coll->insert( { a => 1, b => 1, n => 1 } ); $coll->insert( { a => 2, b => 1, n => 2 } ); $coll->insert( { a => 3, b => 1, n => 3 } ); $coll->insert( { a => 0, b => 1, n => 4 } ); $coll->insert( { a => 1, b => 2, n => 5 } ); $coll->insert( { a => 2, b => 3, n => 6 } ); my @docs = $coll->find( { '$where' => $criteria } )->sort( { n => 1 } )->all; is( scalar @docs, 2, "correct count a > b" ) or diag explain @docs; cmp_deeply( \@docs, [ { _id => ignore(), a => 2, b => 1, n => 2 }, { _id => ignore(), a => 3, b => 1, n => 3 } ], "javascript query correct" ); }; } done_testing; MongoDB-v1.2.2/t/deprecated/indexes.t000644 000765 000024 00000016275 12651754051 017631 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# use strict; use warnings; use Test::More 0.96; use Test::Deep qw/!blessed/; use Test::Fatal; use utf8; use JSON::MaybeXS; use MongoDB; use lib "t/lib"; use MongoDBTest qw/skip_unless_mongod build_client get_test_db server_version server_type/; skip_unless_mongod(); my $conn = build_client(); my $testdb = get_test_db($conn); my $server_version = server_version($conn); my $server_type = server_type($conn); my $coll = $testdb->get_collection("foo"); # basic indexes subtest 'basic indexes' => sub { $coll->drop; $coll->drop; for ( my $i = 0; $i < 10; $i++ ) { $coll->insert_one( { 'x' => $i, 'z' => 3, 'w' => 4 } ); $coll->insert_one( { 'x' => $i, 'y' => 2, 'z' => 3, 'w' => 4 } ); } $coll->drop; ok( !$coll->get_indexes, 'no indexes yet' ); my $indexes = Tie::IxHash->new( foo => 1, bar => 1, baz => 1 ); ok( $coll->ensure_index($indexes) ); my $err = $testdb->last_error; is( $err->{ok}, 1 ); is( $err->{err}, undef ); $indexes = Tie::IxHash->new( foo => 1, bar => 1 ); ok( $coll->ensure_index($indexes) ); $coll->insert_one( { foo => 1, bar => 1, baz => 1, boo => 1 } ); $coll->insert_one( { foo => 1, bar => 1, baz => 1, boo => 2 } ); is( $coll->count, 2 ); ok( $coll->ensure_index( { boo => 1 }, { unique => 1 } ) ); eval { $coll->insert_one( { foo => 3, bar => 3, baz => 3, boo => 2 } ) }; is( $coll->count, 2, 'unique index' ); my @indexes = $coll->get_indexes; is( scalar @indexes, 4, 'three custom indexes and the default _id_ index' ); my ($foobarbaz) = grep { $_->{name} eq 'foo_1_bar_1_baz_1' } @indexes; is_deeply( [ sort keys %{ $foobarbaz->{key} } ], [ sort qw/foo bar baz/ ], ); my ($foobar) = grep { $_->{name} eq 'foo_1_bar_1' } @indexes; is_deeply( [ sort keys %{ $foobar->{key} } ], [ sort qw/foo bar/ ], ); $coll->drop_index('foo_1_bar_1_baz_1'); @indexes = $coll->get_indexes; is( scalar @indexes, 3 ); ok( ( !scalar grep { $_->{name} eq 'foo_1_bar_1_baz_1' } @indexes ), "right index deleted" ); $coll->drop; ok( !$coll->get_indexes, 'no indexes after dropping' ); # make sure this still works $coll->ensure_index( { "foo" => 1 } ); @indexes = $coll->get_indexes; is( scalar @indexes, 2, '1 custom index and the default _id_ index' ); }; # test ensure index with drop_dups subtest 'drop dups' => sub { $coll->drop; $coll->insert_one( { foo => 1, bar => 1, baz => 1, boo => 1 } ); $coll->insert_one( { foo => 1, bar => 1, baz => 1, boo => 2 } ); is( $coll->count, 2 ); eval { $coll->ensure_index( { foo => 1 }, { unique => 1 } ) }; like( $@, qr/E11000/, "got expected error creating unique index with dups" ); # prior to 2.7.5, drop_dups was respected if ( $server_version < v2.7.5 ) { ok( $coll->ensure_index( { foo => 1 }, { unique => 1, drop_dups => 1 } ) ); } }; # test new form of ensure index subtest 'new form of ensure index' => sub { $coll->drop; ok( $coll->ensure_index( { foo => 1, bar => -1, baz => 1 } ) ); ok( $coll->ensure_index( [ foo => 1, bar => 1 ] ) ); $coll->insert_one( { foo => 1, bar => 1, baz => 1, boo => 1 } ); $coll->insert_one( { foo => 1, bar => 1, baz => 1, boo => 2 } ); is( $coll->count, 2 ); # unique index $coll->ensure_index( { boo => 1 }, { unique => 1 } ); eval { $coll->insert_one( { foo => 3, bar => 3, baz => 3, boo => 2 } ) }; is( $coll->count, 2, 'unique index' ); }; subtest '2d index with options' => sub { $coll->drop; $coll->ensure_index( { loc => '2d' }, { bits => 32, sparse => 1 } ); my ($index) = grep { $_->{name} eq 'loc_2d' } $coll->get_indexes; ok( $index, "created 2d index" ); ok( $index->{sparse}, "sparse option set on index" ); is( $index->{bits}, 32, "bits option set 
on index" ); }; subtest 'ensure index arbitrary options' => sub { $coll->ensure_index( { wibble => 1 }, { notReallyAnOption => { foo => 1 } } ); my ($index) = grep { $_->{name} eq 'wibble_1' } $coll->get_indexes; ok( $index, "created index" ); cmp_deeply( $index->{notReallyAnOption}, { foo => 1 }, "arbitrary option set on index" ); }; subtest "indexes with dots" => sub { my $ok = $coll->ensure_index({"x.y" => 1}, {"name" => "foo"}); my ($index) = grep { $_->{name} eq 'foo' } $coll->get_indexes; ok($index); ok($index->{'key'}); ok($index->{'key'}->{'x.y'}); $coll->drop; }; subtest 'sparse indexes' => sub { for (1..10) { $coll->insert_one({x => $_, y => $_}); $coll->insert_one({x => $_}); } is($coll->count, 20); eval { $coll->ensure_index({"y" => 1}, {"unique" => 1, "name" => "foo"}) }; my ($index) = grep { $_->{name} eq 'foo' } $coll->get_indexes; ok(!$index); $coll->ensure_index({"y" => 1}, {"unique" => 1, "sparse" => 1, "name" => "foo"}); ($index) = grep { $_->{name} eq 'foo' } $coll->get_indexes; ok($index); $coll->drop; }; subtest 'text indices' => sub { plan skip_all => "text indices won't work with db version $server_version" unless $server_version >= v2.4.0; my $res = $conn->get_database('admin')->run_command(['getParameter' => 1, 'textSearchEnabled' => 1]); plan skip_all => "text search not enabled" if !$res->{'textSearchEnabled'}; my $coll = $testdb->get_collection('test_text'); $coll->insert_one({language => 'english', w1 => 'hello', w2 => 'world'}) foreach (1..10); is($coll->count, 10); $res = $coll->ensure_index({'$**' => 'text'}, { name => 'testTextIndex', default_language => 'spanish', language_override => 'language', weights => { w1 => 5, w2 => 10 } }); ok($res); my ($text_index) = grep { $_->{name} eq 'testTextIndex' } $coll->get_indexes; is($text_index->{'default_language'}, 'spanish', 'default_language option works'); is($text_index->{'language_override'}, 'language', 'language_override option works'); is($text_index->{'weights'}->{'w1'}, 5, 'weights option works 1'); is($text_index->{'weights'}->{'w2'}, 10, 'weights option works 2'); # 2.6 deprecated 'text' command and added '$text' operator; also the # result format changed. if ( $server_version >= v2.6.0 ) { my $n_found =()= $coll->find( { '$text' => { '$search' => 'world' } } )->all; is( $n_found, 10, "correct number of results found" ); } else { my $results = $testdb->run_command( [ 'text' => 'test_text', 'search' => 'world' ] )->{results}; is( scalar(@$results), 10, "correct number of results found" ); } $coll->drop; }; done_testing; MongoDB-v1.2.2/t/data/CRUD/000755 000765 000024 00000000000 12651754051 015340 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/gridfs/000755 000765 000024 00000000000 12651754051 016061 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SDAM/000755 000765 000024 00000000000 12651754051 015327 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/000755 000765 000024 00000000000 12651754051 015130 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/README.rst000644 000765 000024 00000010031 12651754051 016612 0ustar00davidstaff000000 000000 ====================== Server Selection Tests ====================== This directory contains platform-independent tests that drivers can use to prove their conformance to the Server Selection spec. The tests are provided in both YAML and JSON formats, and drivers may test against whichever format is more convenient for them. Converting to JSON ------------------ The Server Selection tests were originally written in YAML. 
YAML has a standard comment format, which makes it more human-readable
than JSON, and it also has language features for expressing duplicated
information more concisely.

A JSON-converted version of each YAML test is included here, but if you
change the YAML, you will need to re-convert to JSON. One way of
converting YAML to JSON is with `jsonwidget-python `_::

    pip install PyYAML urwid jsonwidget
    make

Or instead of "make"::

    for i in `find . -iname '*.yml'`; do
        echo "${i%.*}"
        jwc yaml2json $i > ${i%.*}.json
    done

Version
-------

Specifications have no version scheme. They are not tied to a MongoDB
server version, and it is our intention that each specification moves
from "draft" to "final" with no further versions; it is superseded by a
future spec, not revised. However, implementers must have stable sets of
tests to target. As test files evolve they will be occasionally tagged
like "server-selection-tests-2015-01-04", until the spec is final.

Test Format and Use
-------------------

There are two types of tests for the server selection spec: tests for
round trip time (RTT) calculation, and tests for server selection logic.
Drivers should be able to test their server selection logic without any
network I/O, by parsing topology descriptions and read preference
documents from the test files and passing them into driver code. Parts
of the server selection code may need to be mocked or subclassed to
achieve this.

RTT Calculation Tests
>>>>>>>>>>>>>>>>>>>>>

These YAML files contain the following keys:

- ``avg_rtt_ms``: a server's previous average RTT, in milliseconds
- ``new_rtt_ms``: a new RTT value for this server, in milliseconds
- ``new_avg_rtt``: this server's newly-calculated average RTT, in
  milliseconds

For each file, create a server description object initialized with
``avg_rtt_ms``. Parse ``new_rtt_ms``, and ensure that the new RTT value
for the mocked server description is equal to ``new_avg_rtt``.

Server Selection Logic Tests
>>>>>>>>>>>>>>>>>>>>>>>>>>>>

These YAML files contain the following setup for each test:

- ``topology_description``: the state of a mocked cluster
- ``operation``: the kind of operation to perform, either read or write
- ``read_preference``: a read preference document

For each file, create a new TopologyDescription object initialized with
the values from ``topology_description``. Create a ReadPreference object
initialized with the values from ``read_preference``. Together with
"operation", pass the newly-created TopologyDescription and
ReadPreference to server selection, and ensure that it selects the
correct subset of servers from the TopologyDescription.

Each YAML file contains a key for each substage of server selection:

- ``candidate_servers``: the set of servers in topology_description that
  are candidates, as per the Server Selection spec, given operation and
  read_preference
- ``eligible_servers``: the set of servers in topology_description that
  are eligible, as per the Server Selection spec, given operation and
  read_preference
- ``suitable_servers``: the set of servers in topology_description that
  are suitable, as per the Server Selection spec, given operation and
  read_preference
- ``in_latency_window``: the set of suitable_servers that fall within
  the latency window

Drivers implementing server selection MUST test that their
implementations correctly return the set of servers in
``in_latency_window``.
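As a rough illustration only (this sketch is not part of the test
format, and ``update_average_rtt`` and ``in_latency_window`` below are
hypothetical helpers rather than driver API), the two calculations these
fixtures exercise might be written in Perl as follows; the 0.2 weighting
factor follows the spec's exponentially-weighted moving average, and
``$threshold_ms`` stands in for a driver's configurable local latency
threshold::

    # Hypothetical sketch only -- not the Perl driver's API.
    use strict;
    use warnings;

    # EWMA update for a server's average round trip time.
    sub update_average_rtt {
        my ( $old_avg_ms, $new_rtt_ms ) = @_;
        return $new_rtt_ms unless defined $old_avg_ms;    # first sample
        my $alpha = 0.2;                                  # spec weighting factor
        return $alpha * $new_rtt_ms + ( 1 - $alpha ) * $old_avg_ms;
    }

    # Reduce suitable servers to those within the latency window.
    sub in_latency_window {
        my ( $threshold_ms, @suitable ) = @_;    # servers as { avg_rtt_ms => ... }
        return unless @suitable;
        my ($min_rtt) = sort { $a <=> $b } map { $_->{avg_rtt_ms} } @suitable;
        return grep { $_->{avg_rtt_ms} <= $min_rtt + $threshold_ms } @suitable;
    }

    # e.g. update_average_rtt( 25, 10 ) returns 22 (0.2 * 10 + 0.8 * 25)

Real implementations will, of course, operate on their own server
description objects rather than raw hashes.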
Drivers SHOULD test against ``suitable_servers`` if possible, and MAY test against ``eligible_servers`` and ``candidate_servers`` if testing at intermediate stages of server selection is desired. MongoDB-v1.2.2/t/data/SS/rtt/000755 000765 000024 00000000000 12651754051 015741 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/000755 000765 000024 00000000000 12651754051 020503 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/000755 000765 000024 00000000000 12651754051 024377 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/000755 000765 000024 00000000000 12651754051 024736 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/000755 000765 000024 00000000000 12651754051 022055 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Single/000755 000765 000024 00000000000 12651754051 021724 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/000755 000765 000024 00000000000 12651754051 022142 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/read/000755 000765 000024 00000000000 12651754051 023055 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/write/000755 000765 000024 00000000000 12651754051 023274 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/write/SecondaryPreferred.json000644 000765 000024 00000000620 12651754051 027753 0ustar00davidstaff000000 000000 { "candidate_servers": [], "eligible_servers": [], "in_latency_window": [], "operation": "write", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [], "topology_description": { "servers": [], "type": "Unknown" } } MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/write/SecondaryPreferred.yml000644 000765 000024 00000000350 12651754051 027603 0ustar00davidstaff000000 000000 --- topology_description: type: Unknown servers: [] operation: write read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: [] eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/read/SecondaryPreferred.json000644 000765 000024 00000000617 12651754051 027542 0ustar00davidstaff000000 000000 { "candidate_servers": [], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [], "topology_description": { "servers": [], "type": "Unknown" } } MongoDB-v1.2.2/t/data/SS/server_selection/Unknown/read/SecondaryPreferred.yml000644 000765 000024 00000000347 12651754051 027372 0ustar00davidstaff000000 000000 --- topology_description: type: Unknown servers: [] operation: read read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: [] eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/Single/read/000755 000765 000024 00000000000 12651754051 022637 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Single/write/000755 000765 000024 00000000000 12651754051 023056 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Single/write/SecondaryPreferred.json000644 000765 000024 00000003134 12651754051 027540 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "a:27017", "avg_rtt_ms": 
5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "eligible_servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "operation": "write", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "topology_description": { "servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "type": "Single" } } MongoDB-v1.2.2/t/data/SS/server_selection/Single/write/SecondaryPreferred.yml000644 000765 000024 00000000517 12651754051 027372 0ustar00davidstaff000000 000000 --- topology_description: type: Single servers: - &1 address: a:27017 avg_rtt_ms: 5 type: Standalone tags: - data_center: dc operation: write read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 eligible_servers: - *1 suitable_servers: - *1 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/Single/read/SecondaryPreferred.json000644 000765 000024 00000003133 12651754051 027320 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "eligible_servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "topology_description": { "servers": [ { "address": "a:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "dc" } ], "type": "Standalone" } ], "type": "Single" } } MongoDB-v1.2.2/t/data/SS/server_selection/Single/read/SecondaryPreferred.yml000644 000765 000024 00000000516 12651754051 027152 0ustar00davidstaff000000 000000 --- topology_description: type: Single servers: - &1 address: a:27017 avg_rtt_ms: 5 type: Standalone tags: - data_center: dc operation: read read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 eligible_servers: - *1 suitable_servers: - *1 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/read/000755 000765 000024 00000000000 12651754051 022770 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/write/000755 000765 000024 00000000000 12651754051 023207 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/write/SecondaryPreferred.json000644 000765 000024 00000005022 12651754051 027667 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "eligible_servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "in_latency_window": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": 
[ { "data_center": "nyc" } ], "type": "Mongos" } ], "operation": "write", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "topology_description": { "servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "type": "Sharded" } } MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/write/SecondaryPreferred.yml000644 000765 000024 00000000674 12651754051 027527 0ustar00davidstaff000000 000000 --- topology_description: type: Sharded servers: - &1 address: g:27017 avg_rtt_ms: 5 type: Mongos tags: - data_center: nyc - &2 address: h:27017 avg_rtt_ms: 35 type: Mongos tags: - data_center: dc operation: write read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/read/SecondaryPreferred.json000644 000765 000024 00000005021 12651754051 027447 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "eligible_servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "in_latency_window": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" } ], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "topology_description": { "servers": [ { "address": "g:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "Mongos" }, { "address": "h:27017", "avg_rtt_ms": 35, "tags": [ { "data_center": "dc" } ], "type": "Mongos" } ], "type": "Sharded" } } MongoDB-v1.2.2/t/data/SS/server_selection/Sharded/read/SecondaryPreferred.yml000644 000765 000024 00000000673 12651754051 027307 0ustar00davidstaff000000 000000 --- topology_description: type: Sharded servers: - &1 address: g:27017 avg_rtt_ms: 5 type: Mongos tags: - data_center: nyc - &2 address: h:27017 avg_rtt_ms: 35 type: Mongos tags: - data_center: dc operation: read read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/000755 000765 000024 00000000000 12651754051 025651 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/write/000755 000765 000024 00000000000 12651754051 026070 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/write/SecondaryPreferred.json000644 000765 000024 00000004232 12651754051 032552 0ustar00davidstaff000000 
000000 { "candidate_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "operation": "write", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/write/SecondaryPreferred.yml000644 000765 000024 00000001035 12651754051 032400 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &1 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: write read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 eligible_servers: - *1 suitable_servers: - *1 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest.json000644 000765 000024 00000007035 12651754051 030152 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "Nearest", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } 
MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest.yml000644 000765 000024 00000001075 12651754051 030000 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &3 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: Nearest tags: - data_center: nyc candidate_servers: - *1 - *2 - *3 eligible_servers: - *1 - *2 - *3 suitable_servers: - *1 - *3 - *2 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest_non_matching.json000644 000765 000024 00000003635 12651754051 032700 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "Nearest", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Nearest_non_matching.yml000644 000765 000024 00000001042 12651754051 032516 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &3 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: Nearest tags: - data_center: sf candidate_servers: - *1 - *2 - *3 eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Primary.json000644 000765 000024 00000004134 12651754051 030171 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "operation": "read", "read_preference": { "mode": "Primary", "tags": [ {} ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } 
MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Primary.yml000644 000765 000024 00000001003 12651754051 030011 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &1 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: Primary tags: - {} candidate_servers: - *1 eligible_servers: - *1 suitable_servers: - *1 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred.json000644 000765 000024 00000006031 12651754051 032026 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "operation": "read", "read_preference": { "mode": "PrimaryPreferred", "tags": [ {} ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred.yml000644 000765 000024 00000001056 12651754051 031660 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &3 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: PrimaryPreferred tags: - {} candidate_servers: - *1 - *2 - *3 eligible_servers: - *1 - *2 - *3 suitable_servers: - *3 in_latency_window: - *3 t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred_non_matching.json000644 000765 000024 00000004602 12651754051 034475 0ustar00davidstaff000000 000000 MongoDB-v1.2.2{ "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "operation": "read", 
"read_preference": { "mode": "PrimaryPreferred", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } t/data/SS/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred_non_matching.yml000644 000765 000024 00000001057 12651754051 034326 0ustar00davidstaff000000 000000 MongoDB-v1.2.2--- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &3 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: PrimaryPreferred tags: - data_center: sf candidate_servers: - *1 - *2 - *3 eligible_servers: [] suitable_servers: - *3 in_latency_window: - *3 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary.json000644 000765 000024 00000005536 12651754051 030504 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "Secondary", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary.yml000644 000765 000024 00000001051 12651754051 030320 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: Secondary tags: - data_center: nyc candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary_non_matching.json000644 000765 000024 00000003264 12651754051 
033224 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "Secondary", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/Secondary_non_matching.yml000644 000765 000024 00000001030 12651754051 033041 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: Secondary tags: - data_center: sf candidate_servers: - *1 - *2 eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred.json000644 000765 000024 00000006475 12651754051 032346 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred.yml000644 000765 000024 00000001103 12651754051 032155 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 
100 type: RSSecondary tags: - data_center: nyc - &3 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 - *2 - *3 eligible_servers: - *1 - *2 - *3 suitable_servers: - *1 - *2 in_latency_window: - *1 t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred_non_matching.json000644 000765 000024 00000004604 12651754051 035003 0ustar00davidstaff000000 000000 MongoDB-v1.2.2{ "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "eligible_servers": [], "in_latency_window": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [ { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "a:27017", "avg_rtt_ms": 26, "tags": [ { "data_center": "nyc" } ], "type": "RSPrimary" } ], "type": "ReplicaSetWithPrimary" } } t/data/SS/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred_non_matching.yml000644 000765 000024 00000001061 12651754051 034625 0ustar00davidstaff000000 000000 MongoDB-v1.2.2--- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc - &3 address: a:27017 avg_rtt_ms: 26 type: RSPrimary tags: - data_center: nyc operation: read read_preference: mode: SecondaryPreferred tags: - data_center: sf candidate_servers: - *1 - *2 - *3 eligible_servers: [] suitable_servers: - *3 in_latency_window: - *3 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/000755 000765 000024 00000000000 12651754051 025312 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/write/000755 000765 000024 00000000000 12651754051 025531 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/write/SecondaryPreferred.json000644 000765 000024 00000001715 12651754051 032216 0ustar00davidstaff000000 000000 { "candidate_servers": [], "eligible_servers": [], "in_latency_window": [], "operation": "write", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/write/SecondaryPreferred.yml000644 000765 000024 00000000657 12651754051 032052 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - address: b:27017 avg_rtt_ms: 5 
type: RSSecondary tags: - data_center: nyc - address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: write read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: [] eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest.json000644 000765 000024 00000005107 12651754051 027611 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "Nearest", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest.yml000644 000765 000024 00000000710 12651754051 027434 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: Nearest tags: - data_center: nyc candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest_non_matching.json000644 000765 000024 00000002635 12651754051 032340 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "Nearest", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Nearest_non_matching.yml000644 000765 000024 00000000667 12651754051 032173 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: Nearest 
tags: - data_center: sf candidate_servers: - *1 - *2 eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Primary.json000644 000765 000024 00000001617 12651754051 027635 0ustar00davidstaff000000 000000 { "candidate_servers": [], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "Primary", "tags": [ {} ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Primary.yml000644 000765 000024 00000000625 12651754051 027463 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: Primary tags: - {} candidate_servers: [] eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred.json000644 000765 000024 00000005036 12651754051 031473 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "PrimaryPreferred", "tags": [ {} ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred.yml000644 000765 000024 00000000703 12651754051 031317 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: PrimaryPreferred tags: - {} candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred_non_matching.json000644 000765 000024 00000002646 12651754051 034144 0ustar00davidstaff000000 000000 MongoDB-v1.2.2{ "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", 
"avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "PrimaryPreferred", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred_non_matching.yml000644 000765 000024 00000000700 12651754051 034040 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: PrimaryPreferred tags: - data_center: sf candidate_servers: - *1 - *2 eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary.json000644 000765 000024 00000005111 12651754051 030132 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "Secondary", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary.yml000644 000765 000024 00000000712 12651754051 027764 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: Secondary tags: - data_center: nyc candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary_non_matching.json000644 000765 000024 00000002637 12651754051 032670 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [], "in_latency_window": [], 
"operation": "read", "read_preference": { "mode": "Secondary", "tags": [ { "data_center": "sf" } ] }, "suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/Secondary_non_matching.yml000644 000765 000024 00000000671 12651754051 032514 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: Secondary tags: - data_center: sf candidate_servers: - *1 - *2 eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred.json000644 000765 000024 00000005122 12651754051 031773 0ustar00davidstaff000000 000000 { "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "in_latency_window": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "nyc" } ] }, "suitable_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } MongoDB-v1.2.2/t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred.yml000644 000765 000024 00000000723 12651754051 031625 0ustar00davidstaff000000 000000 --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: SecondaryPreferred tags: - data_center: nyc candidate_servers: - *1 - *2 eligible_servers: - *1 - *2 suitable_servers: - *1 - *2 in_latency_window: - *1 t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred_non_matching.json000644 000765 000024 00000002650 12651754051 034443 0ustar00davidstaff000000 000000 MongoDB-v1.2.2{ "candidate_servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "eligible_servers": [], "in_latency_window": [], "operation": "read", "read_preference": { "mode": "SecondaryPreferred", "tags": [ { "data_center": "sf" } ] }, 
"suitable_servers": [], "topology_description": { "servers": [ { "address": "b:27017", "avg_rtt_ms": 5, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" }, { "address": "c:27017", "avg_rtt_ms": 100, "tags": [ { "data_center": "nyc" } ], "type": "RSSecondary" } ], "type": "ReplicaSetNoPrimary" } } t/data/SS/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred_non_matching.yml000644 000765 000024 00000000702 12651754051 034267 0ustar00davidstaff000000 000000 MongoDB-v1.2.2--- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: - data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: - data_center: nyc operation: read read_preference: mode: SecondaryPreferred tags: - data_center: sf candidate_servers: - *1 - *2 eligible_servers: [] suitable_servers: [] in_latency_window: [] MongoDB-v1.2.2/t/data/SS/rtt/first_value.json000644 000765 000024 00000000114 12651754051 021153 0ustar00davidstaff000000 000000 { "avg_rtt_ms": "NULL", "new_avg_rtt": 10, "new_rtt_ms": 10 } MongoDB-v1.2.2/t/data/SS/rtt/first_value.yml000644 000765 000024 00000000066 12651754051 021011 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 'NULL' new_rtt_ms: 10 new_avg_rtt: 10 MongoDB-v1.2.2/t/data/SS/rtt/first_value_zero.json000644 000765 000024 00000000112 12651754051 022210 0ustar00davidstaff000000 000000 { "avg_rtt_ms": "NULL", "new_avg_rtt": 0, "new_rtt_ms": 0 } MongoDB-v1.2.2/t/data/SS/rtt/first_value_zero.yml000644 000765 000024 00000000064 12651754051 022046 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 'NULL' new_rtt_ms: 0 new_avg_rtt: 0 MongoDB-v1.2.2/t/data/SS/rtt/value_test_1.json000644 000765 000024 00000000107 12651754051 021225 0ustar00davidstaff000000 000000 { "avg_rtt_ms": 0, "new_avg_rtt": 1.0, "new_rtt_ms": 5 } MongoDB-v1.2.2/t/data/SS/rtt/value_test_1.yml000644 000765 000024 00000000061 12651754051 021054 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 0 new_rtt_ms: 5 new_avg_rtt: 1.0 MongoDB-v1.2.2/t/data/SS/rtt/value_test_2.json000644 000765 000024 00000000113 12651754051 021223 0ustar00davidstaff000000 000000 { "avg_rtt_ms": 3.1, "new_avg_rtt": 9.68, "new_rtt_ms": 36 } MongoDB-v1.2.2/t/data/SS/rtt/value_test_2.yml000644 000765 000024 00000000065 12651754051 021061 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 3.1 new_rtt_ms: 36 new_avg_rtt: 9.68 MongoDB-v1.2.2/t/data/SS/rtt/value_test_3.json000644 000765 000024 00000000116 12651754051 021227 0ustar00davidstaff000000 000000 { "avg_rtt_ms": 9.12, "new_avg_rtt": 9.12, "new_rtt_ms": 9.12 } MongoDB-v1.2.2/t/data/SS/rtt/value_test_3.yml000644 000765 000024 00000000070 12651754051 021056 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 9.12 new_rtt_ms: 9.12 new_avg_rtt: 9.12 MongoDB-v1.2.2/t/data/SS/rtt/value_test_4.json000644 000765 000024 00000000114 12651754051 021226 0ustar00davidstaff000000 000000 { "avg_rtt_ms": 1, "new_avg_rtt": 200.8, "new_rtt_ms": 1000 } MongoDB-v1.2.2/t/data/SS/rtt/value_test_4.yml000644 000765 000024 00000000066 12651754051 021064 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 1 new_rtt_ms: 1000 new_avg_rtt: 200.8 MongoDB-v1.2.2/t/data/SS/rtt/value_test_5.json000644 000765 000024 00000000113 12651754051 021226 0ustar00davidstaff000000 000000 { "avg_rtt_ms": 0, "new_avg_rtt": 0.05, "new_rtt_ms": 0.25 } MongoDB-v1.2.2/t/data/SS/rtt/value_test_5.yml000644 000765 000024 00000000065 12651754051 021064 0ustar00davidstaff000000 000000 --- avg_rtt_ms: 0 new_rtt_ms: 0.25 new_avg_rtt: 0.05 MongoDB-v1.2.2/t/data/SDAM/README.rst000644 
000765 000024 00000007270 12651754051 017024 0ustar00davidstaff000000 000000 
=====================================
Server Discovery And Monitoring Tests
=====================================

The YAML and JSON files in this directory tree are platform-independent tests
that drivers can use to prove their conformance to the Server Discovery And
Monitoring Spec.

Converting to JSON
------------------

The tests are written in YAML because it is easier for humans to write and
read, and because YAML includes a standard comment format. A JSONified version
of each YAML file is included in this repository. Whenever you change the
YAML, re-convert to JSON. One method to convert to JSON is with
`jsonwidget-python `_::

    pip install PyYAML urwid jsonwidget
    make

Or instead of "make"::

    for i in `find . -iname '*.yml'`; do
        echo "${i%.*}"
        jwc yaml2json $i > ${i%.*}.json
    done

Version
-------

Files in the "specifications" repository have no version scheme. They are not
tied to a MongoDB server version, and it is our intention that each
specification moves from "draft" to "final" with no further versions; it is
superseded by a future spec, not revised.

However, implementers must have stable sets of tests to target. As test files
evolve they will be occasionally tagged like
"server-discovery-tests-2014-09-10", until the spec is final.

Format
------

Each YAML file has the following keys:

- description: Some text.
- uri: A connection string.
- phases: An array of "phase" objects.

A "phase" of the test sends inputs to the client, then tests the client's
resulting TopologyDescription. Each phase object has two keys:

- responses: An array of "response" objects.
- outcome: An "outcome" object representing the TopologyDescription.

A response is a pair of values:

- The source, for example "a:27017". This is the address the client sent the
  "ismaster" command to.
- An ismaster response, for example `{ok: 1, ismaster: true}`. If the response
  includes an electionId it is shown in extended JSON like
  `{"$oid": "000000000000000000000002"}`. The empty response `{}` indicates a
  network error when attempting to call "ismaster".

An "outcome" represents the correct TopologyDescription that results from
processing the responses in the phases so far. It has the following keys:

- topologyType: A string like "ReplicaSetNoPrimary".
- setName: A string with the expected replica set name, or null.
- servers: An object whose keys are addresses like "a:27017", and whose values
  are "server" objects.

A "server" object represents a correct ServerDescription within the client's
current TopologyDescription. It has the following keys:

- type: A ServerType name, like "RSSecondary".
- setName: A string with the expected replica set name, or null.
- setVersion: absent or an integer.
- electionId: absent, null, or an ObjectId.

Use as unittests
----------------

Drivers should be able to test their server discovery and monitoring logic
without any network I/O, by parsing ismaster responses from the test file and
passing them into the driver code. Parts of the client and monitoring code may
need to be mocked or subclassed to achieve this. `A reference implementation
for PyMongo 3.x is available here `_.

For each file, create a fresh client object initialized with the file's "uri".

For each phase in the file, parse the "responses" array. Pass in the responses
in order to the driver code. If a response is the empty object `{}`, simulate
a network error.
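As an illustration only (this script is not part of the distribution, and the
file name passed on the command line is just an example), a minimal Perl
sketch using the core JSON::PP module can walk one of these JSON test files
and show where a driver's monitoring code would consume each response; the
outcome check is covered in the next paragraph::

    #!/usr/bin/env perl
    # Sketch: inspect one SDAM test file; the driver hook-up is intentionally omitted.
    use strict;
    use warnings;
    use JSON::PP qw(decode_json);

    my $path = shift @ARGV or die "usage: $0 path/to/test.json\n";
    open my $fh, '<', $path or die "$path: $!";
    my $test = decode_json( do { local $/; <$fh> } );

    print "uri: $test->{uri}\n";
    my $phase_num = 0;
    for my $phase ( @{ $test->{phases} } ) {
        printf "phase %d\n", ++$phase_num;
        for my $response ( @{ $phase->{responses} } ) {
            my ( $address, $ismaster ) = @$response;
            if ( !%$ismaster ) {
                # Empty document: the spec says to simulate a network error here.
                print "  $address: network error\n";
            }
            else {
                # A real harness would feed ($address, $ismaster) into the
                # driver's topology-monitoring code at this point.
                print "  $address: ismaster reply (", scalar keys %$ismaster, " fields)\n";
            }
        }
        print "  expected topologyType: $phase->{outcome}{topologyType}\n";
    }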
Once all responses are processed, assert that the phase's "outcome" object is equivalent to the driver's current TopologyDescription. Continue until all phases have been executed. MongoDB-v1.2.2/t/data/SDAM/rs/000755 000765 000024 00000000000 12651754051 015753 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SDAM/sharded/000755 000765 000024 00000000000 12651754051 016741 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SDAM/single/000755 000765 000024 00000000000 12651754051 016610 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_external_ip.json000644 000765 000024 00000001464 12651754051 025573 0ustar00davidstaff000000 000000 { "description": "Direct connection to RSPrimary via external IP", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "hosts": [ "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_external_ip.yml000644 000765 000024 00000001121 12651754051 025411 0ustar00davidstaff000000 000000 description: "Direct connection to RSPrimary via external IP" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["b:27017"], # Internal IP. setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_mongos.json000644 000765 000024 00000001261 12651754051 024556 0ustar00davidstaff000000 000000 { "description": "Connect to mongos", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Mongos" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_mongos.yml000644 000765 000024 00000000766 12651754051 024417 0ustar00davidstaff000000 000000 description: "Connect to mongos" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_rsarbiter.json000644 000765 000024 00000001511 12651754051 025247 0ustar00davidstaff000000 000000 { "description": "Connect to RSArbiter", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSArbiter" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "arbiterOnly": true, "hosts": [ "a:27017" ], "ismaster": false, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_rsarbiter.yml000644 000765 000024 00000001112 12651754051 025074 0ustar00davidstaff000000 000000 description: "Connect to RSArbiter" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: false, arbiterOnly: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSArbiter", setName: "rs" } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_rsprimary.json000644 000765 000024 00000001432 12651754051 025304 0ustar00davidstaff000000 000000 { "description": "Connect to RSPrimary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": null, "topologyType": "Single" }, 
"responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_rsprimary.yml000644 000765 000024 00000001043 12651754051 025132 0ustar00davidstaff000000 000000 description: "Connect to RSPrimary" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_rssecondary.json000644 000765 000024 00000001513 12651754051 025610 0ustar00davidstaff000000 000000 { "description": "Connect to RSSecondary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSSecondary" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_rssecondary.yml000644 000765 000024 00000001121 12651754051 025433 0ustar00davidstaff000000 000000 description: "Connect to RSSecondary" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_slave.json000644 000765 000024 00000001223 12651754051 024364 0ustar00davidstaff000000 000000 { "description": "Direct connection to slave", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Standalone" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "ismaster": false, "ok": 1 } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_slave.yml000644 000765 000024 00000000737 12651754051 024225 0ustar00davidstaff000000 000000 description: "Direct connection to slave" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: false }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_standalone.json000644 000765 000024 00000001215 12651754051 025403 0ustar00davidstaff000000 000000 { "description": "Connect to standalone", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Standalone" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "ismaster": true, "ok": 1 } ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/direct_connection_standalone.yml000644 000765 000024 00000000731 12651754051 025235 0ustar00davidstaff000000 000000 description: "Connect to standalone" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/not_ok_response.json000644 000765 000024 00000001523 12651754051 022713 0ustar00davidstaff000000 000000 { "description": "Handle a not-ok ismaster response", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", { "ismaster": true, "ok": 1 } ], [ "a:27017", { "ismaster": true, "ok": 0 } ] ] } ], "uri": "mongodb://a" 
} MongoDB-v1.2.2/t/data/SDAM/single/not_ok_response.yml000644 000765 000024 00000001124 12651754051 022540 0ustar00davidstaff000000 000000 description: "Handle a not-ok ismaster response" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true }], ["a:27017", { ok: 0, ismaster: true }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/standalone_removed.json000644 000765 000024 00000001245 12651754051 023356 0ustar00davidstaff000000 000000 { "description": "Standalone removed from multi-server topology", "phases": [ { "outcome": { "servers": { "b:27017": { "setName": null, "type": "Unknown" } }, "setName": null, "topologyType": "Unknown" }, "responses": [ [ "a:27017", { "ismaster": true, "ok": 1 } ] ] } ], "uri": "mongodb://a,b" } MongoDB-v1.2.2/t/data/SDAM/single/standalone_removed.yml000644 000765 000024 00000000761 12651754051 023210 0ustar00davidstaff000000 000000 description: "Standalone removed from multi-server topology" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true }] ], outcome: { servers: { "b:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/single/unavailable_seed.json000644 000765 000024 00000001045 12651754051 022766 0ustar00davidstaff000000 000000 { "description": "Unavailable seed", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" } }, "setName": null, "topologyType": "Single" }, "responses": [ [ "a:27017", {} ] ] } ], "uri": "mongodb://a" } MongoDB-v1.2.2/t/data/SDAM/single/unavailable_seed.yml000644 000765 000024 00000000601 12651754051 022613 0ustar00davidstaff000000 000000 description: "Unavailable seed" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Single", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/sharded/mongos_disconnect.json000644 000765 000024 00000004542 12651754051 023354 0ustar00davidstaff000000 000000 { "description": "Mongos disconnect", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Mongos" }, "b:27017": { "setName": null, "type": "Mongos" } }, "setName": null, "topologyType": "Sharded" }, "responses": [ [ "a:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ], [ "b:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "Mongos" } }, "setName": null, "topologyType": "Sharded" }, "responses": [ [ "a:27017", {} ] ] }, { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Mongos" }, "b:27017": { "setName": null, "type": "Mongos" } }, "setName": null, "topologyType": "Sharded" }, "responses": [ [ "a:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ] ] } ], "uri": "mongodb://a,b" } MongoDB-v1.2.2/t/data/SDAM/sharded/mongos_disconnect.yml000644 000765 000024 00000003414 12651754051 023201 0ustar00davidstaff000000 000000 description: "Mongos disconnect" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }], ["b:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", setName: } }, { responses: [ ["a:27017", {}], # Hangup. 
], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", setName: } }, { responses: [ # Back in action. ["a:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }], ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/sharded/multiple_mongoses.json000644 000765 000024 00000002051 12651754051 023377 0ustar00davidstaff000000 000000 { "description": "Multiple mongoses", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Mongos" }, "b:27017": { "setName": null, "type": "Mongos" } }, "setName": null, "topologyType": "Sharded" }, "responses": [ [ "a:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ], [ "b:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ] ] } ], "uri": "mongodb://a,b" } MongoDB-v1.2.2/t/data/SDAM/sharded/multiple_mongoses.yml000644 000765 000024 00000001403 12651754051 023227 0ustar00davidstaff000000 000000 description: "Multiple mongoses" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }], ["b:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/sharded/non_mongos_removed.json000644 000765 000024 00000002023 12651754051 023526 0ustar00davidstaff000000 000000 { "description": "Non-Mongos server in sharded cluster", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Mongos" } }, "setName": null, "topologyType": "Sharded" }, "responses": [ [ "a:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ], [ "b:27017", { "hosts": [ "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a,b" } MongoDB-v1.2.2/t/data/SDAM/sharded/non_mongos_removed.yml000644 000765 000024 00000001311 12651754051 023355 0ustar00davidstaff000000 000000 description: "Non-Mongos server in sharded cluster" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }], ["b:27017", { ok: 1, ismaster: true, hosts: ["b:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/sharded/normalize_uri_case.json000644 000765 000024 00000001117 12651754051 023506 0ustar00davidstaff000000 000000 { "description": "Normalize URI case", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": null, "topologyType": "Unknown" }, "responses": [] } ], "uri": "mongodb://A,B" } MongoDB-v1.2.2/t/data/SDAM/sharded/normalize_uri_case.yml000644 000765 000024 00000000731 12651754051 023337 0ustar00davidstaff000000 000000 description: "Normalize URI case" uri: "mongodb://A,B" phases: [ { responses: [ ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", setName: } } ] MongoDB-v1.2.2/t/data/SDAM/rs/discover_arbiters.json000644 000765 000024 00000002052 12651754051 022356 0ustar00davidstaff000000 000000 { "description": "Discover arbiters", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", 
"topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "arbiters": [ "b:27017" ], "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/discover_arbiters.yml000644 000765 000024 00000001346 12651754051 022213 0ustar00davidstaff000000 000000 description: "Discover arbiters" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], arbiters: ["b:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/discover_passives.json000644 000765 000024 00000004104 12651754051 022400 0ustar00davidstaff000000 000000 { "description": "Discover passives", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "passives": [ "b:27017" ], "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": "rs", "type": "RSSecondary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017" ], "ismaster": false, "ok": 1, "passive": true, "passives": [ "b:27017" ], "secondary": true, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/discover_passives.yml000644 000765 000024 00000002715 12651754051 022236 0ustar00davidstaff000000 000000 description: "Discover passives" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], passives: ["b:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, passive: true, hosts: ["a:27017"], passives: ["b:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/discover_primary.json000644 000765 000024 00000001773 12651754051 022237 0ustar00davidstaff000000 000000 { "description": "Replica set discovery from primary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/discover_primary.yml000644 000765 000024 00000001327 12651754051 022062 0ustar00davidstaff000000 000000 description: "Replica set discovery from primary" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017", "b:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] 
MongoDB-v1.2.2/t/data/SDAM/rs/discover_secondary.json000644 000765 000024 00000002052 12651754051 022532 0ustar00davidstaff000000 000000 { "description": "Replica set discovery from secondary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": "rs", "type": "RSSecondary" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ] ] } ], "uri": "mongodb://b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/discover_secondary.yml000644 000765 000024 00000001377 12651754051 022373 0ustar00davidstaff000000 000000 description: "Replica set discovery from secondary" uri: "mongodb://b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"] }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/discovery.json000644 000765 000024 00000011734 12651754051 020663 0ustar00davidstaff000000 000000 { "description": "Replica set discovery", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSSecondary" }, "b:27017": { "setName": null, "type": "Unknown" }, "c:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017", "c:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSSecondary" }, "b:27017": { "setName": "rs", "type": "RSSecondary" }, "c:27017": { "setName": null, "type": "Unknown" }, "d:27017": { "setName": null, "type": "PossiblePrimary" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "b:27017", "c:27017", "d:27017" ], "ismaster": false, "ok": 1, "primary": "d:27017", "secondary": true, "setName": "rs" } ] ] }, { "outcome": { "servers": { "b:27017": { "setName": "rs", "type": "RSSecondary" }, "c:27017": { "setName": null, "type": "Unknown" }, "d:27017": { "setName": "rs", "type": "RSPrimary" }, "e:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "d:27017", { "hosts": [ "b:27017", "c:27017", "d:27017", "e:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "b:27017": { "setName": "rs", "type": "RSSecondary" }, "c:27017": { "setName": "rs", "type": "RSSecondary" }, "d:27017": { "setName": "rs", "type": "RSPrimary" }, "e:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "c:27017", { "hosts": [ "a:27017", "b:27017", "c:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/discovery.yml000644 000765 000024 00000007537 12651754051 020521 0ustar00davidstaff000000 000000 description: "Replica set discovery" uri: "mongodb://a/?replicaSet=rs" phases: [ # At first, a, b, and c are secondaries. 
{ responses: [ ["a:27017", { ok: 1, ismaster: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017", "c:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "Unknown", setName: }, "c:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } }, # Admin removes a, adds a high-priority member d which becomes primary. { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, setName: "rs", primary: "d:27017", hosts: ["b:27017", "c:27017", "d:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "PossiblePrimary", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } }, # Primary responds. { responses: [ ["d:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["b:27017", "c:27017", "d:27017", "e:27017"] }] ], outcome: { # e is new. servers: { "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "RSPrimary", setName: "rs" }, "e:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, # Stale response from c. { responses: [ ["c:27017", { ok: 1, ismaster: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017", "c:27017"] }] ], outcome: { # We don't add a back. # We don't remove e. servers: { "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "RSSecondary", setName: "rs" }, "d:27017": { type: "RSPrimary", setName: "rs" }, "e:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/equal_electionids.json000644 000765 000024 00000003717 12651754051 022347 0ustar00davidstaff000000 000000 { "description": "New primary with equal electionId", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "setVersion": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000001" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ], [ "b:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/equal_electionids.yml000644 000765 000024 00000002511 12651754051 022166 0ustar00davidstaff000000 000000 description: "New primary with equal electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # A and B claim to be primaries, with equal electionIds. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }], ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], # No choice but to believe the latter response. 
outcome: { servers: { "a:27017": { type: "Unknown", setName: , setVersion: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/ghost_discovered.json000644 000765 000024 00000001540 12651754051 022201 0ustar00davidstaff000000 000000 { "description": "Ghost discovered", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "RSGhost" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", { "ismaster": false, "isreplicaset": true, "ok": 1 } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/ghost_discovered.yml000644 000765 000024 00000001221 12651754051 022025 0ustar00davidstaff000000 000000 description: "Ghost discovered" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, ismaster: false, isreplicaset: true }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSGhost", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/hosts_differ_from_seeds.json000644 000765 000024 00000001474 12651754051 023541 0ustar00davidstaff000000 000000 { "description": "Host list differs from seeds", "phases": [ { "outcome": { "servers": { "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/hosts_differ_from_seeds.yml000644 000765 000024 00000001111 12651754051 023355 0ustar00davidstaff000000 000000 description: "Host list differs from seeds" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["b:27017"] }] ], outcome: { servers: { "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/member_reconfig.json000644 000765 000024 00000003277 12651754051 022002 0ustar00davidstaff000000 000000 { "description": "Member removed by reconfig", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/member_reconfig.yml000644 000765 000024 00000002310 12651754051 021615 0ustar00davidstaff000000 000000 description: "Member removed by reconfig" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017", "b:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017"] }] ], outcome: { 
servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/member_standalone.json000644 000765 000024 00000002541 12651754051 022327 0ustar00davidstaff000000 000000 { "description": "Member brought up as standalone", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" } }, "setName": null, "topologyType": "Unknown" }, "responses": [ [ "b:27017", { "ismaster": true, "ok": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a,b" } MongoDB-v1.2.2/t/data/SDAM/rs/member_standalone.yml000644 000765 000024 00000001732 12651754051 022160 0ustar00davidstaff000000 000000 description: "Member brought up as standalone" uri: "mongodb://a,b" phases: [ { responses: [ ["b:27017", { ok: 1, ismaster: true }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", setName: } }, { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/new_primary.json000644 000765 000024 00000003545 12651754051 021211 0ustar00davidstaff000000 000000 { "description": "New primary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/new_primary.yml000644 000765 000024 00000002470 12651754051 021035 0ustar00davidstaff000000 000000 description: "New primary" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017", "b:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017", "b:27017"] }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/new_primary_new_electionid.json000644 000765 000024 00000007636 12651754051 024266 0ustar00davidstaff000000 000000 { "description": "New primary with greater setVersion and electionId", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": "000000000000000000000001" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": 
"000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "electionId": { "$oid": "000000000000000000000002" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/new_primary_new_electionid.yml000644 000765 000024 00000005266 12651754051 024113 0ustar00davidstaff000000 000000 description: "New primary with greater setVersion and electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # B is elected. { responses: [ ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # A still claims to be primary but it's ignored. 
{ responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/new_primary_new_setversion.json000644 000765 000024 00000007617 12651754051 024347 0ustar00davidstaff000000 000000 { "description": "New primary with greater setVersion", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": "000000000000000000000001" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000001" }, "setName": "rs", "setVersion": 2, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 2 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 2, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/new_primary_new_setversion.yml000644 000765 000024 00000005276 12651754051 024176 0ustar00davidstaff000000 000000 description: "New primary with greater setVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # RS is reconfigured and B is elected. { responses: [ ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # A still claims to be primary but it's ignored. 
{ responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/new_primary_wrong_set_name.json000644 000765 000024 00000003304 12651754051 024271 0ustar00davidstaff000000 000000 { "description": "New primary with wrong setName", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "wrong" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/new_primary_wrong_set_name.yml000644 000765 000024 00000002614 12651754051 024124 0ustar00davidstaff000000 000000 description: "New primary with wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary is discovered normally, and tells us about server B. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, # B is actually the primary of another replica set. It's removed, and # topologyType remains ReplicaSetWithPrimary. 
{ responses: [ ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "wrong" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/non_rs_member.json000644 000765 000024 00000001206 12651754051 021472 0ustar00davidstaff000000 000000 { "description": "Non replicaSet member responds", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", { "ok": 1 } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/non_rs_member.yml000644 000765 000024 00000000736 12651754051 021331 0ustar00davidstaff000000 000000 description: "Non replicaSet member responds" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/normalize_case.json000644 000765 000024 00000002454 12651754051 021646 0ustar00davidstaff000000 000000 { "description": "Replica set case normalization", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" }, "c:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "arbiters": [ "C:27017" ], "hosts": [ "A:27017" ], "ismaster": true, "ok": 1, "passives": [ "B:27017" ], "setName": "rs" } ] ] } ], "uri": "mongodb://A/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/normalize_case.yml000644 000765 000024 00000001623 12651754051 021473 0ustar00davidstaff000000 000000 description: "Replica set case normalization" uri: "mongodb://A/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["A:27017"], passives: ["B:27017"], arbiters: ["C:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: }, "c:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/null_election_id.json000644 000765 000024 00000012756 12651754051 022171 0ustar00davidstaff000000 000000 { "description": "Primaries with and without electionIds", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "c:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017", "c:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "c:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "electionId": { "$oid": "000000000000000000000002" }, "hosts": [ "a:27017", "b:27017", "c:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": 
"rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "c:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017", "c:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": "rs", "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "c:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "c:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017", "c:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/null_election_id.yml000644 000765 000024 00000010032 12651754051 022002 0ustar00davidstaff000000 000000 description: "Primaries with and without electionIds" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A has no electionId. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017", "c:27017"], setVersion: 1, setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # B is elected, it has an electionId. { responses: [ ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017", "c:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # A still claims to be primary, no electionId, we have to trust it. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017", "c:27017"], setVersion: 1, setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # But we remember B's electionId, so when we finally hear from C # claiming it is primary, we ignore it due to its outdated electionId { responses: [ ["c:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017", "c:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { # Still primary. 
"a:27017": { type: "RSPrimary", setName: "rs", electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_becomes_standalone.json000644 000765 000024 00000002256 12651754051 024243 0ustar00davidstaff000000 000000 { "description": "Primary becomes standalone", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": {}, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "ok": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_becomes_standalone.yml000644 000765 000024 00000001517 12651754051 024072 0ustar00davidstaff000000 000000 description: "Primary becomes standalone" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["a:27017", { ok: 1 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_changes_set_name.json000644 000765 000024 00000002550 12651754051 023676 0ustar00davidstaff000000 000000 { "description": "Primary changes setName", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": {}, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "wrong" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_changes_set_name.yml000644 000765 000024 00000002056 12651754051 023527 0ustar00davidstaff000000 000000 description: "Primary changes setName" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary is discovered normally. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, # Primary changes its setName. Remove it and change the topologyType. 
{ responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "wrong" }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_disconnect.json000644 000765 000024 00000002424 12651754051 022544 0ustar00davidstaff000000 000000 { "description": "Disconnected from primary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", {} ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_disconnect.yml000644 000765 000024 00000001642 12651754051 022375 0ustar00davidstaff000000 000000 description: "Disconnected from primary" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["a:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_disconnect_electionid.json000644 000765 000024 00000014215 12651754051 024744 0ustar00davidstaff000000 000000 { "description": "Disconnected from primary, reject primary with stale electionId", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ], [ "b:27017", { "electionId": { "$oid": "000000000000000000000002" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", {} ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": "000000000000000000000003" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000003" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": 
"000000000000000000000002" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "setName": "rs", "type": "RSSecondary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs", "setVersion": 2 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_disconnect_electionid.yml000644 000765 000024 00000010265 12651754051 024575 0ustar00davidstaff000000 000000 description: "Disconnected from primary, reject primary with stale electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # A is elected, then B. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }], ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # Disconnected from B. { responses: [ ["b:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs", } }, # A still claims to be primary but it's ignored. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs", } }, # Now A is re-elected. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000003"} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000003"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # B comes back as secondary. 
{ responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_disconnect_setversion.json000644 000765 000024 00000014215 12651754051 025026 0ustar00davidstaff000000 000000 { "description": "Disconnected from primary, reject primary with stale setVersion", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": { "$oid": "000000000000000000000001" }, "setName": "rs", "setVersion": 2, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ], [ "b:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 2 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", {} ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 2, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000002" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 2 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": "000000000000000000000002" }, "setName": "rs", "setVersion": 2, "type": "RSPrimary" }, "b:27017": { "setName": "rs", "type": "RSSecondary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs", "setVersion": 2 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_disconnect_setversion.yml000644 000765 000024 00000010306 12651754051 024653 0ustar00davidstaff000000 000000 description: "Disconnected from primary, reject primary with stale setVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # A is elected, then B after a reconfig. 
{ responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }], ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # Disconnected from B. { responses: [ ["b:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs", } }, # A still claims to be primary but it's ignored. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs", } }, # Now A is re-elected. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # B comes back as secondary. { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"} }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_mismatched_me.json000644 000765 000024 00000002052 12651754051 023207 0ustar00davidstaff000000 000000 { "description": "Primary mismatched me", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "localhost:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "me": "a:27017", "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://localhost:27017/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_mismatched_me.yml000644 000765 000024 00000002046 12651754051 023042 0ustar00davidstaff000000 000000 { "description": "Primary mismatched me", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "localhost:27017", { "me": "a:27017", "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://localhost:27017/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_to_no_primary_mismatched_me.json000644 000765 000024 00000003720 12651754051 026153 0ustar00davidstaff000000 000000 { "description": "Primary to no primary with mismatched me", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" 
}, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "me": "a:27017", "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "c:27017": { "setName": null, "type": "Unknown" }, "d:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "c:27017", "d:27017" ], "ismaster": true, "me": "c:27017", "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_to_no_primary_mismatched_me.yml000644 000765 000024 00000002620 12651754051 026001 0ustar00davidstaff000000 000000 description: "Primary to no primary with mismatched me" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], me: "a:27017", setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["c:27017", "d:27017"], me : "c:27017", setName: "rs" }] ], outcome: { servers: { "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/primary_wrong_set_name.json000644 000765 000024 00000001234 12651754051 023420 0ustar00davidstaff000000 000000 { "description": "Primary wrong setName", "phases": [ { "outcome": { "servers": {}, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "wrong" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/primary_wrong_set_name.yml000644 000765 000024 00000000705 12651754051 023252 0ustar00davidstaff000000 000000 description: "Primary wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "wrong" }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/response_from_removed.json000644 000765 000024 00000003141 12651754051 023247 0ustar00davidstaff000000 000000 { "description": "Response from removed server", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/response_from_removed.yml000644 000765 000024 00000002174 12651754051 023104 0ustar00davidstaff000000 000000 description: "Response from removed server" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, ismaster: false, 
secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/rsother_discovered.json000644 000765 000024 00000003373 12651754051 022551 0ustar00davidstaff000000 000000 { "description": "RSOther discovered", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSOther" }, "b:27017": { "setName": "rs", "type": "RSOther" }, "c:27017": { "setName": null, "type": "Unknown" }, "d:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hidden": true, "hosts": [ "c:27017", "d:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ], [ "b:27017", { "hosts": [ "c:27017", "d:27017" ], "ismaster": false, "ok": 1, "secondary": false, "setName": "rs" } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/rsother_discovered.yml000644 000765 000024 00000002352 12651754051 022375 0ustar00davidstaff000000 000000 description: "RSOther discovered" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: false, secondary: true, hidden: true, hosts: ["c:27017", "d:27017"], setName: "rs" }], ["b:27017", { ok: 1, ismaster: false, secondary: false, hosts: ["c:27017", "d:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSOther", setName: "rs" }, "b:27017": { type: "RSOther", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/sec_not_auth.json000644 000765 000024 00000002650 12651754051 021324 0ustar00davidstaff000000 000000 { "description": "Secondary's host list is not authoritative", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": "rs", "type": "RSSecondary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ], [ "b:27017", { "hosts": [ "b:27017", "c:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "rs" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/sec_not_auth.yml000644 000765 000024 00000001726 12651754051 021157 0ustar00davidstaff000000 000000 description: "Secondary's host list is not authoritative" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, setName: "rs", hosts: ["a:27017", "b:27017"] }], ["b:27017", { ok: 1, ismaster: false, secondary: true, setName: "rs", hosts: ["b:27017", "c:27017"] }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/secondary_mismatched_me.json000644 000765 000024 00000002055 12651754051 023516 0ustar00davidstaff000000 000000 { "description": "Secondary mismatched me", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "localhost:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "me": "a:27017", "ok": 1, "setName": "rs" } ] ] } ], "uri": 
"mongodb://localhost:27017/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/secondary_mismatched_me.yml000644 000765 000024 00000002051 12651754051 023342 0ustar00davidstaff000000 000000 { "description": "Secondary mismatched me", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "localhost:27017", { "me": "a:27017", "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "ok": 1, "setName": "rs" } ] ] } ], "uri": "mongodb://localhost:27017/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/secondary_wrong_set_name.json000644 000765 000024 00000001313 12651754051 023722 0ustar00davidstaff000000 000000 { "description": "Secondary wrong setName", "phases": [ { "outcome": { "servers": {}, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "wrong" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/secondary_wrong_set_name.yml000644 000765 000024 00000000755 12651754051 023563 0ustar00davidstaff000000 000000 description: "Secondary wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017"], setName: "wrong" }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/secondary_wrong_set_name_with_primary.json000644 000765 000024 00000003441 12651754051 026524 0ustar00davidstaff000000 000000 { "description": "Secondary wrong setName with primary", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" }, "b:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "wrong" } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/secondary_wrong_set_name_with_primary.yml000644 000765 000024 00000002406 12651754051 026354 0ustar00davidstaff000000 000000 description: "Secondary wrong setName with primary" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "wrong" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/setversion_without_electionid.json000644 000765 000024 00000004343 12651754051 025035 0ustar00davidstaff000000 000000 { "description": "setVersion is ignored if there is no electionId", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": "rs", "setVersion": 2, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": 
"Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 2 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "electionId": null, "setName": "rs", "setVersion": 1, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/setversion_without_electionid.yml000644 000765 000024 00000003347 12651754051 024670 0ustar00davidstaff000000 000000 description: "setVersion is ignored if there is no electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # B is elected, its setVersion is older but we believe it anyway, because # setVersion is only used in conjunction with electionId. { responses: [ ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/stepdown_change_set_name.json000644 000765 000024 00000002654 12651754051 023700 0ustar00davidstaff000000 000000 { "description": "Primary becomes a secondary with wrong setName", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": "rs", "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": true, "ok": 1, "setName": "rs" } ] ] }, { "outcome": { "servers": {}, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "a:27017", { "hosts": [ "a:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "wrong" } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/stepdown_change_set_name.yml000644 000765 000024 00000002207 12651754051 023522 0ustar00davidstaff000000 000000 description: "Primary becomes a secondary with wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary is discovered normally. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017"], setName: "rs" }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs" } }, # Primary changes its setName and becomes secondary. # Remove it and change the topologyType. 
{ responses: [ ["a:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017"], setName: "wrong" }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/unexpected_mongos.json000644 000765 000024 00000001062 12651754051 022373 0ustar00davidstaff000000 000000 { "description": "Unexpected mongos", "phases": [ { "outcome": { "servers": {}, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", { "ismaster": true, "msg": "isdbgrid", "ok": 1 } ] ] } ], "uri": "mongodb://b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/unexpected_mongos.yml000644 000765 000024 00000000630 12651754051 022223 0ustar00davidstaff000000 000000 description: "Unexpected mongos" uri: "mongodb://b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, ismaster: true, msg: "isdbgrid" }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/SDAM/rs/use_setversion_without_electionid.json000644 000765 000024 00000007036 12651754051 025713 0ustar00davidstaff000000 000000 { "description": "Record max setVersion, even from primary without electionId", "phases": [ { "outcome": { "servers": { "a:27017": { "electionId": { "$oid": "000000000000000000000001" }, "setName": "rs", "setVersion": 1, "type": "RSPrimary" }, "b:27017": { "electionId": null, "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000001" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "setName": "rs", "setVersion": 2, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 2 } ] ] }, { "outcome": { "servers": { "a:27017": { "electionId": null, "setName": null, "type": "Unknown" }, "b:27017": { "setName": "rs", "setVersion": 2, "type": "RSPrimary" } }, "setName": "rs", "topologyType": "ReplicaSetWithPrimary" }, "responses": [ [ "a:27017", { "electionId": { "$oid": "000000000000000000000002" }, "hosts": [ "a:27017", "b:27017" ], "ismaster": true, "ok": 1, "setName": "rs", "setVersion": 1 } ] ] } ], "uri": "mongodb://a/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/use_setversion_without_electionid.yml000644 000765 000024 00000005203 12651754051 025535 0ustar00davidstaff000000 000000 description: "Record max setVersion, even from primary without electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A has setVersion and electionId, tells us about B. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # Reconfig the set and elect B, it has a new setVersion but no electionId. 
{ responses: [ ["b:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } }, # Delayed response from A, reporting its reelection. Its setVersion shows # the election preceded B's so we ignore it. { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", } } ] MongoDB-v1.2.2/t/data/SDAM/rs/wrong_set_name.json000644 000765 000024 00000001607 12651754051 021661 0ustar00davidstaff000000 000000 { "description": "Wrong setName", "phases": [ { "outcome": { "servers": { "a:27017": { "setName": null, "type": "Unknown" } }, "setName": "rs", "topologyType": "ReplicaSetNoPrimary" }, "responses": [ [ "b:27017", { "hosts": [ "b:27017", "c:27017" ], "ismaster": false, "ok": 1, "secondary": true, "setName": "wrong" } ] ] } ], "uri": "mongodb://a,b/?replicaSet=rs" } MongoDB-v1.2.2/t/data/SDAM/rs/wrong_set_name.yml000644 000765 000024 00000001160 12651754051 021503 0ustar00davidstaff000000 000000 description: "Wrong setName" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["b:27017", "c:27017"], setName: "wrong" }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", setName: "rs" } } ] MongoDB-v1.2.2/t/data/gridfs/img.png000644 000765 000024 00004734642 12651754051 017367 0ustar00davidstaff000000 000000 PNG  IHDR{OGsRGBbKGD pHYsuqtIME :@<tEXtCommentCreated with GIMPW IDATxײ$[r%[Nqd麷D5f͟ T4C\QWT*|0FCTYYS;|/w_ky_~%cDDL }J xǧ;9{BNlӭ뺮~TUB JnRjBkm!/\7o~ߦoe2Bզ&U%y1}sCcL fuQ9MLsaJI)9g)˗qZ-!}_UlZ"ak޽3t:˼1{8=dKD}wl6R6M$Pj$28}Lܬ.znb8t֚qB Xem@0E\1810ey||9WJ18mSO<[, c0)qu69g}|zǜQUUEUk!s@$CJiDv4]]]UwU׸^R2"6%&9绻;yU1檪ιcL)% ){`4U߷91wݗ/_|wӧO1vvvn~0U%{)OfC z臌@#$%RmۂeʯoܭVmքTVu[hik)eΙsZ !xcJ3J0ò,=z6H9t:",nMϿ~`]}_Uu8/xtIɵRRbֺ֛C !Dc %cv?O?ĹxuoAr2NzD.J!ᜳ-;iw?;1&!UtLeK QjZQLQh4Myݯ?zw{7Mj i[on϶D@0M9~Vm 1tss, g<8/u,pR+)r1D (! }S"@p:aXVmpΝs?_?b 2޾}+,;9,Suq88WjcSLsDZ뺿Srۿ[?|uu3}fu?<tC8!NdsZ0t}ٶ˲R$sú)囷߽{#xO[":N?/x|W^O*Y`Y&FT`딒s9Rʺq44McLznחM,yYlJj9+jRiJQZH)1_R3/B9s)17Z}ԵR<R 1u %y~z|/Z6ۍ 1΁AJ,x"B#^02r&<{,iq>N 4ZۯvB(,a)'b(O=kcc\it˿> Y3朗e􏏏w뫋 OO>2x an4tÇw?|?|l8|}u1Zʻ3ƌ1J9eYrί^^ϧXn4cl^"2uUuU5U?94Я ApH)c!̀+WvSmGZ,pSo>|2Zq1{͛1_˧޷mTZwm ;|l_@4߿~痛KJ1zڬeYkU- - !,CZﭵ.{RvM۶1FIdcH)BwwOCr9Lp.0y3%0`}~ZkVk%> |{gg޽[ `Yy&LP*bD K)""p&Rg9sC&ez-1RIYnZ2i(rSFd5M5f4HJp%EJ)b֞Ɖw~\z,ƘV@<1θ,.c жmI4aƦi...[bB!~^FXsiqիٶdy\o6K(b9ʜX:yA:z*(!p$kS*8וּ!˲L]W^SJ9`B FHH 4Oa{e\p"Rj)uy19R2ƺRr!ޅiCH9 8GBJ(ms&-%N+w-BDDaY,R*SksN1 !t.$9nibR(D䜘CJ/z^'"PRHYx8zիo·餔ZV_|Vge4pbSJ݂`hnv臺̅"d(+p ՌCֻ91BЧ瘖y>TH!ۺ\H!uRʳ`8MSRN>[Я//`\ܲx} Hfh[Sʈ tBH)ri#ǧ8McY4muÀOr8T4*{4u]h2S9ǜ\RL!8)-%qWsag44Z!a6هya=p8ȅjۚs*+^k-WU}iCR*!𯎜sakز,˲Vғqv^cupKMID!ڶRpRJD$3# O?}0 ۻ;lY۶-c+UDZM*em۶RBDŒB)0H1R*mvIl}98"@|^R{_ ֜s9[Q ! !ʲ &|R0P6,8oR%;.,<O?ßoDizRZ#ct<oVߤ.1juq(Vj8{]ۦ] 9/T:)~1J)Ws{vxC]ץֹZcv~>:yyֲֶڜSvUp-vG")!0&lfn9!RJbq䜟WWWFJRFȥ,9ouUJP)  `F95R#b̘B PUk!>+!bJ3yS3asJ9S8ȘLD 1 &b2*. 
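The two setVersion/electionId fixtures above describe how a stale primary is detected during server discovery: a primary's claim is discarded only when it reports an electionId and its (setVersion, electionId) pair is older than the maximum already recorded, while the maximum setVersion is still tracked even when a primary sends no electionId. The plain-Perl sketch below only illustrates that rule as the fixtures state it; consider_primary and %max_seen are hypothetical names invented for the illustration (not the driver's internal API), and electionId values are compared here as hex strings rather than as BSON ObjectIds.

#!/usr/bin/env perl
use strict;
use warnings;

# Highest (setVersion, electionId) pair reported by any primary so far.
my %max_seen = ( set_version => undef, election_id => undef );

sub consider_primary {
    my ( $set_version, $election_id ) = @_;

    # Reject a primary claim as stale only when it carries an electionId and a
    # newer pair has already been recorded; claims without an electionId are
    # never rejected on this basis (setversion_without_electionid.yml).
    if (   defined $election_id
        && defined $set_version
        && defined $max_seen{set_version}
        && defined $max_seen{election_id} )
    {
        return 0 if $set_version < $max_seen{set_version};
        return 0
          if $set_version == $max_seen{set_version}
          && $election_id lt $max_seen{election_id};
    }

    # Record the max setVersion even from a primary that sent no electionId
    # (use_setversion_without_electionid.yml).
    $max_seen{set_version} = $set_version
      if defined $set_version
      && ( !defined $max_seen{set_version} || $set_version > $max_seen{set_version} );
    $max_seen{election_id} = $election_id if defined $election_id;

    return 1;
}

# The three primary claims from use_setversion_without_electionid.json:
print consider_primary( 1, "000000000000000000000001" ) ? "accept\n" : "stale\n";    # accept
print consider_primary( 2, undef )                      ? "accept\n" : "stale\n";    # accept
print consider_primary( 1, "000000000000000000000002" ) ? "accept\n" : "stale\n";    # stale

Run under perl, the three checks print accept, accept, stale, which matches the outcomes recorded in the use_setversion_without_electionid fixture above: A's delayed report with setVersion 1 is ignored once B has claimed setVersion 2.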
MongoDB-v1.2.2/t/data/gridfs/img.png000644 000765 000024 00004734642 12651754051 017367 0ustar00davidstaff000000 000000
[binary data omitted: t/data/gridfs/img.png, a PNG test image used by the GridFS tests (tEXt comment "Created with GIMP"); the raw image bytes are not reproducible as text]
'!4*@B](%@TBJɮ"䠋 E,5Q/2\B !9Ih&RcRJF"2K6Er*Af'FV2:!n&%EK8<0 h@D|Hk-AiN@$D.t&MbD4ySc ^\G]:3meeI\ ,bHn/" \DŽD*e'$ґ('ebZsRK)"흓Kͣv!EOihÏ9B*Fo+jCtuu{kRʇz&5is9B@$Fr K0/&,2?`F=\ʅFVݞJ~S<$_U??x>wg9ys=A׿<Ț_/\~泟W~5M?.-'W~WRJo}[mN:/}o| o`b-k_G޵ʃ[]~ӛu: ?Xv~w'?y箯߅W׋uD u]u]^*"1(wr,q|R5Jl/UyB$p0r:ʳV9׶4$R + &@I&J*c7,lc>[0f+hVZimP!q$';IFR2zd V#j.)UkuҺRJT/U] v>MUգQcJ*',I,U #>R )"]BZice9_1V)%&S(wZd*k+["ER{pxdlu>a0"a1ʴzDG%]Qf wnY ;RH%(EΥ܉ΡΥ}(aL"N%\"x `Kg01$%)NC*P,q( qV 2`fcm<#R(' 0J4y;S@AB![U6h3%N9'y=sh)'N AJm0@(GNp!)AiˤUJڮZej!F [kDaiH\\sRJ>Tҋw.RkL-'{T9pN4kX瑱|j"e>&$eK~H&!IJ$$aPQ& Zޒ@JN;H @$2P.?L)qa`P)qȊcPcDɃHIWVVkau+%ZU /7x Z<r )2hdK-Im)'F瓫d@!HZW>WpީΑ8XU 9Î8{1>Y;kZ;N777Ǎs}wsh25u9ьh2VB,^{677WZWu=|9AmXN82ZZgHYm S5/~q}ҀO!D@I%\}hoUB3WO~YO?}}}iO{ڵ^{G%/yc>կ>sa{rtg񒗼{8_q{ _am) +A@g___:?-2ܽ{߻L׺@1H,vGL} IDATf$)HsW94Br)w`nRJ{J1#FU˹,$"y@BAJ,"Xv*h>y` J~1A]w@T"\;z,kSIO 2*$1EHzؒEE`^."3e(kmUyY|{P2wˉw*;cڮS2㺪Hİ/dGCuѥArLs e)DOSR+bY+++J]vM\H) PN~Dy ǻkFf4 p1W2K{}"q&u]b4+^THTvx(W⢰OJTzrٺn4h4﻾o۶m*A h%%k%޻RJkOS6pPHWru]K/fmH\Hhmqw9Kp!thQiE*kme$X__]OHUUa>꺪J,rs7 Amezbd1Ԛ SuJ{!u3joc1JYbk1UТLuf3 Y7/g&tB*r9ljGSB#%#% M!DRI'] 3jGkYff4ǖcf8cR<({j ѿY+ ѐTdʈL>y,Z$0l>kFf,8HLGTʵ(os5"'gDrum ιwʒшyD)=#'?%IA+zBLqIP4e(l^Rͨu|aEdE?r amSλs7߼{6VU圛][c{&d1d2Z][Y]]SF)}y뺮:(x?PGwUJϟ_eq?k,ə CEA"^r%'t3{BYg%{~|FܒT[.s9,R$=|3W/_w~q_fr>OIJ;/ .g?q{O|^V}XI>Xl绶|}~VDe:=TGZf!eL9KE6`B:ԕRF"c!Rh1#1sae Y$ؼG}($@:HU!_36$moHŽe (q, šLaEDbdA$ HyrLb( o)maH$CBB(iA12@MPc4czB{*sY85h'+M3roۮm={gy4fdUZ+Yi(/O~8$(@BUmMgVWVWWW'>EQyRAI,OD"efeeu4+䘓pAFн)&Q ǮdjT.)ɾw[ 5 }@-JFʛYS!R 90%aDKi89|a%vK:1F 3@\BFkE*%H)J\kR$IƘ7"sAHB0돊Vy9VM.D$m|0+eb!&n6yUWUUUtmLKc&1# [?ͤC9u-lZUUj%HCTgPDaN1޹:DjFB%sH®F6M>ы'HC!DDAX|hV+e e4PVjo`fmλ} PE"J|ιXBj@ʙ>PAPAJDH`N))rA #VbXF\ 2H  ߲Ǘ0[$"'_ E҇Bv2dLČCZc$'fK\.5Xh18%Fj*AnRCB f}6J̉Tئk6U#)E A-*ť,U%.7\9F| D[Uu1kGe˚j:Yknҷ](1F#EZyGqr^sD2Vsy0X)*I# eΰ0d 8,A+Ğcd<,-yx2!w?ܵWUg}k_C9dΝzg~.޳DX,]XRE }߶kC H&} V M[k`Es9u]nj3^r^餁܊u=$&J)JP׵͹OHC>8$/LRZAJrΞB,DX֢I(8Cr'sc1%G J@as?KIG9BwRȁ*[Ak#C~TDː(feA|d g]wڽ9ݔԅ({a4D,~RxzDFVڠtJC(/7QSJyiU=q4u*6FEFhrA[mv֭k[I]5ZL1qtscs޶ &y,( j|<ɁR\])wdS#P#A0Nx]ts6w{hu"4iH~Vm۾whyCerRȰ̒RhJ 4GNi͐ %N21|vĔ@+5V&VR 5nnYy7ٳg}|6K>Hr5v<O㺮X\uw޵kl6j:aMomvm׶m!8俧F S7x};Nꪫ_o#8oÏ~9䐇?_ڑ}{sz|sry[S[usΝ;o۷o}{wVםzlg~˧ݑtIzwQG>=fٹ{QGuQ|3}>Yy]iG;ڮmur `O)Jcb&Eh4nFM4uSUeJleDٴuA ȣ PҀp>1f1Ց4CDHQ"oɑ%R"7jd9 NeY@y,%H B) EpRHs%ɋgJ%Sj$G""T"Y4Jv8["G= yF9|.#UWZӌl6______f}߇#ܚeV+"sn9NCѨ5ߑ~R4h0"DLĪF0$ DPQBhf4M㕕-kM3R9Flgt6mwy^ޟ2r{ROd\* [8C>DfFFf=9Nos.xc[RCkJ^[r HJ麪Q#$Hs$8sVXu]7ufϏ+TeRJFd2iF !flJbyNǣѨijk6RT{ﻮͦ{v޽ݛmfvOŨE0"kd<^[][]]iT\:ˍq#EacfҺix4목뺮DZ[嫤!$Hw`,6heuummm2McE.bȚ 'H o{5!@[½S$0ŴJ%H޻lҴ.adD!J\DeS "yYBB䘃cC?JTi%M$נQmf pV2@ev ' QDm['DS@{g VR֒D(}}w!ƁĜm*Wc"҉LĀx+E\dVT42v:o[; >{\<9̆kPƒ-)F&[XcHD4LFZ9%֢5M 橲Rͦl6ϧm;wgsn:޽{>w5|"umΧ)Lts:m+C/$'[L^?N'>qF!D\Ї_;S>4`bowk֕rnݺ裏>>яkck敯|'}}+7O~yHw3BsxO9}{W_}|0)OOo|scǎawwW]u'{{//}铟w/x^x?|.[}w\p^tE_rgY]ԧ'|ɗ^z>Osu^wV~~.}__>kyްk׮o|W\qWݏ;vW^9͆}>疯+Q ޅ(c sXfN,rĒbJ̘CCkk$ 1Bd H+)%O)R}q){]yPFg;@IDUe5{`Hwdt7d.AK M!&((cv1RҼRZ` >H- .C S. Q#k Y=Hw*i#ϖK(t]osk rN"l= jul:[kWWVPET B>^!_h<p${'@ޘ4%#ĔSZƐwTIB8FgTu]KM"|<.4BiiE/%&F \Bar\P>B(1rZq)pZkCžsds.ޞc Hwh$X.pcD(E)R$;C _v "ĈJq@6f+/lEe1cf4+fbS1 u%)/G0*;ke%!Đr,6| eEԹ.tη"ucB >VQjl3 %ʹn}=_B${}߉gss-PsCQDynpS n!NHbO9Yk"]c,; 7k_yoo/| RϾqy x]zn?=nu/#zm7X-_1\ ַu!s1}C䶖=xG+^?^Dx3^-[Va^z>wh;uxKKp{c ~>_ԧ5ٹs!~O}~|)};3>l__^zOrݩ~E/K#Bz>~򓟔5UG>\x_|֭[e'|gj0SBm Y[mX)U4y1S00>"K4i)RJlN ;c Y"GٓA"a:cm1s@q3ǐji^ HIBLGR4!`)r (X w-]@SbLt!kEN. 
!@2LX\8 h9dq2{cxz)Sĕl%@a.bJ{Jy޻U-PR-Sصmo!QY=I\B,~h/C|1` r-d{K ̔cL,RB(R934u"DbN1Xh"s.b"LBH6%8fv4 k9&N`-[&ʮwZ&teNw@RBtR9p*aBJ919 dm/1 Ix=5 ԱPU]3^s5{쏿2LN;|gqe]v_җ?wַmnk߽Sz矿O7~7G?:ꨣGy+馛y7t]|]O To3<]A* m "Q&J "EAMB0AS)r9#8B;c6l@;;YY༝Pp hIXXn۶Z[o戊)!h ";Npq^ *Rι>@q$/W)J JJTRi1Hu*[sOCkle{9KF b!i»ػ^LHKiX@&I<h|>NlZmV:n( /|97ͦR$VE52L)̓12&PZ+XFEħ#:Rn\(cw1xd}2Y!"׻s}SA\QRӨR PrX@ɕ1*mq!ͺTTh/H#,d<Vh#!piC)ESS'dF$e-A̰ QD#b)rwRUew\i z*k\9cmxӍVv8@A /L׬5qN,?"h?裏?c9grtٹvX;wwy=kk{gm#9B'>`d9R.l2쳞3ߴ릝7yͳvcH!tsSSJxee뽌Z Nɤ5:'7I2/ riJ:@3uεm7|>9RVU5nJeBl6ؘM 2v4euJ.g^ "0iŮrZ7u- qKcniF=Q5 AFk*[)h$TD_!O璼^+g%x{׷|:nllٳgMv޵gϞ o766ͮ\߷|Ϟ=Bgm:'H躩꺪j[72Rd-k[m;'2D#tm?N٘!BUJ40o1FctS72ցy4cR]UftgϞ͍wHԌF+++x4 ,gP$UEP !1.u]gOL`2M WJȉCBKQDn8,9:]>3S C$^ B!䜋k5h7 ˠ!$Ht҂Q!$7wy6h,E45kkk/S#T9"cd(E޻Bp @)\Z 'aA*R;"*x`ז-[㱵:Ï<#WepI$s1bIIe&EnXLړ2cHĐYDXke>P*31&z @ Ami/s6LLH1-*""U&3YP!H$BIQ.&/K`!k+Hk0_u:K SiH;pr֦SN3en];\"]12 w!r׵zHZjIEeyV[-h!=W@HzzXJM,(Yw6xBhFMUYK1RQR ۋ Eβa (s%2?qb9[Rr̎(RN)mJk %*ШL3UU%E R :-y̦T;'ĠNe&[ IY5o `,O ~K9I)-R*$s{666{'43KFzhucM!QQ:D YD9ۮݜn]FY[MV&Z <D [F5_H!2Zbʜ i)x0w"жm4g"WWBod³`R@K1a"rR𝔳7FTLAd $;% pJ@$ jL9nBێalGQu}Ct-[l:B6nڴQ#)D!Q`[l/wmu_}߹J$w}N\{ޖJJyZumw #K}s//R:x X]N{__[NDvic=Nsꩧ^p@?"NNxy/Z:'\qE!7:U;{>XzWjߞrN~G_j}1coK/tgu_zea}ӛ9y=ܳn hgG?zYgQ=䓛9餓~>193{};Sw?OS_Ev~KF?Moz~-o?k4quo|{Gy_m>sq͚5ջUr1DY815a\HZ Lb H)EZ5\BF!B ,@s)ƮםQq@ШH-45MA5qX0Xϥԍ qHBRam]S+mVQքN1(YPEׁ Aa֤U~RQJk (.|蒴IP EtJRA|`bjzIЇb25֨ciE ר/y1Z뻎9 bw}|i/_4V= qr&(IX+Fa]7cJjjUXB6<0% q΁H~0T$tm;lf45mc"W(t:)$TI7U%  诔1ŔY#iJjbE!ԂV=AB\HS\|ypκvN @EȢɢI,Y߫90 CnG[q 1WOK> އS ~< cr)ŘdQ5\WI)0H4jG)n^KF驩hn~a!`VS>ȹJFj05OZ }$/jS=* VwdNQPmZDT^ΆY51%E31H]ȶ*(S՞IS5L&Ůn) ]-fŹIbrz߫)PIxQ3[g s,Ϧ<`s{/9+$3&zN1Z!0qM69KP ``4$=Q۔L$+N  *OsaaAҎuMXcSJU ^AƆ< nnvn܍x-s3}D\6c#*!H&~ay>;4/RL,!xQuI?i~['_a('~Yͣag~s'pqxTۿ֋wiF9}{5!dX ko!3W6`”R=5DN$"Rr^RZbiRt\F)*$d POcƶ`[p!ɵiZW}4 Dޭy7sU- qֹ1jHj*RTu?rDn6 L ID Mh ɥOZ_]iS2p*HV]@QAD-)gQTkEDؘƹ؎g}ׇIX״Q;jkNEL&@@ %GHMQh4.Wi+%Zfj %D\&>*DCqN9و")X$SSSqX( 7)r'$J$! (Ƭ*%SAND3R&"VeG*%%ucmap&F*Yjĉ8^>1U%<1YMsJs1C5Q/ ,17E&Ӓ5Ӽb"}zl}Yc;@ɥZ@ց!q}mjLT2YE!V%)D<m2^o!DzM1`هr6l>(T3tXÆe<"d7>{C &_.NuڲT׹"E`!`hM`F:_N!$(4:YuP% @Bʜ%bFJnI9.'jltBcj.C`03pY^}*s% 3Kl:A~JJ:s+:a=1FJPx@4n8kxΦ- )5(̰E]'c1}+b҂uVsvM9e%a}*LHjXSƍc])GE95YXXXט^P(t $Αw4njj*hm|߿ic?sq 5]蓅+a5W)!cJ!Dc3qBKBK(bR`)s}kxWgy6ls9y䑿5񘈚a&" MHYA%hBX3Huׅo4422 R6,<TPQ+1D2ZeK)Trk42 }A]WV/mMgPʚ \CcQvjzjJ}lBR%'u)qp(â*MD!H*_]\<\W)VGPK_ɐ iM(BO.+#r25i}kbYkG)e2#u(i@ lcАb{2"2&PsDH 0]khhXIj'B靦{` mq'U- +RKLkp1ĕCWQTʒ0PJEzMjɭ4 k1 X))LĀ P(QCD$SC{WY7+SPq!mdA.Q5ET':KWLcR"A@vU@8۴mudPch )~ \ \'{>AdӢ)?A+mfbkژr}rp - I Oxj9bPbj +3oGŬmѦUaΈ)%fvMPhD C7mk}cB@,CMU 7+=+#bIZKVf@%, B۴Zuk9C!ذNbg]b!(rw?"!)y!<kL ާ XT1p %"kQMߧ"DlkV 4Ko)q`Bm@bD,%sSkb  iGsm2e4hO4 )VKR笽2{/!rFD1 D-FҲ1EyAs"1RBx!!k@C1@Ym""št 3MTYuDz&nFsRu%*q1=gh#Hi#Rcqԭ|+׷VP@b@\QHK wU@∈ @ Dߠ[B"1%kQ 퀲)9\h{ߏcI&$=j2EC՚YeH%hj7"ͪjC8aCD7FIm 4I3 ޫcA Jx>FJDsu9%[X܆4ԂZ(Us25@DR`kM ź/)%U"w5gj2\t9J'@DуjvKMF)iFmu]YEj<,1[c9Ĥ@ r!bkQ5ʟSxs+2@ZB Qhb $JqXPl19I>q:z*kLT+zpbI$5RLũJ5hE cP8+H+=.,b%)V+Ռ 5ZvUgWJ8X_ID aᡤ`YjV:v-sln@D}ͳHL&db 0SJkip)Po'Šޯkx>,%!b mi}Jƹm5n̦TEN*a8;;7?5!m[lif2ƬX|4j6l==??[Ein/}lɘvLD>X)%\Kx)@Ē%(zYYQgȆYKci,GX[=:+^I35]EkTm׫͇vuW _kƤbhkN֚iBVX"T 3 c]J?(&ţ& AW 5 *vα1{/Kw{T;x|yx1q˞˾?|/Bƙ4Ej0XRWwP/"xME2} 1Ph"dZ Egm?9rmJGN eR\Zz.l5Kڶ5䔼++kO.R*k͔%Ԙ֢{ψ4-!b y0Ҙu9ghZET)ꬊF%yb.ƀp5Wx+h\cUW{Ch^wjhE.'٘m˒@60ma~v.=,jʨ}1 )YL %B8IQaARڟkzc . 
uӬ*zZԂkNQ]rJg'(1E9)t9j4MJ]Tn5~"*Gg9hQYk&sFc a'UB-nCL R9WŘ}5W6S4T4A%FMa:۶*j}kšiD 9, 4ژx,NE ,"JyId;\DTDOjQS>M2)bJ9)UK%F@ncRJ @$"HNX`D }@8k1,)&֠*"G&=ssYD ՘&VcD(+y{E%ŨN&Bߚ8s xBL)xEuq~<眙m2f@ BUU,/TuՐS@t:J> k +RRBDBŖHR"@%`L2XD HL9IJZ22.nPHb\d$UXvFE%5haiR̊mh\B3ƥSLhfffzz&x?7?uݸw*&UK"Xa8xa^21a$(*ck\:*Rﻮ[۶d4j1\m&³#BBHN4777=54M8E$klA073lPN1 LOeZcVQ*! Z } Ӹ^ŝ7FK1Qkf2~^DN1ͭL-5vf[b3Y ?ovn|$g\6vac֭rh4b>X8W]}Xo B̑P>3 HsqCO(D_gSL\yӏBIE]d4V_sw~S۹Kn]뻟_s;kX|^=zM^Jf(VՎ k?9k+Z6XKWX@9 FZ֞ TBEOruD @DUe[-vP@f$?v-/}0 cc/`9k glUWgjK-DbkAͳ"AR1Fc AILT\jCO0CDB@0ȈYPGSVj+Scb }P.u- (iOj$Դ4kFQFľE;s59 ᥪ4k4YU+13i|9ki0^5f?j}nnND%F ]ZݝgggzS#5m;g$c|!ueGR,u^mMU^C D81R%쨭F@Mbq6C3}2&:!u)w`D%d9a#).$"̤G}0j/ `V+%e+%Ma %MR$ w xY@9EIU ;q*bQ\Bgꋈ{?/XcjڶM)-/,笵nk/4Lw FYrWDlSYb]%MYaY,uRT9'Ad)]FS#͝Qq!Y״ }kݱ;S]}2 0a% 2j("1ƨ:331{"26Sm4J;!^03DDyaa!l57zUc@shdV&D2EV]ƫ`0,4Q&3t]7שF^rqjJ5ܕSL0Zǣ=at-.p<UWbYHRzEK0Jk4K{u 7jOWH55tݏV>a;W7BX8|XsFcKx@ѫcY쟎[/?s/i;>oMglaεygv3'}Bc οW+_{kmasw^ya' iV>F[ 5g7_3v#^+uևJ8v]KNNf}߱WVĔ2,R_nWGo+/WJWo6=GnN{n Hl>a\tM{߼U{ Goqx糶"r6sL" @mj,h4XB'o Noͨ#+xnإ _fp=Ns ?W!XMQfՈpښ5SӴ3393Ky~2曆Ow6`陏Xަ=vw7sׯ^+˅w޷o8*Dx =vK p$@KB秔Oo T"x#z}pi(ړ6!x)ŧ[f=lLɣZ0EaّR$&\B4*A"2Dl =/̩rc6sj~@e(@YF-U:"JmB۶Xh)j6a m=,Y1ʪ9{|{C κRPIFC(4V2Rʒ3),fsM9R՝A/+9dz1cm\lk Y PzqDr4af9t\8X Y 24R4Z\-*x9jS)G1$JI0 bn5C5Ob+n DD-xcYݜTEL* fÌ7a6YD%ư1>(¬!ƒQ@%Z}+lbT Py2Tnc !r lLXL+vk"lܸqE6JbFY1Xq#&R$P߿ZgeM7["&fS;+w! o>~%o~zRa4~Y/O?N8|n~s=N˖knB9m?:?;h>!SxW]{8v;/7Kx0_0Gz|VVv^zӥoy9KIG;էo3ͽݽ_[suΙz~Ql9}|zCЂ{YXs:g|/o j=Vn7B!9}v;}ˡo5@'p+K%nL$%|ROz>Er2Ր<Rwucr?qM'owVɫ?v.֗=ɟuWG\9!u)pcny.O!=?}Ɩc}@+b0ޥ~NXWm«篯p 2M IDAT ^p5'3Kٹv!yýNOO~^.:_v+v]v䭷I;~^'?x;5iO|bhEX/}")M%5Ua@x>kO\{eZhA]jN"9>|wewep7z>|u'*&k=u͡9rś5ο?xN1D%5LHop!D+C(4 )'kL ^+#km (֞L *!"5V,҉qu!"AJ@a D! 6蜊G%`Q$8e#)pR9Hi saHf)8`40?;#;x+vJ[[ڰp^K>?#6f)>O7=?sg?mao?Ɵ_p?+v83wvGֻ<VYÚkr~r)/%;>[>M?z 0o%7^ڎO;u?w綛N+o8 (tH{yn/I`۩m_k$Զo8 s wwyݷO~j?{]p+/e:X'[Om}̞|O<32w vrWßx紃O{Ϸ}LאGn=s=6Z }F"z<?qGf6]uz-,g"dg24"i˗D"3OjD!w2?76nnŖvr*P6BIYr[f*!g$W__{?dSlKAS:PDοp@JiT;xZ@ZnmVES1 &It#(B? OM7t'Yk/WM(FL1D,?æm}X{;OЃTpV3rUDوKRb43l}RƝu pdC"ER T;adbREJ5T[()Y$#h1 QuZ-)@ŦiT3Q*~ާAoRkg&,1yl*DgJxO]uD*YtaDYbLxI#\ӌvzfZD*eDrNPJXmrY]7M36Llc!>TeB(D5]Y5L=1IE5p+dO aY˳XT䜒80gѻTJB+b 3|,ff ez-ecRLG %I Qf\'}N;ɢ~2—LըUOkjd4Z/&$TQ͆Ui/_."!FQ?ݨȍ5Y0sL+61==-E23Axj'"VarFfbtXLs mшذ!q4Q;Cd6w1S1Ec60xt}h p_2BqI \dy7}cVs?#?%3;[=y†oxu 9ωXݪ{>Cr5PT~ÂhL|3tY6Nec8"rNA8YkZK㑏Q2??vڳ>;|ӛR,c/-_x7޹v{Џw~8Wg| VeUO}>x;! zƻvr'xO[}3۟_:ͻ:^= y?y/;?H6V=v_Xg[w3}yx)EpOҝ;wf8밳[?#9vo-p85oopq|Oo5Mq^WzW/z͗vv#?ԶpҾ'w1^e "buG 40 ibڻh4iCQYdb{lj/',vYUn'lAR$Al VW95!LJ Tu[|PMQ hJ7c;R,Wh14v":0JxXMq*ZڦaR|"Jv)Oa"]=#"|ڵ_ZX "IȆ6rppУ+ ? 
c1 ySt?2/i'X^Փ'u,tj1K 8⹋oׯ>.zVSN)岒Fck!=; _+&ac*#IL~B>)L\/89hĵ1@'9@=,Gt2D$ :1-R9\&ċhѭ0)WSQrö4p1)IL)Fǘ9b5є1AI%!h8}O)1}J fΙRɬ K%zx7!xMQZKH"IÅRnb1Uٹm[ePًdõɄ>ƜGS#f 3W e?C-& _G3}i=*Ir*$ fkh4e(r*0Ω8bEDRd `.J:ިc_z)/ mJV"A`."&ŧ6 j R!F^$Zc/_=}I;"& FMF'YbzjT D!1u6cN9Qw|{1Zks9"ZD y9{'NgO~d|^?u9?}-?<<9V'`6{7޾h6GD^=_o}S]^Ctyvk@Q P=(qY%jFD2٠~+9XIPz使= NgVCJ|4,5<'fyT 2o|S\ 촟]taQ'6ʼn":* ʻOAR!)ZIJ)ɪ/lKfZ8*fBWS{罰} a_VC{yOyJ9|ߎۿIx ~򤓚V5kE3!HWh\(p•wEXV}SeeaB1Ea1Ƥ\bdwc -̴k 4{H9 (W)dXWR@9*(S$ƭ*ce/Z K 댺,kMX""V(rerJQr㐀4\JI2m6h0 Bk6–Dk))Eew c5 A "Z bRS  9+yTk!a31ƈjc9S!ed~;iHW@Q1BP%,Df:ٺZt]ڸ+\UqЬX *2$>*A;5*(KC q]`ɤ'[2c6YJ>XrUd@!14gZCDƑs~U:gf(6(Bys=wK>mPq+5faоdc j?e1,)b;#2ިFO"Bop#=uox-:u3k~o}ѵr?W|N^wOy_|{|՟zGx1^kȌiԪ_{~3~Any; .ȑ?_(_o͗<}f8?s{_7u'~f}{|VߵgB{c}G\{wPlȘ~zq"Gɢ'3=x-98HcZCp1WM]]>mtfwy7\r pwp%'aХ!{@5Gs1- IDAT_q/eεCH1.K:9cbƐr3;7NT /;酤[@,`٤E FYu \H8wn_)^|$cf>,ZU容?~1ƾ㯹G&)(4ܞ?}ƙWz ɧ,lu"Zgk!y>׿ Pk_`X }:UE9&"גI?q}?vb,d2ޒ#%؈+aq0y_ȭGr*"!C*!PaTa%(uTDOwN RG*\D$cQmCM*SPoAluPc9W!H@X9 !R@C4jIR4"PFPFD6$ɥPÐ\q D.EB2璺"c U m5Z rOAQrN H6 U$@*, a0:]YOEDPդ[4PCy1dM1VW0j OבcL2eE CL)SN̺޻~@P;TmAf \9d.?d$ȜD8碩kP PF21k-gVԺ9D ؘ,4B0vȎ+:/~kvqm۳{.x/^W=FKo[o-o9-OϞgvىSq>)8tssig?-gno7]kۋ]~8s;CN?{R?oͿZßz/9)GO>ozxq{oq>_Wȗ?+qg+G__|aO:1盽G;2 "].=G7ƻ|w7.<£>iݒS7u 0?qQ9qΉ6gB|Q/yqIJu|/\r%\*'^pd)Ȗ5FSݯ>_yἘ/;9v1GW|rKʋfҙ3^ mS'ޥcjSЙkk}jK{!pej6ʤPm%>ۈU_ ˟d8:WZܧ\z ˄|L9K?Ǡ+8u]'cBɔ=^s16'OΆ688qdYX$ȥGp2ɤ{Mt줓zK$ v#`_l+HP@0c~1BET sue5wb`uͽc)g):K?9cJxsSY^ڦ;C1`F;1rDf@4\,5(Ң91  b8Ԁ z1뺾HˍbYC`m4O.di-cP @hдz4k Eu($R&DS1XVEXQژCTeO@+dε@OGJLKhPorm[Y(9eq{}Zchuœ Q*hI7ς(֑rZYx)A$Y)E`08+Mh;zSURJ#q|lC/(]JQǶrT1Z묊˝kW!b I8!O.sDLWeO@RN 9EH~"=rQr}$,Siu/:͆c1kKTd#X22Ŕ)QdѮ"dfVb,=(%(DjrJ9`FJ[v2ZZVuE&+vs:'ZJYMZkt6 !CSYg#Cz*]V@Ќ@m׉lE/{rfj;C10;fNY28&faPq1 *Ŕ0 'qZgιI7!cL>ַͮ-y9DM-?J"⽱ֶ)"ʨa$Umy%i?SYUKB"i]ڶGm D*>9C_W\r%+/|~_p]w>n؎=9GߩXz8k>yk5 ;o?ŏ s]OzֳzGs?~_|{om>/.L~_pr\wo붝oNs~1x_: i{Oy~ԟ[{y.=??_ ß|\<ꠣNx Wp/K{tKO~1<~KK?tW~gOov7w}KxzkL ,)q BXq>+/j5P6*ܚ3|Cf :c "Z1cNz/,p)1f5μ{b=w$gWG2lyCVY'D F׵PG^CZCZMmo PX#~mm@)Ʋ7E&d0!blJDa c9ekm:fNȺRClQ=1;T?**-hH-צKZkw.IƐ>Ze'z%$7@!M' 1G677Cpy߅>عM G|.>8g'Bb7àE N$,%q~:ZcYh}}}}۶tO-Y4YrN)表F} r*iXKr\hUhe":e;P3n$ڽkDMQf(C.!,Kq9usfGIas:|3[*W%M(b9҄A騢B>RaZ$u Zt:30JOdjA 7:iŻʒ1R=>'W%XAJRz:JvX QdX[A: sʉYj*kY)Ɔ2UB .C צYe Bq>)m;҈9W;`39'*' h0SvX*3r2'0 k y0s:OTX)#Zk#W|Bd9oů_^ Edݻ޵g#s6}?u]*c56"uz<[a^+a90~fZ)H2*L>|>A/0w~4|O~cܵkW59\S8Zk/}Kb:رc:菽{+ fs\aN)+){R-~Y/5DR֜s>rm׾Ѥƭ__Ac iƷy'l^̿k~Uzwy_}0= OhѾ=sg>ֶ}7wu@ 2뚒u,$uJ j( SRG+ҚIdm19sFrB bWGUDZD1P Tַ[[Q"5"QKZ}L")bEf ĶHcl+.1DaX,8d%~O4Ls9! 7]MS`$ucsX+uҫnY|[1Zʯ40A+s3g)?nTBm|Ѱ,NE ,Iyeg]t  gpNw`u)#tl6[{ӟ:¥t$YCMvk{EWUr<[ksYZ` 8׶5ELE. L58. 
"R0 xں56ƘsLzD'Ku06S.<5NRM_2pl3 ؎ \u7])AX :Y@ZZk"}ߩnbfe>* %3"@PE8:qYkrpVfjҤLj7jw͏7M=iPi4.-@3jCItͅ;߶m[~^E 9 rdI?H0oMމfeY$0 s^a +?EtFT;PNw]II5i^ /a;x@WbhV2i*@ΡVuZTY:VPץ~QgZJsfGq3sLiX2:NKL[Ij7 [jZXM9ĵH/=BHιЭ/HN1Ɛiay-6RjMƭykqO4S3HR֔ؑ2MH@X2/38Fu#OSb@0RuJ HA+ @ZVD϶ϹTXesKɜӐﺾ#)?ӳ=FOV̪k5T:Ȃ"c/ʹЬsD(Bp"CaTD5h@SqT1^KQc`%-@W{Ιd(ƐZ;L8al朝L&ORL9P)*,gށ59Ԧ`Pwq!Y B 3*?$'S-4LJc7&)"cfkkkDu^ gϞua>:3@I)-Hvm䦛o~MҩRFta)_XlЈހ_}{s=WD jb[[ֶmmOdlc_ڶ>yW ]~,4nX~ cqՄ.j׿fNϜ8sl :'Q*VuW`)l:uΖ|sx|27˼S,j]h2WD2$\TB.8`)}BVE3[ XkCB96k0sityA:Hd:xC9qsN!0T"OD!vKG]S7Z;&D8g/6N"E0b1DG6JQ}˹]v81՚_FBN)VH M,9Y[-b9κ2#Qf:3supH1ŘSƘT: h0,sWh"4HAT*̡e Ew/:E3ĐA Ɖ$INsE7PI$׷_G>!B^M<kLdE xka9Mq9UWjXOCX,8朽m>L5 `Vv-/:K UAtgZk缹bxGqlBlKmsssXg~2b Þݻbl62cݠ g7pΝgRBgXT2廰|z0#M7߼s>sJ%UlkڶmkڶZ$Z*Bp1&m58k:d(z^qq.c%jG"sBMkgYyu)2+ڒ ?w3ɛjIp/Y;v-rQ^S eߤi:xq_қ?l`@( 1l꺸Xg @Q/|="eΆ(qDz]I˭3  I# <b]yw]USOy^1ąNCHYc|".̌M(#J48u\Ѯr(^%*l"bP8+!*s^[)"gO&S%'uǘ5 +TT&s&j5C!F;1]ÚŠ|#5"'t:Qt:t-m'sfɜsJK W7U1Ц TGt,u5?FXDrֺ\.D]ߓ1SκcEpRCEMp4i\ IDAT2jy]cUS1N \wi*L=*D#9u\$)cA&.c%$UW)O:s-+ByC XPG!q5䜰زX(N%Vj}ݥkιĀZN,bڠ rXu9Y %6}ߍ(2cLjFյ# U0p @ \F$d )|!ֹ~,0 c4=NF!aJ)jZߪƙNOFrYk 19*#hZ 8a1&r~sssbM͜6K t}vҵ5v-bTtuښΗB)'HSƆr.YhEq}F Qs+EZNڶmkڶ\:ǡp@ vg =Y.ϯD-oTHC=Tԍ2?yRDMi"㬝NwLkSݳ(FM* dX3h%gn5*RB$ic#TjTGY+BhS6Dx'fcDX,D*Ւ+MARJ9C"a393*4)Yk59uwsJyaXBjR80!]rlII1qfn9CUIbLzRuѲi("R &YPH,zZT(0Ebg A#Ƒs8}]χc1 {or-EF5l b1PdΈFgeIDؙڂMXc0Iՠђk1]TOqCyPgX2&eVcs,Y@!bL1i?®U7`H̠+#C b9`6ѵj/w  mqcT] ;IR$s0-$RRJVE(WbC̢9 $@,{OZ_ZeGᬻV rV$ 2rB0w{m\4 A+!o y nbLsηn]VCD bsTj/9+J k*"LeI_' PzQ),&d֮֔œ3)=W" Hhi%**0eVt߶?ArNT>"`|t_-ᦩVGȥ5bHJ1SKVIX$vd奣- 1qsD5DTxfd2b4!u*T+Qʉ"3iew5 iHr9kwv(ͤ}kF2=g(l=({ԃXkG766ڶO1D9mDq; t%~ijo/ǡY IC>$i$nw XEĐbpSf]2T%HQ1ז1%5^&DAMkK 6[5-N=π!!AQs\gYuE!l:d 2 )0VD rj3|%JC9I }CuZ)y"S2h,0|62K!"2r2Xtl5L=˰\!%.gYgF-/XSxmՀO͍A2xeZggL)ZZC`y(*+&RLnDZ9bQYHJB-XL 5U&leȀq`0뼃>^bDURURc=R` Sdʀ-PT5dme+/#eKz8%E H,(,Y1Ҵ ]Кk|0hQsM))X+lvXRXWRB 4v8X6pc6cLl}d2L'1Ɣ#F2ciDZBH9S7*8 Nu45DOb҉VW=}ubèrpJi>S~d2!‡P0!5FJԴļKN(dif rԗ1~s=Go"?ܒ̥Esν`ΝͭsM7{޹m涏~ښ mk{_n?yO^\xą]\p'<5J':fc(Þ5;cm[3  ]NLY-Ps1ǐTni5D眳N)][b H2 KNlʹM"ݘ"0"lRQDDtT=GMit ,H ],‡1YmU DP(ƸX765fc5!KJqG[7ƈpO%#1D"Z$ESks;נ6ݣ礜R*9 G (&EVuWoKFK9ô!-t,ނ@V>!2U]eJکF b!ŴAオV?n},̠jYFla8kՒ#0]9rA^mM[xH2Ȑ5/HŸ۶u]ǕT,*?1w@i11fPm+9ƘV5T|1Z9 3fU`iЩ5`u/9ujc)%:떂 H(@)sFKU-%Մjk"*Y0f6rk 4K0Œ:{U'Bo@v{kz.ƨLe>  =6CS',*Zū90 Xk:p=RGd*Y",6>L1A@c) ,DXY= N&2T{o ta# `R񤂫O0J r.l-.KV+8kEyc%+,s0!R!B %ޕRJu¨nRӶ9"a(AfB713X(s"tYc2fDGC{n@.WÝ+aJ)'g51%DDД6hsR{q`fLafg]u¼o%ʮ{gb1v51~B= n 7}qdʍ3ra s`K]z ׳$̜Er";w·r ~g䜍[޺ڶ_ =O#:}}.z⛈tڛ{xn$ d̒:/"b"2FE#wPhuRʣFˍfq0jKN,-Aj[Z@验" s,$oǠ dAc&YgNfi:v}SxdX#^ $* - Ȉ`bft")1)lu}] noffN)3'ֹɕ:5DE?RZ;Ld,sS"1OJl<At:!TCSQɏ:|}oe(# 霗grUAкVA fZZ\Y$sXkH͌@g"A5\6+T"ᕱva :1ڗCL&݇ 7@D}irVyO@ LDqRR =Sںl.+X'pBUE2S17)dW$K&[^#XЭY zMhOSV29W*{O}̄hufN1&Qs^O@6CIZ?N) a昴^ϝzZM.{G4^N Ω]~ҎMJC@Zx~(J\F;iX,B(6~ctG7B)K+f e/sV,6bvp0c՚]޽14 RW hɜal3b^4AR , 97f>Y^ҎgI*{W>bpFRs_SNɖiqXoQb}upe} qǜsLcg@y(d;v9ollÊJq8#(׷Mf| %= +"z1TDSN.aN2VZtM7wZ{:9ﻭѭmk/W~<|_ǵq'9r]W~iϗo>|Ȓ??>ԟ?gn1# ȑyW=]toyG#yo>O>gൃ^"-w釜s\o2]\=_y>O]{k8O}Ʃw}}_^ve/9% 9\W7t{1?U}q?~ڐgS~3mTW]0 @fYU QtQdoXybݾ}?;Aդ Mg]yccCo|vrN&rRJJ):u]i8-(:!HS (.;`o[):@"jqƘI+2&DL7Ԯs kj:tȐV@B53#f* N_u@pU@|.AC̬ne`Le2 B6}?&}$RLB+;[Jk&xlFM%R33(%7gE`u}Yy L]09n5d[ia1llQJKOwD L8D"~⍱"83g",qUmԤjZ:Hr zL1aL12gBr^ytnq9#R 9 K^9 j We&5EZ%$t3跒~1|~O{hk9{WP"r $!$%!ţr4&j{l:ݣVn O.э#5\֪s{xg(|Ј66`|vZU}C['hRJ]R#EjZ]=zN;;. 
TcVf$=DT?Dkh Vkvz#}!0 fhEdB4lmhL1]`F:b Y:U"ͣ'$Qo 1 MkdRMoZJm0 No՜f3<"-jj>(d1RKRV_t!(8Blugs\V{l֙lse?t繚92L]03:ќr;;;;;;!f:DLq~Z &G7]|E\6;qQ7=xUSͳ//V'uuoxٓg={Cďѣ=t߳N?W+_G~?vlq g΍_/| O[;yg^~/{<)/yKOo}o\+fr|x_/_Ofϼ?q}ַz~cͣ#nyé)|yyf0$i =3\gd_kK[RDDi1~ HRkѢ큶kb!XGGI8s>ނ4 b=>Mz5ӯa3Wg)Lfglt\2L}`@Vլ:=7NGxn-"Ѵ\mٴ);A#S} Ch'B{@HmcCh45f tR7ug_q]yaf\M#Z+5!}r bJY&0F&*iš9HZLTti6̂1666SR8#B&ZWޚl2L4}>!& atQD3"Tc1"` 10>]tNVͤf39Me>Fz(IҪsEt"F$5IjZrqᄛѶd{kP&h?ZErq N"xK`a2 A"Zaf}M~fjɥqo"萔"R'J; ѦtЀRE?9-z=\~muTSGbB f ?7j. sZx -:1'T2j>hsQ1P"3!fr0j:3 foV&ڻǡU7N٨̳j AC3QC\I5WCg^ثh%GDbp7ߗ\ :9*cfjuwAXt}NJ=hfbggq}sՊױcǶC~{.b)~CY:V0.hfRsaԥ_< O@/Z`MK/TӋޥ1?/D|l?{~?^?uקqSSϽ8 ϭ?WtDQ]7Mc92;ZoK},-)/fg2C){۟)`e`:8r/\ *o[|ە_٨׫N߿w]-[o~w9n?ƟW\mlŭuY?O#Vg>pVQ|ϱű9׾Gss^{s|q[q]sgK'0EifFVjԉ(*0C'!CtCX.S8bIkgWEL1fry~>>ΞSC >&f!d.*ݟ88UCwYCLBRm&>ΗϮPNm.z $g&"͌*RT ` xsbL[ۋ2&Njg^ׯ7hG`S 0i)JtgN$ivH&3SoJ-(V$URr8N< (c)Bh%ݺK)ⴒߘC`P`~WTU#s-6`/Թ_mL!5D$}^=P_śsg2V1/f~'aS$Ԭ+D擹s0\bJ̄m.:?BI gZlQO~/q23qLmg= {_m@1K1j艜qCD\Cm ,1tcxzf:lUbBtHOiͳY}MLm\-]ZUqf LX,l]fl:aƜEGL)N`lDņ͈s-"@asؐ_Jy7M^RJ#H"zN1YZJ1LvO?~7W|/]K/8Ǘ?x={哮< bJ;NPO7>][|;(_w.};^].KE?}a{/:7z#QƏ_/^‹/:EĿD|o>'N祌J1ZGqȾqM-ˮ8Z_tmDD0u]:j$3zs0!ŗvcprzO0`nQ9$[ؔDOFS_{^y{zߡEka1D_cB4Sm9 OF)Zb~tD}( *Zh̦3Ð0Q !xXIi .(f"9;$1| kh h@Nj{B5ؘ81GE- nN$=/"}3b%Bw̫Z)gv̪V˴%8C1"|~ "`rkq9<a盘&HA1kJf73j|W!NǏoKZnSzwC}K-y\'Oi?df O#j#b+Fmj[sscJIm !&\U ѽ9PlZc֗{&j*D?~*RJDH}BnK@8[ <пe3OݓH; jL5<{Lk{X<hE*0C}mAB'B-^WZːKp EB!ĩ&:Ӫ90G˜b"xqa4<$030z5jּ/ #0ZcJ.Xb@vU'wO"RC7#Hf $~XkeJtX5120GCi#%ƚrC>38CLtNv"" Zrƹ<eSs2I3$zA8d35T~*k0"uIUgЄ)%/Hje+)xHI q'.{j*0D$9RDV]r-1RͭMCSpv9;lv]򌛷Tw:8u{QNnMĶ>{ɫRz3YXlmm-ECRu@yZ>nkkQPmg$ɞgZbѺ~@<~wm}ŽiO}2^WQ5\n@B11s;l61Cۇ%j]VfcPJDiF)K$3O D1֙ `UƎ*Uh9_Avls7G?s+^sn޽[.WsI'wUq fZE|qĚ/ÛOBЕG,F]hoSU.w$L#s GY/(T Dڄa SE)"9DX=lm=4{Eu5E=ʾiȿ|d_Jљ- &"NFD}Nӝ ~G<*NaE2l61_8t3 BԲ"dR̬*>s|)[D3"LXfdfS&f$ Nċ\0Lr !cplkzbe0:H(:ϟUZ<ςt`P7:MeTfJD@8f H(yk']=!ÏOjL;h1qN⠠s[;Lzn:B9"y1&l)3˹3DX-JF!xi7lJ#DVJ4Z9"LhR'D("njî1z`90wφ7ZaZv6$aWvW /ᤶ;յKn6 xG PbLHXRkιQh BSRHͥZy݄9ONMhj:QERk'-Z? LSP%DWRN,ZK-}`S#~TB[ŢcaX8F WkO_ 6ݺ jiqU  HQ̭D~O ~g~^eZ8lo䱶]c}E:vsEd︣뺣G^Ƿe5F6aXZN<9#̋:ӯQZEgK.O3׿'= _o;GGĞq^x k|gsA<`;karP_'>{{pt7*CLbW|w\>3RLj;.u?!?s}u%7sۮ O*5腏^;:8=q!"*6pTZK޽[sǘb!:G fK SRqwzO sӍ/ Η)s,͙.'rk?_~?:?w^qk?~Eqݵ/xԣjmUЪ#T57қYi?>~y5%zq`@PuvZۇ~0?7Go(<~g8՟O;~VD|'>_3MD7OnmoUeqyu8|{ѽBHi3 *RKJKwNa}M'kݞzQDlc"BHMۤ71̲V]Z 4b}Ow 7 jQEL 3 h>&殧ƎuB+LEPj6]{hSo4<̎Vxb1N:\=x/p10k 3SQ_I.:Hт5~͜<94eRY=F>ϙ9t?gl?u"uˏy׋(ϭ=sKTޒLh ӽx$ejiT:n"!l%.,a|M{2VDIt !ZgJS65E+Scs}| ߭JL+YGJ 6j`HͩƮ!11jCZ3PU%6c}[mD)eeRՕ$QiTS)Bu?a zNT37jkĄtͦ߉-*SD-T B> 8$N̢y"MtTkkxXN`Uiy_:y_۵a3ti2[n->r;ۇCHjPU5z%HuN;Chb1,B[!ݒ8RVz6v+˵0(6M+B}}w}WՕ7}c&/"0?wZ ѷzD=7t<lvl׺tT:9WKG3み=;X,%o}O^r,_7;7'.|uEnM?qE> c[ Z脊ZKW{uKT^rݵ/8q3`3 K򐭭cxğ |wcRj"5À͗8Sbo?ϗd7q?{CE*"{;p?~E~!P?᧝y80 ҟ|3?goy"}xCEA蓟+g<˧yLn{ ԓ}ֿ~8lmmq_>e7}Ro|9TјxتvJHNqJ"R2ЪRM)C1Dҳ'U--Cu}@Ё?%Ή&=4 'ISX,qaf٬61Ővvv=CD0y֫jZov]:uiyӛOj9w@Ǒ'"@Aď3LFq͙>ZK.6SF6K/RwF8qQ~(!־G68rk53z~}7avoWkǾ;,M:a;(ͬǔ,g6_X.}Qd&ac'hTA}YgwYETD . 
3BGhkrV!OX,a{_=l6<ȑiGoJ8^a@:c&ռ܆>Q[[eO!L)Bh) OMBx>Gw~Z/&W|T)я>sԞd )$gu}:2LdN(5cC\Zr)Z*!M˹#lqlD"D_G`GUeeMC躮.uU" _^uF *-Yۃ+m:C>-K搆sk2J{$lNxoܕ`cCW9,p${ ,y1brLTMD3؅{ݮVzY,ȭ(['9KL1: 9K BX~b S jJ͂/U V$!:!90}v@T)xjHU(bI"%B5!c iJ){1w:o7q,UW  `b뤀OS8CkgLg)u)ƄԺ%&*kR+o|}\;Xs"uV !D3{_|zO1$ P#+oOfgɏ3/?_u!: {#jisөH8z(0/UQDeZm6o>vXfSrR 4W{1e!4 T8t]goqZg(RL9gX]-K/mi)ejՊw|:\.1oj.7|MӳdmAac Jo6aqkk+pqls;}{jR`& f%f3%{gn#Fl`dV !j)%k}2Y5Cx,r)a$"PDuKfZֲ٬U)F$ X5$JDovpb1QTc,-Q`:S@*5оSL&GRbn\.KZk3G)1EfZc0Y9=x|M9\TD mrfO VD}KvwwWu-17_u=^#Y11ud*5E Q~6-rҹQ-6InRcnMc`ozAڀgiEǎ=|dwwwZ IE5Dp=u]{>{."a(Dwq:bbZkJy\2r|+j0cp1j?𓔒<}̱%R')mx*7#RJ@ K.rγ>6-g.4I34M4 Me OC nO IDAT4})E*f%r+'d#l@i|*fzJ<:1TM$ 1#zW0X HKM4 kO߶->{޼ ۦQDګQJ)(+Ы^򒺮{+>C>Bk/ڛ9񃺔gut߸W_tPa3"VU43SJ)k3;yۦ1ZWU-5TJ|B >Ht*[)"vI$cBZFJiI#x ιm'd2``Œ ’@Dj~asgL yΥ6Rʧχ&"GSTlU k̠~ߟZsJ) )Nk[]LGccmSZk#I(3ĘAkeylUt! \\|!" `"FŘbӂ큒lkZ4mۊᮠ֤T;qss)%) 'N0rֺJhd~y !爄 Q9.7h$66Z`PԳBB0cH̄նmIf5;@T ,%? w衄Xڪꦙ8eGb)4MJ뺮ssNHD4FRg;N޻BFCF1gKcҔ!AᰶL/Ę\' ehuQ) 1ﯽ&4n#8XSW$g?fu+Qdַ>㙶i6rO?.}l+/}M7tg|;;h$zPΪGzVBDZU% l<ݷ"lz3R;Rbc欣ح" sq.E"l]g}nqsnyy}G7m\ZZj&rTk-Ef?hWDV짹g&KMZK-R]AIfIm6wud"k?r?醘2H)믃RZ$&Sv1w S< 1Vi#G@e+k'%IumW~ے1$TYŒ+v'4ŷşn´"^qǝ'3۸Lm${% kYڹhn4gE   fПPoODuU/;lR61E!+T̐ 1Fss{׶s.w}z⽍JD3dfی u] zR2ZRq\۶ͤi&mSJJVF^k"LKK[,n2L|rtX[յ1F\l輬"hmU] L&s`M=׮Y3??_UUmI3LM3iFrsZ0_mJ%Vi"eJ\3TH}$W2P%9qRJ`0TUZ!QL9v]tyfUqLɤw>XN<2҃1P9zPJPSdMCTm{kdܶm!u]4m ,g)Ep8Vue++4p8 p4F2=f9Jk|%ˌsk׮]f<`nx~׿3w}75\Wb]v9co_Wk_۶}_q&ᩖG?8G?Qenvi+++/}g=oO:?|r=s}+gN=s 'w}[?~Tuݹ瞻{%/>-]_ JmSv:Z*QP`0B\A<3~!qOZ9`10(2Ƙ)ys˵a7EKt;sq9f>茗r"mq=?Dh6I9(w_ gyRT ;鷎8B7Ųs=o~#07wNO/q'("3d>^uSoc|{}ߦmƘS8o_?ڇwlOs+%C%=T}{Js~;yǾZe~CaYۿ gPWvTAeRw+2BLQEIum HrjRH(CAe 2s.M˩6{7IuU\RI&LHT)A)D֜*I%pFho/f |ɱ֏dDc酀& YX֞R Z3!"!P4*fBh\f<5H% Z )$ [U֚Am}81Eүf RhMy0O1J."\21š%EMkLACH:I Y1'ND3I0bʺ9fCsPts&eu>oVL5`XrL %-IjwSZP9Rz F,1$r\h{ٽϵ^{W0}-{m/8;>K//;wu> ½~?z+wO|[o{V\?}CS؝w.}# K_S~~?F#?xW3 ^O|g}5\#/~qaa/{{uzWZih_O9s9>~_E]nݺ|# 6lذ^z饗\rɳ8Wuex=?__=9s_]s5u]>3'gxN~T;;wygJ{Gu]o?\[@_y{챇f+k[K.K>[9'K/tӦM~'o~F#\"S%E{O,vf=1Z&Bs4FIaUI)):NR1hH H+# x>rTR%h!BC FJ+\'Rdc +R.qH@{p`XۂG #D9Oh0hz 9%*N6$AjAi"q41b˪Q:Jkto)ŐDum4)z8uS ! ,y2`0--/cHme8'dnjEK^)crY!%.I/WXTB0fb'Q!c" % 'E6{ZtűE4#}wY'4S)-bC(J⺮ʼn.:֕P;Rhf391CZ몮*[R !Yhʭ\NK2)uq61躖VXL%%j'I31BRHCќ66]u!+(8Im&eU~=J P%kcsd2RE1du]v1EPoDGV _tow8eqpJZB利h bDFJb Ƙĩi&h$! SH*y18`)ށH΋Cr~sfX;@DZ)1 K(8-(f" !D!M$Mcswj$ w]Q[cHX8["1p3iڮMJQJ&[GOt%M\JP\MF~V졜έL V'۶ˮk$ zwvfahAaIDCD{C( I٪FկRAW3 t:{#\4916 Tœ+ۮ /9Q a]/!C81X||{ws{*Ʌxo| R <}zWW\qű /}8xo|M_xO8aWbwvcON ^;?YΥO7i'wga+ӛn򗏟}_{O{m{?Cr!]wU:{ǣ> x୷޺.<ַ>{6nx~{ߓy/| 8m\衇^uU^."?7nt~2~~T~}+_98#/__v3Oꪫ.뮻n{?Dyyy~^/y?}B_~,"J[)5jR\'vҊ.r YJBJC #"*%y켗!ID DN #gF8*S3% VAs@F2zH4ߤ,AnI%ӑgZT/pVjRLLI @i$kb DLD@RiL]Q5⒠I$ ^KoSچ_C;5TĘcE) Cz}Y``)EV%!T.'Bթ*'zAg!CsɃ<Fi1DJ+= DX!,P"+{'jnmU]Y[IIKDBBAr| {&U4J"k+ҊP,dқgMaLᾣ3Ҧ,c$)%]۶]צDRs + c<tm7U!!TI)X0_;yq*$惐;Ѩd$bL]bR(LT )"$4 ! 3s*Z;/Q@.V9a hcI)Kd"v,RZk[Uش\AiRW!U̬\氡IE(1JIjR+dmwBJnCy@˗R$D J) {/+ٗ1̑ϫT0hM%,K&]J@} C cLzH. 
!)B 1`yPp ?^DPǿ % Z"Ŝx#>x1=# Af1Dq1HRJlP8@Pfń{A B@\ qʹo85k1KKK0w~~ժ`amE.:??o )E9)MJ+ҊBA(+$g[N_Bf1Qך$b/U~HULfe6:&\%몒$,݈%Sb.!}h4zի^uUWwqD~O?tM~j4C }^v}> yشɝzꭏsom$q MZiP[ouݺuwqd2y?I(ƞ<;777S)r7ٟٶLp;찙BK=`=xZ5S>>[^)?ja+y99߭ϓkƍ󸏸{7]vWշr}e{Yϓ#<{>|nղ]`ds#\V@,#vWϴ^a#l @\u ȑŲT%~ +n1~ 0(mF/p!8fllcRL0#4բ*S[!D)\p\q+ʧ '&Ȧ7HS) Q KSXI*>$Y/]:s2,zWD$D8R6OPQDYpKb*MzEDҮG"⛺@j:`#Ar俜L "H؍s*qM=G ŋ7}=(};Lf\x y{-ʢD۶3I7(HYh1p5M3 V@r%.hV(c@eʇfYJQ>$8O~M!@ 42֪RpER9m2{o6 y)1'cɞt=@vmYkmaW 4kD!4ZqB DRL_N0S&Ge,5C1ƔĄB)&FBk 3HDK^k JjDt2 ZiB@T IDAT^kEd)J /ujG}trާvm]zTu]cŃS MBuJ螦%LPN kdcĭFB-2"  AQ G+m,B׶ "iWJc 0pC)&%g2jJ%JErbBH  K|//E' 8f+D h|׵xhcp]vmF3bF(ؓ\ ecڽsKܫ2T!H1H(ߠ4Zaf%DiL=xW Ax7x祙lk 1{t3$ygO><ּ{N?}y8TM{T97ǓaZ4N~ mۥ-Kffٿ0 )+ZL5$@"ZUp9ݝ:׵]۶M3O&++նiC٤B[C༏&p<ǒe4 {| B6#J e425h#6"*E0JeCCHq*lLK8nDTWh4=Fh8Gx;ܼ2sRs7;ŔZXH6"Z[ R ,WS1:'0u-Dp8Jyƫ͛7?iƍ>#7n|tӦ[6oYܼyG7=ixtm6}ϫola`| "h1r&7M$j1 yV(;䜝JgTV1 |h&M6$ Isp^Ƙvyv]y͛6/nٲ24/+w$'V՚5 ;N;,U]ksnuuy-[icy(!zc⤍ZrJQSF۳!HBٯHBfPA:bƩ`7ycޜs]A)=??;̍FJdag6el9_۫YsdL9M!Il1&84++ձ %J03ss;aaaa4 [cѮkcS(_ ):ڮމ$4J3iB<ֵ״148-RJeřX !{zM1)widҴmQk=77fa͚5k, G#y ?{L&''ln]gjwz^/o|?KwegSW )b4Ǔj׬LVj:Ca<^ټys_v 6l[o}Moi\j>RcOK+-Ly,ݗ|^7pCfx+_^^~M<xFo}WՕwygr)6.>vOoj 67M6V=aپ_G>GBHw߽UHj߰a,q/޼y-[.7ͳu\uU6Ou[uYgq!}{sqn`mB>'\/x˖-7o^nA?ʨz?wݍ4 ^} /PR؟Ile !Ɣ/C}Wf y8# y?* cfndЅ^3iq:"y Q|hUU%T]5 cF%X'!޷M#,Kk;1shT 2l hxJi b8;DJ+BJ1H"EV'T62v9prr0b! iӄJLE3 s/$PR@ȼ"N) 1KqKƽ%(pCʌHJ뺮xAES.sPqǎ-sԻ&1/أh*eJer#b g" A]1E>!7=*_*ɯFkH)I 3 cE ч MWaѧл4k*[ՃAUUZ+_"xR9[[Y *5tӀǍC!t1FkMg eIB!H*j8I`J)c "ɺb0-1ƚc!;2$S){v]3)#RʂEJ|ynBr]Vnisª笵)cJCu̝RaCD=J;9!f΂H#3ׄ2ɚZlEH&4d٣ZJz ZuKK6Z,c(1BbfL|2e9"bJi~Ѓ9!}ȩJkqgN 4c*OiID€t0(_.ESyZd,Ub푭1'HC]c\a)D|L|j#3BU{}$k@8SeHdc!FZ+9+G7UU1kM)(_LrU6 ;5hE0Զm۵) KZUt֢ )kmzaaaaaAYCJd"Pݲf͚pHD޻n28&T"}纍fXUa]WҊ&F>s]iELwx|ͯ׊ b B2rY愙TSJ骲UUc򕯼_6m۶֚ n}ի^)I>uo?6w-fg}wy}~vƛ߼g>ߣe_6ovPc|S?:;6ov/zW~ .}=ku{_{s{^~]t߿{[ӟ6rU5y֟{~^(!r;~q?3quuavL% )%l`=m sss샏1H"\Y^Y\\L ᰯEBX^^^^^nH/u-o%0^!<5"M P<J'V{)J6FiH)Ʈ.UUM˽2=P#1ιYAUr$M&6ZG:Ss!BN;U=P& OiUl)eAm2 D\Mg1DN!P@✶8 HLSa3RzBW}~ɪHNB#1&,J<[ Rތ"cأ\%.$"(J/!ͮSБPk-| QM2ddߵѦTYbʜ֚ ڮSyH^|yI H{ÂP_@.)*DBcH)[[)+kۮelA"ﻶ3ZIKU`.&LL#:|,g`{DwMPIftOBHT0)UZ!@@4Kz"Zǔ0BpHrB)ElYM= !7ozoLTPSL;:d?W!\HLWFk̅Ǧh%HIȐV*X3KEv*?JZiV,,0%(8( g(HZ)IVJ1в؝TE5(f aEbZ)I!R{tK#yQ cIL=ӰB9`9p uI/\FyZ"f?^VQIa/l┒.u]4J+ˏ>hVj$LɞThTp=^EziLt,GEה{Bey8%$(gN1e )8)p!Xez dZ`Ja xBR'.WZ5)-kDMV yC:sfTovbhnnNbq{S͍1zXDI9urF7/daif4ލC9%$7ܺᘣ 7<2t7/#x*9_IH+ "qMvȆ;Z}}G)YFuW\qdn]t;} ox30+@JwQbcbVlZ/Y|e}Ī+[GK FdR&9'.$sqi E;NV6HH1ĶitɕŞ}3> X DoL11 ghUǘ.8rJ9B3 r(V,[Qw%I*aPn1V9wشk) !Ml$f@+ER3Rct * |XD% !DBkڪyçd|p{g,yZ+m2;tsN DBBJ)uOMDdԄJicmJC6ѹu;_UfsεjNZ1RJ)e3'}iue+ETt9OC@&(5(\)`D$95\ZǗTF$yJH:/De]I"HAe^~rCc)e|F-w,Ē/E ev ]Ű9xfF "2WwK8HkD.PL O* R'K5+3OXxkJvb$xIDjs3xp'ޤ3*cGl{&Q~jJic㛡pYYV&āC/ M1LA#Nd2 (X )s1)ڈSHBfS3YŐ%#&KZ#LPUUUUIŎDɢB+)@SūЉ~SPXb_98ATs'$WI-GYM0LB?c_fueuiqqiŔp8 ;RfuuvxK(W"FneuuqqqͫK7͋K 󣹹z0SDMWY+;(Rʌou=eí{*g.^;찻8`ho:x9;9cogcxvy{eTlӎ?O\^^>C'ݯ2Z|ܪaQlr F{m^ %"U%_DDN=2P+e&R GA[rH*)Clfw-PI)577Wuv90GN΄TiXBjdJ41B Rj0*TR˹HZZ#J[KbHh:Dp܇,#PZdE몪|HHy }{Y*>4F0<KNP訍Z'+ϊc{uU]km@$RHqNΥ4-6Ak(Ru]b!YEF;E. Ҫ6胗""w֊Z@s.(sh42Zœv<1:a]T8:KDQq]5m; $CJ@"dǐ(_c!&*fv^mSM @9cuD4F!LP6"(~joVq٥A_"klp1Lf -f ,X:(B 1VS(`Erx} x^$N bc1ŮsK*Y9XHfVJ%jm[y(ZK]V$ Xb8@_JT)FY}{7J~$\#"e!f `>ƙ4 JP  F&;rSyA'{c}brJ򭗍{33}ֺ cL&]eV3є.axũx1 T>P@W fV) W0TTEL'c"+uTLjPXQl?#Fܛd8-úB}ii4&_[ƒ$Lb IDAThlv\-.//??bJ2eb˫ ڪ[/%i.(Ԕ4+]&VD),B bJ},F2=Pt]w_~<~sqU)iLc>8MA?kdq)̚j?cD6d189Xl;ON)!d*BUX"V͖( R}9/N`My&F=\OA'q\瘓P%,P9XBRۺɤ6bHVRkc빹9_pRJm*Z2k&MviJ,"i-`ҦD!LbfPTd &c4a;Z&Mj0#2`BL](=\Cp4FZ)0F)ƔT&+OhcMJ2HQ4i0Q# $B' #GȂdSX.bԘ8;#)[)1޷mGdk֊|lU"Ps뺶m]%FMB! 1 rtaH5v]4M:﫪y%HPU6L`U489 R >m[KssjXiqiɯkv0VU-nSwPLR! 
U䨕aňTi )`@R8Dnz/ڜ QPUH%V,+RZՊbPsM66 iyerwEHIs$ڮu̪E|%Ģ$f,%)>EesX|`` mscEO` @$hVZ Hr٣`be!wiffML#o $"n.T<$BT'0q=+fs0#ixmtwvwc踢m)H \~cvvwbq^q(kȠ4Pe.DN&*i<88|ǝ26kZި*q+W(9S9g?"ѥ. YHr"RΠ kK""1F1'`ƥ6TZig4M&+v9kAK13N#O\x,;k;M~BT9SфVeTavU!B =q^rɹs뮃ҙ3g|>.:y{qZ]ufrypX,Kyz|u"'4NfDux(3XCD@A5s""B~RC?8H+-P;}ғ(`&"<7xCNy^V+E{=Gh_լb7|G\&w :(YbZf͆C!xV9&Bi5BiM JbnR#0iB:ZQ-JM֪Jg4 @ώ-KqSt9njr ﺎglfHc)Flzvw{Þus!kEm:*Q sbg0#UUr6Jy23M:g)VZ1Q5`c@}J2Q҅&a9\"QZO#JHu]d].1J 1" ET„#HKs%"u?]SmuK)B BPw&>ٴ44+3q,%^9+Ġ2l*$!+ zŬLX`'X0wmUJYk/䒽=u_W֧Nf\Cp;qD׹[t]7 1fqˢbj? tc&D1E":wD7łx]}Rʗ16_ XΙ|x^{?Gh_z^wxw9 8kT:KlZ]@8XSQ SPɔ". {J6A[l|(kd%+)%?yl://GpsV5TWx!#pTĄ76JR\ ۸*/[= r"L϶yIn.}{on,Z;(h7!BOS !lgvIny׎zB9r)gƩ) )sVa&q1u V]#JVL!1A M7&:1֙q=u[Y-IR]72`i%'o4{ _rC(=\6ʌ)MͧͨQ1zN)kwwwNlIDςbN1c95wdΊ[nGIUW "N9JWJՇ<'b CyXQ*V%(Ŕm2!!#Qȥ꧆+"9)RVaVTK,kk{MKH­ZrFW6&cYmFĉ35`%02r)Ub\UȬR&":HC!cxL&%6 8KR,e-9kct9+<-QFQmOxc|8c9'"JpEprJ!{m j]3X/F4c?)g !H!kssokMWkIO͕Zb~>JS, CqFPrV\# T"gE੘6"9D64IL)%/q@mvVVRLYgrmՖAUΔb8{iqfa8={zgw rG9^il6ҫY}OB(6ڇ7eۄ(zԣN)O|sC?\{9_=YiO"K>Ow^wut ]{q{,D@Gh9o?/ ?;?򑏼o'?Z?|+aho׿?O~ ~z?AJ>~d6I+ H@n-1e:Խ9sJҜ>$ vSJY!vZ"nKTTDJHjq3mwmr)I躎c4C^X[PX_J4ɆA [Uqh [hH˅#QvZP#z/ZqNf8 Vn׸8%1ѪG Qcm NDT=r!CuҚHm7kճp|A'9u]ߕMUI#8A \URZECu$)Ќl*)qho31i&CG_:ZJm䏂|Q֬T14[ !R +%bx7ӎ UADŏdSfQz\NR.̊-[Qί~Z9v%5C-VPn^-mm0`7ԟ! e\1| zd6Rx+%Scl0V]%}cvw9'´4y?M5ֺM֙k`.!gŜ%B{|X=Sn8{TaGR*gVER`y&ˢ,jODJko R*Irnk40S L1FV]) /2f;;;/ӧOu]_O]zɱvww%iBǏ?q c^+{xxؼi!( UD=lqԧ~XctnƾIOyʓ[6T#ߵ/G]믿9)V7tc)b0 Gh]3iڢ: ox _߽O>;; mWW_W=w7/ǟ/'RaXxhsD]LQc"9km_l M[BY׹\ ;UA[%b4<35+4\w}F/+֚ι59iC(ެ1}}ҒSѨ0O}H)6z]nM7: k{ޏӸ^8q^WjZb8\,.jqCIVbjZ.Wz\ٝ #c" Zk -He^28XeGDAA`>&Tx佐t>V&O5Z{6qlrqR aryxx\,-ĸ8ƀTIfBD1ׄ` $?!ɏ1avwvf 2R"创IXi,Ki=}QK"K)G9R+ZgA[⥼ݔthm0CoSZ[~0`Pj6pa3aº"łZٺ]icZH34qquཟi\ViyD𮎾gݾ e")Nӈ fj[%  BDMO[! C.1[Y*UyΒpb(ҭ)VO5RjG} "ֺiAU16Hy7 jڥf6bX4PGA U*?B YHT7\PN?iZlﺾ~pïjz= ;+! F)ōy]ʢzxX,V4N! yۈCxUm.`ނ2T >z$/,؂H X*IH{?T#ԯ@=RWi%ӴZϊA³ZΜ9oݽZa~W:uS.슫Kvwf|ggͺwsYD,So_xD}w3jh)b uuV G4B" CP#8X7Ox7??馛.kxC?׾1y_?󇇇x믿[nx;v=ySSNx㍟gmxOȯ3?3_~g?/ϵ\.o櫯ꫯ~K^؃ޯ8^{qq_嫯Qz[gꫯ6\q{>򑏴oɓ'_=;9y΃z_N(aajbV[5Oi X&kL眵YFTbhQ@9ʿ5wb".-i QEbbqb4Zm餔C"%4!"nF Aևspb%1Qbjn$K@NRJX-j4M^.ϟ?`XkClH !ZV|xga]fa~))%C0f|uag뜈aV5%gTbe:uF>8]:0cJj Q:dr [k.ET5 " 3BZ R#)kx b!^9 [Hf崢 "] 4W-WrSܱT )% qJx )3"ژai|vva C>T\{H}S\=s3Ξ=bq\.ϟ;ܹÃj4U\wsκ61, Y)o~fH Cb%a8MS~54NΰHې5$[PTͣto 0Rx?!:@ Zr'05p'sJc(ZkI0 ]kIRQ3#(m`IGpo}HQ͕_s|gwQi\N$Y64p IDATV)ssN;Sn+) 4+XFkn3R?~'!ry痋8! 2? ->!)G,0*k—+qq|v&~ P,pn(UYrʬYgay7ٔst4q9/Jod ;攚*5ޮ;V9qK/~\~/Wz_~S}Oxxxxxxx9f]"!(]yW^yK.?vlرݽ=۹LR&et%q\!D*RV<%%|ÑjP5B)%2J[m/e:89GhCH'oxmo{-77bo|>~_WoV?nßw?}_^}CGx}t/һ7Mzի8gs/dVk[:S~oK,%V6z@չj [ppQMP)%B¢XmI//Biu}\qW̮$/~L#'Db)+0٢ XGLn}X-3|>֢AD{1À>$o}[>s&x1m)gRK`SյVT~mzՌ10zǵX 6kS )"FeGF߸=r__>яG~o}g>ӟt;[ֿ~׻z`MhL%9jJW23I 1f;+. =%)&^(b.9!"Һɒ}!*dtJBZ^9m"(s0 @i_AD4%*Ɛi}\ :0tDY7 I u6 #R3`TK1ƄP㽨PaH+BxS1EA)c|guNDɃ&!!BQD69|vJ: պOJ0lFV|Z圝s٬{@-1BswoWu19"i \KsOY6u,D.~_"/9i>L1j$]׹Ρ 1|ھJs 1AoK$EP(!aa“T&W51/K|>O)V+qT  ;W>!kѪ8! 
]יRi7ʆ HB{cJC?saBh!PTb6tL5RpRŤ1%ԂpYJ)̷4U!!*( xs(<8*Bg5'T"JkMbPW2fֲud!RYT7X/ 8~I_ 1윝y\yuM0pZ96V k7_3Z6Ѷf-֭:4Un0$HW PFbzRӰ6B 7'&Ȭ\S}X,BJa 2 -*k[kDsJuu lTqz!,>rm HZ-جpDFk M8*VDrH&=jZ魵IQ$r)T "I1[g-b9{r1xϝ;D|g"fww0 :-w|k FdSlZ1|n~)O1V)' )F57j/};S$_+OuG8Gap"뮻<#}?C?݄?<ӟs&~"Bݎms38P?A?}wUWϏ'zqǿu~/v'O>Gxk^x;G?^E/n~={o|t2ѹeRC#3EJشg)$L"%y$ )vɭ"cM /(GfxөjQ 'R&U9 6̩ne#+Y(甌pĠkVZ,TqJOKeGXi5-֧I2@1#84%BLA;*a蠵&KB<tT9rT[E^"Dk֊E Sؼ"bwm!ȊYCZ+v֚;סϒ)IJ~p*V9gtZhR0[M3Ps !0JkM"|%1.eX),bEoYgϞ휳+5 dʪX%;6UZeb9Y (!I1ܗ,A+ % ;kГ&⋂!fRӔCJf}9OZ$}^:$T(-@bɏ's39BJ) +c5Zc: پ;ױ.gZiRLSJkz]:'Fl'CH)gE@TC[2RA)["WRPSY/,BuZS\;h5)9#p1MHeNB%-kMr83~sNOp.~9)rD,e0bBj׸mZI]L>$f֦(ƚiVI E53jP<Ǎ5:PZcNR)7^)Ar"Y%6L-x+` F"1k9cB 6j@j^q!QgC_;c1RL8 ; 1 ,TSR0\RJkcHDB 6&"YK4&i^M"4ӹj=6u{;t=|N? 9{{=<\t/^H}uz6ǔϝ71m Xӟ4ue,hXB^6 T>!%gʬS7pc'`m]wTq4 㪫;9+'?=yϻDy{#yw8y7!o|'Odܯ=~:uj<)˿//O|ox|xG? oxOO~wp&Ҝʩu d!LޣR!LT5qcA F%~&?AR%EH;BD&%{q AP%b9R쬵g'N;=Q\9k䬍nz8!Tk_?AB%1&`!}Ge,Di jcfNuafcc:Zw|3  є28e(e>E8,ƴ5MA *<vxjXkKlG;60;;z묶X -@ƚa6TZ1qij͛&OpIK𢺮]׋p\6p]-rG0jj#\j~衢\k6 '.'N] Y1}`"?bypxpb\K 5u6 P)93+S-q 8Ps`UWR$V snshC4Ĕĺ0v8$YiUTY]68׀f)s00sNIV+ws %S&!8MJ!ZR=µ9PTRNB{js@֪|5H~|a*mMhav69U jsɷR#&$4>~u1ιﭳ -\ bcv;Ǐ;q0 JiXk/˃ⰸ}{vBx)R6)S &_q@_ '&0,8c ?9fXWKt8N4AҚTଝaf-2-.Rҋ)hx0-떚ߢe Q97 lީK.9vlZ_.kǔX) +ĉ;qıcRJΟ;{avZΝ|}kI~\bݪjW;b쨉}c9[cGh/}K_} _1~xXϑ+r]w>OOtNnft}s~ٳgΜ[ޏ-/v|\~=~w5܄UV??G|mwv&Yj(f>NIrcVŲ˯-ۨI(CO$Ž4"#M.]:<5X2eɒ9'D\GbDS]x9$]ֺ߲s~3hEu`Js')a~1j!,h]*bV[)XAp FmA-ghS&"JkAU(<-^"MPDTH# Xi9ou鬲JOczd. Ke6rZ.Sh45+!n孢;14笭v7 ~Ʀ !!f=#l25JA5BMUXV`xWGm BQ)?Hd=5ߟ_Md?J)Dd\)̸-JXɨTP(eqRy=HɒUj)J)LRc,#D#o٬bԴ,#T>eEBw9% NF aS< D45Z(05~BRDD NocsIJ !L:"< ۤ @*2K)efl b[(D  eH9 UE0Ib 60(aN93+dYS -E ҝ6D]z,,qJqĎ6T1N~"9oS["dl1A/YՉ:to#"beHS˒4M)w]? Ä9D荲D917Gaj'dQ-cMw24IJp*@r6ԗ&bL9 #4MX!b9bTphV1)Ob aW̤JG9 nr,[-lW- >>P唙]v38;wnXmvwwaӌybicbqXl6k,nHQ;nwD'5#O]OegGxӞV8)oXLF"J)ʻyӞg(~G#gq4Foa$^t]w7y睗^zrg?s__k?k*> Ox$w\׎׼5x+"zs׼fz_{aOb롏CϓŎs뭷r-^{s[ni7|3_^׾9om7~7^W^ >uۿ3:vf%5 &J)6$:@ĶC%X `X RD ' m#ђ tvRHG_0W/O$q?3Rl KxD!F9mIdkUG3^S$86lĈJG^֕R^Z CؽgE A΂bGR+&^VJ̆1ܹs}ÃC^J+t,7ZNƜvoo›Ŋ t= ιlވ ics#t0뺎H8]'lΊqhk !VwB iB1zk5\~g+fF4әDRPVLE\ 0x"$Jw Fm8e<4e0jWZRRJ~( O9Ay+<)dLyc(] nuX2@h1G%'Br^!"bo) Vu?fx!A(X)hAC?u3KRe*F|'gd5PFtI$=YcRJi,Յuh|.<_mM.ٺ$o`^QkZHł{6o˃C5RJzE1hp37&i n+Ec5R~YAꈓ [ܲy6/D0[@}øZH"2yk ji̔rN1R2Fsg]e X$mSJ)ED_}?ء޽\.cB.\Y}~wٳg{>q1KAD^_/W1e$w}ϟ?_r3 $mgc~{M75?o{g`>]?CL5)NDO~ғ g?'?ua'~G~+_ G6G!ƃ9|،$dXF%th$*mq-6p4nbfM` HQm\0|$Kf)- J/!lJwbPVZbU%)QR13DbEJk5ɹ$dFh5@`tMaJ%)grN7y(ӅII)>Һ o೓|pSJ]Ca^JX TmS(161i"t.D7T BR!'Q*3jEQ'=D;5QT)ӘlWF3+XƔ`㩕\~^1l6hdZm([r뺾u;sb%;RY$s*Km rXj@9)Uhh {A%uBU(+JR))VV)41s:%# tЊS$ǤaPRShS)5T3r1l0֤bJ1x?VcXd\Rb$ŘRU* $~V >`ęl*m`m&RV!UHh]R%pJ*ǧXÚʌ ?* GcL׹<$P >.\%Ӊ5؂ <->TyM$J%}^#bﻮs-b +Mș' mr O76ekBl4"B>*ҘGJk-Ür!쥈L9%mk 1e~"ԦhZ4 ˞%+5pM*ЉR:V9[ !8~h33gN9+Y4s箸ݾgVrB8888o+VZ{رc{ῖv3K"ccp)rWd]7KaE4Ycx`< qwEDRefUs|>GN.b +}yB9zCT{c\HBdCVK!1K)WUJ9Ȋ@$a&8P|l+Q\~%!I;gs)bsn>C*ٙ &"a6)i&Mv}$05:jK8eFd'U $AV i؝4xPh1!4ňvq^eL k0 󝝡s0R)Ř!WLb+w{@C_k wk:[)nKTRvIoArFYmJWb r)'{[Zmd "Hq!rJSJsZ+@,c0Ɣ,J7XRG? VJH$%! |J^= }AYU&y !{V %W$EHW.$ (Έ$I3S*B8wbq_%fimdy\S_%DXiV UJ1  ɔ)rJJ1qL r91 @"!Įs6(m]I1ڹpDRJ-ТqFnf%K7U~ʆ, _6Ty(ٳ 0;vԩSǎSJ9sf^g;{{;uTJi6 }ߏRĉٌA99xك0_ve\rziju]bFnEdŋy4XCDiUn?HD?r88Gh?g|C~ێlF )R$'l ZGin>$RL aXe>SEN25ms* @ [T@LuRkJy+?SN,kX( .0Q.$O43+GeQl)keEo9)V "#Yb(U'CVl:gq*鼀-iZV턉1xJz=s? 
a3A ~Z+!G {@%p4&> Jֺ|>M}aEceM9]C Rs!0hf6 R^i:88pp^$?~5r84Jf3M^$OӔY)Rlf Yx?qqaf@{J5hiJ4$x*V1 i^Q£ў )V+D SRF)c;R`z)4ML) "q'p~:1 B)ojX|B#B6 ZP]QR@+©"Ji)VĤPKiQX`!R55(%gE\LP1i5ץjIiVʈ{mWy}x9ڏs|H &&ئ^7{hi( RqψS7뙲 KuvC|Վ5  PdDUԐz ~n6KFH>^i< j+phL U$ 32nD;Cq.s3R S10uv=e]c)L a 4v}#7læ0ĩOS`TuD)eR"bxІ)Zt MTDV6N2p=K19笱℘tTwZ-U^}|!^>sXbR՘|*w0BbY.z}κk^mY/CSC,VYU S˗/ib\.KNcq''gM D-Ͽ%kro~|ccDD7?&ND0!'o'dq2N:t÷'L[xH6{BX +mE~sw࠴DU&jYd=jTQ  &z0tYb$i,Ng8`{r4&T4> @[ウM$ݥ&$y0xι{7|42> *¨PfV•ZA-@v%Hu$%%%K[ӡ R Lrg-y}`#}9 .@CDFbp6\rn~AgY69OS0b:TsAIE ͽ rvt_DA Ű8::Z}|.iι´ 󠳰ήX3 !朘|-[…"[cRJuh yNipdP%zZjibf2i_c:Jc b8LFygĖ4'nsCZt8nw}Yg S}5# ּ i8'bn.AR^ q[! 9ڶ#XbR'a4EB1IJCE(q5,ZJ1D(|L+%#KQc X9Rpjm03jDw1.\@p3< +.sfbVbi(=f$Ep)ͪQk[2L4MUc5Ê> L4)9! &֌(E~cƍjeD|Wf܄ZVr>:dUgO 3.OcJD1aK)YqmNIŘ/g0<c )aK &edRR-V/*e)Uzb;+u-Μ9}E%aJ·)%lv988PUW pj.\)j5 C?,D[^ +3ܫh:/zŏ08"һnyE[}Q5`fVՏ~cz8'dulu 0 [ygӌpm]<&L"t`kcJxD\:X%sJ&l9zדּM#ܳ!!n))TjLN Bi arØJ ,1Y%&&q SIK>nRaXTcqlZcLH Z8XlUK]䳊immFx"bkMS :kMU$h5R 6uJb] -̼*z |Q<{s%&|t2K*fv*U !J)BF(QN9dɹLS`fWV $`#7FYD+gV ~{" 11iJ1@u4љ,V0 aUkN;ȱ &U"z9K#)W*Ĵ+}Os9ngΜfq[@)4vk9Tw )!XQ%l΁r}H'pq h.%R9(4)OT Z{]=I5 wC#.r)RrlWf D{gz.Ĕr)!9#3c'Lj(c{u}w}V+<ȈRiVlcQqp򛀣$km irdcN4&jiZ6"*Ss?`R՜J)وe1f:̻]唧4#,O _zƍr\}ϟkfԾYGf!jR1s.o'>{|Χ7O~ro &%aRJ_tw?o6vۭwuM7݈8szׂ}o_/6_S Xk}!kӄ h|b^j@-إ9.-7bmV bj,.h(a2/c5Vx|S Y%K5(om*Eᨦ7q\ss;HjPelVTQ:czR~^sF%$^CYKA?)DbG\w˭s.$Ċ@ *.\Gף-m:J.T@RRYO1|qͦrg\.Q5۞Ez!y'%6 s+H acƦ6oḀh*Od1ejyI5TRb1j[/89ưUR3hZA-aDȻV,aDKb m{N1RF3T5e˹dZ11jS8 Tg3ObR7*[N;23:tV"+bvҢ c@#RYm}4MK9$S5e! :gXn Fl)tiXuh+)e6dt"֦AUxnМkP+]xUH"ff QNP@Rn6sosL-ZJ!TE֕X 3sI֚K Tz!lnml fìA.UUӤ?o:PYR6hj=Z[JN vT≩egT.A*Lc)7H7:뺮*e48MЩ0XkK.ZXcMfSL.x]kLV E{Q~g%kTSTRJ4`S PhRSc D\kzUZ<7Te[S}([#^JcH+vN9OSR^Z"ֹr? !FbhS;b֫Ã˚rE,b)8ނ}ɈF{ j.ID0|M"\>y,1i ,UT@?d+zsw[nylT'd|ؓ@=zD[osO= ];$͓bLƳ,K+'9 "jaWj],kLk&,l51FFTj}ߣsƘTQlŢo"roUd4R0J)J.Ф~W2w\&kqL! ^yc9slV<32"θL)S'`hMm 1D5q\:N>f~J|(VfvbsZmVS1\T Z1:%Ŕb)Ea1 clţU5i1u9k ,@.TrjٶZVH>)c HEc ̫% =) m*8d1fbp ]]hA[m`H,F! E-+֢X>5򭰲2$Z}7>):bZ"U!f+QjD!jk,|pf\QsyۇԺjyP2U%Q-1pP#-$1%PiWMrʳɱ1-k7"}uchR9wru\psbJO`(f6Z'ĦEAeu+Dp+:'(!ǔcWV9VM2@bIKx.(zؠAANRN#.J,4_X4MPU j4~mhQ@JfwE::l֛f"!׼0+qMv!4,V ocj !ÊC s]獑MxZZZi*Z&Y1dD~RKԩϜU{{aq.\xj=n0N!0q=nqb ObJ9!ᶀƔc81&j섟)X8<.<<::>9g\wuO|ڗewg=YOzғ~zM7tWr-;V#mgZo믿zq]axWGZWX {1/e? ޜrMFgXMCXc-R S YQ BJޱ眵j kkPx[)Õm&3lb;¤&8-W7"T"xG 9 _/5HDfR"0`U1Lm`mq:1ꬅl)%^KG~,!D_7~@]7gV\:(zpjRR&?cg^U2:S$,C_nJ b4U_Bg*oc Pz Hޝ>px0#={{{b/J gӥW-wA>={voqp=fnJ*E/2ͽ"g~osxfRb23q2x3q~_W_xw{?'uz˭+{{,_ͪS?ٟ?yӛvT7o;>}O?}׼|//ȧ?w/˷Da]?o{Ν;?֯/S!׽u???|#{{ַyy;wn{WU򖷼o^Ḯ0vWt~iwv?|>y¸{f~[?x;" Q_O~_|uuu~scN8*Q c>81+0E'zàFZkQުL$nj: .8 0 j^EigΡ??QV8(ޏA9I͙bk kl)ОS2Fvvǐ҄e\C?K.8^>U}`뺾w34N43zA)aˇGλӧ',DZTMZF"Vx- ҥQ)u}GǷNuL 0RJ`^Xk)8[E7[3XgӋ2^}4Mab4M9[1&´lh(%oiUۮJ90 .:M#(<b‚ˡGo ?5t6-X.3/![]Wf)PFh[7R)E@s ^c̜۸?LF 5V0z=1=!6q '/gTTVad-&gaQ)TʱQe+.GZI.-*Q;m/#wKSZiY|C5+$CPKP!i]l'cS< JK1 Y;TSlj&KĩٵG ॒3}t-[9y1g1,A䔦0DU7J5٧N;nuo-948b#%U]s뺮&J.DT4-%,3J.WUW @{b}2QɈ)n{!,'d?/}K4vLnc+/{]+/w9^8 ;|;ϜyP?o-o|;+C'?j| ϟ3Wk=i~~O}*?[o}Hqo/~F~IOzCGG\s M'?|Lη~yHDg'C~{9sѱ*vpۜkly*]933L( V)4P3Yk!1]b3+rsX;c+3Pl9TZXb\0pXŁu%[RLj񧍘0pCm0K q4xֈ̬Zd[XUZ 1رR#@~n IYklԈHPP4ihnf=Aibt ujX@.'-)6 A)8 2SN)'%*s<@]7[1Z4w_å]Ҁ]轵NhJ1))L4( |J)L1ȹK qY]pK_s̙3}߫*[)ښS<{?Fg~7~/xA77; neȫTKBkv=̿~ۭvNW9fkd;_iw|ڎΟv} Omm}{_巾Nf\}"ig IDAT;;;?ܹs/yKvwwr̙/| OyS3g9G_v? 
Oӟ}C7p~Cz327QXkj@eڦ ƫd=94n=Zu;U:sӧA %B1#Rj:C5Xc~hD1j5n68Nlz7)9Hb1 :k X3Ock\1sXq3nf'K~L['X/Z *wr;ԩݽ5ZJӸ٬Vjl9Idi}@K\qqñs„_$Δ\r !2sa0M`֔SL!ƒˌ"u1xoYƤEE 1k/U@ctM3'ER* #!E0V-ut?K%2b֯5v{䭵پE罳;7WkR [w:YuC#Uee+ W^P- DE@['bh68N@FHu暡Zqshlq9}†IYAl3\ -b<k`Eg~;f!,•PYX*7Y._x /^pŋ.\tvG0 8 Q2VK VrO+u@1Cb$4ny;˝ݝbtdw&Rw]_vT-#ga~+V i.Y-%iʥE/:LZGbv4ϰJI؊ݡ&ՙ >2הlr Ÿ+<,VoMz9 rYB\.Ţq8)fg|~8|{v ^^^>MՓoo7 {4^g0Ovn̿w۹d_Y2^Wy睰^V`m m^7c{xԩ*7$~K_?/^p¹s^җ^i\v?-j3@FDhTagSS%%u m ڹS(3gCgiX={ҥ˟/^`]w5<'}ttq AD` wvvTu7)zUb H%d5*6f}=n kƘRYk;ko@3MS8N/ k_oͿg>O{^׾o?{qN^|c?⋯h7<ܫ_}z7}ӛwG4~~~^Xko馟s~K_|f]zw/}|\}gg_s>՞{[z 7|؏ؕƕo#;O}Mq>7xcx+܎~oy[^ȏ~o>m۾jh{~@Vq qYo6#3\NR<88X7xp}l7gE`6oh+ՈDX"i4ŘrYVu Ta|w}bfn$\ J\<˥Í\?Pj,̪BQ}-KqΑ*F9zuҥÃRTkY;ykL5?cG*#L)j^o`x'6ղ<`gk,#$-'sCDl\Lͫef%jcbRjXZ,1%h]fXuYK!l6"ՠ|وaVuV9Wy;k)$Jr q3n 6[k-)\&%dZkb>;^ub}[.h !Bj㪯 E5XZE4HHrlM%ÞL$t] 1wcaFc"6٬WG+"Ɗ3LYKL}w僃is{02U_y,Eٜsf"kK{/,E5"Z)pnnbLQh9IDaR#a]57  ,mSrN9A-# i'{ڭ߈{ VŘR 8/8Ciq,t}uTK55r.!- /̔4)Nu9gDndSl[1@pQC=,bwe1b$\J* kQ)̅(Dd SN3 HR@\7ܟT6M%JCZ>!QkFcI)gkHY'Ҝ2T'J,!\5uRᮂ$XWɘRa'*ͬkiL1DbK dlpuQcwXtt@vΒVtob匐~XB0,]38M)- TTLp"wiV+ïCI90+x%OF1^ ~j]p)gϞ:ujl.^ .]g07rJO)|;㹠u/zw-%f>]/__{7 *s.^ i R뮻oO}Sgpå&xL6deu7b{2Nܹswy糟ʬ1vZDf &\Jvu]}p%"wPĚGڮL9;jޑS.J~yDX̾4s t:cx,5/PJ b ]k]w 7c Rj1-FI@"B`r YPT7ѦkswE@R nooYaR摝wzq^hkʂQA(0-!F ڲkF&ډL@]{$biNDA:i!AT LdG#/:C[k+9 +S)og-86ji;Ĵٌ8%h+Z|2YǜbjvҰBLw]ǩK9HYF@ |h`PTkǶP1+{]I߃tttuu!@X11F)ƫjm^;PiJ H:fܜ,qQ0:::c{4qiͦn{-e']qzgaZ1FǮv._:88(Ews^,(T Zs ςzi b`v~%cWf`]RJZAUf7Y}y!N4C{벫|.0ea*F$/iUS9ApG8g*ʲO8kMq_41HAʱD רJyYrh")<-]rk=;nT2@K-hffa˒sRit0A̛vS^b~sF1AVjbThv;kxomvաwҗzY{n}܍7ݵk})v;\xP T{0Ob6*׼haow[4_ec _~wժ]ۏ<10\abW\cr֟3?3ur:Jk.&lz~6+ot9+v.qLC01s0 &vJ]eU}!۬D]Tݧ(P<]k Աu͵Hv{ſIƐeXZH7`h9sF["0{gQpO\ Yֵ/DDBNTI^0R $nRp/w`2XF B\ԒK-v=FftMOI aLI@}bF m؍!- 3uӻ" 6ڳN~LDB7ĖM%!a2=;睅AkfY-¯kkѲmH=d[>.S ;Cރ ='SZK.(Uf6ֺZ6m\ U%CdŰR3FJX>U"㽳1.&QyQz7f.k[KhazRJyiXkis g#vѮb-s LR̫dU\ 薚~utUBu+㌶#)RjJ;7lGcӍ.nc66J*$lUD %8YJ-\A.dMH*$"Z+4XT`0j aҥ F{vΐ-1%5J?x)hLпذ5Nm/2Ag݇޺f܁]KsAH€6lE4"v[Z+>@j! p!%3:h5<;Ha!xs"bs:獡R+"R'Ӛѝ&,4k- 'Vjadx֐6u׉`jzպZW++Wh?Ɗ ^h=TzmCU8$cD}4,ݒV_^hT"l/5;E=:9U\TE! 6^U!%hJJQRÐ,MmfٞKDFsjZ &ش)U Ny"2q^ !ĝ*iQ"pmURb@odRj^ȁc@tiYŽ]Ғ]X3Y]m$f(MTp{X@4TRi{1LTaL -( 홑KsΖ[mehPP3ȇ ]"I1R()e!dKf(IV)b2j!@F2QZCM̉&C0zd"T E^*90\޾R:k{qՆ^?f6:mCRC<-AR;dǐemW99pi15&eٲ'OTZż.3F5`QC45Db  Hf7QӀA lևgc-]n^VD D@PXh%#ҥiB(f#$>\XKƴc!a8eYŃŐ!&q_p'VY3NDrГn)t;~a%}^^з}c{ZWj]u['dj1DoQ0/\AiE/V*pED.%cTV 8ړbMW6xVӥ$Kj"L(q-3RN0! ?2%TJPT(SlyCDzt'u; R 0Virk@ `&ljn%iuZ#?J-\wHZ4MMH2Tj377] ~u΋N1h9g5lr4]!9[]m(lB梆 S 7lPҔҪP(!.攨ØŔ;oq W;AE1fgFMXmCn@a3J60Iܴ0 !s5< l /C^JSOT#}- 6dVu,OFߪ !`({ԀlزC*XK!0x5#, l7''??O)Qz,Q--'&c)0k3:wNKm*RKɸ9) IDATJE5gрKɶ;P/1JjA$l(u}"KT-L@_&쯇 pmd,sZ7WU/BTal f 27|j|o\S7 NnUZ%FJѕ,nmDfkÏH+}alV)42:֖rЦ!\EL-9 } 9kx`,KqC6$leu"ڪ*h 03|Gi5tcj&gATc8xð>Eѕ"RYENREH"ܹJqqɂN+TZTayx04|wB5*ulھ=Uy)ZY|Z'qֺi6v]{zI,3o67ViX4c[ԩbo{[޿onR+[!XR,k~ pAZ[JIKtvպZWj] D{k0tu6 ݖHk\x.H>"3)E`2pv5P)!q"'J]$_Zn`^J%mqu^j1"5زA <4 6SBEa뜊h) )AS$͍=TƱg`Vbfe)tE0A-$Adơcf|7RB8dJkTYbMׯ]#ճSY 2Za=01J\"ZJuX MM4}dٺ. Jy%FY![rChNRZL)^Zs2 Y;"((gS2-t)FTvw Bx2+(еYY-;)T<252v(DZYkAz:X(3~USix~ _AUmўu撼V"ڜsSR%˼7f"Y1Q^znn2_=q@po]K2q+ zDZJ*mH-0[cn86Yϫuȏȝ;w?zk׾BX?_9Qu/_vwa 2inspܪ۰uSεe֖*aR1ǚfgqqe+) acD8-7 [Z[kAT)5|RJ! YcJVnAbiU4&4kظ=g9br6P+P`eh4+Q u-{)gҨjhUn>U\;. $Яk圭uL K.,Mq^*S-nD@zg6pB܈N΂5ZUE- +#f=%!R H9Rk v򭐲Tjc h!ԙx*[c8) si63xyw jl#TiwAo4e thՙ!+w V5 =,aidtV'T%~k0v/癙wׇa9{g4mN^~l }I\ Zq 2/+%MUdCɉAԫz:@fߑ~ ֏pVDG,D0ehX}ҁ:? M$=.""蹗P$`mCa? 
?i5duyNDjWDOmYy tO:k JɥFE5[jl hY6K&!hH S+T)B;Cfi!-I^we4n Y 4ÊlXծwWQ!&`!p1CSb{f޲ @ՒȄLf.c3݉-si\e^ZjN)43z(80XFBt^zQuMگeI)unz >7NfC)Cq 3%eHxq70y׾Y"z *Jδ TTGT)eY0:z>p$}L-I":IT pXw.?SFrA"5ekaXbL90NqJ%Q1l;\[Ȉ)2OfQUl.DdUAH9H\b9Mκ~N.n {Ṫs.Pz[)LgvޗRp͠t}5 8 69)Xai2GGGp>Su s.onS|5:E.6 `hR2<SUDϛ<k2i-贈)+*HVl 4mkn-kEgy혷aՐAah*KPP-s )`l7 ]9RG%H%BdN1֘BpF-{6 kN`)8}Ɍk8fFSٺlw\-a,SDR2:B7:Onf5%w.,K)bOcDQ)C!! ׹"ɘ A= L0 frh~!v&^s2+9j ) ^;H%ZMEKhĨPB! fݍh xF"V!gs̆T F)ŔS˲,y>?ߧVtw[P, THvR de9KL" %F$EZRkXJM%%TA@*]7 u6xK\RTB,E sJB@^30lv4N8n޺}vz???Tʅ~r[(*$aq 197<dԜc"Q\K1KC UjCԔbj>($1bWU@0J$ZK|8,%d<dVLιBJdHCDy.NE9lme8Є9oG%ǴĔDjnَa)&"` l)\jK efKN=4&D oVEJJ$20NcYkT眬HIQ}AsZRKj*GwU1<\s''GG0 A_k?׬!f78Ma 4 Cp9t|||t|伫UΝ;EW\K7nl6[w!%(|~~~~v>sN'~`4 aaBk0f"˲y^bJ10~Ωh)[ٸ+);%g**[Gi/qN)BZnl ˜C0sʥr1'[?1M]KvC^< @t4!*bkC{RM1·|8yvrrۜ!qw]"dzq{||mָZjc\kQx ,삾)mPK*Yi Jq˼\rJ9gC|||rcBp~₶8v .j=hU"iC)ժ l]BG"d>VE&~_6n:RN1~D Î1^osnݾO=Գ>lxwGGry?bD- Zwj%RJ}V9==ya>T)dDm\ 6=$3kfv8fW}#}׽}ɏ\c9#ԔRUpH+պZW۾wG~G||GG:?׾w~:ֿ7}7E/ /]O+^x??g|mo{=<pnc|;~7~?twX%^\_<W}՟3駟ԧ>~ꩧx\˲Oă>җ]z15 "Gm 95 \ȧpb:e^RJXp!BA*3a "Ԅ"[ AG>{&niZɴ hqnfWIlFE0'ǻZ9cK΅Obk3PPh)%.a؟m͇9vTk a*|)4DRLq2@B^ͅUQbؼ`Ωvfu%AicZq$\Xzc'wa-:i6vvv )9χ|~~cBiU^->=u/ ΅]Zw΢}iT!5,c֦κY"B)\h!Xے ֒k- 64 sZQ$$2c2g9=6Į"g+5|J˼̇C\bJX!\䎤,@N9%Kưw~n{ttl"ڟ0 8`Xe 0cHQ59C6f-X8L4>!8LK8 g-~…n=p0J)TrR=|=)` :K+g]Oj VuQ9P{ l}q6iLaQVK\RL̇9fvͣ_9 GbU[nz!$5~VEڱlvGG0Գ?s\"F+ջ&r[5w8_;9mvڜ2/˼6&,67/fZ2U"\&Z|X8kimwvB0¹Uk)-1"3Kwo7[pqZs 9uU@kyX;O!O$݌Bny_KpX%碍Yxx8|s< 6׮]{%7n>Dֺ~í[7y晛7;;;K)ڈ1޼y[c),sh}~@~f=vGG423$dm n-eYryp<zիВS)Bֲs+LպZW ?׻>h'>~S?S///|ɟ|~c{SO<裗?ԧ{/~3>umX'í>O>q)*n_%^\_>y|/^_p.___?]?C__2Xj5GBM9!\`G])qYRn"rM~k 5CЬVj%JZ!nwcǞS)[Z4Nv9zBT^JM9(axXkm*%ÿRD9/BXK)&x=Z[4lYn IDAT.Y_Al[@jZE!gh+ v5a?QH0BԦ+>a&xd"" 6 Uo /@.Ys-!aߨJETZTsi! he^\Jeaa`6"Y2s)98A.kV9??:m 8z5! &`j>D_HԴn1f >s)kׯ9s.< |m(!K9aqhaR$.H̩ 3 q-χ) m ?@6{lWExNZp އ`#oSaur^_]H%#eR4ݸqcݶcDTpys޵r<"pZcM `ȔRRN${Z+5z|8=;M)18!. κtⴺH{Dn{,Jk8q rZi⼻YKJ;eYSN-O7׬-U 8-ܤYd j´{D1uu-{m9Zt%Mx9묻l~{28R<Ctxsk."Z: :R26SJ-?~jG2bdܴݵTۣHA{÷>c!Q՜KKo2n;Q$kùC!2D9y ! 0ao#g-Rstva*\j. ;k-}T=3f<ʲ,a4  ;GGGƘ=[eb'55g=yjdO||2@kG>v1{ԏ~o_~^?BDxcx/K_?_~?_"Njc={󞗿_mc%/!~]ӟ1_>|~==3o|?ϬO}S<L/ p|Y^[[~~-=ӯ|+w^n_|<×u@u_u;|{W_}w}w_=w{e/{a^W__yӛ\Jy_~CJ}z5G?a"z˷?}?ekOVUj-cLmxH)Im8NQ~fcC+FQ"RS%޹s'ƘaQ >lvq+DlJUQ< #XTӃNPO ȝ(( ]>  {obU*#Ƣ$aXkɹTc/߳9r>l/Qi\4wرlu(&t %wWӉB fh)%TJMK1^k,9fP4l艌nQfز%CN-(P!@ "̶8cspE(i^hZEgRraA k,)AưoMޣg9ZK̖ kJιT%GrTK)6tZnq> <?ߟ<`b> IE !JJmFdZBD%D`VZZ\\2Ll2tVZ xtP/4ncȱ9;CRhk5okQ+]B@ HV2Gq0iL)u'ZAvǎ?yY2)_jU 1|"*sh/)%n\DZ*`m e1N Y F`Tw~L!$ /`w26GKEmӑDTZ{]q[kx C('҈ƹ˖[6jriLtݺ%6Z"G5_=ݕ%K.4yHpGUyF{Pf=F24Z<P',[DUDȘC7 k ?wYs$NdebHyAEڨ{\{ݼ;wn+ѵkXn NضxhaF:Yk-iJ)JRk1U\tH)d[ 6L=c?`e~k?HkX|?LD{[߂'>ɗ ȫu}so/>֓X3Ly_MMݿwSO=nx;z=ןg}x/7_K9/_O>䣏>O|bJ]z~8 y9"_z#h|> \۫;O</WㆍQjy7snYUa-RX^ cZfT0D(D+ݜ;]090#qݍe@c,e bBkC}ئ 936"5,cg_V[,^z#JsIiu>ju0|8.C1URb":wlmr/q몫G}a2Z*ڛ++!iRz踂gl !0X, ȴ qlTJZ͊-RTз3%ZK[Dj-^50(RtĦ1|TP3q)PkU1tqf)!{"-@&&cV8R" U;RՑH.rVU08N:hQJ~z@1.qE~sEBiB]ņۚDDp\+RJm"\Pp,!?b;sG&ClqATuvyYjK;T'A aHR.>xeRa;w<15̄6VCFD }DwLӨMn0HO۝zU)@V\Խx'`[v4I;[h ChRKy HV%*%uΩhՊ`u邚Tk{K#`Z^BpΫt{R -1lDЩ\bn".cqAEZ2rhuSkywt4m&s=y>vR'['_uoo}}~>_ru^Yʗ{_a)c?}>:::??_Do~_lzw[w늈~ʁp@ѕ :k>jv펏NNN_~ƍׯO?G^YsNJg@3\fs|1Ɣr."JZrN QUsH89>l6&\)-Z˪z[n/]3TБ^/tN`wP;Y]b69/˲ZNH9#V:'\%qK\ k?/yHeI)ߟ{8Flv w/Fp hAoaݞ\qڵk6@D)󳛷n>s>쭛NOOeYR Qx\Z `R`!cD%xvvvyYJdh?1E__ws,q9;=}֝;;u2Lgnl! 
YpXDZ._L .Ù 9}ƘӲ)ŜSJv|8󼤜h28kfB޹i4D(pP;l|~eL=m,4MӵkNN9bL9gPU,Z*sCp4S)SJjT!Uk*H@)faaX 8;RN)=eqm6qaB@t4RWdM%eg]&k SG`X JEX r֘bK.mh6W >)Zp){fC?͇}<㵓ov!0xi/8Qq3MhM{W`o)ﰔRIs I1KBs,˲,͍ry)!X$&0 0; ;~UJAZ·yspH1!blL󮖂[ vnRJYAPn3m{蘈NOO@MS :h1ChfE~c4MfDJ0Ma̶v]Z{1R $Mqdk[; g iu0Zsɵp`B0Ϫ"`q;)0 pttt=}Kf3*ճ;|駟۷izrWy{.w_gn߾Zn_r :_r {==ec,׮]駟+\qk_+8ޟZ/lZJJ 3C? "zӛބ}?!O7ظ{]ǟx|&TGasluv))zE(/;ӣ$l6Gns֖!0l߸qƍnQ]l vGG'N0a(iʩt&$޻0 Zd*EHmFEE0Uki{ޖi/F6 洭TYCq$Q-uY~7|M5n)[e^gg˼9K #+15f`>qwG7s=ws4Ml?߹s[n޼}r8z6 f@cͼN1UUtPA; ^W`@s.Dj iCx$ijWo2&š@3>Mqilǻnݎh9??G2bggwNO!!ΒZ j0LBPUP[k U4 xaR(mLoL1r%clHm, .xSZc"X~zA?bcflgÖw ) !мyRWLJHU6w XhɐY z352(<9H$λq af9;N4 [6&@ eB6SC`zpan6Tnf Q- ZC .1R1sF+M@<v  99R |8v9&Gk`:9)4~.y!NNNarMr\)GGG{׍1ΝKzSoa$k ["&1f'km.e^t|ibkktmE6䍨r^q:*RZ8޻ve˜߷ڹ@.b p2U_rTU,B)"pOGhQVi㍋%H8r mBBZ7縼1J0" k^{7cy ÍZ$9GK9%T7x!,"laP&brNBpp%yWuv:b8$q׫zR6MHjaKwswq]w9.袋.1s=wwwC'jDs9| R]IiUbCaćq>8DP4cZ|*qqQyh?ۿ<汏}_?׿{_veN+Sך@F>~}+^񊗼%wr7/ԣu?ԧ>}K_ou~?'O|SN>az9'#ND_dz?5\ uϕO[:qsOU[71__Z߶8D-Dfպ$X0L-UΞ=wO) 1{ybCHAkx)2 ZkM)l6Df3[.IGlM[VWa-gG"xrTz֥Y8 lFKf>~Ҝreq]C2j Jj5=s C7mLf^$ Z+ wV8D=:9C1ҡ존Kҋ{-@7".תga}[άր 2\N\: f--yc!PUvNx;t:_vIU}/2oB-aRU hfgϞ=88/~4.֜8ٳa9y>,w%Y!"ƑrHF I22T-;njrZe%yj}Xh Zsj&8}8^tE_|Mv^?a;s3߉xN {Ū5ZOAϹ4/uRm-\r%7)sJn;}ԍ7'=i%&/ѱMql{<d&x|Ɲwy5|*:oGP:Ԫ,|W"皫Q"9 =?㶓^GFjhzV[ 5,C᥋RSMɘ̼CVk[; DHg,4Rb^S'#Zs…^$(ku IDATSZBvyaoA`b/>fHLh6"!o}SB5 D8;܈5^"b'Xj@AMj"J&vv?b:tPe!n{Mr$Kyb>2Z'h[?4DN.L%Ԕ99q*ΩuRpi \nλZ}l@ &KyS$* ZkYZVU-5&T%yAPt T9*1FZsJ=0Zs wun)EsGx:Ofn8uy&U#nCLM4/ipdNAx$#3]C;GćBE"iELȗw5մ}88?Ƙ9Mj-"Ŋ՚RcRfS!֪O2Os_q'"!TI9a.jQHʅG4$ΎtR7ZU]|ӎ+nkUok'z]CKlJGqCx)yā<b&:\ȹN=̆[24qØ0{fST!9Mq/Rr- @|GCUKyNsS5 #H I72gi//JHI"8-\ric|^",(7AZjrZk=88n8X{b5UMmg2:4c­2R2nlA"⼤5 IMU[4y7H;|TGDyy& #ΪZ]`ZFűO?lJ\gϞ}Qi{G@7%NB4 ̲dv/@/.ȇo[^UW>+ceX8xox ^_zx|s?trԩoߺť\B ;%Dzũ7|M7ݰӟEu]lSf85n{ʹj)P͈t&oY"j#0zĵ.ڏJF57"Hrh)Ś8L)U&&y^?ߧZo ,¹l6~%|@l*!aU+o6:RF#JkQ"%FfTEx64Me;y {VrCjٰ&vzغڊHa D3UL<2]hFat +vJLK!ZVViZ*T!xڼQ5k AMd&v ux眳&j.g'QY!>!uNDCDkUeRK%b9wЉ-"R>s(hƊ{E(RR<1!^ň g UVfƜikM ;?8`\W#pS%`ȱg4ҽڋ|49%f zELߩȬ83P9,W0`3UȝRlfA@3uh~}f)B˒eH*D-^WeUj,G"xa|~7$uBjTMoߡ L.6'G^d˝^JO8C1Ĉ Tj)΄D! %l7SZ,> %·8s~\I!vKt5X6|qHWD$x?cvKdȓP 3VBJ`G8 Z&+j^Df@Ͱczth̨֢UX~X6exqx:̧R0 އzg? o;JZREߺZDtȉ'a03vM[s@!Ĉ,ƨERMogc=˚V08uWk>>u UM9Rz->xoĸkg^wu{^?ߜ͆N) KgQ|788SʫJ+s=rwZ=af1!iږ{Z獠f>L!Z*4ë֢U;7[zgQ:Ɲ5lE، 3l{N fjmFp rJ@:w94JT;6/`}4:UԢTCwj)EƘEXWXy?^+F(QO"9#Z`jUM2 Tb..XbsJ(YrwjUɀW3W2a6!4Jѥ7c fz̙=Ҝ1ϳsM]S12Y5 j}:(3YbD96h K7,Dbf+Ul 5ed*bp1U LB[sؠٍeq4/ uE D'k.1x'D ,|E2Rbxl\'vHՊ!sissW'_sP33Rn"lNBj#y^c0WAph&&"fD}W+r眇 CS<)ơ7jRK)5:`-Ts0 s%sacZ=YfՊY,Z-Pn [ѪpE[/FK=vtpՅ5'+(!&0)A1#*[P0fxbLKj)1yyht&-IPmoSf*pYٽ8!0vRl_X6hu|䛕ZksXQdUafXNIk plisvZ 켏Vge֜.A$0ތ^T!CP 7?+-\@ >TT5xOLxpw$ȷ`fs#_6%^8,>񏕒>/~#qɓp۽Lqqd-5hl#2;6;{ qK_z>ss0]g" Kym4,b S8HHw9WSK,)9,XuSJ/@8xxg?s \?j!Zߏ[+07Fm'[jMswr7DD]vI) QQy"sPGZڒPy^,hEMպ~[\F */FHB;""NMJFCS)R6H6lF!hOԆ阚͓h-Lb@$j)4ZUWI8T_m(D*D3L\jEh 圽od bbt |͉ j*` ha(K9AKDzNwb{{20mCWXWUk.-,L\ʡ#ai hDjM4r sQAsGE 4Dժ,ftjVBdd[tIK|Hpxz!3"ga\j D'; LF؇{n>2Y&R{T5bTw" d:G!zp0mLc`#- [MF9g,$$$J9JRx8³$\k(UW! 
1 RON8’hi>1v[mNsⴄZ3m}}yZl `h~&f8YX؝f+*_JTr+y80K[2Jű#棑4R-Zs(λF ^: 0 ;'vj4sNEB,BDq<͛F*+3S!PdB8MtX3:_Ua(D=Z1܊qmR9rZ,`L*:M]`>=U+< Ud?Z99wwvwwv716MfZui'bC`{8N-p}=gΜ=sbpZH:;g-+&ƝtWڢNZe&.T`.s9%8as->xq<ŇO9a@\h"xS=&v"Uqъp>N;} U !.ư 7[w=jEՁ_s&Bbd-̜s 6;6JDŹ%4q]'pɝYXv:`[ `|f J)&lbdLVӶt!:Hʩjdլ e <ToFJ0 6Z"Hcu;IPS11C:#pHie6İZ5TKXmhqey>88`CZ;W1KZH䈚0鈿C8v2Ɉj)|pP{,B-.É} a汁!'QJIKRXn*![ĵym&i 4 ?\QiN9CG"X%\ra/E4m|4Utၷ!ŅZݲG8q%D Y#y.V- 6&m <9XD0FS`Q)34xw6j4tLI)t5eYn8 FtjZpN\Ll!Di!$Դi~d\'MKɫWj-ܡA0ii\|鐁t ˋΎsqw9iiD:'d~y,VA w)sNcNƓRsARN0o݄8'PmlB%pb➓L\U!qЛuUspѪ*ι]= ;pSlZkzJsJeK M:咙ٳ'R p( 0xm"O;nDȱL+Dm^MNƘ*b\KS*GUDoоOXr"ddj:Q`O$ѥ!`3(7:8,|(W} ԬVq!Vh-)hks 1a٘jE#Sc.jΥl ;%.#Zugs<ϵT,PR0rJ% 5dj#.Rw։&n)PKЂ3(R aYW|ppRK9%v@r_)$t?|HA t/U,̜)Ċr\Q$S6nyqٳKGj4 9YV%gx4j K)uժZjQ&k֧lV6;ojH?ђ3!6՚ 1i& Ks"NjMޱpmEֺh9ԬL9Q "fSf)>y}ŌRe9m:Is$:Yn7)DZR[ gP1"fe )"ڈ Xš1Z wӪePĕRKuw}}nϓ.l8%E `& 7 7@X xl[Rѓ\s7EK^jU`AKf9S~5VX n)T2TJ% $x?MӜv}!r!8[SJHLȬXΙk6\ "> yN)90 !t,klY]cA豻KFiN4VE|uN|a"fA}ܢhL,Y1q%DVL0dwzo(l6iTmYSLroYk՜S ;=lEbz8s[V'N dq6z=xAAZ9}[e=ݤ*mD\9`=&o6=sU)xp"< c_-SL/xq< Bdx'o'=5{M7-ӟށޱZL_@/6=/f{\s9NDo_uU ^"C= Oy7+vb oF"z/>@h+3ƛ~iUZ2r] 9 斘skyGZ;OyX2O3Y(܊7Z{j{wc b`_[9>2rY:z,t.cqD$s\*CJVs"#Fiz- |`R#iNLf@sZC("vqgfBAN6bhbyf漝)!eI)IvI3=?,) R*`fci%罽=ĵRHIIn|^E Rj ~L?MjTr鳚Dqy0i\TSKBZ&q|X^&4>9Qj*,".FQ_kqJw!i%4mK+ȝkj0&3rqS`8Aa"5+_dsRk*\iP8sag^y6vਔ:Ӊp" R+Iar,6Z+ ą8 Ýsi4.;׃ĺ)رl*aH,ːץ*SpTצ8m̓3@DJ-E\Mm;kd4`fA+Pr( 6|m."KZC9'jD?FR\B玜Syh הyd]w,-׬K: Z(!K05(qm׫vpm6nv!" ;c&jU06.>SW8iO6w~u-]vDhaOy۬jd1ƎjŮeUn*([Z `0A[WÏY< ?.LK.A#%x&=GilyU'>n(6=:RK)"Ό5MA*jaRڮRѕ\K%x/9Ĝ3%> z¦. {jiZ~` IDAT..6h1SI zQ  ιSӜRq5qΉ2Ü(syaL)m6q\wyfb$Nb 5 lW͹ԪSpniNkIx I t$lkQ6%RNnHk>"0 F͇'ZE* /Lm.3u!x$LkN "J9-dU:H1F d1SqΙR0JrF&/ɻ8r9QuGbKqj Ah|bqr)I4iM?b-i;,6Za}P.(4ciSN\Kp;DJsR ccwM#4 y;'X;ڮsd%sRʼnwj0wEOGz(Η 6@`Xbf$='ܓKq^甾/}?㣇 aaRRJzhZ-s<<^A?)ZbdNbOʵ6nywDZBړND ôݪ)tp{U?֒uqByx8 }C9}5y( C.E@YƸ^Dj3nj't%):jZ'N8qĪ$uRjꔲsvl6fgU!H,)yNfdJj%xԔ&wHs]"Mج0#wy! sU͹Q/Pz9<@`bXhӜRʥ,aJ Hy$xĄMin{Lxfi:A"pE8 G4| h$xbZsnAoo{'=姾;B?O _罽3{{g-vM)uwwwww{?3g瞃U nVc)^z.wIG0WoᆛooSJ4Ÿ@DBI?TsN'?yǧ>Oя6x+Xhsw\[|7>e} r׼8ΟC?΃߻/}77n믿/ oOO..~>~s\~wq;.r":88x _x_|/zы䃌N [{G<{/~_o|~<M]s= ox~>|x|Y'!_fj<&?B׼]D~ڜsJVE׼ 7ܰ]q*w;O&"(?>L2pKQg !"#1{-O]su! /xs;88>8^i"& VՔf"JY$x,4US'~\q3g`QUGp$! >x -JjQNmss2+ʠ*Ss~°$evѾ"Yn(Zd6O"2K-iC*~?sL?8圼jM7W}ozkxHD~zW^ЏA_@Yz֯mo[ou{_}۷}/~p=뮻c;{:"z+^1mff??++w|ɓ'7n"[o=y_W}k_Wvm3?38_u~}{ߟ>~K^x|3qIhbj˞׼]?߯9#'Ez83(?*o񆟼`r s^t)pW!aڎBx>BD/x OyoX\<^SaZKJn.}V-9תD4VHx7/JD951[~O]}M-U+ʔZPqZKe(fboa!Y/U[BeoRWHZj)U #4ivHUwjV0ZΪJj{VS쌫VA^IB!xB"Ncͦs"z̴M@\9b ͥTr-8cZKڳVyRS6P֝1AmǝPW`V R3O*-JV@jj*flѼL \B #Җo' #+BU"ZE8m.EIXq{NhIjU$Zs>dHBKVDԒ3@-<$8Wr)!gkRL,8 T[bmK0~QS@kG!ٜ9uH 92 sql0  #i^"fT *'N¸^mRN(7kse:"1UBAp]x9:}tѹ@W6E-HQx#I=rpYDb^㜓 sΣR *tS%!SLa]jUfBiw".1bLU6YDzngQޔ\ `db80qjB;;4o7ysw1d44 16HDڊ8kFmk^sOY޳!n7xiM cqp 7Sx[3ƣlI׉"P q\vihÙjQ"rs^!R?&O^ oRs΅5;㾝g?=O Ǟw?a1DEs\r1w16t9ӊi94s=8nA+ ss5LD9'.e[x(9v/X9q0|/r'rM75W_:Cq@D'vNG4zC//K_?8_/=HDw~?]{i|w]zd ;ozh͇e~<19}ŃxW!3̷ysy򖷼ֳ//?|"zի^uu}EUzՅ^HD/yK~~ ?w|#~问:G/bx;84k_Z҂[}s<Љxbdt6{Fli ?]KaqVsVCiOZ%+|M8sǯ馜Eq'%}yO|q Ήkdl^t6;qε~P;' !=1Ɵ|ʕo|ZeRmmr·aV6D?gʼnchJ-fYR0J1Ut;ՔȱZՖ ._ҖDA KѩQ?*.,{!D}5B?K+"hDFBFd)LdM2wd&TJ2;;#bĐ( pI)M Nfy3*1T{. "˥I:0b 5@O)yOars3ye"3݊xQW *Mr̠q8W8AMw!F"3J V, UĽ3Uȋsn9M85Pf j>۾j[ctE\yHmˆC MZwyf;刹8,5)=D̞&QRԐG1)g_qdyM7K`q=/f1,ec֫8!D盻Xsi̘y\qj)49{vN%UlŘ|p)DcY6Ѻh7\f#w^s6Y)ѻ ` ;q5l6Ѝǝ絤>7)duAfdffԊLDڕȰ]ENvVAvo~ _ܞ+iwXNR߻kl^{MuϜU)'vaXYu?G?0ND4 ч`u #saMGL& ǾW_-,{Z"ܧ|Jy>ޏ6vvv菮 :η|+>v-o4qOO_>W'馛^W|#j44__E'UW]EDwuףhأ:(  ð?ÓΣw~k:p?'ta-Х%zY$b[ CsN\(Fo ȯWc/lsN`MG9avj |[Чќr 4M˓r4V9ԩ=D^ 41DbBm~}JstymA!+{~%Ɍ 1qic,XuZ+ZEfN C4 pYBHQ кLNuH1S rQJ)<bC4uI$ؒ89gp)|^]wQ'!~cT-\x1s[\!"fsםw4k0Аy! 
.|mn۷-.~>?A;xqtjw~6B7_R7E'9QF,Կ;.܅|Ji< Ghwzwmvj fE'nnm@ٯs Jv: 7f0s>(@ GdH?kknݪcu1c+ Żneg`vy;?y'سi۶}?qQ۴"_Ƶ IDATi۶{oO: p]{̋N8^D}1$i[bP@!^|_<$i|Nϴ"^_t x{ͷӔP!3B躙FD+H΁@N;\oF2w3v8p1~-\|֋O>Y ι߾~ ٽU眣 w{9}-xnsf~udJY/8hxmlNBL9O9SNekoE'%1)~i7rN+w@}49+*zw_|)Sw\x A pH1%Xc}(Gx ; [V6S8nlΚZk榟f5 9./GM&pw91<aO]c!R`Ta9gwzO߻q O8'm/>D"R+|;?}KO?vZ9w]8tG0h*,ip+HmJY=9%A28k'XkS!"|C \cIiVYq9⇎u@fB-+ɢRf\mJń#Puz 2 ^ţY h{(3Ծw=hI?cB\\ao$,:IY$p lj{=ֳA(ojXfcK!XKJQ7AB ,RKPdfޜȵyZF?ZA`p{JIϣ)l2ѵ]AP욒;*HXF^BFJ< 0DcJx;Lz)MƔm;Lc0B[[[ۺup8TI(笤.ΚAӶm٬[ۿwpy82B{6f31) 2s` H[ fii)s$ߘ%lcLj&`0---%1}l]]ݻөoƷ8BgK7,FK t:ټo[K.WW<7?S?C6?P:!"/xT`З#yΝoߵkiivT9s*աSL2ԫ([k,8񂣏~s-]<;hG\udM˙# twŨ_ŧ֘ 9 }qO=뉇BDm>o3G NPtԓ, XkyW~grȈkY ݌!OLmAZ[ŦB.:99lĉw~gnw;3oᥧ̮,E'`a)PSCv3:s,iecMT岚cJ)̝C2.M'SlN[5tj*uPꥧfڶᰴl$ŗOEӢdE^i6,R#0 *Wg]7SJ1`S=~?vz׮vQ =TMiuyŜfJbE肾. z<+?Kwa}ۧE`)VX`~) bbu4nT˽P˛Jּ0R*0"򴧾9Gέu{}*pBJcڦ)&%CTS6 Yc2\YRLYmd*>Vkc@R! +X!8`scY>\X(d_/%$-5lZ^9άmmQ9k)p+z\+2UnHR2*JC(U"BbŦ3$\:A;  gp%JeAed&rɼRG#= XP桪~{Hl.I;JgRFNhq9<_[Q Kc}CZcXJͼAXcLAٶ4^oړiEMyŐN{__Kpeefm}OAA8WWJLe\^ZZ_____$eKKp༵Ek9ۻwF۶۷o߲e֭[5VCJӃj8 23wt< ]1eB]Cdri~yeeey9)d涹mnn{S?}mmOW^C6?P:YZZV*9nxَ3޳tBBJz , ]˚ZΜr"":ԭW1n֥sp=/:>wv-[bcX0Kbc}e[Ь.ߛs:˥#W" (| O};"E'W!t]5 o/zh>_^jƓpMcyI'D.TH E!dΒ$apݻ/=wu3aR !@}ABU)%RMY`,sG =w~;P9UJs.KD&Ťަ@gөd7 `::묵Zܳ4B@<#.mtJby駵M]lQޔv0oٕJx\4g\V?fvD@㍍Bi86MˏDxͮ7쌝0f_s㍗,L }3gVkorJٲeKQvb?xGz]z}#YOḀ { طuɩY ]z3xy6=,JM1Iƪ*Q E9oCN\v,9}e!r"l[Z^p0u)b9%3CZ!&(%"&SZтګTHr T 2r(ͽ?\ONM\R$eT 2~֢ r BR0!c6ˤYJb1< -Y2Y#!fVǓ(DC1qsbYY.'BNZS jJSBA!>(,Y1H*=+ZPXXF.+ [)F }PkUfVLQ~Aѩ0_wFf8\\΅܍>P^}BHdY眵l2еm[&D6eb1ܳgd2Qޛ[RUgk M3(42I0j1ƨ -Q_/㋀P6$7/jb$%Bpuo}obO17z/g>RQ#4cH cKud .;hozq 7/n{Lt֤q5ZC1F2PKbi&{Nw_LPj%ށ$,mظas($~Eo1{Zʚ[BT9]똜 cT$'U4bBD^Cv*׷eYc5"$a=n7MH 0J$]g"M4^)Ȗg!no@ n&d~/9u]8 YJiGZnWh4h0!(Bh\}h,IbqM SGyb9!Z0,b"I uU2h#3[s׽96$WEDckD23@ QI  Â(ZZi ]2 c, 1]I# ;(2q77IK8xbd1q$|ZX!,I-sXN2ƉH IN)&Ė&֭bE,"ƈ-zAhk,0p# -%]ӞÀS+uFL@wD1|К{5O-&2FkUQ+}4DP™Dm`YZMDUR$Z ;p#+APfGj#c0{fbp3PO<[B6#'M z%aҏx6A{W2F06Jn!YP$7>? 2 Dr="n^JF61RQ([_phUU'?)bҥnZ+āc l2g1, J\T$Ʈ5Z`yιe˖Xc.fwؑQ譵EQn:77WוV){ٙEJ)I:CTոx"bYᰮ|1xfYbu]j:nY8(X ca,0+%i#T'<:[;F=c.uקoJ_كMOǢ,Y)\хseYE .j,VY%jdELVb&␺ܶ J?Ʀv1s=¤UI:DuB~p~YkJ kE)`?'Zm1{{@Yy^oJF֭s7mܸq͛`0 FaUU99Zz|UU"C1ELtYc j[nsh4M(H1HMSu4BhnE\U7$5~\UU5nA(,NutDCJChcq̐NxK,^H'{ 7"0 Fx<Gp< xym赫Wꮻ>oWC6߽뛦JReE.0W1%Kf3@TFss[ƣQ4FQ!%7CiZ(p0?Fá`~~nn~n~0?oE.V%Ko7Gkddc7HD!Əx4Rx/,˲ȜOZˉF|ֹN0nꘊj=4 `4uSTJi覛|s~ B Nh֘!RME8xdZd9+]Yvl/Gz*C`RB L{u$+6-vWH!ת9Z[k,mݑU A]E39?I(V8dv2*_]ZIĺ[VZp5TɅMd"?LYX=d!d\Ro:ԉӺ )λКL5*yL0S#d %1ʙe2E3&٠A 4^oꪮU]7ĭ5eYv;NYgf<'jC*kMi3C1na"yLF`8 E>#Ǡ,n;33CD2z.{˗/vlȚLN33/ }un&cL_d.Ee#Ptam2)F@Ni![lzᇽoۮaَ-]xvmx񒙙NZ3( cfq$ ƛ7nygm0s"1,+]֣ 籰3 Xww;~՟rG< w|lۉ_^mWܕ:^gYk?֝Ӧ\=ѝȲ>[=׹gtVb>ܺ#&m'θ g{2Z?CaĿmmcYxQQ :cSQWoN5N?@kSWIC KpܢYDc"xN;,>yםZk5ZK, ȩa3=߸ G@i&"e# "obM#OoyC}>vAų?hbd82yN?qL<5k.RǼGh`3%_Gۯ[vqic`"kXhm1gv7o\}a394a•8C.6VZ=tߘ|͓ٷ@i0h\kʲSv;EYjg;n݊TUr:2 8rTs {{w&v}ptv?O;p7 H4(pI贃M7DX5W 2JqQ"mB{9uժOv|x8iSW>}]o,z ]}dK);r%' VI6F$R5MRRAFLwKKZEw0 Z&d"& ŤyRb#(lۙLXq*%]p2%#%%)wl:pC+-7(P'l7Rn*Ђ .dZ YcA|Od;X0D)TYHoRJiTƦue(V[]}@FrÏ1R(mV:oQvԔ&&/*" 9^'9GqY"2V"L]!f QDL$ cq66YMl‚ib$R1w-gG+`zrj$g*!2I ]&9iJs"E$=D$fH!DQ19+zvv@lK 4 ꦩ2 nٲu8GQ3UZiJo<w{??U+ u7_4LA?mXE,Bhj!)DWsUn-[dɒ^k,j CsVٓda@o?O?oX8?g~Wuiù_>۫wY]Lgr01\vkWƼ(]zuu3$>:Cìsԑ;FRLfP9 %5kn^P+ktxaU4SV8uժNh?Rsz ~ &d8,dipE.sC&bYޢRE˲Sv-R0c1SWDiBJbOw|6|ֳa7rYkN]*(+*@L1閝)ħXYQʡP(P% ^iEjB#rnW{/i7g>cffoU?-K7vX2TJico7|ɅQSt\!p^S$HROh1`f+Ĵ@º3>X5EĵWkYC8q]Q覑錃Pdc- <2 4f:yķQ ]S雌]5(坷rUUU$֤J^QIʢ5 . 
IDAT EQvoj<aI uc2qʊ{1`$C#_@v*%ζ, {o@!u {ֳ Y=5y *OݏGCqe3߾S4֬Zs|# KHd[G"J9{PҬBf  B3l"dSoz֚5Yie%"qݷ=>6Bvd 6Fyk&*$YBm:1JQ<[{ ]h^6%eèBfhO>)BRjT;=SlA_I;E]Drb!J!ʨ&Jkvb bTJsAltŖE@JB}ֆj$'m6sz"amC+| PF[e F@H!Bg1#E&et[̋bѮNH2FUY(F $&y"C4Z&iDLY%4rN."A"+АA4'rLpy.1qd!(fnT=Lq]7[Uu6EQZk TXf|5nٺi֪֝N:"WZ Hu *+aLur)Ҙ>8^ cMf?;+ײ,Ru ,nښRfvv(KctQɵ0c?~UyDI1`08-[k׮=3GWX♋ n]g~ǯDZtyL1哾Nmo~ҳnQx?޸O~zj3=u{noo]? 8ԃN䙋b~to=ǧ>yCq;?~eTk~{֚Fif@ ,1B2'=/̫'-ٴNPJ%FT:-G}`&f}ꪕ-^beQ81=x$ gF*@)):"RpJ`)I-3!@FORH$kH{i1=m+XE,1B!xqeq.?bj<ZUFКk7hba~cvx^o|'o2R(Dt)9>~-'̇F0(Ps ] tw"*aPicҩub= 8-uBkMHȊأ 1"1{GQL@rnWc5IS/l&P 1H"~>=Ez;VǤb-fsD[8kLG\NJ)D:eYֺ9B5F;+16{M_)*RXi%EҔZ":Drbi#9UJK"2t)-z!t(@Kj,34Rm|M(!f֠P$)b Ԟ7> c*Uq^[> } ,5Z:PiJRZR@p1;J+N:>W(T ((mP lČ r +E5Po[$w.ꗳF Q Io z3x!(XYZw̐!Q".1^U9U6-&bDn bdQWP2⊭)bL6: odAD ӕGD1g(3̡5Ik%Ȥh1U6IR^PbBba CGuO75/&9  O@ot!Iv1V/Z-1i8 $ GE, m5 RK^`&6Qc=g-MZ tιŋ,Ytff&E\g4"uUD1B$0XrD Z8~ffe'郘="+{N銎BjX#Rna`UUE9KLsùa3;򌺪(>hNetҙYk*m !h0Ss;o}[W;{z׻fff~{z+_ P{iC-q/kgq}ڠ7iպ9>p>Os9!Kg<5R|}MOX$y,^syڕ=밳rߺ ts޿sn~Sӷw /=nu nk˺X|P$9'`CMR<rS۪;]NEoN\-:I{ƠYk6o8`Y8Jd=1ݠr쿒Md9@!I^s6ŔHV֞-H$ZkEQhc(RUאirlַaq[r8+c2q)fmNR,Kk]QFɊg)459WI#O: rJ*DEJq"o9GaU9G)]AJ)u桇8:d<]PM]Uzt 6:p !ƨt)s*7BDB;HUj501T#GqD`>u$ZFkPի_DUbTZguPEbf9\!DTQ8 gt2 D5RgW 6q Ү218@"ͨAF2yLyU|}$LIA"(dbyZn 771Fkmە[}$kh"j$Q1ƘVfg@ HD7b=e :eQBPJUU57?B~ْ(VU32P+D4sss6n%箪ʇ(8UUl޲uk$-%<l+\~ҥK`O:8;+w_>qyǜY/,0ڠPWJVz굯Y?ڄ-}ǽoN/oOW;Qॷ^h?>]:w>p֝ԡ*D?Ͽ9N>B~ˏ8jףn魯ZsagrW~/,/o?g9g_y9(`/+~EWRe]^w˯{.mδjYw9[5JV,_qm=M}ޯ?;>v8;wo=Kq=dϓ<ݏ8/k혷ҔO<sFvA38jףy;{1qg?joMT a* qYkٌJy:摔 QMe$v1=(΢BF`nmbuUN،rc'rҐ3O^&[<'Jt8FDPZ!#GD+LwmRPFB1P!V*3e_P(!ꨵV q֚NG)&f!hBƕ+NSRlVMSP@Me` dʼn"Qx|h|*5sL1@mlOVZe}svy Q@i/IQ9aܯ뺪+;Z SUu4h4ݧH}[J1D&*!XCD,'Bkm0q]U-G̑"&¤SlcJ%Hk!ҴO$sHF@,)BZZ<KD@^Ou6EM!RULy"%,i*JXCueQt=c^|['EtrGURqHfx<1fҊ/\Qr"֪U >**&(d0Qj34MU!p4uㅾr uJZ m@4-A*$9M%SL!ZgTpd["NL$wHkl/TB6uvh1Pf^u|1l-VCk9GK%03PdI CdkłEMQ9J M>h1Jx Ѝd?c (A2E!B,#fTLLP&iPl- -78))n݋AL򯨦N&@U"T6+"3PIK ]#pl GnztkS"BJ\!(PZt+;h떭bӒvn10 p8`F~n\aYxI7Bt:X$ZS"2吡1Ra<Yv-˒h4}oəbXg4拢(T֭`UJ)c,7>`)B1nذa\Ow<9ٰaʕ+O=?~6Ҷo{r$x??xs>>\k}sΚs>rGnɍ7l7~Í?{o;OFvE}]pO=9B[ee<>[hKG~W}οmlxWĵ_\iޫ_{v >߿jMM@)^tEo^fo?uǯuN~z<ڟdqgC\{ߵ|4aT7y~e] volws+I<|qϿ _z['3lg{|7_1A;t7wӡ/E/_|V50!qSB]F-t%N# g82pYeV1H"1+X,u,R114u]U kA"HցST{Hq!MZ^ԙI=G!{peQ%(L! AxfVYo兢ژ{9P-F39'69k+%QJ1&e?#hɔE5 G$NWZ0(k&A|ߢ(Jgx^,}JBI(FkLvH)Xs6'$Ah$¬~v5YXv:* P;ɓf6HBAn@BC'1 4Z id<3+ۏ/) l#slj#IfFΒy=JIvF孴j T1=2Z@J+\\-fI%Qi7~ɚWҊ\rx&TJ[c3(H9#inι^733333t}@p8L7 ϻ7u3 o2"UyL}h<Zaa| w9cLjx8& pNiUB$K!DbZKsBy0h%n{{mo{ۣIK_??|{޳N;{}{G%\W~E>S|E=_xqOB(xs^>/%|?dYw%/}Ǿ[~x3.=v`7~_oZ󦥝𚕯|LRk.9zףZ?'~Ez+U;G7pw? 
|ݏ~.3[O٥_]mk.K.w/ͯя<}O7>т%%Ϙ};p ߶wkV-.39#*x/#uH1gI ^jIng|kXkbL9unGRMSo#,(D8t})¹艄¢9 /J*T(ٲILHNu44 (ƀLWS>Ƣ¸ D1!DkQ+!F`Tq>AC 2= :h4C!Ex(ILzshm&F RNG9C@JJ+bDF8130*ٝ HYDιN`0iRr%MLbқ/Lb' 2Y,I;e^mPc (X l#IW;uA$HTy8{N, Ė FQ2It Wc#0c5!Di Akm5FjRڔLMLbh $Q_N%؇Vġ`b( 294VLb9&)B c)E J"L%x-ăH5Cfm 1:AaADZRG$@&PZ{KBs"  ĠHΨ$n )A6p%*'YJkSA2#$G|\ rl'_k:9gZcyƈڰ'-NL0jO5{!S(W -1ja-T22>5Q!E!):k%QLʔ '!:,3l{Ca]Wє)1f<ϋ9`8cLq4#Qj`0M;.15^V֘N[+ E sss h34MEzւ0'ܣݮ19Zc++ʲvZǦiׅ0zsy-tǟ}iӦ=sιy*pac%1ʼrW>1C@al}up o cyfy IDAT^][ ~riG~ qxSgY[jF~ԱϹY߆=<<=6Wm?76^l5Z ;}L7$ASdU"P6rG+WeYGA'O\m>ݛǛ~;\Y;|+lm|rl%]fw[\~/-ޭ};.okFsɞOy}ջQܙ/2ݏȏ= `۠peeW< lG^sA;d:Ic64,( z^HCPJ3N2djJoN$[!)E I-U1ZD"5HE%&I<¡!/DP!#TRM,\,hk @)LQBd)S& O xkL4*yNZ񴋑"V#*Luzi8UuC!-6ejgYW8묱VֹeFQ4$!ZDJgZ@HmgZF 1&ꙈdMR11Fi-T)&uQJP񞳵DJeIjKQH!@j/!jXTEY3iTb=d 7Z5X#"R RkqN$|wbzvfF!GêGa bHXb*-%hz/Q{$3v@P)Tq%F>An'DZupE"DCS&]gР ""bs:' Q%%E&\ !9+L ,T2a$(C<Pܖ(H"pD"PZK$ %Y gҕAJBfAJfVnEƍdRJ0!k/qL#"'{ J RevXP!n2p7Q,X 287SɗBN@RE33GVZKhjmwC&&0juMsvVĕ}#eclossFlbJ?B7HxBA9 ǬiW`#Bwh@G4FK!Z r' >+AK)b9~r>89|y:'}zJ'~J)ZU5Fvn]:NaI^!e!ؓMju( \1hQ\CD|Uq4iZ+&s{aG"X[.?7CJr A?lba,'x+7vaeYs='rçn{gmok6n/|W^px&6bɦKn #w=k?ڋz6#DS>UlG׭qKn#eoIגf߼fk[.zҿgp15ozwNt{U}g`ޕv^>xYsγ;onmΗWx|gW_0)l1{`eߘ8ξ_^uN"8٥y?[.946"/Cg>{v<|g =k7}g la bnMk 8ebVb#RAH9ATm)fB$13.q]UJk9u]C\T7e sqR~Y7g7bdgU,I#Jɜ,0Hc !5)"BRJkMR[K hK~uZ%K 7'5RRRkcNV `8MSm\ኢư1 T 1csI#R A Kj]Ő.bf6ڈmLH-Vu]c6g)D B "NHZ (uHPnGBkU.Y,1ĉx$Lv@f10jm1ƪlU8vd2^S5 FHp.^t`48bNQh NjԜں1Li!Kn,{\EZ_<4#0Kc⊤ke=,Bkvc*5Rh$# \YI\ _t[c}˟&/Ck]j VȠVC7EM `ng 0h,Pkd?c1n/W}s%V $|JVޱO5(&OuZ}JA5Da&XkL 5fF%>ۖKnM'$DܵF:,23' {/R$AE 6 LvBh_CL53{yI'y۰'x޻{{,Fc;w1r j(lqE7_?V돷۾̸cw?MivtWNJr߷/8.Ÿtyok/:UWu;;жmκ pGNO )uke=yA7o]|ڷ M7*5-R.-$)1P7֭[弗e)!nl2??h|cv{^(RPMoG2PQC$Z+k|HX̍oBFK~Zcl(TR)G!ț !p2xYkyil.-3s̚eN)IER BM)%Fmbhf`1q "S B@4Y yN)v&󚀣)/T V$֋X5daՕX % mF3d,b`Y }6"6ipuR2:KH!tZ'AZ6Iv(p&'%[H"'"hk5.K I 0ƒ56q-pۅUFYHd%#8p(@[gWJI8&Ių)\̚"xc<a`Ι]ʩ):r4aLɁ8K)+RYl1́2X>& cmqzvR1sWx\QهȤ1uD{?*%>S䤓{ըbfOk1C{(ЄaUUX3Mgv8MpX{bj[ofdc\-޵{yyX3uÅB4M;i65u@,ki86:p=)#M"5T]:thkks}}#XUƼsH]v!0D\l=~Dmd{,B^ve=u;/\,?P+Y9_apX%7:z5 ODR6\ HmUEB2@$ĤM `ACy%% uu]u:CU '"4FIgs}K?%U)C" ,1ۜBJLb]u$Vwq\Ku+\,T)id`8u)͍ =DmiB SLHVWb;NƐmj:h<^5Mmb`ཷb ]Zy="G?k$sSJ!FBJc>(C:5P u($֒s{S @~0cBB#2y-3D$ !`LQEH)"S5'UquL)S*xmHj e36IIȞ{:[1kq&Y$KK֚tf .DH"s. %t2Zr%4Gh%IzXkQa}(_{,]RRi{r3KHV٘H@(IJs$7`0`aMJB-s [ԆmN'&a]M5DW !=X8%B=*&[J L1$g¡Rǰ5NLTJȏrcDbskKd,oP'!!jPfDIVS !KAr-Ez#RtADLsr0@JB%L\m *R┺[e+-8O@Qq8^ZѻwoAa)oDI04dJƒ%rCP"&O7kGH_9J%2De[R6,У^#c2:+o)pZ5NNg>޽{86a潏1nnnw}]9窪R#tscV~:j`vALu=سgϞ=ui"u]9̬BlqfH4fѰfV8YC "G,lmСCBx~imޜͼFڵJdsmNgS4mʻVw.fl{l=9FeW ?$Cn/ IDATv;s˿2$OrMrTfmz bufl[PFklͽRUYkRL f3m]0iw,RI2jEh#X:P@ت9oBA>"e|&hDjRPjCL뺜jtXgc ܁ iZvc@Z.tuVQOs v(`A`,OM1f91@mZN)ƨGyW,d@$7ěcաmPf" e4<Kgfz5%N-;<.]`ոcRW0si>ϮoD뺮B -Bj2Xd&NC! L}[=QD#< bGK>jV($3!~,_ƨ:,U孱{"4 q<C5M)K]W✖\xLs6檍00Bdkb[_!Ak>tR9BK2]$h8TӒ۰r_ej #bD]'$b@"Dļ}k8S6QȐ"3S*SZ@BY**rJ%3[Ъ7ar֌&b :t6C8!?5ŽoeBV z,9DID4 V e#U iyi.OYH.,! D)?YhHDPLey.Ft""]2ե8UݭlH̍< Ֆ 줒l~qߏ/xc@'AX F9Jܧl26̩цQSP 4GRDQʕODY_9lցz¸ ζm~!f++++++iafO̲ն ]׭>t8dLj/n66MAJ0l5McNUy$c8BBJIʎ;9-aq։Ν;wb׷gc{l=~Lƿ@h?td2hrby*js?3LDg~_dy4I!h0(#Zuͪ .c !! 
# [rN l;}iPE(8*7da; ْ@ ]@ -bJ!ą`k9$eb^*m1)Y`žj*N[W8Br֮@%PX KTY Le]8W,"T|.@!E998LsX y@J)t:K0f)KuQGJ&NOٕiQ3R-j1vaH;km/AFM &bz!mFw7ژ(ƈH!`*p2srFG/ ݥK/[P4J7"IDbA(/cuoCc )(Kk)st.1*SL]%  !!F@ ѧ3wh  ,D fd5!xWN܅1… +)dz!+_wNi0oC뺮 qJ IaLMiByR@RL kj Hh͔cKo4DR$taJLƠS"(J TiJp7D1^N,*C!2ka#9 :{U(ؚ WK_ sQ>>'r-JLq(b<)u0HCaNR̆)ׅP0A"oBJN($iDti) erka4p}{݇1Ķkۦt6mʎu}5x﵉L,J{lfeeEC֦svZ5Ɣe ]HÈX#rI,殢"bl@mg&Ƙ=;ܽ;wZk6iQq-(#jK<)X9"!D$ 9h{l=?҉f5!CԻfNCĢJ)0`?5Jh0Kq}BbбwS_COE 'aA%|S!QIj2LQ6Rs .3wN_KDM1@1CFctX+4%dYp!DeJ7أ({cMZuiRH$yg1sԐh0k$vUAʀ1)X{o1T֐uxP Q3Q %$fL/ C$Q47TDWbJ&1"Xcc@-h1SD_)Q8CdT`ª<]hc$qBDrͯo!qսҼ<^ѷO/K'=Ly}tu9R'!Hߍ1Ƃ`hZkh,%OdP.뜈Xc}F{t [TRN(ٽCjMR5!@!Fzg'N!Az4s1ĄIo3Sl;LhSL)gEu.iW`M;PAB)"dfA'BPAǁsT# 2W0UrO-@Vcz?aQUug!9uH,NEZj/EF@”"B$=PW RbVxw$NLK'=)έ#`T{pj{Jz=X4:]D+zE4W%l.BYʬ]`EN\] 0(<0-.t!EԼyճd_&a2EKQJ@ޜ ٚ**O7ݺ㒍*1H秣u8xރ{x}kl۶mMH) ZC3BG]+ƘdСUUŨLIi4K;vC9@H5]:V9K HdR[zq:6WVÑ& =#s w޶- [ksB1Ck Km&бرc?{#g?s=w}}">k?>?iQtݣ,JJ,)yl k8Z+!rS03Dp!"Dvu" X{vx~(6t}y KB<"p0Hޥ673ƱD"J40}:lY-#a9#2ރ[Kք@]M)QƔpvbtSJDN}U}NQ11%%:XygAb]fg?ڗ6Dc}+ Gy.r*RTc>Wx 9<Xxk6 1hnn?ۉJ*/ig:Dd'2T4;f)sଳAQ Y([6Z.Z C@-Gx;zuHD677تϝ)A=pB YSCڮ."6F jSF_ 1Կ(؝_'%4^ٯ#R槢@'"RD݆YJ, $BRS41ƔXoR}_C]:$DprDL ZPh0 cS1Ů 倕 hsTJppՇK%QX]06"zbnBTмDkLB3 &nI _Za0z'$`b/|jEK)%E)CpS4Vw@HD+1ZpR,OR:S )Y7Ǟs @L%H9KW/(*{.,4~<ۋ5٫Ȳ-{hWO&ǖ$͒b؍1;GH8Ez/Д<) u(I.C R<%I)u]@@=)Rqm)%k1')e JcDD@v\]__NtRƩ0 ٹsu4 Cz۶]&?8*4LYꪦ'N"vtc"6dñu]:'Ɂ>ӀEiV"!Mg:[i*?LI_Amwo~~j߾D1󼖙97./C $2V'tt(hi$@sC@0zHj^!"5d䉅 # A$"d2poi;KA+y<}.&-7%c Ęc9u d+hQA֢!H 5UUUu$rEV18GHK$BCJ0D]B6:3K:g@d-QBzP7M̚iC`UͥC 4CGHU.! ͹Bk6[cj)ϐMA5F@.!DvE"1dSRXD%s;BҖ Z3Ȃ00kTbcN`z HoıR rLf~Z/ pu,DTjmi&ĆCmȨbN&JEc}n25zsxa (P@[ aܬxd) m_tR݃g]0G!h1xgs!h(Y=!jSN!X!c-h&`'LxB)-y,Z[cp(d!ݪ>!&q꟱ Z݉N7X4 :=4?zӛ|fmmc=^Wm5[>=tBh)KX.\J~˘8V huD{aqhPLKj2&2d FAs24lfl3? LSc@aZHfpYUUF#e׵m %TR9GHd{11ƶm4XFN%"x^kCr `k_IL)UEZ".ҵbl! >2]8SK1IjϷ]h2b?wrsg3uO&4">b ,XgG! [ "M EC( hm 4ePrloJ?M&pSE $x}l6mf<w]4MqiiiyyyeeeiiIعZUR9!;"Ҷd:LhyyyϞ=)%͌#l6NUBm9`m777`B!"h4CJa6rdDjyyf651p۶LׅNim:h4B$2@ lB m%p 6$ni}st]q{ܷ7KKK/zыqؾ}Ĝk'sePi-ge H@ epJCT k!FM ZXk 0H13sB۶]ۥ>|ӍTcoW_5Lt4Mvw]yeJd쬱Jp.GdjVBBB<)ThfXkI"O{Z63~ɗg1o#Ӕed_y* !uV+Xvf6{O)ԗ$I!f6|4y7 |y_U q6j Q+v@BA$y疗w>bJUl6 ]aElR3K弤\`# 1z3?TM#1]{ɗ_?O Z'lVzke]8$k-&d뺶m.hiu. 1uy;7Z\8.t$ cORJM4m۴m;BlNJH_ksZZ[DS;v{ޣ[^^d[YYY]ݹ2TBtӇnIMQӘ޷.رTn#7ݼ5LfY]IŹO5A=㱵@bL!zux8``XUU]Uh4wN0Xk*sT™`W׾YC5mJbCum"G-* ^S N1Wt* :ԔA-6i1Te~ȩd.W937e(BPH?Ԝ1/$:g몪@Bqcɵeo1u5%ve}GҟSoɏLOJk)%Dr]W wW ̗@8%1Pٷ‣>G߈b$  YXri Z@UA?wܩ:53r[nꫯG~iy{㎇yw_~޽Ͼsݷo߾};toŅ׿;?VRN8ᨣz^p;?+w|{}WW: /~:K.!3'?w]o|//!ԧO~/| y<#ݟK._|w_E/<< koL&]tѣOIr45F+hM$^z<)4`&bHcQXBAzrE.5^1E;wYg9tSГuqn Kɝ?& Y;kT }+r98c2d嫪ʐӊ3OQg`@6'v;Nu1ySz`0 G새1,bT%4m35Vw\W5p8ZcBlCTom Ү Ptۮ#Cp<*lֵM4d66Mm O9`NͳtMTH}(XJ>bH)Byc=8ۣL3.c2ھ2轭JϮBb C-*9zkZI2 C]$ٯHEK BNXR~9DspGMC+X V^^ZǕZaͦkk ]8{Yg1_<W07%fd:ښͦ&).H$F_y]r[G k N3*(*%.\d&eXkTwk<*&,تz囋H. %_#<")weq6;;rVIP%x$7!̓\\j*p\鳁35 _2#Pr`v8B)6Ӝ}61͌kCDrA=콡/ A8n%G5I"܇XzCo"g"^5&qc`{^'\VO~!=g\|s[patcu"Bdxd8YL JYa Sd`Kنc\z~㽴p|z;^#"g]+*eVg ++G}̱>a#O?:!;c9FN; >SN9宻z ^]"r嗿/xtX#ݟ{N91ӻxbvEڵkuu.?^GoU|]OWY/ퟋjgtw_skxyp uNL/?3ԘNg$B!$1Hʔ5&Y^{{ZK+aڶ&5Q= Bޕ0aFk|UyOX44C2fckӹj8,KZxtc4=9>v9oX+~3?3:^-.(ÇÞ=Yݽ6o~N>{ѵ %'D>|/9)_{I'i/sj\v\]>R5z?w'Kk;Y+uncI)[kvewovL1_-ͦO~;u/_r≉ Dߟ~_98$oO~ |[1H'7O>[nLK:?=TR~߿N=_OW_+~s{Zh +h97Dׁ5h3ϼ$}9?1Y\v/{zHּkg]}ͯy{>KB_}id4/xsW|uCXxTP5g_gcDMg;A]Wu٬k7{pBguUW >$$2_{eY:OL)2M&YxvllvA? 
cȋF$S(AMUoFZR%k=Rʡ2vQ(u/ yBB 61끀 @KvN_2ZgڗXðd(/֒kVT#A PBys}cҾٕ07wƨp YT5X *0kMba1 #(%*,߯_%+,J@Qn곫2IͭO٘}煀,ߠTH)_XY}A"Œ"i+/"(Eq^Hҩ_̳lDnW.DL!Ҙً⑒aqzL~׈B&*kYg{yfNOn{CU5X^^s+++C!ƺўG ÕMd!lͦ]nN&1p :sJUUYc$1%3Dd2Qݤ* !2U]RS3N7 P0JCh2i,cqe:>F8b߾p8|M|}!ڹ8[ ۖСnak/9f~;ydO΃^x/--mmmkm43Ǯ]s1pwڵl6 pź}?D<ܻwUW]5K?<5|SEȣ8>w;z{G}뭷tI=xwX]WSU2o(#/ҲMw31#SN|U^)dK)y )O'=QԖRr `,)qavH6ۆ]y[ 4C%yG?.Gm"!S牔mꄬkۦi=tH)sCRQ%뒌C%f$r""55Nz0(D6 ?!]L@5l6clUU%ۧT?AY稀cJ^Ibkx<޹"ܑGd 2lnn>qkmfճYz%'9r*?^n{S%ffo{p˪^t٭>]"AB*( H|K# E1*ߗOIS6561k@P("E՜f֜sǘsMWry?sj^k_NkRQLR뤻?F D|b)Bc(^g5F;kM*!ӗ.=?v.)N_zb3u01111>SSS(aP˖^rja8 RTJ¹H%F0iIQdBt[a0}?"M&.b4zqnj-yeuKN*< X& bGmZ$`I*<僧* @$$a S4 mZijm j WXkP3c8KR|sG~gfNLv}n7Uvv~kW-[ NMMMLLRJs[lƍ7mQ'''oڴi֭~_ 2Eu:.Eںqӆ_=`??/6mjMZ\⟴?~ժffqlTU555UC=t9<_om4(o~TǏW{6o޼iӦ|+#_z饃`7Ӿcy߽aÆ-[ )gtMeOOv}}|MWQgp}8Ow=&Pr-PvODa/KY13CW*NFbDPBfffĎ(\Y$T(ʡ%ko=#8UcmuP`l|,˪fggafijl}$lzX G)QFkih~zeW^ o[4ga52o|H UD씥IafQ( m5r2Dqa,ҧvȝ:/җ^to_\8/[Srp֊?/s "/$ٲuҁa^@ Ťw{ @1*sSSSeQL֭UUun[ mHMe]z㍏y$Jk]. 1Z.%#))f ܖ H.'ٔABZ "PY#2!J)k1s#i !bctYvW`1cDcCJc4 E"urM GH1&U%`Jy17D֜$qPZqOZb xW1iqjc`dhC!DVV1fN:*UD9!H3YC9C$mC[QI_Q]kC1D\D$xk[%4 ]@"E$ MQiij!`$[PU-_ 8fHBN7}UYE޶))9( /*f R+ s[km Ymb*'"?uZzq"f-85`wh5~>ag~<1|q}Xn3wܩ {>yA^Nxq:]D?^t}syɻۏ=w+W;XtnV'~?yӛn* 7zu8O6.?g?_FCЩ*ucMw `67\pgh"x+^q?򑏬\nK_~{Au:;xf㩟__}>ꨣ֯_* 3Ov}}|u/ZHz3>Ov;ߞl<9ӊ8SwܱEl}>Ok,^}3n-iNpIҏ(l[g1PJEo:}~_ߴFu& Q!223qlxv1dC$l/uqƍBL!4M-h_7j>zK{_ } [Vm0)O-񔕲G)%!-KƘa`gRHB=|uVy؟?uv1[{).P!D@ ,_bjj_MM"짿_u#Dhhi-wu>6'}e؈eY tr7d._FJ6D| W|Ёrӗ-k "1e+|\m[k2ԟcn]CdܐfeWqvV~;) )Di%yRa q' |?kKK#x]@x攛+l[\M/zQu"HI"LTXk''7lzK isReX X3۰%5T(" r4)Y8D¡unTxyݵL|h}0@XCLnt:Nژ9JI^֚̈`$ L!ւr.EQl1% ))4J )aTT!RlImD hmRH+gI K5h2򲟬p%bQ@)]!܌l, 1P$uc pzcr7>>j˖Uq0Q=&'vi'vzu]̄ c~իW?EQ\|_|1lذv==y*=s=)';mɮo}۾:0wW[l19sdh5C]Ffٰ*eRJp묋 P fi $*!]ȦzWB}mJ_3s+:.󚳛p+d/nr=uQ ]KN4Oq(RJB~s.3̟c݉`L/Rӏ9h||ӄ( ؄HI4ppcp¢}7Q#$WJI+3~?wッuW(\ oET֚o]{>{u%]DCm5କUk >J $ܽϾB!Ȧ /7pG۳)ቓqekּcr~O%Ki,&Uˤ{?:a~Ĭ8 nkK" 1ʇ( \'D2AF>%+>y P4W}˗]z3?Lj3_|ˤ@16\qo=x > GfO8XfBu]u ,uS7 uSksͣp`DYsʒ%[F 9BJ"8䦬L%SueRq*599$YYSubM!HA1*;e\a{x @B%/H RLTDbC1 xH!,QeĬ]J9hlW(OS0pD*Ep >HF- (1F-% f*C} 5=eLn?LLJu;k˲Tr?jcP"@>?RQ ׃1N;ɯX)0tzM Bޘ6983LdYi>()PklK3Y}PMRh R9A1cHPWDB%JG bJ MaddbGlVҪ+C9'XR"P\:!ĩ㈦iMٲel]4MUU<;3otK =Ǩ8}ՖI[;cM,Ea ۯ쩩))qej#(,Xu53m6lITm \(z1M6l}4:Z'|7sa-oOqo=~ /xH΂" "P"U>E{aʩK^vݵRokpYd /9cAR,)B"F NDic8Wp~g.>,]rj8mx?2Tst'%O;6v{yUlG`>n}a]Dx?i?{w9ߟtЁ& Ms4y{#:\P)}⃮xC}sv֨TZI&wҚf!LjpnrٲNSo:-WU\Yg+->/zܱ9VvOF|Z${9ageiK^qã"!^|#-=)r١WvN^|^W޲η,^,tsYx1@D:d"1eU#>'x =[T%b#Y$g~L8l9xC.~hv4"INx9Y/9],YIwF|¢xǝ1߼z0nA5pimDr uѮpU]7MjpZrj%7o_˗ vz s $Ua44)Tԣ%)1#E)%Sf􈩭rdM*r[qJQ ]bLC9 c "sc81Hw$j$D(FL!l6ev%_TXkRҨPTdքCYHcр }vkMvwDqHp#!,bmḶs(u (Bhc EPQyH12iY"p_C hB@:bҙ[2%|"㴾*2 Sٜ%03bEJ07jq`3Y)۟$] Ð#'b)(Jjt M72J4)RI&f "cPDQʅ.X$xlڢpX" >u]&t*ȕ[ RuMS?ׯ{[c5Qvv,Z֭ 1fbbBA7ƎzceQcR5su]3sYږ'M4LLNvCà ssnvzosK1!M1MEeJvxu2?qgu\8=/S+Vx_>==c-[F)H!y71Ʀ>x4j"< [qκ5?cshFL 3Fs9G1uMt2K[D$`bM]8Wv:lPWu(Q:X[8Z F圂QɁEe}?Aq=4Yk1HkUOq֍u:` "47ƊqBL1몚U,eEq#7(D[AT7MSײώD:eι,tζ1EQ8v:1 ƴMS9*~u]mZms{ŀRh#*i!6g+MSWwʎ ^4~nnNJ)i9/S,,,d]t*-B\H⛨0ͤ A@VUԖb C1$͂3X ;^R_^ {brֺA1HR7oٌJuNcOLb,1ԬjPU!1-Zg $!<3RZ#V|y "(?UJ5i񞈬H+b ($}\ۑԪt-(HLU]βmPi1)aGCL m~ZJ2Z YcsYbUI^"FL62SiCq /C r bZ?r+c$`.t2) އтTu-ƽ3Bk, w2&jZglX "%F +&(;[n]@m2w+s2Y{g|֢B1b>dEJK:w1,m,F"(>'*ɂ%BPZe:IBQ5͗OoӦM`7M#4n u^,Jm4ӰnvEQ&~ot\v.tJ+Q )~8 6l0;;SSSeYA5M33J;`/(MS\ŵD(zXol||b L> ?7M2JiP07;z&σ)@ߵǞGF1;俜ta~̏#A@a궮,8̬5QAӶ1zlBQ-(0G+ R(T3HE QcjD-1#1,]-,)bdc?m 늢@ u!*ax!jZAy5J!(c)ٵbި1Fm$"FIEH S1*B9YBh4$Dȹ:BbykAHUQ".50F cdH1i&a1F W$BKk(ńD ((j@1I}P F,bR,:kD41K=pe:()/8b:RJ@9a(Wr $ 1 NQ.bV m 8H&50:kRHrssl/(TFkﲳF e6J)5RS@\WFIM vιn`0hfSEQ!bM]3&Fg- u2#/CdbB¬h)r&W^!G aI&(Ec O6FX ?QNM€!p}rE1C  3l|Lh笸ha^P 
e$^%X6$K|_+L!,+MB[D3+/i7h2 80&! So(36[[xSes-LO:*#s̚)T)A!%p )HRkDXiD !4!4eѱ:묲,$1sx2"*b;Lk$L¡R7޴unfnz4pI!0c 9ĸeL]׈XjNc=c4QCD,Y[7fSqbb4~z<c~̏ H"m{-P1ƺffReQkETx+D1i1PNj(M)<[Pf)gD짧t2>'|IΘ$$?%2u$n1Tngu[6?' __ZYgEppTNs<$*JVa7q&uD@80q3bdV́"y4V)Eeq֦][1F%}c.=m޼yzzZ1MSՠdS i-[Xo햮Ԡu,<u]sJ)Z bZ1'C '(f:ruU`0\ah'''w9 Xm)-ˢ,KXUUU,q{y[,\כߏqf}`^8lap[n}>=?ɧ~c^kpȣ$0o?TҐ@kmRMXg¹)c`6\R$xUZĘNs`?o,K`b(Ki-El٪ H" !pѦ&&|̢(:ssNo? 30&ɬ6@FA!3B !h9s*4FK[L$'TWZ)U1 ףTEĦH|r;SEY#bJ -dH tkRRۦV Cv+hP A]{ FnB@$H( (J)x_7uQeXRD$@V ȔDhq/BˉmN4އ7vXcbC \L$3 d t(dhr&lT:yO0 :F`> G(|$EHDTLsL2'sιnYv;)CM3DLEa⬔ mֆDžJ l:W0b>Gh:[EVLlC)ŵW@a"7Mk]CT@i}3%HŨŀav;]I p# (yaBYJ$&6GR8 ZÍF2@iU8}XedZ l"*F,uVJ-Fd@WrA4@5gRIjbRA(M?XeDzj@@᪮4'Gڝ@_%0$gV`n;)v!W8R(T$AMa zPUU5b Ln}@+f0tʲ,Q4>PUDWw:]Qyzֺ%G9a,H`0pMNNE:ݮ(P! EQ "ٲeK,˩]verrr5(HUUsu]5M- FE1齗>u"߶Ώ71xrzW_]G~dz8޾~/z}amNrbBAkܬ' D 1V!6 GԟG'yX"4`SXs e($Ƣ¦QLo|)3C*YgQJ3a&Da 'fstSR(,>gr93aQU5d(i)4%(0'Jry:xt"O}S> cDrBa8at u.Qe3CBq˷^MP"Z8Х)˺!Z w(3&~N_4usIvHVkuTj ;j)H[={Nu;7lKB>#{SAj2zKQ+'"!% 0|!dI uQd`N;!`5bК1=Hfl D*ft*wDyD@5λg<0E4xէ/=tXfU"~RZz;eGkԍanfv9g+(K4> ∱MIP0t- #*  m(1VgkQ **FDž]Ĥ~}JRhMrC'D"7:j:! M݈H^%XnK>wt{ .qzzzvv7^7D0^zlo,XD]7! ,;J麮ؚ:W:9 Q!ff9 "HTUBcmYEa;ݎM]nWkƘ .X@)5= U5JƘAbLV׵,Xc76sssG4~+Ǚ}{}˖-iv)_My5}wcou^#w?x<2^o?><_=y`oOHx%k.g3;|~Ǽfo|/lN>ӗ}wgLyy>[q?w=jϣ1b/$j)5qJ)@7uEYt1בb Qd⎩ZA-@bRR:A' fvDIdcpWcCQX|%ܡ(n\a5'QNL&"ٿR͚DNG5[蜘-ѧbd+nybf7q%FD "޲wy}I寍iVB ȗa&~tҁ׭;υRJ+v"J)ojREYC ̬!csNzBhP+R5rXKA 說O<!\d e U.w妉0$/'&5⃏ɠZ[Z 8RsJ-v@`i3A2FZl&v ] &I4Zk4ssEI‘wug0i-g.u7`4Y9\nax`1ᣓ• <0kwȋR*LS9Qne11+JPb!6!b'bi5LHN9'%$eq֊QXlR hHH4YxRIS$ 5'S`"H*f$L@DT[W]yT3b5 AlYc( hU4?"@3ɕXQJ($'NܝBĢ(1x//f2(nu-zq\vژi$~~̏x'>7a?[}׾xޯ@_u_e\/eWG濟y^kv{[ktll;o B:_~[`R5(4Fw;x꺶(1ČؽF1`'W٩D*96Fl/b &i gD*icd!B=Xbn늢,n+Mݴq~R#"jw么=-sN\-a9#i-P- F!FT'+^&Z+6-EȏU9¡Rs02ZkS1(?B=ɯZa_?۵*Eb(F$F3c$E$ Vk7~T}~v#z_o"/;tֲ3=cWLg +D\1YN~ t7d9 rwy{/~Ņ.`rQ9?5?g>;ګ}DR:^s05jCRU3oXؽmuX'o1u)K(J%7\ |om-H1^v={YC Q)-gw}ߏ76m֭NLhmcoihFiuw?ebH_6%KSk_yK[(?~%ISNIk^H oKUjIQ׹6H66${{hRYw?C i1u9gHxM7ۗ-SmGwM)2iFǾh(%M*ߗW2bwttb. nj@.FM1o"F>j˴J5o2mɭj.򓿆q”$!oᵉq/38c5(~C#mr9Kl$HN'%'bJN֘s{AT"6QEYHܬd mf:-BXDهHFXmT|0%ҏx!1Z)ٙ@ j|z%&$cL)&jHIH007F+VuM#GS>;L .4f';Y@ RACWYUb%N&X5%BZ,Vz0;DAHXA1*Xmrzgӓ(&V̙oBVSrDăf ]!j6zj榍jq@Xm!U] 2ŇD Ϳyd[Es@[[2!EY ӱV몖XU5j%c^|X#쌘ԸOMNMI|lUUV[QfU,1Lavilj1!j0Fꪎ@sE155566&eÿ1?k6n8>>~tIGLn[^m;k>tz}!o7=w%Dg}䆏lp5_Xf}/:oy3w>rek.o{x/_uU~߿ۑ֮ۗ>wμ˿ȗGZ+]yp7_~/ul窩ԕ7_n:>Ym?ڱoD=KK}/'#5In:_>g9_+=k]KvugxT.^s\dx׷+ɪV[|̗]QW^> y;oYG{{Nnlz qқ.=3ur6?_yN9 ƪް _x/}fol١\֕.lM,>h@۽80_"@Ԁ_/N+{o?,unC"aMixٍ7v!֮}5k޼h>_v!D Ӟ=Kw_uwh/W{"Fq=Ob< /T[o=e6V|kמ⃙^{^w IDAT-?"^v[tňx[O{ 1g-KLg?{ 09犢y3-!>-B%4֕E !B9ޗ { ?ɱ%d䊛o>R8ܰAo[~[VҼ}ٲKWVZ9bdkK&@ʵ{QYvr='.ڟ"kR |M7zֹln?/{xK^|sν}hfAgA0ԏ&2(!8<'12 j cɋCPFyyMCSU{k:[Ɛ֭kW^ffTs??Yo]G==rlYw˶4M#,P}x!n{6;^Ͼii&6¾} w}NHFQ*C[y~DxwwQGS9UC]G;|geyn2*5sԩ7}3~Gu׼:_;ȣTWՇ7{yZCurek׾c5M\~Ízޝ^ʛמz!p~DdO9'"Oo`P"p7ô%.UA p~^#ZEks;qT"bP! 
6dHa1ѧyN$׿:A9cY *[JH$HHv93]z$!b!ij-CoA"#u >Df>(S.*TԢN,hܨR\$"*SDĪ!dl7;ɤ/k;J: ߶!D!,Q+Y#1<;pʟfȤT:TӧH=M_S;$)- t$*[$xXëD5MF; VۈcNg,"dR9`V' 3W7tЄ[r-xKu9c)4Vݗ3crޏ$׭)hdaHp۴qv0Sp8}Ͻ<`zx\ozd^8"+\WG(ʲmGqXáVm[a+?q۴FhAٲe3"cFn˖-jff6o<ggg^lBĔmg!+˪XkCMӪC=UG(k׮}{-oyV/|;=s9F _?q8}>Oǡv?8|<{/TGΧse;W}U{ۏ};"F7wciVw .J,|iv7:k#7?;xg_k3w|F@ Sůx@pVV+zo/dw˾s4%Ap׏n^wN ]lVpsӟ%/D?k^1es7xI?g}#7~=ǿG@oMgγ;Qg?+8~/38s?7_\~oқ>un 녏<Sc!K 4Ζ#ODEXCB9L1jȔA$kTU4mr̴ja<7qœ8@8 ~%xoۦXc,e (P3(uFH+ @JKS>d'Zj]dS>ntM6xLA2j # ȫ` P*E&>3WҒX;oFnꦱ:uɞyf17%Xd<ͯ>7#@ԩ+*5r5/fib= AIF%B'I)"J jcTU7)S$x|˲tHd}WJWB- M@E(DeH$sEYhY/ %Jx\ X9)Hd֚mBEQ(WHAD{LX:1x"YbH%~@Iek4{#EalKQ-G` r!p"uPBSFPW+; ee1nS5ЄC {&Х3c $|T tHKbA@:XJ ~qĠ>|ȜQX)6Ɛ=JhC Dc_R%쥝cLR>1-A0r -0{(_F]'SIVNIN X/9 "'@"' Ad-,>2eBpϕz,94?[CFt|CE-H۶M&QޅަGÑ>|v#}u,fp(.[vsss>ajԼpaaཟY6[TEQije#+N;]ɥJ91[{bŊʲ,b~~~Æ ֚mSC4*{˖n2p ~oggUU..]0c9N~qs9zֳ~=o|7޸41X/e/e8wyj_Eu,*o15U;#W:w\慵 )RG7[}g{^T]yn>9&`c DWHq] MN ?'6")3C٪: (}RLs/β0`c#X_ (r0;㊢njL_ؼ9l<-f )HCF$8mEӶ>'w߻SMZQ2GhekEH6+>iVUH14ɞH* %c?v{S߲Ū쎑HC2f (S>` [uPn/߾WX"@AQn>2[gtEǣQ=qfcSΘ6۬نи|-,Ϊ~~aU齯G0.jʕV*{Z;Љ bJ|;،Ku1qywy%~WKo1~xw~uz[sW>ѿvTOB1rk|g=Yaivi>ˊjEaa>0[4G=: 3s6dA1o窹|On :+sw[[[o+gqVg}vLj>{GM?'~9s=u7uG?+1m5m b&82ߝ?_VTH&8cc![ض ,..0WC1vb1Fb'*9gY> No?5z<WEfKn{au9zk"\D#`=e-$7sΐ1򵷾5]7ޘ,'dT5M۴خ< hKGYX9*΢xM[svȳ.ݿ9z?/ӐϏ?}ܻzs[혷u@9sι<=_/1ct~hc\VPqsuYڦ͝jU?|x^~Ȏ 8O=>can5t[y\tEܞ_/E$r?npwy䞯~eM`Uk9IM_W^wE"|zʇO^<S|_}&շwWP~Mh)(8x4fE[TZ]S9"+{IZ,8o/wUfaA;Ɩ@2<"K<"~{s]Y5O]8ղ֖ec!̄Q;^Ŀ]'n EB!xe4m|@A{wmU{UP&U+MS׵mp䴤יeOyJOF8!wD7 y߿l3Vz_z۩Νڜ !||ݺWT#ml:Kׯ91gtL(=3wON`U/]]衈1O~.6H9EEWzBNu|׿裧,cY  (p:[8ph)LtiB߷֕(1i%)xuLj1Ҹ.ˢ,*5NDe!슂StA0usCy}kq!r-GXB0K$rO&A0 0]F1[n*oTZc\Q &%$jO a1 Bc 'u 330Laܱ9sK؂j1(8X 1{SW)TӁG0=sc~BI' 9I!B'JޫFT‰NJJɮTy"8 ݈-9 5$+YGS$.:{C I kХT|q K, ,:Ūgi)Oe:!ڋ!#E0@Lr'޷CWm7婂٧9Jw.)]Ř<%73j9^HYUs+W]6rn^_zl|~n8gnժ^|cUU33c ]VzgŠpeYZ!ZCfŊ9Bp>1eG4uSu-BHeYYYm}611BШc>{9FkUli,_<8i@O_viM_гk^??k֠9OGۜw]p\w֮,i ;'|؎𥻾tu+_Q3̏mcfq9p5|YGpVߩ\w"kGo~\Zeij -.,,..Auij :"kO=`%\qۭg>B;rtMԋvE5q5 'm9먣ecn_SLQ8gn5RDb|g':a 7s̅]cmo|mBPs}b='.1s'>Ifl5gbG,_e~N:ǀN֬Yn/;6! )p'wS7Ma vA7%jYĢρ*1L(-]W^ǜ;ú5HCNܘ?@Y4թgb@jrʰȻVh&@ }Bҝe.vߺқo>C_\\\yvٲ gˡ13UXH&Th9mcʪ*S1;Bi֖UUVR̈W.TD $bHXdeQ,_bvf6Ā9Y;~UeIUn]"` ܌z85ux߶-(@V"dvɌ*uP].̷""4K>UڋJ.c:뇶m}۶쭫9GeQZԉ@5V@Fp8lYhU']1^ GH]\e (?Ŭ9Jbڶ 5VXSk8~$" s]upֹe˗!x\~뭧|&sEYv屖M$=>cֆmeY#n$[=OLbuE o&J8 -<-]VzKH9Jϩ|Z᷾Gڲ,2}4j:f sR1u+m>Ipꮪj͢\jOTo:՘ /~ eY1r۶M[C!oƍׯʲ( s^Zyqq8TQ̌B'*A4XVe7oڴi<~uS/,/.,b#p0ǣ+{sL[ʹG6‚+vqA:Ϫ{ΎڄB%m۞Ʒ.A'KXNt|_ Vk7߭ٺ?菏ܓ~6'YO]r%>=Mh/,>ۊݮzU[hƿtIhQu@aȨ$>p-X^{W];B籠ϮPS(0t:sͼ0 $*N[9g A)L 0@۶ڑK\bH8ȬZ`HvC}޲elO)ɉ1#Pz!Dptq08QrLCS(D)Du*LD1r!pk%㜫HS7޷R%eQ:szȑ6 8*~}wPB۶mQ qR`!f 0"B}YUYeId8cԀZ0+#s8鹦$YcuHa# d) B0j"0Z(!B}=Xu0)zSz>5`cz8 kd ?HF:tQv9Vtج$䓅3*ELU))%)\h+*ХuH''fw̷~ }o % 9ŎJn#QeUm[eY͑<3;窪"xaÆ9r"thL";J ._|ŊƘM ٣#Te(ʦi@PJWhbmێc%Xg]Qhs"4cDCh m3Q\<^chFi?4Ҁ)-#xGo艟;}ұ[ jS\5[Շxv'\ҖvꩇtVm(1FAm،V` ( ֿrDmMaqp֪oL̶e]D "p@cΧnsLb*-$0*e< 7Xk/scMQEuD;@Tf!C=ʲDL8Jnj@spcDU`Y(oHY>EP9|Tkܴh4zA$| g,#ǦI/8?h~`H?O]B]UU%1HNh06cLQ0O(E41F;*T0 d@k:5L 0Dđ--|'M(ȐZ-t 爨JmS"YuDGl{rvq۴,bfcCJ4ېuZcM*b36Xc j ,f\aM$ua1HԶ.zkm!gMӶ jxڶ%w Av")M],˪W1~ADW %̽NUZS^UT91U(̭oiVixdi!tHPIYT ɕX Cg& pmַb cxHUJśl)(WȐ:'tRY80 !L>6pF(:Iz[q ljv(nb@Kw25\&y$v  qE2t 6BԔ@0D_H i> 3d)⒀c: vk(*%BBIԿQqlԳEADnk<mҙLDŠIGNIs 8t艞Iq bMӊ̪U(jP᪪WCfyqqq|EzUUE )րQqRQ& kthfTyzD.xA rԊn3cT-T6̝E1fI=N0 stI{MkMgsLYTD S2~ޥ3+\[U""(/\!9A+y N?6SW89ETHTA[2B$8`i6FnBdN 씮? 
@P5H=i;kLNc, YÞU%+:1[_,0z)%C`!1sf$l&ȮҲX8vI'ZhR)!@H IZS\(b q$_ "8ek-1N̢2BQ(*yM9:iCD$(9Wrۛ92{љhg NH-c#2zhfRA焹i"rpӐiXW8BpSVC PUUQ8Y|mMYD:gX#P [qrޫojXyH!klYڊcHL)BvB-mY%H~x]71$).jΝ ԕqw,smnJbO@llAĔ@b`1Dȡ<$<-@dU:KF H7))Y"YPdK$A v2e7jgԌ >T1[v#u ̬v`0+aȠUB6weXcs*2ĉB6@ XX#&Qє f rDP6~(GFhV@^UXC '/3J@ϭi^`5T r 1ir2qҪzkPӶ ()3 \I/՚V֘-SgmSx4z⛪5Jb!H1Tk 0\QEЉ_>SCcŒ կ"ZPT?WQPz[ YIg-QY/VH> }hSl[m$)@XaɑNt!Y˚!t kEB$5fBl kPuIB gI 9ɿF)N,B%{)&B6c*?D&/Ga)`HK4A(ø+Q+ `9h-}C)R:R95k9%YjQ"!I%fz W5EQ o|@!2dЦ/ vw`م`86nHDmjUz=f^XX^.lڴi4ƺ`FL2ٴiS;]n333!z3~,pa4;g㑈@B@LBDEQz~(Yc,!Ǡ?2Kxg,Ylm"Sv찴]d4۟{y|z_xk?sZ~gO_e(s!~ A*;BS4Q O؝#Mَ>$6@ԭvrEP h1!+f%)Dwd((ch69;$eY8W(D5BI'YB9@@A bGQ@#"D)9ĀQi[  Pj]Pfc}!" EhIn!F4EQic e7c G%MMDX5'DK1{ ^('U|!CY4% (Gj[~cP(/Cdfs֘m덈1B ޏE2JD RUojj'sS]!ёAXg+WHYt7DEYGnZι)s;OΥ:2pNHl)vZgM]+B94rEgDPcL4Df,kgI1ƀ1rKH p ٲ,ҫ$зm~6 !%BĢ,{SVqY|bч}"jCBX1ѡ*MIP>4m6-YQ };֘,fc1xM.X GT+(j'E"W/s$/9YCL dTϐ=cL:E](Dv PnM0Ft 2ͅQj$C"ho!)@V(։h]26@Y/nH(%# 6b!'zF'Bdh"#zDN?uƸ%I0[%TZh @#BQ4ih %׵!)9(š2r2+"(鞆<;y'rr1("m$h67M#!&40rcfPb6PȀ IDAT,bz+V8ggϏfmr?cVUu. D`m6ACD`A5e\YU:| f(*C,LWe93;!= 7>v82KYԳ3ayQxz&+x"&u^sW+/['!Y')%OpӪҖLR Oi#0`Nއ[EN;ebK{*6vHH]yj!)SD D mVw2CF@OЉR69 @fa@TXԔ `AYlJбpN|( r41@U(-bJ1'bN`W74eU H,hQ*A̞͸r-G`Dq$h̤f2cl[2V (O^4#7u!DeYE1iX!qV9aȭ˛!Pen;J 1mYb "MI2 fQCwE&0@!ַ)ЃoV9+Kr-MژTkedhc@`I,KW΋L̩sՇ!b5/hJ!'xvEXif2_crQ6bb;R, 3s@t1C۶Xk!!4M; !"QYVe#9t6}8T`z孴E" c,ajw!Zө[Hz{I"c;d2֦Eծ(P-!%3M!Du v 2w*E%LY.жmQ]iQqIGk d`)XP |SKt@HBR 2שa4VQ8@>C(?&ca䜪r=jE5RNRWHN I0"1DセPҡT]H3PLе>pSvi.S(5+"1a?~Ҡ ¡Sf.!Ea!t uQU]fVO=2ᮊt6Ȕ|1kM^QM=ڦm9r圏Z;/G\ac~xӦMu]7mTDr3E(}d,]6?VUoCȋ!D.=f.J|l0(2?4$9KE=sy#efFgz}[# MӶq8cfq Ma ťb<5\s饗^h]vy_'?ɏ|#?wu7'tҜE4}~on?0.fվG>'՟]8>D"@u3JTS)mS@{HR$8ljr؈!hzPCj#,Dg,x,I|UL(|U *Yo;mD[IZZרI-wT֘Hhgɐ Vh0$V:EIU2HҠb`@e`+2 3rNBw !)ajFAT*) TjcXے ZgSccmZW?r ,WE1!Ĭ/1b2,B5m eY1pb 1Us߶cĊ 1)7WSY2_Y۳JEJ^{]:!21F"cIyYZE8jL[2.Q"GChDN\ ŭZݪ" uǺlZZeNZ倀Fz%- !tFzj!Fqg>DEHvκ\Qb]1FQ|"*\u eg!+Tc.~{YwsNT1 ^:u U("*6 $^ETD(@TRqiADPj+ H1x8u7ef?;@!*z9wr P m!T-'[)` 2wNQmM/Y%7V8zeˑoZ1r)vq!V5X TaU'C/n1QZ5)eΪJ2\?Ѐ85XʚUMv]UCo:U. taW>a-&1`l \R> 0J- [M 8gl/ZctQaҘ)Pm\ۂV1C}@%?KTʭ4wb16y?~J=8ŝS!1 Y8øXeNɢA2D"X,]w]?<,[['N'|n0ؼ:<8P N3=KowJ8qTXu9khRRF-WԄwCx[Oys?gg/O{GUz^e/{]z|#O<o{{0 ?{xWλnָ9'ൿ|c}/:|"ֺrϒsRQn T} sqXS.q\KD;cH,33"#>b;&Q{uHc]4۰`-!1E•R$KIV~ Puגz3]gN1Ni jQkeLQZa'5 #5dvǔ4b!F2d<k:^A\ d%Dy\EXV*JN"hL15X lQT^μ!YD;ĘDX{s N95_k&kXFrF !Py:=5-(X)AzcDlιRB,%$T,`jrDED8mQ-ya1B+aCHo҂U<`XI9X\hhxl*(f9qA+Q'95@fU CF6F=rU]ׅaX̙%CMpy2i++cTZ/Uc,:sn9Cd%zL9&ҧEX?L7UTG?B?"B Z4lw~ss)L`a4@y3ן,!@·z8QN |7#L&w9h0Ψ3]r%zzocJySX.bFfmnna88Kx;>Fumwwۯ/+=/+˿++_c=䇞?~vo|=3.m{Wu͝!O^?|?k?ޏ'^-eoٛY5_On:eW}2xmQwo^orC_z>{X.p˿A'Ë?}x+O#'ƇǯoYO}~?.C~}]ϱzGm)W_uozo= 7WdyͿ:8u>uzG=9smGQ'{gi>qkq,oc J a1H]9׵5!. C]CN::eC)F-=K`9"n+{9E*tK) )]1Wx!#biRaatmnlHt Hua_!Ygu!Z'ٔ5$쉔}u{ق**)RU+E1D٢tX;mFʩ29BAI-L gwj C@ F&JcE)t Cesq !a`M{cM+lS5L !{_ Mn䗔x#ȚI1Bs GHomc!ڭs:u(oSCy-9u9 lY=aBV\H)Ƙ*z:6 Fb`5X]Kw9r %|ܠDC@um0:bZ+ U3 )3Ԝm]*d脛RXA,%`DrY[krả2&ԉ!Al݆Ӛ`-s*[TéPHR?N)NTtU94,zC[+3DYkN&;"u*} K꺸!C6Myؘb Q݂ULq"bfTH A!V<0ZwO2VH%taT[M3G[JsLI J)PQwz!*,^ K*KaaAA)'VMA3fAbR_UGj;1`sԘ-j}zqDVʫ^e*RL1%|?zય@;)*h1=aP҅r(-~s/~+aĆ!tv[]wӧaٙ="r7筭- a<<<9^8SN)aƢW`h:uΎ^$2rf/^G|B]<ζywwwoLԉvK;NOܼ~W|~go7}_~y/nkb7QQx?z3z=Y;'}cW>G./| /~U7ox?I~3>xǫoy{g7@\W'}K_bx+$"g^}埽uy3.毬_}~g36uo^sՕ:9eWa=s>X7+|m0]W_p=|88:7t;uz=oS_&fg~fϫ>CBg]uЁDr)o 7SfeY *wHwS @XC_n7D$UYsa# p@59SCL )X̗*ՆFﺮ{@K9hB:9CRG{Ey}rv,HDduriL)tRvI@Ԋo@VI%X(@u 1(V~|sy!"9՘j+$Qt !ؘb@RRVU[dk)cPbP |] .آ QCFJ& 1B U)eZ !9ֹq0CκmrS<ﻔrc9Sq{pNgҁ1rbJ1k:オ& 3))+ D,KWC= 33!f 8YσqAiJyD.'f62sL m]ٯDDDj{QxkJ|1֒Vb7@έ Yk-Usa^Ӕ>ZSjȫa @Ja Tdv0)G)!fcc\.8N&tyezXy8^J83 RMU'OrN1ZsI97TʵZh) K.8V/b0\0Ts+rDZxs\|* L\4T1EZ-dH"QR\9"T.Ue[ƍ! 
"DAjPRW,\lhYRPѫPUXŜdH';ZS ZLym'VS3EۙN({N#)秔&"q8g)Wj\ Z*xUg0Xճ(POAI1D=0a*is%[ /3{rxa$`yqckKW1|>_, 8|ɾM6S} s͍~2ncs3|0"wӾdλgv`cc&}եHKtf>_\%\3L11!ClvĉxE[s??ӟo7S7I˭-xp{{{x~Wn{sų~+ _yw>.xB=[.oxcMGMDΩS\v/}/pG=(6^1<_G[:9j{5{‹n{G8ql?~5'?vS~[_?GߣwE/{>{'zšn?r߯p\O9Q-=x[z|ov;uz={[/oyyu~u}h2ɔdR2H:<ަxƭuqNwj'KauWKOaٮX(>OD`Y,"eyԭl202#xm7Ɩ08aQ$aa|V6#xvX VIweTy.+5bͣ^BSIsJs,"%9Ŝګ]aFadT*"CB"tY1Θ98 \-ԓSR3+#pԵ;cb*q(bT *KSQ*f;Fjqlٕt!k .b:?R1q!Dc "gw{]Cs.,9ق*ӄ* yP)cy # Y zssUf@ⴊ))l1w 2@JQs]+iE#v2pzPt@F2R&δ8 <0QJ-RB?R֏'bTԴac"hxG/.HF% "9B1FÔRN|mkbve ,Yc-iiQ` ǔr A%BS͠UJ5?:UOWB"I)2([R]gT<k%D…sN"YRNRaVzpVa:Q̥ Ul Ŝ`Ehzrm%@+ՌEHP:L'V8QT}"c iX\uT -)Y?٤*9i'\a*!uW+(o$I=1\օ156bq Փ\rf5!Vp-+pAO `S ֏jQffZTSsN r pgk֗]f={ȵq cdssS yB 8,)9\Nt:9ybO9N|9禳rԣ6fWgNU rs5Vm);Z%#z4D{k5vO4' y:Vsf=yW_~yNbx3o;28 IDATW_E_;;;/| `ooO󵭫$˿塗C?ǹƗWaXZa"ןy &xs~^{s?k~x~7w.(~/lp{v w~bu_w^6W|<η`x=:G8os:=nvss9g5ڂ!㬮2%8Fsf'C{:V{vϧ眎5nڒda6204"B]^[YΔK};~R,~$cֺw5Sݹ`6@ VxQѓ)+|W|aZ-S[nhH pFҙ !7^ۥ-/LYkr觀 ^CFCD9݇ Y\n(ڮp1Rgװ8t樈1bXiU6bɠBPVDNo6XY$K H1Ǭ*USeW5&SJHţ߻E%pxxHHھ t;?$>'5gn\:ﭵ‚OlooollLdb5gOLfSL9\獵BX,!!$MwjՉ1EXએd2qw>aX.+$L  {{,lΦֹnXhbs4xG=QNz׻udY޿}~v`|}Owq~~O~dz?6wxS W_u'N~ӡ}ӟZis;{;qOͶo+&ccgAk9>Jgӿ+.].?{Զq;pp-rgKt|7NVַ'?|=:Gj<7}q|o<9uz=;7>_K}ߕ$ RhɖB9c!q\""TMr!ƔRaCs.:I4iuuOBVKBjo)kD,>s],eNZ* ?Tgu1`Ș7666&Ӊ/}ߟ8yrgg[AtDYۭuD\B9bʜ>Ր55#UDJeͰ)I: YF8!nrL1{Q_w%ȸ*:;hy|ihu) "")rW[xI)bݒ!-!2Y=! s+/(u/Mk;g뺭͍ g]i\Ę2)w"&GXBƔ}MV#C\JL&{kmZkc_Jzΐ5ygE%-ԾXO]mnVJqMHBb/jd2f}C)|\,a!RHiTy O0Qۂ)'-V,8 r\.`>ð\b1aa9 e\pPؠD7N766gө56\,eiN-J=LJ!a9,|~xxpx8_,2g&2ΪiOt:ۘmlmnnnllnlll6L&NU5C쥨e1d:N'D{ReibX,tl\ $밫dȐ:jX\ןXrf*iv TA-WY\SJ1fƻfYqWB #?{^ι:f 1EsٳŰ\.Ξ=Oz_D677O85ͰԸ:;LvvNlmmwޫ̙akks{{kORǃv?o{b{Ǭ ܨկ~u>yknyc=1=>ǻ panl~gƟ>w^>S?Q6/rbKW@pYeT]._b /y]/\|;[^ϵU}wg~Wާ?yU+ɋn?zΏqQM/\q88s /ygod?Ûfpھkw?p}:Gm>7}q|o<9uz=_zo~K_~\n6Wǻ%j Z0"AOs Ą:kҰb̢}DZ}Cdc2eb^eD+&Lq *(rS#+-3"HҖ[\/ЮBN l efe+'";FUS. @($Z AD֚}= PsmNJ2\q9׆ ǜs]ץ] ΢F!@ܜEJ7!cIRcI?^opՋ DSRi9K[k c.<+/h1M1&;|W|qw:g.r9Nm;޻BgZpE*v@,DhxHM9*kbѶlnk !e($ZcL 1uDkl0=sȹ"BT6փS`kKY&'"#Rj1;ˢ6k+h9S'q!8c{ PNԎVDT!F#DҌN;eJiX`[Xeer&T8sZ]rC̼~WS^>믿+>_zO~u0<ۯOo{ף[/}qpE7<1xxk^g<?Yԁoz_|s~u}zֳ>p{=IW=i:'h^ﶾ{O?'8q7zwү=RMs=jߣpUW>&dzb++4aiq{m1zJ7b<8]9S A ]_oJ-im.'OJy cAXT/X0 ]W;YRNudfYc)VrƠ.j2TaSfe(ku\5*cLu|jkۺr88 H:bu5\[B!ڽZKۈ0z Gh pSn0Cs0RW Hqj*hI-)'`7¥̙c8BU}AfD!yp:TJD"+QIB,9q +]H&HRaE|浀a\])(F_Oa֮Y*Zב-9f0#aI)f\.ŰS`c[oX ֙FTbp%'Rg=VYc:əS^9g;;zWULbLwV3eTko?D#Kv(b{Ba|)~c0{%\v٭;0#`s{FaЧ8<>lmm946L{BPϘPWȦ9.[dsΓi90ٽ'vNo};~>_<޾p;o?zosm*ɹ9̒‹We\qōۑrJ?7_%y~ѹ9~g{Kmu9WW?p.S6!d:ѻaX23&AsYEXCTiHFYi1ZZz۔u !D}})3gcv]G䜃6.BL}WI.eձ3$"C4 ,%C"b\d 9!0*Mcg#A@,(Kf1朚[. S\ݖѬKa !aΊDkl6YĘ`X eU !}a63'!"od1ec(DvÃQ+M\QeYrfu^cE8K |X.^_9D09/dJ C A5ͬI\U QżPt(LTqϢ:(zꑬmXSc2"8g{c,Z(.Tlk5X t^{ȧn[h% p1ŚaQ貘EuzV"lLbP)SuDfQq9Aqȳ榳N}rZar}逍U4'BD1jZmaB7NqEhWƘtF~ǁy1R`0*Wٲ1␳FI(8̓~4 }m' IDLV+KLh $Қiw"hȀQTK %v<0 bpD=h9aXhskdȫ8OcjE(ᚦ5 tI%B&$cIe !Oz9+l\AD@ `xkYk ٗFK%"+FQ J~P*1]ׅq9a6 RتKF1)%=Y8(P1Ztb U4{" /.:c{{{e3LG?OyNmf[[)[V\pE]tԩu]w郃kd2(hK- aT7pן9s{y:uj6h~2}?rXwA)dHpGCn .pxxX,r"{g9\gϞ !:&R+&sW~#c˯kQ: _86sڛ,}uz"K〵93:κэv4BBV!.\f9:N2k]r%DyoLYQن%g qeΙΦ8q3/a; ('źc:]*s D박rK&(eM&p`-[ORN"IkCm5P"‹uJBea-O!20T̫J[C]#A1sΜc \p>hC!Dש)a{5`Њ)%H.9zVBf޻mb%3+\h@8ZdɘjRL s]e *IL  !DĨ%:Ci(tتd+z)V5焆eZ%)QUNYڇ: C,]^5c|KKgO]wDF?d3Zc頦 B3VkI-RZ;5SUC0rO:p4 UԜPsVê2PmX ("rV@(X( C^UbQgJ-xQIb &@|+FZPaΨѪA.")&=B^iIi"!`՞2T !jsUNЩ6iuͪV QV+HƜU>d8 _Z(Cr8/P$<FOd2麮u ! GwL6@#Cia`!3RJN(չC !\RLu!P`"^{TИScJu[묱9l,I;-FE9֔"1' Q(L0rY}ƴ;}.!( &T.3"Y5Z*()TY*aKI e吊YZ:ew! 
K\NJqNWE ^z_AjWB\ORچPY-9(1F2j:Fk I 1۰RN1m.sNa{GHdɐQD[| Fa1":tD9ՒYHʀ-RSn@N%RP5UsZ "̀R Ԁ46h,dV2zsSM7)`[~SI;R{Ӓ#fCX˓b^83pt.E*,1fJ vU<\^^;cJ"gdG9]_[l@kL98 c֡TtzFR?x%Sv4`8kC9eJѷ:Os^TXMeKAujcL Uh.O2gBjO9*AT=kn5G0[ڄ5"J*bǠswY6,*fNiǎjDU̒FUͷ*H,V;,J?k!Ծ,qclXү1ZM١Ey)F%@fp6p(BhD#lv' )Bu꼝k]JIi%ZP5a$~wIl1u]pޑ1]-ݽ{{{\o dwcx;ގx;޾5xdZZQXr΄hPn%3ǔZSLگ:2:J6^)_d]x.ꅀ9q>Z',mKZ9j%z]1:IT"eEt:[;%JVjl\8.^48Qa@x.F(1 h)j2Sޡ" \rfV$GQR8g0daN:FǞ?"!d>5Q كCrk vݲ#uNjju433#;(LG'eI)!daYU6BUiJ @uX(%D)Γ:%(=N5i,(U"upUi}#3 "4XNpN)%Q" Cc%#J,Q2H1(GNEH4 J[k@1hb1u!Nn 8De)@J:88WJ)֪H:m9jvYq0EN*2T/#EPN3J jdm)ʂT5eBHP$ nJ+ qz͔5&>*s!(5YUŝ@-$ߖ tCM:Q\ :7 ֏#Y鳦Q{r5L,P2V呬H&ymzF(5Km2hј8ֺI?يV)sRKRbSJJʬQ8TM_nk83"~qQϠJȲ9R!z{s8ά-.Wcd2(ᡚDTt% ǘe>_ q2w^xlbJ)bLٳ{002^حtj1P8DDa {9G-g)S>={+:IOuzS{׺7͏{WR:o RkI'~ȫEZFHif! ]33#jc.Bo/3cT ,0 K󰈤R`B\$uW~*hE{sJ3!2 ¥F DuH0*esnXddgSJ2#J!*Pբd;_1ŘbDZjM+@_uyh(+jc.ZD$FΜY PCi)bs:jU9萠'Z~jVJ94[83sj0+ i/SDlX[S9a9Klv9A1'o!rʜa1` +p^J5@IQ܆B{fVUI $,BTY j Ϋ ֓!bJh+b] h~C|qV#y)Ɛ̙m\Eӑ^Z,Xh " 9iMY% XYV/X)9g89ecmO1ڴ5 `1ƚR@LB!0a9(MyY)U({RZ}jzO˒!B@#E("DUG}gE_bDK/rMXvM W+iu^(@߅rO4 GsY"J)9gE?/\'[H!m=1}Sk |uyIѻM"ϑJɂlNT P(Ke  :)p8aH@R> 2Mk;˂P6rNѐBp#'O"¬!(7hr{B.eVZ B`FA 'D)BpE1GÑs7u=ovkZ4EQT4##}x O <6wRD!8<%pr_ }TRQ'y%N!%9Q DX$J@f)}6*(\}Ty|}.D( #Rj^ϻkiMQUUJbb}SWܨ/a篒@$gAJ=! ϗ@)S%6F =Q G%`6~C;SS>>sPRZH:Ca $ pwR0M**b G9ϗO`G^H1J,@&"Ƙ,ڮtM^P9^=U50k:8,0X&1\@Dj{!PS<.CD-R$+S]g, )e\1Ced%|YF ˘PZ”ͻQAf9I!|`XX  IķjLiO ep򈰐-ff B 4#HXJӉ"@s,\i-Ɯ5NqEB(%6w*]׵-O O2\. 㕕V@klxJ"Į&fu׍F,xGZ+Қ唂c֚W>LJ*OՈ{JDRF"g ^H (ǭO8`a%B#YX~g:xz޴-g3K!(6i_3vZTYB(!kB-iD)Rgc$%&ÌɁ1h lf&`X"hEpaH}!(K~e?媓%!6ZiC.u9)viRj *T6n;)%T #[yD A FKP#sl`  `rz920\ Ofq-'(B 4M۵:QkJb m"FlW"E]Ygml ނ,EY AU@1ҷMV "K P%@elCDE4u$I!$sMƓA #rnvևNIy+$r8c83h:NSP%RVqodH*kv6M9xLڤEQK !e!bBl.w)s+h'2**8[-"j֗< ACd!X=;Bp GEB@ SN%hry dD R:|0h a_UJŒGo-oR  R))1sDzfEJDbBE'i#dv'NbB*;B{P„H.g/8݄JUCLژ+"FDᝓRRzP%!;^@i4E/2ۘX}!pTEBAamtQRc;۶uiiR Ŧ2 A=}ϷFQG5B$(B ',x6ʲ&x::k8,Kkkkkk+G{_,z8$=hNgm!"EU2%^][[[ѠtN6'댔EY1l=~oo_poۏ>{o}K_WwOON9唯|+/}Klrg9G#ۑen fSUY Bu]6u KEhcؖ8Rh%wֹbwm|6N|ִwQ(T1F=H*# 1ȲG9n>01PeP `6٬[xvZ*5VW׷lݺumm R%|=2P()ZǣLN{^{rQD16 l2N&u];kCz>ֺ*TκmBH &*A$dYѶێ:jǖ-A:f̓z^wqvM眇J(j0hNy۷o][F xk'u=뺺kap:pN8'Blq{7'VN't2WVV X׳nnnٳVlT++kkkEQwHͼ8p;gP ʢ4R t6ٷok(ڽqp6#kE IDAT##۳Ν;Rs̕W^>\y?#?b9Ӯ??~\z9ʯ:?o-L/+?W/|a_?S_~gKguo-uݻk_쳯ڻ}/}?=y/}m׾;w<>kO;:sι;Ŷm<uk{'=Is{~q>_|;wܹs+_ʺ'?=m|3_;ٟ=Yڱcǩ}p>я޻w܃o{\!zwG%/Nyq>>1ٹsk^m|`OO{Ν;w<瓟wy;_`uYG$£)t@9ĕ;QZhmP9?P\4]VUUd"my`$p| /#ńMF>h0O$ cNB>R10RJc6^CL1RA(c8<]t6,xm}m}}m4Ukgdss26ME6CER؍*U6 )OAHi Okd$MwmBd(HJG8jßlQUUEz+-/p!ũR`Rh47BOˤ`LBј*B&+p.Ro+pL<-jee<D|>f|6ںcVIJB ޗ`<0e8":7K\j+,Rkۺۮc΂"Z?%&CjI%ȸUVdes6u6jeuu}˖յhd J`tR& w  Pp SMgIwnyPRRI9]3 h4JI彟N&ij\N<+AOI kkk9 EQ Gm۶nݺu}mm2 eQhDd "C$u6Am[JI,ʪʲ4E(1*+}T|a>sy>.!]gf?EcW1$X/CYq@mg=]h*vd;=q{ܖ-[NgxFOǟ_wCUW]onx?]s5wqǙgW_|߽o_]]K<q>ooq׮]:~`e/{mvw-o?]^?(zMs_F&!s 32+Qi879AB?:B~-BH<_^$:iܬ#1yth;;M8`3E 2m|f;K1rJ5D\yF qک Olօ]#yb)%ruL)$]|`[+ZkFbd^LL's1fee*rUwniCpz*-TѪ|la݀1FsʉVJ VRI}S7 [ c@i]sZ=1IHk)R(˲(;8R&MDF&1OT @(<%L/g/qIU 4$r IB/dzqՆV:b$VA5"qzr0@r !;D><轇|#z;+%h&y&-1C'b$~aTE;ll%c\$c{,&"o4Fs' ,m'^I.F]*beb\Jxlπ4u-(RI9=pw W9Gnc/t+|e}oxg>O og>s'oٲe߾}B׽u|;C;vؿ39|?_:㌣c-R:9'{l ڗk}.kn9oj9X=쳗O]t^Ox>Ϟxpw][[[9~3W[ouǎ?wSzW?O<^=~SN馛o~wy/zыyx:SQzݻYzm0s׮]:Hrw|~^O|~''GرǙz;H;x?QJ+((ˋ!.9`%ef]fgvF8&Wc(x !dJ*QmRM 2&S~# @I"PD {YG>x9bT ͐ҕy! ( g B@3P"ʲSbTB(RR+8罴E)EBO9"xh8D(yN%&y.%_1)8ZjDi2!guRRda, yȬ%Gȓk;k&Y b!QYI\r-;x $c?SK@vzi"QQ3bioa 4LD!IٜQS`]&GH  M*c2D&yN<-UDzXYVZX* = ]zCy6|6Mg' پY!D1h8 ;oJHeLUUp0 bSmוEeYz)Op&=۬.AR8mNӦiK}AB׵,8O%Ĩ3uKJ1vd{p8/\}է~wNsW\q?kOxEOClwuFs·? 
.n>׽?[<sCOz˿wݿ~÷mwCߏ*1SK6D5%3㓍rybdzK3R259徦b4z_RJmt} BHӃ D%3[ ƈ^p>n6H8Gk#sQcE ud|E7(@(\>LDDT)hg]uιYc mT̺eVr,7B6}󣘏7W|EaEI RUTLH֐;[g;3 T4q!K SR:U|CrM&+dڷsG2q^ہ[RNʙFY~8:ObH FN%~eyLR 8zFA ϸa"S6YRӃ}VlJ??\r'Z0bu^Oژ""%D>T~ fuo =z[ )@eSZQߖӑE@i=5%~Ã0x4vgދJ '9X:oNs!ض1PDλ{.Rj4e)뺮g p!F`Xh1HN7777w98 ZRz6M:j1}lQ *HGfc_^[~ _زe xꩧp {_/z?\8!^wI'.lfӪJ}[.O6Ȧ Gv?wqwtIpw/AiEڽ{w^x饗uYx6eۿo{o[n]OB{qw뭷>urRJm%_|g'páEnaF8v?\~}۶m[?}/b7׿}{_W>$3&"O,F:;N?8W!3G릞뺞fmVcǽs;\YYd6 yP3׷o"Gr yk&t\Z2Gi SJ*)`%'{QQp4 pzc$!˞D@lgQ/Z(1B넯<+DJ)^(Xy祔++;C]f͍ٴk[O^ St>db\KS- oq饴EQ0bZp;9oA0wwy(.E2B.[+/>RBBgmg2ϏPTT Nckl6ͦѪlpu>.L:woNAmZM6Ѧ(p4 p8 0b/Y!AŮcZ=Іqj͔@I$F)EQ*L!{Uil1Frv º`>oڮ!)8gRc)H1G#+VaJ393Uץ,'Dc]XmTӜu]׵mץڍ.tnV$#VX8)b~3: 1RD!4Ӆ󔏔6[V+9uŌ]p|xMˀ >9+=89}l0aYJ3A)8g<ɇHƠmS^"Qt:ζ{麎E\YJ|~=LӦaݭ,,K<8L0Log;|:NRiY.G}'< '|'xuQ۷o___gUe}۷3ldsx<޲e \0^DlݺR@mu‘Eh]v/_~ ^kys>#H_7M7|soW?g4Mp.m( q?cȍ7X{߻mϞ{W'_6Zkܽ}ogg7ݻo~ן'~4wm֊;~Wλ6668pe=y+>~?/҇KKzի/{!rzғtWs _?ʲꪫ^~}u\׾rx?s?/?p_3?3^ZzۿuQpֶm۶ڗfs8A5x 39rQ v咕}WEeYr`!Ueip0`V {kPs2u IDAT}SBo>Rhmmm4XPJ K4<`R뺐Tg Lҟh,̋oNZc(eC Dm1ՠ Fg]۴4($JMNQ`^nHk5 ֶlݲm۶;vl߱cuu㽛&nۿ޽ݿdss>5mkW8Ǩb{j\ f%VW.BTe9WVVՠ0|fD4FD@FDFs O0WHI)TJu] \)rNv0>~H!hyI@R*f*#ZM۴]g>xg]v]<. EDGađ"1( x̊" !Hv.wK魴 (b8 ZiL1F`8ԦL@R(EuB`td*eJI(+Do>(úCeG2c8Z)DA1*Dc5RIZeU"3M nb-!R:y_BƤxe@J, ~^DV!F"o-.g$\v` @|GX]h6糞{W8Q}Q"`kL?×E C,{X@! 6*))T w(u'ƈ|6Y yjEŽǫ "[xtOrIíD$(Ӳ$ƟDh";;z,OL6'ɴ]:礐F' IIM=7PUYTuJʲ,Gh8ܺeǟp'رc<{=޽{u]+%v]UFUDlz:gtԵ6ƔU"JeFeŽ(_QG} 'ܹseee2f3~}mmm<e|_7M5!gHT}c9wVmٷlGپm//-oyc}sAvonn^uU}tOOկ袋=؋.j2̱\s _ߟܞ/=ᢋNz;n;E'8_9|vǟuYsN}{7{{gѿ~eUUu)ӟя~c{5C?ouu#>O6H.c){.y?-F_8VVFLIwm#T,K6DK| !AUe BrCmnnN5 (m19+!Q,5ߜs87Dap8`TJ*e ݽg`0\[[I ȍiijPV'J)nlkp8eUB̃r R.ÇA1:GQ>HȦq66mBJud:OgmBvzO0,-5a)P#b!pW\z_cd%+aws$PR_<$, ӼO0PgX2R`FۋY)c QO-7M*`0 !+Ģ2J1bZvaeC3LvB׶>xTf/K-itKXUU5h۶kD}6 " 3ȄY1ЍtXl,6*wC ,n{a!?B , Xy&(B-a;S\.&RvIȴ Ybܼ,pR9 &)MեE䳝:deB h\.SЉGQ̊uVU5әip/eјDGeP*$@R1RпBr&+"٤?2]+%sLcb<:2hy/DŬZ\Kz:reO/_v1~b\]每i#p9M ;ID R} >a9l߾}J)>j-ٞcdGXU"jeᎭk={s=wޛ¬8p-1*J))ZkRUUWVFk=elp۶]c$hZHvm[c /ʲnRO&焧.0@%AˀY[OW;&t{HX{jg~ab{~z#i>{;ˍGeWw\.%e:|!¾Da˧~R" BЧn 8EL臭 LSSs5RVeS- @.` Da!"FκT 4H!s/$'w䎮( )/<oD`>o8ޅ@W(DO\s %=WDV"5x,'\@)%ߛ`AI vl*䲑RFzV "0qRr9)bLYZ眛zRx3l7`@q$0K"^ֱy2gQ$&BhͲo!Pҟ,jɐŦ .xU} )O3RDlY"aQ{Kp |Itöi )F<11b)*Q>W.-""(Y&q'A*BHq,1QdHYd;/ ^K>M"LTd/Uٺ{͢<l6Ze]C9 nݪ.LY4t9MÊڮճq;1@HGw&cފ6>JqB1)ֶvBc9F){n>]"H(cزZ颵4Bńݑ?=U)Wozӛm=u]~/~S~TJT99.MѧPwv9G!!LUEh1Y!G띏R"x;k!()FjX>"u PJ @>RBBk-7M3LC\&2Rڰk%XIr8gskBװͼr9e3JY@L~; iV1 \\6S \(E4)U(*Lw!q(SnXLR{,1eG~L21IsoH Q^;A7\ߤԘB1F.gQh4"@s@j6dBbtణ^˸~RJQaȤ@ $TE@!RߤPET)FDbXIӁ B>DL}gLqxypc(TPP*)QJI¶q W ԇDd%>|8Zei1$ȱ¬EDPjd|yFs%P(B)~nbd#RR|Q{SBtYTBDcC"H⑈h0@2,EQF#o#"ALwNDQY ]J7 1,I`zp$\0 R1|YD@LMWY՗I@8ߐ&P?Q1 f:۶/ =4P>B j(͚l޵~{f]E%EYEQiswNN< {a5*M9׵VV68MkDĦgY A$=VeY*H@tWF+۷ns)m0Ri-JbGne!ֺ8<=َl=>3'əgw َ:g?/| BR2eG=x\RJSVU彯 PR!RR*p\n{* jx(N WI @Hmܽ 9缔r8juMS7MMxpi@f"ǍSLELLϤLJ1ʦRr\e 'yQ)$&^o7$o'܊T9ӗUNITl/JMkDvBSDYT Phu-J=K]GXR}d <#k(i!rUCIRB"AB A!fL| ! =幌TD7"QptQ+"BdʐU$@<}#3JqRPZR{OXivtR. 9I(!XX8xcD1 ʨTIc1iSaT1#`R -9뜣"f<%G@VoFc W*O(9;!2 RJu~ Y1"kEBx׵sAC85)ˡa/[QbL QJ_֧8 OOI&;BPʵm;Ny 9V3-KrAœiçA )*KH:ZcwEQh  Eʼn.3Vc()Zgw UJw;#2Z2yD)iJH)q$Dg~eɮ2q3"2euMa8C  'xB $2RYF6"x@FB^x⁋́n۴qUڗ̌9<1#n8톣aM]H6U]_sN͝TZV9wXh,i۫W.xXYDFN`g==}6v@t_N|u!w~ O1"\ӥml ^D412'"w8 h8jkr dDviu*l}tώWw6O\=| DuAEB$ *abi͆=.OD+ݵ@B*Z]dn1cگ o]fJF9jj D; |^9ZO=cCns]t3!G UڢSФO)DlB"/=wt41>ۿ-Mbf./sd$cm]pL4Ou)D@6bZkXp+xͦrJ9l,i)ns8Z!yH efeQ:RᒖgrNj.SD2n1*]^}~]`b24 ķ0zV+E(ϳ#x[kuu]ENxwwSJ^ 9{Gpxp&#eֲ>zӧY쫯Jt ۛ{^x ѕJ/!Bp8uDϞ=,p<pu}sDfvWW_>yxYښyqůq߉i_~ݯڊ-sO˾i䛟m]ekmD&CUZs61OVxS9 )Q02ɬ/lQ[ 1lMڤT :eYWi50tdcͷ7+xLIAbsbC|sGmڪ? 
Ĕ1RCL9eBb9*ܘ&MT-EV} f5|$@k-l`["@Z쌭α}mo a: ~&PJ)UBqȃsd'&M󹝜QaߌqJknH=QCpA'rIPV t${Ƙ "*˼\$\i B CNqYZuh} 9!HrcȰț( "D&؛AE٥~eu9 ao,N8 /DRA#6"H[ !H-5^~lښB*I>N;wrDH1"Pȭa[O9014HK~%8zDei!,̉z-y S h  vCrI0L)Ry^>p~Chn@^FUJ44:vyȪ:31MӄCCQuB:Q3v$EX'^D^R2 aq "`'n< D5mU~]q}Cnc/YɺRʮ 5iYGD$B%ܟ *"v92l"bǖwZT!"kkVmM I5&O wZlzNhKa;}7; IDATަ@\4B)ÐRRqRk_|&Hz:)ěKƉtyӛ >ϸ]gO%Nz믿뫫+Կ >t ?gϞy_6M)ut:gOײz`HWU'G_zO_N!;쭶ܞ~y^uEk/ )[`C}^rT Ĕ{d|ډ[,NBն+^2\ $^I`f>8[m:ijԆZUDr~!\rN/y#yg)UE?$8]n_UַAbsq]usȈbBA<K, *S& Èm`\'g8k` HDM;Q)̧y-k-q >h&"MT{Yu? @76 8%o!3[ZkM)ʶr\Bn=reOy~4814& pa%vK[|bQϏhynnXu!N)%ڜs8 o]U .]9 vNsf!Qp@ErvTDUmR˼FvlCS&ڤ 뚜2nKW^xx:nooR'O}f-n/4 tܝ<~nzq8^<|{{[bӧugZs[uTvW=|q] 9jh]R 2Mo'.^{2#bǔy~ݯuu׿ _U:q+oUE2 !Z)9Zx{ /kqw7J@e]6;sz^Rӹxb& 4MȜqCGݮZj {?o9 [ : t<k(D @qDK *Pu-Wc:\]6- /KwÈh]׋S$bBÐsiR8̴CbkEZl`Tiq0WBNrʵZC@umҜXJY\gpyJD>A } !oݭѯ*dfDq HHM4u'|.3EmM.X{s)USm/5٥R ).!$櫫qOSz:qܷb`""76KFd-`Ց*m-PkC FRb.8b眑*ws&P꺮u=?'X4qs_TL8%rK{*ZgMaF|j"jʜR 73!>0 VK"ήPۙٲ.i4<̭ @)e،%HߋFrH4 ٍLI6F=~Uܡ}W\ڄzYLVv]9{qҶLHjZʺ,t2aFϽq|W6N L- L6ՏY {jg"2Ҙ̭wI@.Qj)dt!'8D!RE)iC@E%̉O QoKs=FbG15MRhԃ*~WKC92arՒ9#ຬ$qNNN-BUY⇋pv:D"]l)biHk1} X6]i'qSTk5f0̻Cu0 Llq"A f?Du.݀ ).3e 4M2eաw'+tsE_8a7i43g@RJ8;DzY{50s L$ }r6^ ǜ'ZDj#lRg%=L .d5UV:e Ԛt:>HQSd03U„ on3  EuG?T3|7GďFOgc7S? =`V[<3srvpqbo!p+Z0bak?Zu67P\J'<9"4}Gk-㱵&Etr8 /޾o~a(kudi\9NqC@RkmYMt:=yr<RYTq<k4noqxDkfn0q@TZ[j_~ݯut`jk)\k%~SL)Q.r2ƞfjwJSwsDo7@!aaYwJqö6{_ %G:OZ+e-Ho!=zd/o{XFrf#>YJy"b>s1` sB}D. C.~M?y{!)U:6IZ[JiuSJ"@f`Ĕ){IT]jYe]Sb0m區;!fbN63F6l5XTz@ f3+̧t8ζ|miHZ9rҼ*LK+!ܓftmΑ1`8Aiqt@{꾏ff(L_84xaι5ꪾ^Hng88=[#^N4.juMc!R4H)B8DgA)eY}K"1p;@8d` ݨg *6W*"#FNc ;GTE۳qJXוɌ\#7;3( R{j1ĉq@B @qjYT ԡwWT͜@)O6~h&:?_!)(^j$H'3(Z#5aeV*@কqFfMq)aQ1xM j"Hq04I)%Ldr 75] oI.j`kM.'MѐsJ9qt^De-DjCvdž:mS܊%˲.9"\8 9qe@N UZ)EZsoԸ SK~HH[ŻԶ,su<`rn12"\.VޤaDZ Ffk+Kאַbsytac8go!OM o;u\ɆZDs7xl@x'* "piA0JN;^$m~yv^n= ֪Mܽe Ge777N_px{^ץ0^]]9~ѣ79"L|rs{ssss:=zt<I'ʺmf???믿)tD PKa6R+VQBduK`G]RPjJm-#8 ڹUqzd=nf΢Pu]=^t:p-w9 p)`Kxs{8MhH~準?]曑/ź&v ;6%`"LTj)ScM(nfH|&j=o!=pbDain"Ɯ=)sNnPՔR&Z=uYqrpKDd>R[̜cV :P r>mθY {_{@ݡ] 3"bJTy( 1YГRf3affVAU3R|"QE@7T0m&9R#0 T[i*2srߡ~z?w0|O;c\@H\G(Q,N6{5N MLkg- 㐆Ϸ" UMk*!9̜\&:C2~ِLdWcAޡӳE Kq35ɩz a^R꽾,JKx̀Dn!Aخ  s!Du;<nniEQ4b򔮴P3zZƵ!DDkh&3efƢ("M 3p<eCØ0F4L0짝]@j` _tz1av.Ie]J)著$Ӽo<~ig&Uvo~[p<|ӟ~'oy^P,ӧ8 8x8ԲXU5ssss/ܯu~^oy[`e~77ַ/?Ç7K';GG/"O߿T:i!֖"4q˺Hj.ta" '@UQ߫&`@LnDRlqH(*pH)5d4"RJUa䏰.y 32/ɋQ Q1.H&tbC13DsJZ 7 !VDd]! ʓnE9oVqC"b[ SM*1p0rIYE30U$Smҭ=VRrWWɿObʜU$"7-̧uqBGN:!F%"l"r?l?"ZRJ)A-N1O^-Dn|V? ODVk 2 X2Zfy"P$Q0g"U-fhHf)qlJ!O5su`TJT sJ> fR1z0rjγ| ѓqr)Ev(6(#I5m cGHyp:XMe5"*&qZ+ xu'eYjw#"0U>֐\qDEz 1Z\6&JZ뺖Rࠟ1IK@ hl-e0 9r)L6Nl~'xVx8VM5%ݴofaZ2{C#RJ0K9>Rj1q̰;j[kRMfۛ~ך<{,+ݘD~}s`˲.RKi]M-%q&yC*^`w7kDIV- L6BmdT/_@\DJw!yos[S[X"B3wx@[-r!KRK4!xF7 LCJ!+tTq?z`].C9%1fW877<,0cJ;?~wOO]+0χx\R4*8 gJғMNxΠ\1 ]ߦ xѷy=ŃflH8"BXtRzm9gL0Cg`%zۀfH AgdAL@H4y ]6p-mY˺,09sy(4`$0Xt:077777)/GǛaq 4xDV7)=8 0 HMkZa%\A"ܜۥl%;=X!huY4ZnK)%>)1RK$JBECr^'wiEQa 8wJuYy=z엘*dθ'8ҸJs#rWUZC !+!UaLqS$v+6 ~93S'!)Xy oxG&@"_`e- +:򻻹R~mRj4ј)J,kd'sFeYJ)qxCN0/Ňqy0e]J[] Y~jG"ҐS@|ik9Ck@ØƉD)x,\__9gDwK/s:'Vω43g?ƓǘAssNyR7?ޯu໾^yoo9o^~ۿuҲ,???oO?||F߯/V'9#3x?(~@T"O6]_14~_8ReEC4xO&Νޘ` !ΧSYez > [ɜSgZ8k)ҤԺ "b E e?̻{o3^ˆN魸;a4n 輤ˠ!29#)qw42WhyRH-eg1]YZ`׏#^3MQm}IN"&T,* 88-DܳN! 
k*dvnU&h.Ң'iާzG(kiND 4M0 )C8Q9Ҥu 3S91VE"SRlV+ 8^U6R/ٔMߺ,(pn^u.9F[iS?}#Ar]OѶL'^۵˄ƯT"Y֍y/܁( hfM!DžDS2-3Qv(K)(QTs}}2u] 0o5Ȝ]{pss=~:,yN#7)U=N6|:sxu<{pwnnnQ{Xnx?>~C]][;Ο?hf;Noo{8__ &pr YPA+D&"+گڷEŢ[y_aG7eՠZqbGIvhox13i 5DŽ[Ma@y('ˋY\>t8rԀdO˺6ZˊIw@d o9`#t;^6ITIijI+k@Gl`7}z=J*JnMHkRkU~7̉ U7ZZ }jR)2/աqSNĄJW~lU6NEuŶM GpXMHkjݴp-jkǪě[l+LD:#4_Xf~#!L R "N㘇SMQ;kE$|Ox"闇u{|)%l[=PзRJ8M?Y_EH}+a20S"1Sި{T*ڜyQDşGiv!1{wȶko(ֈKղJxdrcShkk͡8罳bd9"'MHYѤ%M0">{R0!=[loV"^#RDk-&p-qsN~>5%t qfx:ZZ;qCǸ\ G0lӑ4UK^8Zvj۹<Zmᛈi$Qb )C r)e]׭~f^9j+n}̤,bj(qvnr8& P|:hYF3QaeNzC {*" w>an\Z϶3;ԔM- c>~d<ၩ~`4M;fV6N"&JD'It'e̦Rk+τ~ Y)U9k:\i${  hZkJ `MWrJsm-vMT"J6!b"ZlQz:(.4ԃr֛9Ulp t4#Y؈0̼s)UD!s@[Je3\]^1PSFٯ w=x.@D( tO$VX K!+5gU+=}8Wz2qbB܈Д.êݏN/ o ꭺ <$=Ui 6ߟ yf&ڣx*RK5N{Uɐ JMZkM:qrΧyϞ=UERWW7W9gQmRid&3Yץ8cJᰖRvO wwwЫo˿__rǏ|#GW_uv__|ɺejM27U;*1GWui@E)dWAZ`hadp&JY[kk-bG7/c5!^ 5 ޿MBZ[w&LiD;^|65Ki$.3)0љI۸].t呢!VKټ9Yj18ts }K +@N9TKiBw4+}@RJ)&vp8yYdq) B%ޔΰIľ/xp\B 'RWL5C!\SuI"bEPo54w gM ۤV,ˬ*Θ\6ψ1+gT B[AF­w&MTh`Nl61D@`9y!Q/bWC JYkf0MlQ},tY6IO+UZslAoM6\a IskRH84N[9Y2Š5I0LqH]W}N)缬kY%%@) K^: 0pcHbDs,,Z[0NMZӧO=-@!"Q-I~$G0Cvm6Lhnq`dLeYtNɁ#Ic_ ڈ }gθU~%" S[4RV;3`DZi%U! 6WS̚Vuz̋n뇃+9]w!"<!ov޶ŴDA޿րej- lҼ0 9Q麮2q{3[e9"U sa+?V @ KQDTЈz`L+rD?ywpJ(uL #>1!;XS>g/:̀\ƇN0M vbwWIMv y{{>fZ[r#GN'\9ȣf"ͧ-?Ht%+M1^ѴdHk6>1zB>钕UְUTTDqhiEeYuYT-ϔIiw:˻cεgE"G숤jh8$H$1Ј*½)E%$4vQ'^k9xcm 'ds:kۚ3zHlaUV@BB'kؙ5@nH ! x׫5 ɫ|4Rs"B.}@VQouC_9y6@Ͼ13e2ZkL:MܯD[;hjxr3UD8a)Er-90m7'GOjޜlOv)Z84 !v73(-fη/>{o|;I'GtwyMD!0ZOFRUIl|75S,J&ZR}ԝ$/KvRIYK>4Cs͓tlnր|ͥߜs,S"^D ̈ٝKf=Su>**(#)F~J? ʂ P(z B.)́[ rLt)*"7"!֧'whD(@L <ϓTt }jԀΥlLiáqqBh$>)C[b(*Rj@zӪ7!-f6 ɔ*R@!bتsuR29uSi ,cYSGse9š)*U|:tEUf{k:xko<񻥙cb>sV+rxP(m֔_ 1" HmRUV7jYj>h/tQu16X?;DlÁ3;yE V\ڨfTjf`="L)Ƙ w͵(j40SU q<*n@DL.,9PVLiG.-;p?\tԂq!崕͵{<gjh^ެF ݨMT"[!9&n$1&֞sz t 9"DA9Ĕ ?WjDCjX"B 8D$2D5UzpQ湬ƕ^ВkS\WԌ*cjLCbk upxxBȹl6B?*j@/Si9v^Q`B! )DٜL{{y\f㓣yΫ*!iv7?MwLofn0j>{??OkP?_OU6|(Dxʦe^meO:yyCՑ>:vy%*OU#~s=]c!Ud/>YAFpZNj2ltĀ:L *&LŽPLD  &mf}UMMQa0l7[ }) ńH&Oɪ!Ĕ-:U"!Ĕ#/ŌOqVY n9DcҲ=3-uZ3st813\r:^é)c9U_xW;KœR ޸JЈ6h@v^VYDی@1j/{#v "`CǑM)|P]C$&FS# 1Ƙ 3^BӘ7<S~V|lF$DA4Cǵ!8:j*nIDu&M)iḓ2V^AKbUET101q2%BHDϽ)ք-^ES{Yi}mWB@jerKڽinb 1!AG%z., Z#HP@McTA""*(ga Zlα4|a8-K\O"](T'x[Cuڋ{(fUj-} Lڸ1+`#5>-1vQZ9Z?zqA2v7&n8'̈8sLE͹dT:,K OD~H_֦kWc my.GҥՍ7޴? K.s)%L9A1 C(laB}ќv['mc BQ`^8^tۭçj qlLUmn챸CDi5.;a8:yS̪7zĮ_=|{'tȏ]wo~7//η|;ߞtg ~K@~6!Ezs.3*~}g)e\'*s{kSF_2Ҟ,4#c}Yj1f"J"3&guG!*%fսkkHo(R#@SUsoo}|tŎޡc5Q̆qhĻC.ET,۔"8B`'81Di#rE&F$G=Kv ctԂ/"-4-3C8 8^)YDr6)iӬn9RC{ip{)e4O8KD}zGs_Y_n!aEYQ}bL=`(mz3]-ZzAw/2f/zHǔzI'z/rhYɬp|lDU`ޯ7#O榍BV "`L 9 C:ϳN:B -e%A9~/ "d?!"HfFfk6ZC!DE,w~n2}ϗC  H!\E.c?R"}w -W[Z07NNDta6oBBEJe! 
Լ,H+h ح+֫ݦ=` L"2Cя" RTH(Ll`= |sd^۫nǮ^Ei8Z`"bސ zrvzčA|rrD#5\~ }\=YVZ) 6i !/^^6]@;8tMWܸ'7?oۍ*gX~}-W|W۶l__?c?@tx{?PT$D[2 "0|PQ$ZKL|.'QڸGf^1DUiZC̳6Y мgWQ) !> {>vANU=̊x,1DJY QR\5:9!&f9OāWFs.f^q61BqǍc]J4awŪCPkvc{wZs)RJPM)n+qD!XE%zu4(">krBTFK4R_s^W?`ڤbQ1@ݛп}54HB RLc2nSs]JJ)GGG^RK3EUbLH!Q)~Din=fa潽˗/ovy*"V+sv@1%=η8J6?׽u_W+=}kkO=ۿпϹx1?_=Ϟo{o}s.\Oɗ|~ٶ[uexh67tMWMoz>r-~Owo;ߞ 4(J T5%#@k8hN{ѻoB8A:B˪貰ψՑzQiv!4GpfbVOh D,ԜK8)e"G "@KD)F' CxwSN nHt%k4xGQ B \01f4h^ } jB{uzZ+K:-$teE9>:ϓ @H#a Yڻ4^# v("̛خ#ӒrL.9曟r%D{졇>99ZV{{)!KYLi[J)Q!!؜g' cqk;h&_!4 `Vk.]}vn."RRJ1r"zϷ| _E:l[){ywO}jG=X}_~4_x!ٶGy[o_z뭏;< d %.ᣃG9ju?U;K4̥A.9w|uiKİyrI=z,~¾/Ya i*n7 8RhgG>k9dګI}_Z?kMcM4uGmFARB ,0sjz(J\蒜ZyT[JA0X&}?% c9;o+B8VrVĔi9h eV cm_J40n;M4Mac8 sk]b[@|NtX`ZAb+)=f Pޛ")$wERqM)ke} EJ)n`u'W릩B0F֝A@LSBR|fథc̔} oZVeԚ,"&"6pB͸=j-pNnV9BZRnq$JD?_TȬjVh !S/ -ɸ$u6 z:8}  Oڝ/8O7*ǎɺCUSf `b1x;omTũ",ZDU1NA"Y"Wpf6MRjF/ŭwZ&^>ӟrUR ТB @7Mj Zk3f6QU1!jvN5p+]wM?(i 4--rbM)85^'dv3v5lqVvg(v~t.BeeEfl̙i~.9 U .-y}g.]ʹTKWny\9䵖 y7i!Pk~(aez85oA8#>KEB074O ȃ>K4_~VՅ 8n+Pc}H)y)nH9%+*R5Kl>O/r~73r??}gi_D"246!*6Ȝ*%?cՈyιw|讗>SE~Χ9L콕- n/ZXZ WE*DFHM`W(`B! !p'wlDeȦ`XAS ݐatC[ O7t׼\ZxU$roDm-NrV#B)uYnu!} ؿG]Y V|S`I)J$f&vQ8택8 dCv-u>;!/AWDj-c0־ٟ ">AbcH934SJH֠^V`5q©@c{۽ELDTXEjDX!u%"V4M͆Gf2Z*DL<#@ es:/ӂed''UY:;@T`1s0Q)خO4%4jbj:ULh1}u孝``P*%N&rGC5,6+WTB.Ź-Ndv.U)pX*w@'t7صE5s%ڣ*wWJ~=jV $Lp kZTP95iO-* p-9kÉ^wAѰ,~lJ-9CNv49'C!KO[Ԕ f)$JR: InuUݱܔ^ĻRHR'NI9giKDK^}#!J~|*9oIX XEڧbL!hf;=}uwwsKiN+3Ftu;7/IvΟOĪA_)HjZ|-裏yF44_-eM㸚vS-L5Q~RjC`$BCSaTj\tzLc xZ#nmw1D l]NDjjS]Dn6'ȐK]ZJEҹ|{bK^x ^0?wuw5yw~wկ>}m{o͟-?%m6~s#~3߻_O|w=O|ޯٶů{~G^׾/B>w/}B~?η'tBk1G>o>?jVfNy a1AJC-1"٭MwZ Ĕ8D"Uȁ'Hn{`6BZԬDlDPmysn+S1 5˗J՝U" <:!Z|@)##1k-sXB4JԪ^`)fBR<ϵ0 xQsHj)jރB( v&btbaK!/ J"۞5Y hVnI@3 p'h|ՓIe@4귙S8$x?'>V!x?K[5 d* ^I8:Ʊ:O4yRE5IbLXkiK``cs]ke~v_:"zT+j#bYV+&Z)!PŘq!T1%U$8#z_ƴtPj_`H$Ҳ`aZŔi*ALHM EsmRzp)e0S_ 0Øq4CYmjWގ`./~7rĞi1uDur3 "Bq5)%h;j9R3OHU"`5?A044ORůEL1‚pn0ynQkGD[6XZ) @2'`neLʥ>5VbN=˹H jtVÔzhEZ8&d)l&ܑ#1-RaJ `SW `ȄĨRԭЪ"4kWUͥh)*i&HCJɵ#0x7M*Z]Oq{ۡt}E@D =&9%c4^ 33EJ*aIPS|QN3y͎Ȧ 7:ɹZqk׮0 Drd=ٜm]>9:!._tk׮>v(v{}tiB1%fSn]O}ڵkcnK5{lqddD\wd{&I0d8G׮@q )TP{ ׯvT.qB #rD2fƩ9ηj[j "wy̾+;|+^9^o|<_K؝w8ҝw~/-]ԷO[ƗOze/_/>m;yg瞻Yz|7~=|7ͯx+Ǐ}׾zի㎔ҫ^__:$Ϸ't|.p`ݴ~O?z@d韹w+HXrRY#j9V"Bw_x[_u/$0͹nC8::~YJIDT3 @tҖw>_Ȟ;x"R~f&2O`*RJGr"30jUjmU+gfs4R9 D=DTJS&-R $D$B1.A1Z?69O-ta^:NҐRI0 R|5 q0JԔXXJqXZG.A U1!M UV֘0 jv;M_1r{u]zZSɝZUakAvTBz~]f`iEP(*A {NIV Q aZ-RJ_Cd.()RČmΆE}׋"#}cD0*`%=l˥(iA~2 #3qڻj *VqA,y2YU-1&G,ȕBVrN12O{E𨚦4T ٬ރA%]fbaX,l}*XZt;gi]Z 1p$GΫ!q4ٴEU D J]A(rz"Kd FKcled@y/@q@lPu7Hm D*P KAa֐*m0C6)3QN9XJFu'%o`k>V mMi9cJJLc:K)u999!ޅܐFp&6_:] r~INyK)0QSy\, {箸3gyX޼䒓x _Wn9C:KL9q'cL+{3 kw:/j5c+y_])뻜rL /rJ13aHU! 
cQ`^ƕ*էMbp3RL1k6曵죱\-tR"J>x!h@q[ZṞk0xF7+,1bM \!J)A}?9Y@֜)+qJ)y=+DSsI;aOYkP];דּٜRJ1bo極]ף[nTLZ&,OY ӍOiZk-U,%"nqjĘl9cHRr5j1B@]g1F4hZ6+MbEh1[5U֮+f~Z2khF3%81mh ʓS"b+G1j(+9'Ƥ k{UEbQ'*6j,CwY3ƦaG]oomf3%އq\W1F~kਫdeHt e C-V b}LDjݲG*5#_P"Fn\5,$F:ڔcbUgbnQ`*qBL-͂@)q "62s%[x)҂kDSeAW-PSʄ߅\b,kX)RkmM.9%jUH')sq^Cߚ 1.LJ\&5F֣䦾Peb 9.F\jJKQ*BT p7l#ĜrDCLv88mL0 KgyT.(%>[9rA.91{uW z(:29sfggg ܙG(㑣GgY QĊuΉ b**FX/|GΞ=RJZ5cb Zwl6#:5goo/`S⹳i"R*KCL vnWn#ߵ\g?O=!v}҉H)?>t9 _xUɰ!7иł-Q)1 휔5]ǤU`)Q/~zȹ*.yY"z/0%B!"w﻾{9keBЪ59?1sCRJk-|EHN 11K 1MMh)ZazmF܀ܪj[ &/.(V2X2 "]_ATJm1-3#NY&/191li7Zr1ܖw31;Y$VH'AEk^ Fcnq\թֲ氡j@Y^cL1bi\wBbES zڃ U*vd$x 9Pr2x H-жJ)$"'(GY"5F Wp[cz1`2M9'-"(0 h.(d%2K`E H_Թ\<Āi, M2 Zj׵fవ`iC)iQcmBV"2֠M,N+нKL0^k;g1Dгw绖Rj= ϴq*R^,  % )ծRAHXwRCXV]yhmDb`ad1IAF>!M=,+ʞ.g+:՘cĄJ)\8"9M8R4zkZZаۋak6^V֩MD/f0z.Ze11'nWnT5 M!Msd"F\FoTK 0CX@o霵xa"cXS);sjJc_H=*%jW@IUVM7mUe"!=@ kw!&Ƙ*SJaһ&kJȑ#ZuYk%h9S08yam˲IK)0| )-^vl>e\fv}K.})snۿ w~|3sW׋_|5v_qc?zToY's `R8!֡jj1Q4voH)Qҫd1;OL_OFbⷾv(bO;_Ȉ 0"z_0WË}p}^ %kcgϞ;vV˔iQe#F(ukT'{] !p UY259lRH(0dT K4b:"0 ]plofdҼǾT'Eʙ% (b,>n\UAZM_1qr!o_+ wu]WX ec:_+j[W:h'Ycjm"Ѷug߫Bgc9IJI^a9H\˜sw-@aS2m=ud{6rcd1,Xsw>e^U1"$[k`河X,s !Z+S9Ā8͜s$sI%oll9r 'u+Dfj|Upp=z_7~wUzΣH'+nc^oW}~ǟwӃ/zժLPы^xH-"F`)6๭!S]ssՄL5R+A&K0,G1wj"h0vK^5Ƙr.9AwHLL7-@Qö ^ aLkq!|fF׃Ǣ9b)9Kjff:V:#3EeD\ eóYs@q : [ /$#Zgi H^5^ j8k(NT:Re5E"}+GTň󎂖R!uDTC."ƵrYj+6j "lIʁ4P3Sc@TK&=UZkrqSWi-5b+fu j)9ǺP"+L ڄ1\rNE6 arM!+/,X)uEE 3cg/:1 U; BN&\Fx5*1گeQJUahHn-l6UJ)I)Y;q'fmZ(+EQ":ihI)TbC$jJHlM'{O:9Ƙjrn8A@ eB (3R-JEPj)#Va\2jVlǰL-ԘSQ-)REe+OpŁYCGYg KA* )}{WB /)V(@BRbFM_]=]բ RrJ9u]5rӉ;0Ì+sc!pnbOXcּsL Ku667LSz*}˵K>FcO@eJՒR 1Kfacl0&ߡ1ZP\jw 7 eA T68k6Uza &_Tu 3%131xS_RJik{L%"1*U*y0:U-=z%rƑ}3ǤZ,vsskrB5lVMܙ3cJ뺾a\0yWUN&x&S+h'Jm7hR\BbickTzcd0bQr]Zr-ӮXcvT,F*^^4\/aRgEN{YS!l,u})ersBlh-M]T2ʅiXgpv1 z!GR6˦Ғ~jФW4Ls.Yrcv]eU8`:g*"Q?*.[9":"kTpDD#ň*yi!zmBγqC0:u)uZ],"R|q4bۆ{`~)][[[[0*:ja ͍|nEӳ1f{{[t3/ƄSg8sTv9v~@xr$r{ԩm}uesSu cCRUa4w)<`[jÏ9u%^ppT~VӧO}f =~^?{}sH'-DkeaLnN IDAT/z{|)1tl9瀜iU9XDzulj(QecmQ)0"4#ZQBFyu1.5E*'+LGRZVb|>̵1FcLFWRJdfA$Z.C\\WTS)ƚιT@lPa‚?3qD_Ӱ)aH#PDTmK)@RNZ9޶21Ř޹-aYV9aa0u]}u9mTc2ȅHʹs9eJSVRk:<1k" Cr)-îYD2#㌭0WȪDxjBDs&GE1I9O^Ĕ4%AZaRc9'b*ERpW5X8' T0+kl%ݷx2vZ"h}-XkfՠQJ"%)cD/J0Q)9V_\TaLYPmRj284Qn7+UoؘϻN^\JaZ.b^V&4bB-,ƣV#ƈ9KJ98 N9MD#ul6O 2olnnlnlf3u֤z`V@m q^!FG`0 "F>RL3%Ew`뺮1PQF)XDƚ{VVǤ9LƔFd6t}3cmUSۥL Mwcp,\*xiDʚ0"λS\r+9BDs;;;bZb$fksZKqibB"#c''T֒b)%%Z1Y=1j\,a(x777lg8$Zm3w:v'P{0@j'Qr1䔘yqzJk Y,nln9r뺜01hS5<\s0 c,,;9ϝ9bs~0)EXUJõ*Mp8Rk2~1Ϻl6|LSpД,i^W%B]='>+䩓3g<S,$s]C9˝s;媔bjVkVJiwwCgΟ;\,R<=/(_>}Z{eu{{sǿO}?w~OW^wݟ0=p/|v/| /[o;;/&ӧO:uۿoG}GFuE$/W_}饗e/ۛ;[nK.yo|_veo|og>'O|s|Gڅoo+0 O~Ϝ9CDUzӧO>?jzG΋ ??}髮~뷦߽z m9gi/_h_\|=zoo>)O9}OO 9·aɟӧO?)OyӛEã^}"1G@'rQCa9B()G+T0? =3M)00(qB )VHf{uZp666767Y;s;z殒8j\.Wz1emb.qtJagAbJVDC<1 ƀscJo5FC1asks{kk>Oa3ME 4QzB) 8ցryplD`Ʉu~ū-` BĜs}XJD Jb,2uVиsT0J<`c6VhŮgՔ᪨~?Ej}j[E3ppcǏ?~c"&Տ~sX!zdQY4B{Wqi-pgoqvJc)ZVg-1yah1vF\SΈiXvb|NĐjLB0(M W T\:o:{cR!eTT,V_Bs]'\O{;76767;f !<;;;rX;wd\U)!3م? 
@RRlJe&@)2Srb bΠD 3'ԥ$jw&TR.f| s~z2TF -ӄ`^&1TJ s9j ]_44ϋ߁ {@*ًhIHiՕcU#.#ٳ jQaDb`%&;<8I{nόƞi_kwjz (xR%@EiG=yE @hSݪWhe inF[Oެ~y\;bW ƨx$J)FzR@ sC!qNY{SJ1.v]esuvT""eDX,1F;]4k-jwBr'#nboJ0 -eKJ3n]}󷯭䝟}Y[gݞgiiҳDaZ唊 kfZAU ðZa+}ލAUS/H5xEKPʰC]gEu`"ˌug5-B;&E\JS-ˁy^M7xwvvRJƴ|LdrikM!bc.DTq\vq(% κ80 `,{"KNx2۷~WFC+Չ1I 9Բhyf D%@!dbvs];;ǎAAXi?*-zI%8LΓU2[Um0Z&!qq̥ Qrc\Z-QiKj?9eFDZh.Ϛ% UbBcig,ywSTlbVڳUZoBr3֔=Dh,D v,5'FU[߈LN SO\ᚪ,U5cû$ ?>aB 6Ȕ=~SӤ},5 zXJܓBS.vX)1L9Lj*ICg01!و´ K^+[Ʌncss.pnkkkXj,Yќ˘r<" _!p\t}"#e ājD)e4qGZ•1or,6Z͹A1'~~mo{۱cRJ˿˿ SO=1߿ٶTҵ7_{?{0h9qusܹ{{^|u_RrW/'_nu[GG?+6*D{y9:=>}ItwpR+kO?w>neqӾBO~ȓ/7}M{kf 7\:~~{_} `i':o'.ˇ.g %ZcTq;a wh~9t1=vl^,61DXUN'3ČZ,EMeCcW Z/YDhB`D{fҲI#b k9jpz>^ڣժ"`c%޷* !_1l5qic.vaYXb5\`3É+$|"V䤛}jBDa$6X`hZEbtqCHb$PljUDbTqrcia7:Z,͖[Έ! 3=XD% JUvBxgXYRMwfYpvW;E#C[[[Nr2{ARJ)lJvw!RbJE5XsK9+jzw]mnn\8W;̌~X*9؁n&"[F)$j~9v­ ޜ>{>{c'>'y|vn?Bq?AO]u  ?\K~iGNtNt]DO>'I#\{Ͻ_;DW}Ug2r=y'zoRDKyJa؆hcYy',n'c>+DÅB!6Ȉ!XOqK.=׫qEL:iT'>2]tb;^?춣~꭯9368 x凮m;d{Ou7?5`=N]7Nr7oAjՠ"RF!m TH3Qւs%! MvD;nVJ@e'+m)cmgrBSQI:!ۚeHB(W#TH5" !U,b)BR7x!@NP"BAP1U~Ze@AZ,j#`,f2 0 7P36vC]0)2q{{WL hQq$"S"|Xu ԏl>d  %Q c.a$m՚A`fQAb.t4'd~fx񓍰`K޺\ut4~dX,cQBƤ(5|Io*B@L`w(FRA1dֽ=1vXl Ƶ]]X˜Urj3q_dR)9eG]Lv#! J@O*͑CLd@e= E'fL\H.U ,FD9R {h=zb!5T4;]c,T Vޔ Dg؂?3UH>fVMT B :MSCѧ3dA}=澌ث,b P5AU+owf8g~Z…LZwfgT B TCs՛whRR- ZTTf,VA0m-4ȘF C8X=dL",@03͠VĘ>!p)Ħ3v̹68cVri\0toooz1˰^à .*\YUY*dn)%8&|Zb.K;vCΘglkZ]}a@1(ØR3bU)ԃfvg)Xnu'eçp~?1"!Ugm_~+oo=}繎p'BP;YR RMhRՆXט.ri5}f FZ*m)g< LXd !YJ|c8elJrNp.Y 1v@H1FfTR;0k*(rZE(p!'3"2I`#Sq\"%**V}ܝCؙ]\r&J0Y^e]_eRGe919ɒmQQ02fT )AgJѳ䈐1P&t]SaX0:65&*kUKkƴcT\h]'U57{Ugv'DTRJM CSM&ɧR}mG0+IjO6z{YAα)m^a b*-*& N2@;N"ZyV^h"bZrQP3,\(^k#J.U'HiRt&b9jɜw:%VS cM6/L9Wz3{iU]>{6sԬ𫶖{ {x02=g^ i,-@$XWEEřI3){fd" m,߆0Xtxd(v:CTd@7yԺIPJZq^,eqіʭ&=ﬕA_]x.S01N´Ss0yzf~1 WjwF#eJ`効S9guФ˩Үx]Mf%)e"KEE ]& h{{{Z mX)4%6#^TŠ'  )W!jΫjww<(CNO'<{o|ٝ:I3RZG hSu\)8LfAjLMBE jCoQi"i6 VZRc)]̀4Uۈ(J q*nt* }PqS aŔTD%P"5selU\%31DP (R=LT n`)sX9t]cgQ&!Z o0%~*ӡ"DmFĎ/57m[g BG6ZMcB@ -ڤ" Zp>@ x53 둵IڹUWBl.:6{C#!2Y7S9JJʖܐ(Ơ ȋb+QUE;yEShkdQJu ׫u&EH(( D7"`QT,)%)KUr1y(g)yˏ]D"o":-qbY']&,XUìZ2I8d5$0Ui 0k1+zEK)JjX{;Mcn98H!92Z45*Z*ccUE'kd&*k&T#D6iq"xmnOoVpd Tu]gQVS[<<eU{t/$deX;ĢyIgr$@dDl-T"Xp̢:G%"W;\Xf v-!|@HY0ǎonlP5(ζmѣJjx\2w-Nu]Da59,RP4kcD[yL)t 7H-s! )A\DEK4@\GAZJ΅q훰}m]?ǵ_GN:DL㸽j"$<C]$֑ZT^4 1BBĉVр:ˬxJ.%^D]hBU_%ք=)"9mQRjlZhWPIU #{qF5R<Ռa`vڲ R"D(h Q8LSVa5N0U7- r&8Cp'bcuLdYI9QJI c B@XpFU軮{{֣rb>[()9ӚN%x,yֿ Ri'M/;<у4,(PcBWLZ9XfP$d@1C3 MzzrmBFn:"*.}!qLT+AʒNR- PۚݿXH ]Q*rڕ&=ymN[{G2S\3^l?oԩ{UzX%8fZZ$ -"D.7oTWه#luc? 
V1ǎ.,K6+ccة-c$ 9O3k1f13hzZN9 Vb_JHP{G紌%ds90amU ى1>vxtmz.Ȁ8mLsj~ۄ /5N#1C]C]ɞYՊDJN91[rN69+b)E`-+ABd٘&FB%b qXJiv,F]zkSUg\DB l^"eء*9AtI;e0u=zzpkkXVu0 1=i5ErVnI\q2|+t`n dT$ʼn)VD8pu}Au@,$M/U#zx-I N61]6(}^5챪o뎕bݢXkl3e&贸-$!Cz&<ؤ"\6fc4a?jjC1cLN%C`gzeˮgιHr`C@$&DcA'v!ȃR"ˑ`Y8ؑfs^kΙ1\2ynA6sNUڵXcƺtCZR/zPruwE]z\RえȺ.(376x22gsWx]ԧIN10@-$lrRI9G65G9'̖޳x"Yh#" jZ2BqFD4̩S&z^,h‡UT,l~\[/{~~dm۶mCLX8܂ɢJ) BfҷwTN};nzo5i ӟ2~/jUDP H͗,K=:wHX{a@'N3\9n8xxG뱾҉!,˺jɝg'?ݬ,Do{H GB٠"Ѭ3-L/`?\`Gnn P3I %{N4k=R{P="'1'1K$MPyebbh[|\,",Kk=RmY%l8ᰴp "FwTN-y/ELp$J&kkJMO0n6o@P8BUx!hѢ‡E &"bPM[6Fi`(Q9֥xoV >;av FǨDbE˜}ʵ)ɝ겔(y1{DpeKMܽt03So{7Wr- u:SZ)nSv*Ϟ:IXץ5ذnC)2A5Y#C OW.>7agY1`D)Čs IDATy@L}$Tmgӡ}!{ JHP "+򣍆yg Ie'̀g}PHMzܤg2Iqx1q }9SSH^HN4;uk'bnuax?4~эf{r(1Q\P'D B驴i1|A,;sQx#w0ҽQJrai52t>?Buɝ6lkj43imlzX N_UӢ:J)e,U6eOOӾۭ2Yg<:BSտ7׾~,p}3n֜ eQ ؀u[պ,RX.ZTmXXXN2t2(tRL7Y|tcf?D8'p7P1{d;,*pWZ$ J!f[ rvok~j)Ի yrDTt@hFe8CS9 ]<_q9ⱷ>| ?vK/Z,-xǹ3qLA`1 RQd%Tu藥_}cdeHGPƵҿpL,9MF 1rw3w6.URMc?"ahpC} 'p_ͳ#B4oeYeߛY7wwo{ۅy-,:WΘ|c93|8gFȡ1/Ù.8Dթ9e<5>fVJP؇0fjҴnz^#Փq-XU-oD>za$D*>R?(^"Λ[$⁂cgEݬ?=?u5"*޺KQ%nE1Aʬ.PR"K187>=R!mFHEJ9xgCݬڍ7e mK hևkZYR̖Rflţ gfAM")75a"6?8.RR2t{tMOCq*ڙI5Hn?3aZE $˞Q."ae~ʻ{n#4sƝGeAge,3Zm|djZJ-''8|Y\/޿//.vwR9I7O?_O>? 6L!AiDw ]KRYjlYhzzzRK' TtiѬ9:/>?ã2a[ Cf$J/D\jli2{뭵e۵ļZjelzwm,;,y6HXZ@'bYCVMƈRa^ק3Jݷm^__^zZSȽieK] e c BB朩܄`H$D9 JȄβYLz (G0z٬ Zw3 A]a2N#mqb٭Svۭ Zj4.m5$NAؐf"3lmowoF$c O !{r 28LR[/pƹ,)Ȱffj)ip7LSbIcaB؃9`,!#Hi'(="qyY7P,`9{xK݈YQQܻG[[yj% @X)):(4XHFp׋VR{MϹ\UTyqt%u|n>W9OB| m& n>z\ˋ_`j|,e~: fhɷK`rND43yD'Mg $+cHnNkF iYPj' ɨi %7|]P&v>|HY>h\;s*,̎帱tHfuR2DMh?N*i*<*(q VD" l)¥bfgz齷jeu][k˲DdѬgZدpw/\.u]S^4R8'᧧[RkRھxη 'Rr 0Oc=c=c=c}yZQ  ncYh.tK"|c?|2;0 1haxȣi'17R)m}moS( dE}zQ)yGfa"RKR̃:M/ 7[MDf.9&DLћ땉n/,Bm#]\@ #w:G|0ha`&@ِRuI4cD7PW]JD)*(R1cԳ,Buݘ-mn3(0?N Pe]=i|^۶wq.2Y2cL8*q8V###Tre? *Z eOpb,M%6MfDE|4e%*9,aiw Ϣ".Dd=_MldHub`QA"DRKlQUVm7Ddn:sQe`@zLɒ^."6Tfxj rfRjQ@gвrh3iPщ30{3{uYe{k:"IKD.9ceK5c۾KQ9Y#ۖXϊ[x2U5{nH?)9~trhԙ6P"eC9z374yjft7sFe%2 3K4*t hsZW-eWBg^qs4Q M܁p9{K7Qӱ?Db~TՂCXe 18====??n>޻573A| 31'c_ʩ&sd|` p|7̒|3bw(nydL*9;<8,(h55wٵ.Z*ANTU۾nK}G*8L۶G\Q{kRm㬞UТn]D1e+4;=mxWEiM["lvTe3/$dn25rcfZ`|8 "2`4bo%2;:Pj)V@Jՠqՙ0&lጯуIbYy3ޛF:.('D>kb\ S]\#>R GqZ1-4;gx٘DNՂ C U-zhdJzZCmkReKZ{zΦһGk)}+J,H 5pbȻbr;T4Lo wEek] W"H!G 19Eޭ )*݈0Y̬="JRCTj-QPsD3D HmMyAH]kzhcVǒy,8Z"[2lu8p=aqfFtF[< -rC :a!;+s6Ko8G92e׉:dúy8ž6.Cn@0Xxi3RW#0zOb e=lf%Rޚڇʷvﻙf^7°#"R.)P,t%J6___ރH{w)HPUZmu]߽^_\W}zzK'q"X(+u} ئI2'cߘx͔N;L뺨H/OOO:* RjaD{~omoFzG\=f -KNAfTk´R4Ic<;S\0Cn7(E˲K1FZu΃_'mw&poI)h%CB`ASʨؽFE5WƱhwY)!9y$nf_(Œ7 gYR:DyqO Kʸ{DK8N ϔ"ݘq,QEUzQ)I˹eM0-͉^f DڲYHUj]f=3| ,u)띘ejNEH4;pAqe@7F%DR>m2rɢ%d)JtdvF* =TrR4EsL#+Å$SC' LًAT]LdAia&g ei޶m# ղRHueA^DjJ{lލl𩚆a0ٶ{lԂCI h̢erxo{@:r̨!܀o+-D:8-S`w?" z %-HyTWQ UL d'ϋܬ0DUrOh>RO"~JIp&eVTjՍ,Udmݡ9 ­t z}}A@$QŽ-|:n5\rx2xTȨnmoF!\K\.EmU׵h w.ZTfPye.@H7o3OaھZYz[km'uYj)l 0N$P o4'4&0JI&kWΓs뽵 %A3s#rq8[n0\aA(ےu^ # Z2v8u)STZ5ۮ;jޜ1v_]<AYD%J'lYB.LD3j$~˲.KIRMT8em+,j"_"`'ONzeͺY3`Fy0^TXg'ˊ,=KpA. m"{'MS=PG ṗc `±|-u.m߶`4CӪͤ: 0NWEIlee,答3n̐ GO]sYh3KQZE ƨAٝ~0sı<H$&6·iEYKii|0Sڠ~cYhTORV`4KfzC2}zP*ERfyCM1B~NW|AƏ/??bs 7ʺ.;ZpÄ_Ke,?VCRU\XXXX_j3o_?>Π0@#unS^BAU5kO//}/>/~Mgu&0ܭu3txDlJNuY/{ ؈ pSk9)I%<)DU-EDkAykuQ]|^J ԭdt2fRBf]&Cr-#9!3mg 7%(=nw| <ԘC0B Fp5d$\fvtQDnM YqJ1`!E%i|GA $"D[JƐ~.~#ljqbr>ROzroǠD# DGEmH<햌791%P:$y8RK=A:Ξ0%%9A*L|-uAZ%č2M6ѠHY,D˲{~᫟~uYe"ÉhKEM<̡r o^. 
G+Q,^/`zu(a%b0w6}Ci$,TRTqd\5ь36Z<~Ł8]7leQVfkbrG3sdu*S؈Jg%qBD쒮z@a:z~7C{0#z"B'&o%,8NG-SȬ?Q{k ),.Fuk;*/.R\EGp99@ Yj NJ?>wf"QUiGĀ즪˒qt%~>q,pbzb rK}Эexf43 ->z8̙03Mm2tF8`; hOOUU[vfgZy>/ȏ;$9$~t:u0,c22$Q"6fT݆gCD}]菏Lq~ 줵 _ U+\iru[O0󩌜q"/4,)`=šh( ).Y2tXo>2fiBGtf`}L%עtdYCC9}|]F"9k \TMb.^GR:4Gg3yӎg X6p꡵oF؅q<`3VԎ4M$ga\갫I1pRd,TBA,Yޭ0RATTey~zO>o۾4¡RH뱾҉qc"l<ٶ7G__ȮcSމbYHĿED[EDgw@ * $/,/ -K]j]TЍ{@ۦymwfV1e+ 5,9pff˩v6 *ZZ/ֻ֐D FCi24{P#!ro<cRJ, !ԅ &''x&ϵ|:mQ}N DX#4v1Z:Svs"#Dz[)(V[0|ɀ `jqɚ}g '7}˲o//:.ˊhHhĬH"g rL&3|o0E;] 5PޘL:*a![ZĬ6(Y.DFX7;ȇ# >PĬYDE(&]LN(,^LC<>+, ڶ~dT$K{_EUnVD87sK^D`ZTETV*3+ jސ.°Mqy#LgiRDfj\1o q[G]KۃHGJaC5ZJ 79Q(x|*9>"I5C NB:+tсM9?#[C6nrkO +A)Z4"i$֗5۫_̋nIAqs)"}!&0-\Gc3ocHO,*2]R ʭ%j0ws]r]K¼=BF,#w.rDk{w G պ,BԺf0WQ%%;XXX_~ۛ|}`9!?w1[mfY~e]Uu4V2 ҂Hg &].Q QOPz݈iwKr2 |]M,z'Z+\0x̘'`;3NÙAH<N"OozEvfYEPu(jC4`hA1nn"\v߶~ JʾIN Ek-F)G]cxA$| !:'`|"::s졧dlJ#fқaN JHYDd%075͑)$"G^2r  3|kz3s TZ-i rFdpY0Eˆ FuYmL D`~e]De:|\2=dL옳\ Ղ@PFXVKx;z17!*)Er'eacᬞaHoR4;֢f)%(d#oUtXh3#(| ҧB?c3ʑ7#fkl]xci͢r80o8\}9 yOV{dӞ&}a}( b@db+)e]ܣR+"7M"2 z)ELZM{O43[mu}'|KDh)Xk~7+u4;3Zt_$5_;믌t8g?$gw})} ֐DŽx&8hQަ8"oDdf}ۿ K JG(B məּ *֥RZ˥>hZ JIN ѥ?h-%(zA|t̝3;U cHVȊz0-03ot'Q]ץm\tns&Yk70&r]ק'dm[oͺaie!lΟc,F7Ai3F';]7ws 9fuv03Ea8?3y '@fY]Z|L#?k#KɀM:u^.F `,z734 oloEe}W@ŞƳ9-T{JUT FX3f&,l}Faax'WZeYu]FHϋ,OO'q\/*ޓ:4Jtz=!"aOpI8,c)?SZ:ZfFw8$&F%<(غMQ aڞl#m6e=9&έG<:h&=SQQ8:]PDR<${)U"B =Ht"2ۧ/Dt]/˺>Ѯnzk{0²>Ք`3{Jm`fvb3M94>{IflHoPvӽG͂$QP6oNjJqET4X{:IF)U FEa@'1dJWN/R,24c!Z1Rk]\.fqϑpi˪bҸ|E;3cnַzof0FW>y_o۶'墪~{}^0u@M!bRVRJam{t r#[U6o˻/// .nFLe]֢K N}zzK'\{g&FDv'̑3qbDyZJ3]37'zY?ַEf{|?c.Yg1 7O;3:xpFpib N8[ =؟v$%&K)˥w۶<6 uKm{o, B89gnvf2!G/;i-17۶k` )>IKmvs Q;|1$\A8 AP& c]פA4PNs#$ޘq;"$K]DDn]mf檊7DHjiP 8nkaG"` 'OQAy1B2/>,u{N&~5#Ã`~j[73"N"սb 2IyV&j jo\ޑ\A|2PmT#y.۔ RhO:h=ʆr 0h&D }dUVYɸx8q0m%Bc6L#6M!!?c Q24Qn!u49IՙhIܙ21qY[R 3 z>9(rBCࣧ$Ϭ0LmΤ鞣<"!b"ai̋Fy9qܘF:gq5ߘTY:D.m7$:>%_ѐEr >8QЪrrs3וֹj\=͌嫵̴'MmBܻ*"OOoo===Q7^ޱv>ysǠ{sAQe}}}nETr8#`DNL>Su GpK/o?~W-~xyGRLDu]6Nrdbd90s?L?s";8"T”4sPNI%pҹS!W`&%=gGm`]rYGt jf(Z+ɠB>`"Frf]ƤDRRji_uYWfnmm!ZX$-iI6}hTZ X2"mWղ˺'>NBDܚu>D\@pgRhтtc2_."Y{5$PÂB=,.~ε>É1&p=c"E9TLaFF""7nnRK--3xo//H]uYTTX etffR-@tBK64,YB0gͪp$ ` @JQ\3pR53YEȹd%Ȟo'f"Zx1FG2l\CufRreZ,uU4T-zSZE `C@VyiM?/GgI"Ϳ 'KgИ3f7Ź̢=zオ}:dO2x٣({Qu4tT{sd\dT%&3c$xԀe8x[ӈ`nnxçC Nv:ƽuΠ`0$: {xN?Br2&wrڻ874{Ҥb;Y5IS9ؑܙ!x1kNe4;zlԮJnDl=JFQG8JyuپofFbNֻ<==yx)|\޿|ֲnW~k;م8P^~R@w'u]9PK].&R%('~zzK''?O{zo#Y_KbZǑwB\Tnxbӝ+̵T ~sVgLr;ZTĘ>xl",.>A}gf(8<"ڜO#ܩdg- sɸ=ġsiF'z 8r"ݒ]A}oKQ3vobnϴH4/rv8s{lR+1+ũVp&dTaGabȈCӁ0_BD<.?;WGL-q6j3zQ"RU0_ "Z:XH4UHy8R9 IDATZ:V 'ܡnDQ*4 GDᰪDIͧ(Hp^GqYr9nb2mۑ2hg~xh1˞޺Dɹˍ ;NU 8JQPe"⠥X>9nO0s`~ Dɩ*ieޭwMdW}zA#\kF ) ! hUI\ 8( !r"&@'JU6^AHBXX$wZcO)W pug~?_jU{"S5ibjvpu֭[NjR'>Qʂ8B`Hnq5ULm!4yNQ%",r'N)ӹ|p<|> |?҉O">kQZu:X⤪V/UR„RJ-U:4IU#l@4UP%^cfm9lNayi3?<SJ:"`pjx?WTfӵLT[jJD0ú.iy@Z꺮$#Hw0 $zMX{c8aɦzrO]YKYRʺOޢs&$N?Rʲ,p׍!bn0jsj'fNd2} ְg+Ge]VTm]9*K.*#~jF@")Z;@sVP8}ղ,˲8ueYa [hϱcq ~gT(QgW81B04RDZ؛)8%B j"NA;|Ro8ҨuR{ϗKksd&wO ,^|3Nrh"H1d*05TRu=MӻVd|8rPF[n1.V88^kӮ34P4sҊg_UQCv~sjؤ5P+11HZe7"S Hlj"иfbmvv8("rZ݁UyxMLz3O`"'3ް~oMGygS#px^t-qW-}ө\w?|<\o]Rj)ݚ/sɅRwnC_30LίUT԰ ,rm+I\W39LkJzG{#a"43q<1lnw5Cq!#Z4^{#~ 8X!:׀5aLk2 M$ReR}ќXjEݵiXҩt4BUi.mlF[srBq2stWlE)'KZJiDU-6cp3kEg~|!Zj䣏&< RkT5kğG)0aYGRo: 0! 6#B3[ײTI)#@#[I[Ñ[BZzEugBpy7BUdYR6pa3YDe]j'8}z{`TW̜o;9r?4*^v<;)|}p?h=WvdDkn ~@SUDezܩO&Rp4yDUq &*먚q)J׭wVaϣː) @;j9mu~n͠QK[׬0ZM#Iq;[+u]`dWH7u-fP5;s푺p }ZV3#蹏Rp{8n7h躪gBfR)kǟn$7/L->T\H攛/O7G1\Xdh)-OO8#n b?'/dJ@#o~\Nd;wfk4+`šSlЧ̬P{oa̪!BJ,@׸o%wy`K`ř׸m>1R {s5oQ؟3 JwnĒvlU#PucYN`n3)t{=l]\vڭ:{uRtp8LIĔL"U8ĉKY{ҧY0"a-Ch"h`3>1a3D,N)e"HSHsǁ9UwzEcZTp8Sb"5"*hČJ)8|G]:^Ikw> '@ қ~=5_T0q#"P]rLU$uj>Ul0z7|CAQ@t/6rD̢f< re]?? 
'nޙa):eWf D"\ &|6|+^jOvSPLM@fQr-sJ)P˼kICË%iW+؃$jUˉ:l@@TZo0U'u4K)T범ԩ΃s `|:N4ˠi-;xnzIֵT1q21s_/S6͖=nKn#h"-}|x><OeX}#gN9a8E23x IPLjߖmt3%NhUu'j=Ag3k c`+BؤVaD3lRk1USKVQAiqt4ɲ,WWW%jr/Ta +28JYN܉EDx6zMХ(L8:kM hXk@-~D@d9Gë947ޞ&j1.MCظHa\p,6 ]p[K@ lސ;pY.ow+Ԣup`ۭj?W['jжZ.eu&lKh]]j4ŇkIm7Te]y]:RJ1Z "nGV"!1 ÐS2D^+Z*p]JtNj㺮ٮ2眼]D)A)]u{48)gLK9YQʘ1ں, Գ|q>#/ZV5_{|>| #)Z%6 b Udܯ1clA1"]pwt4#ao^V̔a[6ϧ}Ny6}bW޴k|xb鏩m .w4 /})ZslM7XV"cy&f0+:b^dVD H{+q@Cf"6133 ̼ܠYK9;Wwʣ0cJܜ:[]﮻Ç[Ukm=}r2Y-5@a;sۊGZ4+h{VAk_ aq=.4vNjdqd[*&"/Ru4M83s)e]ETCw<8Ѻ|rr~Ir"EkN~DO1[K^TdYy^n߾C0KYhxr!OZiW]q+uQJ= apUI@nj|RN4;|q>ǿGoO|LC>p{ < 1j1i-E|r\(#%nHL0qJBR Z׺/9 anC\ UkX+3[.h&|~Ozz;ۆ׶\VRԚx$BQ( R&L4VbpJ%)W t$!/Fv\"b݅Ew@r&Qt͙gu]D +T0't r_w׍Kٽ!E?JwDsR08 uqr=M0*#JIف-]ݫޮ+$%#acGt}]J )"9aM_HLir>tY1m˶oa;nqg RZ?qΓϔq4H)uO.뺮VoJ)Ø.=@mK@SʉJԄ0z ̊3P{m^De%.Z DiA.b,ٯjkjj%⾝>'bS1|`N|}uuYWGTfN9S#7maNz]+" RDE!yl"*,s7I+n uj ӠXuO&ĿCE`Wl*kqϠ-ԣ:a-Կ7>̽|Ė`ps_h锭93UQ7"h*r{dkڊu,ZWpǾCAA:,{_(؝p?1m;j5~QNE:xGGWM e-ni EvRֲuLDH0ԺkYʚ֜rS@Ո\DQ%y8 ÐrNXM9[Xj}ΕH='<}ccIĮe>[ O0 C6S)eEFG?Ukq|>B 4 DJ*kYӕ=g|q>#/ܹs0D=eY`FR(|+D|+g~ؿ _45@7+/~Y?_{?W+o_ 7_ob1 Y\_]ϧRN|Hbjb_+r<|Ә`E[xU+ZkDU-iqyYZ+ܐZʦ[!DDٞs:JDc MUg(rœ< qƱֺ˺R*Ub ?%bXDXKy,wrssÐZ8ZOIĘ4eD`&n2jshhDvÇjn{"f8 k"*' }Z""f?Y^tuuU.jW#$`-إlV DvTRRiSESlVԇ8LJ3R. D([: {ѭ11\B^R!yò*F)X(D6>$LХ[uaUőꌉ: ")cJ;!\w2'Y+%>^[@Jw0ycd ]UU7jL1|vE@.~hv) l6!!]Ơ` h+8q" ww6q) DaZxZNHg8@t6Vڕ(qbJ98eyeW2r]yAtǜ0儬[:=:"5'=qp>p>=g ??YknGW>~g?_̜1o{7=_WID~o~K&/׾/j**׿o|6~/}sj.eYp$Ac7-,0iJ9]LZk%b$Jݚѽ'm>䌘LaZSNzu]S2lv]x(fdyz[zXֵ;kMј>E+6cc6E-RezE raX`0 [RG IDATpyqz:]___Rk]kVNjon]3?e-R`YVw{$֦0H"lȔ&b^am2^|N2fo{n]e^ui:QJހ.lک1`,M8 ZE*"rJ9üEQi׫ |U%x,BDo(]8nbԮo˲~nI@:$,rX}6\+8 {0RThz9NKt5ʫ Ae[!t>uuO3иf+(r}ָ'ĄH2ǝ&~?)e8"9>&@9Qg"Dh=7Vƿo+>k)a)Mb 6od-3N.ԐxDmh7ЛS%ʶ%QNDprqqȩ WD F;3q3M{֫`qKYZJ)=fJJs>G/.. .RJJ1ScFrզsJq:x,B9'j1DY9|/,Z?MxoO} %x>VKi$3qa sviDf@Dx7>Kj5Tq b4tXk䍫l벮>f7u]}qYkMJ)jM4A@~8*q 0%wfDjvD+e4N/}]Ӡlww<2P: YͰmpOJ)9"ή .{e4{,e]X`l 2Br^O娶]FS5"b5Q*9)!r.RRJ7"'Nչ }WyM,x(h8a}IEԉ}QbOp802Hxiܤ0 yP91SjR*z)&xJwmiV /Р@nnU <S>-`goWy< p.;wn__yYְJ٧i HNu aJ9fj0hTm9c7*TS)LlMS3B@O˺ua: <:ҕS!AfR<{Ljʺ^_] "yV-Tke-@#bF()S۵8}U"rZașSE בk-~cq`TkOTqKqH'g:HvLt41FU5Qoh7\}32J D~_" [ֱּ,;t$l &5!3Dp+_W7ǰ7> z/dpoYmI"k%oagq&v ^HPB"갧vvx|l]Wt/mF .hIEvt+#8g]׫w?}ףpaJCv# LfS{p eu-:[OqKߞvGcV9%5A(F^Hɐa|E.kWZLLs^湍s;Z>4!!M4K)˲.WwxqS1z#D9oflTkݽD둔bUMkZȀv-6xj$=^tsVajNsc?$ XØRrcJgxTO{)95F2/]Ny^܈W4NJUE5K}sD.$hW-ɡ~a8|yYDEZDj䨽UpURGT$zZ"R[9M[A`b(5$Ԁ`׺J^}4j("vG  -K4έ&JOQ4][?' ES-RT\lg/ 0 0 8圼v]":xQJ\T+bT S{ַf0S@]sTW}/`lDٴ@+E{L*jUmD_T}&r$0רo=jp{{RG&vz;uEFp\2GkkU[ydMAov lHQaX`{jN&F% eSC/ޣ/8f߫yZexLI+Zմ$|zX X裏Ðvu5gM˲.[n9H\[P lov9<][) r:`(aqe 0_k505%fNaӸӝ+1[:i=yoQ,iDȋ?w|єs6_+7+__??[(9MZE Xf{j-ZJr4Ƞ[9T=mopBH$L*T)JPq^iL>_ G|oTjx8"Zʼ̾؃٭$@9%]U%*]̢m ķ"R Ge^ZLuS4F宻6Π0aؗSι!SMenZ!Cگff] #p&PtrέȜĝKT$01_h>{z&?3]ARj00?!D,9;!il)VLjT/)Ù'RڪUZE,VzJHR [T! K)ֆF2~-k(ytZJԞuAKTȋ_3ERXm\iILE@<$@Xy]+t'dTjeb"TGh ФR(.-y]l7a>@c]N4-QC̞ыedwT1~bhW>Bi4g%nHCQ-N"g?" ݪg 2kT^ 1UL=g :9ÿNDqFi!Isfc!t=MNqnK\&$ʶ o!xD'hD o mk?qݶh%a-K A ?fYp' 3{{y[>'3ӈ.DDe9޺=^Ug%;a{G}}}) MĉAeYe^0逈T@t9,%G5" H J"f׵ףwgO_~׽w5y}쵒O=^~w~>sn߾}^oϾ^]J} q>8wK'ԄĘCX93%t){tF ** z qwmK5o~+ B&$b <ͧtZ"ﱜc62$&8)Ǟ'!{`G["E034䁨7>vakTAb 0PT| 9*2̷YCa9P|Lw(" "Foe_DԦ3zU̢RkM);`s~7p dPN.h>;, NoCL *߉/ܭ'^r rf%J-IC{i{iʺe5Ʊ`J &w8/:hƜ3Q\B*Cћ8U촮jW"朇Vn0/]B7:7ЪRȴ~6NJDRt v;1. 
gIgV !N SL<7ymm9HHU;5abe`DH 0&a5r 2h8K Uj}M'@ [q4`-B͠+'cn@`7l /eiD s//V޵`mDDYphp{|g]TE~6Z6{{^>y}u=pp 7&繿ňȱ&iYvQ-`j9{{.9;sCSR>Oܹs*ʐq10M]1;.^YR"Dǯ ԠThRKQ"ƿS?S?7 Oovng̿}yy g/x?ok_oLW?s~!8H'?w?U{oGyv'5s2"eG^,痽?7_\gbx ~Hh:̾28g/NF7\u%OJ8 Mis7"EU;ém.?γ躘 }̄%v`[(1h%kF"f֨]rҟKE% J}#:;Z̴@PmתZ6|G4Rlf@愤hn1ZzG.3QLIavm4}ܲ͜6nExTCI3L<#'VӺ*49>3"#f^AKU\>$èRh=捳y xsǙ/΅GBV71vІ:65u-pAI&b1#%bcM\k&nA&F1K 題ѫY#VIwM\Fd"1cY[RO-HVըGXEzkq]0 nZtj UBʣb8!=]iy5lLܒeqm˱]k kzq?;ީןt.E^L$tً'ۭ,nDU}:dӎ{f|jeY...c5C6c v}~GiV6qlPJ:Y}ѲaBoth  rO1g~틞i|>]j料j3}K{Lo~?+/T35WyGcҗ<4:/ZwTi^׾.Ķ7v0x |A"o}%B-kYօDIS0 >` ;蝹 p`d]Rg6 񨪧ɫpy'$9b0z-"j9;ӣ8 x<.Pt΃90.u ּ r)!U%rV l̹ͧgэ_x-1.s9TwMϡPv ѓRrUDOCa67슙 trMZ!ZNiAUUίpú.w].i 9a==۽ )S%JژӐh$eYTb~21bVzfF%jT~][ 05ChzZ l$2iC3'şv&Z /i l瘊IaCNyGINE"ozjx*RZrqTmgU7D*唳W&^Ռn*@Ӑ}1"ս9u]چE{ Tۤ9Z7]T8qN9z  DNn/Q36?iƊFoAdU3K95ZyW̤VUTȈ;C%)єFD242RRϲQH޽4ZK .Bݣ/?ma- @w)˻]ƶv^`Zﻗ:ɶL^-ͬ΢ZrԸ/"خ8/d& _[KatiŹeЗC60U-ۏn׾ԐZ׵N՝;]j4LAR9ѭ[҉ZKxLRU!S~WUx<Qֲ2'6{}9G|ʯ/no_/_{_G^?Kʷ|}=}=̞=|iiJ)өV!ħ>@N QÜ3!TfvWfDW|W=`^{,kOD2/;wNJ*Bnͻ= *08#_hI{x9ωC+Nxsz#&R"<< *TӮ  q|ċܹsg]:M9 1CNxڙf`"laWق)E~r8< iHsr6o`h`^SԴꑙET}>R{){Rx`G TDuZRk]UD!A IDATp8Nq+G3qY R#>e. Q]__Rh+ f-~8:} Jܮ.{ja[.o*2aygj97Ny}_m-˾z|#_eK'xǯ?9#mo?|> `({GZR߽5aʜQ@Ho~^,b,”D|uo?zu(4!<P_b8G`z5?\ HCNCBFRT(B)Z+Q7C2HF с9!z8؅0]j*Խ*BDȄxt b[4,IEs9 cvΝt}Njq:tsXPx PZ("`6{(;SZpNZh(ZrAW΀$ӒrJ<#[>U݀DZjE3( !78 )E'afk/4vrc@EQlyv/ JΉ3#v8U*bJy k=/N1{D-uM!%oZqr:eYvKؚh1pWjހ’ikWp a (%`uI50+ жUq$ [TT͔ j\ƧI2HMV]^DZu9͝G9ryDaUd^+3D{&`LaO9\3U>!'Sq"Iq((* Lw"2 QT !&N RN9%* `~akf DDHOYҘE 8qʹ4v A T@Ԁf%;?"SvHJ-&]^De50OU*ϰKP4/4U]BFnhPZbvihXh6z] f #Aۇ ûgi|9v l0յTʪ&'QOSo=I Iָxyy9 {OzgD &90 hZJ)H- @L̖V-&Ui`"ׂiuyy4ZjGʹ"֥ZqIu9hU*rQ1.C&裟\9qu8Gχ^ozw/~?;yy7g}/ү|<H5Dy[TԢ{4 q[.apsy9RyӞ47K)^>!onѝlisE%SN9%KIO =^EQ\ѐd[Zf8Gifa>sp `.g... *kRZc&eta= D}Ĉ -gAq`)[qZ ̜x©Rֲ՝)NhMD9'0"ru꡻`aYyЛ1VjH4֊QCQ%aFfbN5BPاmͫ| ^TLSJovbN t 5sTe9Z0 K9gQ]5sb4M֔2˻{8L]j-TQoqJ@#5]mEZ16oj?vK)kj/{oqs%Yvd2EQ mظXNM&[#"E 6$*S5L%YKr9q%US+7{XHjlYēwow3*N@{;C|^8۷oOIØ ^JZYHTo{"K-2U\} CV;CCЀ eS|gf(g/mQZ(X}F.F VH *.ⓠ1*i|a OjFIuB>FdGj2eœrזpr30L4dBGneC% KޱikHdrtD4cIX"HB cC##pV}8F cu]mD"!kfzzW_.KQ$`+"vxuTCb `%2G2k="x<r]׍WWh^z&c2t\~{>;m7=s竪Сd?O"WE\4?_?w'v~o01I; nذ>xcԄWo!9gL O?p.'OܲeQWFJ@@Aڮ뼊)tWRSnEթQYc-\cDN=I6h$CaI Ģ<Գhj=5^˚nk;>'_A4dѰ|]I- P!xF"}1`2;KTrNp"9sf}%o_}Dɛ16Y挈 \Y!AEpyyyvv3$C^ڳ{UP~#RP䎾ȧic2%"C91gtO?w۵;KH]׊%[GcB9TD$u!kT|\҃P}{c[n:Ƥ+P!BCԵ1k۶LڽkMsNddw޾*#GN3@^c9|S啻lU CR\kK;\Sf3McR(.~F '0Ab-,c1|_ݽkXR irֱҪ 1F=Q@Ϝٶuk%"a*%B d(FHc-f!h'A GF7 tvؘQ:7%)Մ=((#IFꄪA2 =$C`FOuPTNb̛]mtvEQ99# uzqA M gM+MޓEM}wSV73t 3|֭[nuő=;]e ABZ!YYS A )FF }0ެ1VVWƍ9FXy T6ZM|)*UUu>xzͽr~h3YNw?o,Qt///Q+=e&c2&c2& 5vƍϝ+ w]ѳR &$zA+,3ؾYɓ2xMz.b U=Z8HHzA |eenRg]9g] i-(Hzf =&zK0*žSbj3gt^~M33F޿:-=eg-AB]۶Mm窺EԼ39qrvbJLkZa/)5tc7yUW99frjYb%.ʰr: KK˛7$#%J$%(N""T\woI,][G=n'(w.c-'C2:\ctY[A.kU`A5I'ǖ]f| SOm=V.t]S_޷o\6#""NrH̆'_!1n`l1"CKH "hy(i׮M?oBqycw_8Du*E9g#DB(X-Da1A$r?bQq$>;hqsEUd$%4s,mel}0KJCT5(q;/^~g5{+'\@0PլSeY~zcEl`՟Z{ر['n"b$ܲxɭ[^%Hzm}TQPs5dk 7]s|G +G2LIm q` nK KzZ K b\hr9"M`R+H8_ zYfdU^+D:=?QlQOY0e)p V袃{Dk{\CfHEW9DDR誊9mb*u%N6QDS*VUc֐!Gug:BA{k\UUuU:ߚvRSzt2&c2&c2&M>S|E1FnVNJP&kxeeue*J~dusAv<^gkmUW:PMiPJ@]uuS9EMYT6$ +Py| ^m8K֐uܹ뮻nffΞ94v/c򳴮릩Uz=n"2;;{I~uc!Ph/=UUו5NOڶmJuCPUi꺮ZܩAZr&Aqy̦7ƍϠsҶR8\Uo)Б!J"2TiTɷP*"p؎;zy5U%-ȅZ!"k*/w.V-/sUUUY4*s̼g+G@˻vmz術abXY]cP׵u}tmc Ќi1>xpj׶!_!K̶v(iBSci@Q4X浉H,57< %Oy쬱f4lNBR 9*FShTW|& YWUQ(gNNX?NI-X8߶m:8;7=WʂA 5JEfu8Ȩ\ zV.6^#伪 @ 5#,rz;cZkݶm۟y䝡 XHs$Iu~B4B39w dZ# d5  ԭ\H^K78Plj*Mb5 5,)/%M OPVOJZ{br1OOrRw9Ims}ԢR ) Y.6<I*P.tyKdʈ2RgeINx1JJHd"!umv]b(ٗ˖vDpkݧ{Z8x>ȑ{#*ȑçiO;{mgp4XD_9{7~/|r< I3OyD}+G) I[Jj~=sc?yxlY:Yȭ^a"rsz*C{'?z(}m^Sו1z%سgVmD,;ir>u[W=m2ki|ǟ~g$" j;vlW+;&CVB(IBm۶c/cSN&Eqи"5r ( BDC 
o++^XB,+.\RN h$sQoLE3\le@A r$R)ogR5 ?pF(sJiF75^~yffFŧO޼ylGN;47;nDZ^^={ܵ^2_qXgcpG<n_/ j!ߴmkl!Ns""> zfg7Gw1çu]w욑\%<ٝ;7~Fyv`Ήxѥ;vmaw2e}1 'N(~ $rN/z5֭B Ϋ#KwڬXb mU~SOBT_*2{;wnDL!"\XY/|_w9Wc>W*q vmFd=xm=ċ/ZKO={]۴ 1>;v}?w=;~u6&HzycOܷgc?,g_Aj|s׶BpƯ_&,s9\ɯϊ{~Y<3ʖ"O}zhڸ2 IDAT$l&*"& qǎi1ƮmWcfN^0e h@ Ř̓I;N!HI>h߻6F T6,(Z\j01je8+<ebQdV̑A#k]<0 a pަ ϱra聽W*F<)qlw@ 2׹Daka ΰ bGPCz($m{V'm6fftWAu0"*LɞڃsG2? 9k A ZE%-gz\U'Ӝ!ndE£ߛdr}5SS6TBb3g1!QYc31=`zAJ&z%뭵_T35R>Z};brx1ֹj4WUb۶mze(D"#W'c2&c2&c2& LMO)Vbtȹ%Z|VX4몮 e Z *.i]j ^Յuu >HDPD" ,05m~eЈֶm+ ]6jJi:1$yYN䲼ˏ`iiynv_@ăOsX873wO81<%۷mkpmۺnԵ(h EkmTR^j 7qڼnjBRu.ֵW5z%X2~F sssOӧfgg"-w1{ѥm`E(8R]J@{ct騚[ 𥧟馛Jro$B)!zvʡS/ʂi:Ch{R'_[C5<v.t֘GO|7qwKvܸn:@6D"ڻw @>{^CZb6"!8SnTs ߿hy'1 ߿9!J+߿`lo9&W={Խh929E=g#V޶'Z0S573ܽ ZkѯG~SDhUh ^3mI!x+ԚKPaߜHMdmS* '"9@Hƪ*UUPǷmݞ0g38gj-GL ?eW\hVV"4&ÏzYbl#S;YKu2x%Z)\oC~ AHMVz~d-Q޳8Coj|( D;K 0|4^IS&'  -Lz@ E);$\c[*@Sa]|w cWQgB"%1̊!|Ug~ԔQVk5Xߓ+0Yc@uXRxm[>ZRXk\U[k 0ʒܳ֒qBsbf&@}nM֣111ovdzj*ĸJ؎[Ek[:3釮 huA%ť_2zzV k_difjj*^+Q}5OE@DFQӌm]WBQ6|s_n; !9QX51/xёZ\\,^RN89;yiiyvvsxx&]:&2^ܱ};&cafYO^RA͛gu]-B@kCŬB Z+ rD&PQVԊQMB[C]rHk٥Yv?RX]#.bbh[t}V$ر瞻~晟Wi&ЫW=1$FCn"jiCT':#}睋>艈K PkԎ@ !G[c`zzڮ!/;MuU{ѻ1|~E XA54BY}Lb! ?ڄ}r?_L:}rNѳIB~v޾!m4!"ZX-":g|=ƘQ3O3LQ;hLy8(̪$*YdHoફ')2(Ħx$d7iAB8Mҙ5HHIc1&xCغe Ǐq=Z)f j`\MA;2EEM"nyD?%_g  P@CB_$I0tiY@4@6d5A1dBzFDң_rk֐5T0AH r<1NghfLأ[d5%#!0I5es`&1TU=55\%+g1(Q5q45$h4Zu댡ܸzhiFʅIŦ`D#c"kEf5 > @z"c0GNgU 埌7291:aksU8߯JrzC v]}|*]'Iޜ0E:E\=j:ċ0TQc,\5 du,q/sUeu]9,wVٔ ,""]wZj:ssO[=um{ż$=Bh;RvGU{b쥥Ņ"UW+6nVɓ !0S!v=v!tL%8$]zwYưK| ^z噙1S(uU}s" :vN~ݳFzSJ-NTy;v `(}4P9"&33ܹɭcY >xB%ר>9y ]Xu[ p;7<=xry±"z5QАh\#8NoC 9fTg}D">>tmF AHxݻPpU<{ٞ.D:C2hŲV#ecԺ={T\$p][C=trE䯡7PN$v@Aǟ-uW1MSs =KZgYDʥMMS}Ls 1rFLDnj(uVSJKޢlqAF HZ" cB*cTֳj MGe@JHJƠ1/C Sޒ9@6=na@5hdU7@2gx$c!r̂pѯ}ю}r驚YI""ron4f1DsAAz`Gz#N"4:=%kl ,*P.ѡ[D/CNA7qB`@h5WWVE:>@UգH9_\b6 r|@&B0ACr͔kH«٘?v%TUiW MEk1rS Pև2J7r+±sY;A'dFMdLd-NڮJJe-ƙSʹJsgff4BE/S7\w_<3#>`?u‚B;^]r'ş_̮g qܥ^9 yZK{Ztzf|ڠ<,gI 5ozzzܶB9/+w11xUC(\Q&ܿE85ݳHT=4Gs m;fc >[nZWWu矻̨iwD-fuR ;v(SV2"غnT3Fmݺ}m[%<ܳ;(I:KD2QNS@- H%.$8*"@K䅜VL&38#XPa_5LA#f-W֤U4D4`H?!f4a` 3JrNI4ПOeJq2T-$K=r 1(^/XDBz4G2 MI5g!X—פ'Mb%XFr@O€H"A97jFu]gT7oX׵}s!@>fDh R5$ !p"PL)Z%NQHDQuW";bPPiQuV'z“u2aMdLvs/5^կ|o} #+|>;;&ݾFЖQ zP˶շ}R]9kꪶւJ@&kffl͙wbǎ/~?x?xʖ#WS PhMX9'9Pjȏ HmX*M783g\u)6RL.2S&=/qܹ7cr.l"/↍177,6m,a::F׾/+|KEY1 m\p{ٯ`hcu+7ș 6 mnАc̣G ݛ?;k`MjڵǗ[7;v>zJZL:+ r ]@m۶;4_җ> fxwo;t;oOdGqY>~ڣG7_%7|g>vmr ]]%*z.gȑ92 ۿС+? 3F#2f߾:o{=Hd;ç08璼x<~ꩴg ȕ\z'v콟Q#D!ـp=;sώfDT^oߠ@2}{n'S/15MCyA"l#$< hTHΚ,D$x)zrmeln[l9~8ؾcQT8trmt]P @B眳@EdeP q5 a \vxr_$QvLǤHʵ2l@i9 A(fHNx!z |a} +^ dCP%\j*I_VD@J?-NI[ʂk[\MӰp0qk|癥NÏsN0N m0"V#$PcDHNC!uiR$Q-m'TXGDF"urq)fU絪 oRNdLdLd9^~{P\d>z)4R5*Bu5Y9/,[h$󈼺z\X51Z{h (Dץ]jo*WUR9v* 1c\nmo]%CGkJgT5;d k ɓ6m >6xX4xӅs_k!Cl"ȱk,"1hAYuNvJRq> 9|XbUFH4^+pҮ]3D 1<虛o T5&u>05=U9m;ku@V/vBuUSOOkڐQ!,E $(O[*c}dJ2sumvXIYKIK3,Y_6$D.~@dS muTu\e)9;603ƪPj%/Ơ2 ڇ'W_YY]YYjzjnjT<ҶmX1fNMM[SNmݺ0gUHPSD$dS7ѐюt_nJQ|vԴVC\ԚPG0ju3Oo߶Huο‚)k0%I2gEaYBSU)GM*0\UQ$3N ,e+XKn$&_wgUƐ̑`/М5F__HЁA@-E8٬O7V[o]LJ R0Gb(Y#8,Q:28y/gβ@'B[O Š5I+d܍ǫmێF-Ӛ'"!t1~ų?|7$iꪪ+XEpݺu^zc?wxGfl (#Gjs־+gYYYɷczzzϝwOda:11!c1h j]%RTB>jEu:D<;47?fn sKNlnG2"ιݛ{3粀UoE`&C8 (jpHyl" eȜGti?CDZ)$A$g0Ʋ ζd-[k+<#|E/HAc3T +mjp1&-|AHR\̨o֯\".x k"ڼy['},mm'1~v9Gz$>6Hds bg:uUQǤ,Q1ПYԁ/ٳg7l6h٦z~KEn&\㈀ee/ Z \Ud(:Ro9iw|HڹژHɞR@7c R71!xE|A@k,JE)fc @5fCYf;N}2M"WUdL|𕫚90^+XT{ F %%jUYkY)jcYM HYYyQTDgaZgDC82>˹C158Fv`10U\*h7ٱ}{%<;vlwݸfCPNX3$e Zcn^և :t\YK_Ҏ;TeGD9Uɓ'w$te!!d 2bDGC6I'80 `R,uh; "@LdmJ!Aћ"fk)  +SfUEDAkvAqo 5C`1\Il!.k_[}La)}FtKr( b)F6 )h;_ɑV0ǓaTY(KւޤJQΔ!vt*愂Mו CvB(¡k YD9@Dj@ "YGSSo{O?IDh뺮Պ%WXMkK3Dmujg& IDAT ƿ;YNdLdLdI" 31FS#jBs9x.C5+Wa󛛟QZ"ܬﺮ1FuhT] ^nkKFa$4&Ub%VSQEH.b+VJ! 
c ,LLD,xo%cR^2 B!<)Ae.Ct䰈?RdS 4B T8(sݘ{y{.S38WGH??nPD+[iFۮ D'&d%Zl-RaքsssBه屵UN\< "ITOBX|\5Ys- `! [0i/ȘA$r,?Q!J7 a !6r1WDIS(?뀅C6uM ڶy*u(@ km&e`u2\bb[MYD8uЬ7Io۱ƯۣX-3ǫ} i&o5 )#DZ2hZ[9 1veˢ1F@‚Nif! Aee485ɨg PȦizhNYA@:ߑRXDnDR˱<*UJϑc[,*|LdAS' r\ij)C`g4]YƍYRp{D}R{D0 @L"1f  jI=PJ z2L37曼uZaԏTC̜9̵瘤+%EygXH@oIÜ͊I(u ;L%9{ !ihQ1OɻAxK 'G(\-e q[2&]_sL&v]v}02jݺׯ_n1jFU P A;JUERGQ]&&111ozĐ9CDkgT )y޻ʩ bZg0%rxZÑ+CDFf$Rl ,J[X"5 ?1kD,?SWXkJVٿ}ItW JiZW; #GɚtJH6{pQ*,ܞJ"v$H[ƴ1jύ (`*gRfN:8?O/eh_ݯڼtx]QJ#AffJ*0v]H茝5mۉVDڶ >1uU9Wb`P+`JUz g!R <>{ӇO+B,N#*TQiOUU"aBtʎ`QPj$Jqf-4~ΆEQ !_1H9V,P./-Sm a;8n54m>_Ѧ+F)5jY, 8&+15R8e~sw5\"@d\l J&MM!a 1C`N΄(R \j(tw1F@cU՗^x4uYCamg* `ȸ 0G"mJ8([44 )\L."sJ eNYTDUKb=`(12NѵD;S8ɚB|% & E5!ތC'9HaAB]G䫔P; .!(2g OTh"(H+=WKM ?tlnzJiϺ)P\Z`ƬQ~_XulQ֪𐥥ԏjP;[U.m@ bro}&Mƒ1ƐNqyDtKׯ7J4x6cL4wy: DP j~W\1\eyy6o5^ADΜY޸q#%T;0Dc$@6"ypۭ׌x5n#}ս{~R6jM@5emZa>WWO[lk 5wx&3:ujӦMI'20 I#Hr4{(6{BrzܜE9rARvRgĤw#WU|ꫯIb uiS pd׾X9%r ǿ'-"H̑3o pT/u5c׵":sUUW5$Sm8=Ne#- ΜM Ν۰a# rZ\'2}_.]}葥ՕՕUy?گ{O=2y1jN(E `^HɨsuUWceeF%>  ( 9t%ۯm♳gff(тJ{y)xm!1\% 5.Zi}۩%m5?q|a~!פ{4cv,M;s䫗ڻ/\_q!vi!C{O=|9W(zV1ԝbPN؇ʟI*kɔXcʼn+6p Mⅉ>&AI:)WX1Z=BRu +CU mAzh6nܶdLT*u8KD!D}=4uP+ѓbL+=B Q~m3Bߐa"U=|Pfa"D&[!e8$"Le&F$!A~V\y`CFUsןHD$h&NojDI$GRë  N10"7~G7 {M ⛌\:H}b%;> HpMXxc0sH#B" #p4?8e'??Q0 y~ [A'o(ENi,,*RBINqee{*F^zIUm׭_~jj*B[s殕 ;l$4E DJb m۶zĪhu٪n:ɘɘxB'J֎]~Zls}nD"G@R0QJࠢ(R%HR\BmUl8Uq%"٘ ~ {Z9vS4)"gg 7:@SfẽQJk6FPH)}s'd ܆ɩɒQ7dGj030 !ѿW>_`$TZڥK>("^tGygydUݿhEs3kHvj>g@ @E!A=QH7bÎkI"uRSqp:ƮEYOD \o7:̾1XF2C^ *'re(E Il8*$z̄:|P=E%<̉S;'8mۮb]TZSZ`b-8)EN(K8{'J),Hu/)1y1u^EL -^R춄*xLﺝLYK @@H2YyȂIzo:>3 2i&1EJ}p4@H$qǠJ=&J(` CfY?3o(ɿ*aPˋ׭h )P[htKƎl@ɋYm'KyUp[B!PZ\k8 I $w8zƨ=MP ]{7DHZicX(Ћ@}1rn^v R J>81!djA'mmmmmm!C' 6F}.~?O=uB)EW?;Eߛ4WH X0E))҅-Hi{3dT7M5*eEHhwg[L@)\*/Z^S1J,zW_8s/<|^T~uлB&Nʙ3g5IRڲ9AJ|ٗ_y3pӥK Ey.>z39 ŋ_dn~?_W/>s犪w+ Ν%1KxqI1JoA>%9]>cusA׶֊8cIL+7p⩓1"-ŤRZ+#liR^<7H=\qn kQFkms}XH =GR)I| u LEJ+! O_0}9ϙX$1 c!]W|YJ3zx+sG48, y) JLDxRu] ArSL'[XRJj} hcВhDch 0(4e`׵MCY23u19׶&eF!JXj1N8MƓts(%5GΠ4f&T=! 5Mxh(p!#UHeZR eAL1(@:QDID}\qJ1)@R؃]wSmPIAJ8L+ R!Q>%#9&eS Ldh~' {oӿdmg@"}Vhm0<}"sׄ;7X7ɜBbgL G P'ҶԊ((1)œ3IcpĈH! hf )L)j4*RQeYE_ HRx2'aI3)%Rq֦0E[F-Z͛oYՉ{{{{{{{/:`~ɓ'w}b<}} |3g8q~ԩO}T9O~$zdgW_k_܅)FA!)!3WU4]5qU7 bn;v\z0F{{@>>/I{IyWO>_~̙3}~/^#ofD݃<o9sFv⣏^z@3"$K.]xҥ?u.%6Ƽ\x{@iH]zc=/J /9sĉ7_3g3dgE,Z+ E0:IQ+JO/o{18_ X NJQJ83H轟?}>^;ԃf/@Bcw?4mc1>ϟ2}yA*)_K_|cV Xl<_y@!o_9`r??|.@YH$,eC`Bc.q& Qt12h ̔P?h!ɨM׀Wb`q>" 7҄$9MB꜓D)5P"ckҶeQZk _*FqR 7BGarbq+%"C Jim81H!! 輏!( "dmFmCJ PEӨ1xb9-0fNu֗P""c\s-e(1}gW]B޵]u]݂o6,zĞCEnJBR$aȦEGHцuL Sͼ m )eqιE^B/+R އVJkHR6xջ CLM*(װa*} 1gĭ6H'[r%@y {Ɵo4w+Q"$AGxփ$BdDͻ#9'xfs칞rUuQh4`ń^"d HtZE8BuoBŴ6Z;Jd}jruVZ|zf @"XIqԩSo| xd~o=?|,|cJiR*aJ4%O`~|K/4'y7Lijִl 64)OEQ r >s`WDPV^>*EΝ{WΜ9#=ӹs r<; gϞ}W (xPaIyă'N8iNxǥH9"CE}1DHR>^|3DWR2T*gXLƣWڙϜ1qVBZժ,-ҜoezJ<x5wG΅g~@GnF3b+HZ0":]׉S'_{="F 2{Ʋ0 #}_?яCJ콓 !/ /Wx U2-SO=ӯE%\?dX@ $^ 2)^26b!7$6s*cRJ0los,~4TTY~2$aCmۮVKf֔eiAE뺶ilɧsZ1&K)><`ͨėcBdRXwՊV17c h4*z,AK1E$ Gu]QQ=jJ)U|-|F[i|=EaRwqwYwy^F)f6Z~\E@׵-"k˲mfkkSJjX8EYV=޳5V+sMEQ5(UIOcJtiuƍ7n!FǏE"RZ(9Wk._|jF Ԩ:~=sw(ŏ~/b@lvw<m' , [Ƚ6d^Z,qY!b4W]uTU~$Ħii۶k;)VBLŤeH)\zwwwG|cG'w~8ZABL:C<1ZcZ0e}۶ 2޶i:q,wvwlfOwĥ,Ka[|˲84Mswooosy9Zy4MSu]u]V?R⽽7BFeYȖ%_|ix|vm5rlvw7FiÃ[?wueYMg[DJڶ{=q\X-m9uSwUQ[xwwb_9eoܸCD,)i0<5'l΅m;GeU#E9իW_~|C#G}GݛLҥK?| {{Glay睫ꭷ޺~'뮽,e۸(w_xٗ^x"h$5iՅ[[s;],W\ۏ=jikx<>[U9׶mZ,^,ѣGLSTWBƔ̫T}DzX?kZBʫ^)&B@ 6ɺu2('(nZmӭ(!\]-XjTUe >8c̨,ٶi~~]wݥ [x䃔 O>W_o{zO_~a7!" 
u9kDa\x$ནgg(,"rqʵl+#"%J@'ɞ"9VP'ps o~ s^SN~Hƀz!`1?Iyg=p4f2,%xᇿ7C( ?h74{~ĉ  ΅ _~3gNH^ep?+!yHk~Gr'j?zC/Kp{iHJQotsjNmn(a3 )K`~ˑՏq / >qMBNyg}'W!J@H[8B!_$eeR9rW0D"*R ni`)ҤH#@Ycn1J҈0s<̗֚#bPFN tSTNG~hr*́ȳj#CzwaÇXKrQJI?yVM}KS죐WqELzX U%gPuM9ZI6=hSZRp!VmyچRJics23'̛!l8#>Z,ĥHcvyPSޕ&qҜyՃ+ OrI"?́"O)q5>{,v@ջʒR9Z,"2{/,‹#4EY_]%GƸNT3Imdv!_6'[W?Z A6s>#L$Nb "ڃ%y˂}c<$;y.sA,R90Xx<RcEZ> B ,M4D0&6pvO֖B9[&O13}6d*c1{:=zNqM(֚2 B!8!mڦWk{x( pstMSdBH[gi,1Q % B _XU8Lbj9'.d2G#y\))yne9*%"1EU=E-GR\#V_O+҇3ƔUYjW+E5nmmF#yrJ`@$@)UjWuy),z;j>PZ$]Ӷ.8$*QUU!"'ZE9C·rٶmQ)*˲i^lvߖ;mbwdT)|!)%" }- bR(JkʲepmN!mPˢ,Gֶ5w!R >"ښL鴴"EH֘h\X{m~:ʢMTUU6Fy4Y׮]j< 4ܶMeN\EUjcB𢓭zXF#E#C@5(4 mq"z!I4)Rz<U4mASƣΎuCEQvm1îd9뚙d2\9'2s۶]1sYr8%GFt!t9#tkk&3/k{Q5M݋1QaJlg{bLm׸0FQaZ.Djwwi" 9mۥ,BJL)i# w,oY@)ԤI.l1G u[-իλ^[.s(l 1"mwܾ냫:D7fmVȑ*Gjpkc` Qy9w8_v͖lg[ 6NIi$u3[c !%iise6m]eY@4ƘTȯ˦i9!w]'aJJYJu]BLhyTUU*Q!ë7̞ZU1:q;4̈́!ιW˺nf1Ffu{DXVsn2=zT">^7O:921bR3>?|3>#oERΐIYޏR1FkƭW\;wD )D#˪2T~Z`LY%m&<cw׫*R/W%{0j_<}u,i]RJO!ӧN籌%AI܄@)યtpoEHu۶]hC~X,XH K_AjާcP}o~`ʿv{}0"6ҹ1fcJ,&ZN:cL1SBEGo bᛤ\皶qNt" cLwO<%Yh}}`( R_␝/He l Zi¬\SbV' `b n$n8WM"iMóA e)Qzumh#qPsTv}dL-Q T$)(ְ\V 9H)-LגP Bp!U)$8>|r .뺮]IXhHehcBEUU]<,>pQQEVURiۮѢNNGKp…̕z*;:)NAgžg |(Ґ#TN7E} 4niy>a`)RE/E? :_S??f( ܲ? 3_$2>7,YWΆE GA%`?"sMlR~/&~6DBښ `WB!lܻDlrCy^c$+s=I7TIil'(R&OFQIW 9\E1 F\H+E`mQ6&1{HkRJ4pX^fP;v]R{G%Nژw}s EovZMMD;;;BijclVJ_)H0La0%J,$ժ~ pcw󶢼{*;8 !ei ;R,`Zx<,\4!m"X~|Q*MJ n75n=QN' իW~w~'HӉ-+׮&HښlKfپs{~7W b<F#QZ~Zc"D|JQi1)F) ڻטPV֪ڔA$܋\dQXBjm7ڦ)l1n\ymLQjT~]׭ V1FҴb8<8<xA;e3")&,urr]blW)2R!b^7EF$€'F9r Hi弋)h+MXE!x׹iRj:2JCE4D2G~7C/|O}d.AHS O[zuk-0+֛YL1D!$C {{4sJThǏ6S{H]z͂l:$D}ohA'CpAgaH={9dd:K/?C=)XD;}' !jA.Y׵<HJM&yO!mkBg fY}Bk =d\ 2?IiUPIJiDRLQ+sem&* +;Y@gFy.֞`b HM24B( tV&i%'9\"xNERJ'x&1lRSsxO|I6CD9X'jJdM䦅Dmor*!Jo8B93ʔ`%nn,YwaxFgF@x=1h|_ o5C PηCSI{A8`:<01. Zk kLQ¸"$DV85D?1ViWID4"0ĔDCNmt1G>zPs^4` id"r9y/CRP1'8*oe)V%qhpiGJi\zG-aSHƚZSQV1z5ŔUZi+c-T>1yέVuSd/LJI"E cvjǓDڲ(QQ2sHJoʮ!buZ;9<0k|c2;zmr)h2!myZsއp޷M۶r:<ڵwc<6ZcCĠ@zBYιb 0BݝmmMSk ffd2,2m[Rɤ,-szm+tAzC@9{ȑ(Vupp b.Y@ѰbqضݛsjUei#N2eHQ!h49:`Tab3dUt{DlM&b"V3_i<϶w;-(j>ҪZ G\kF9d캮\ 1Zqk*'\ʹ(l@)Cs1Deޱ=|>A8c !{"VёQ5%7=vh4>rHu7nܨm[z9ZcuӶm8`:Nrg?؈A/;E6-!ms|ڵiwww;=1]c;zO&SRJ &xU6xc@EQfx,oAVchYվ8E|HS( ؟&r:zz;BR&u]u},ERUef[ӭkWqFk:[ne"EuSs`Gը"|绘6LqZ&r c#CιÃgwbNއ2#֦("%\)(BMbj<m-FnuʍYiPR|oi\0UY:9s̫樔o߯}g!?DO ;)ĄO%|_̔~(| SK#~7eQPU)éS'/_~g_DQ%gL9ugKG3MCa$wǣ^xo{:k G0okK/0 IDATJ]~g}'X2{顇Zko}Ëϝ ft?Gyw>?Yѝ~sO -6H}>||^zS'?_?s#ECZeDMٖN$7ᔜw2) [{_C .ҟ_~ԩZi܇W?+J)1=tK/#:MCfJDHJQYu뺶 wmZ"DZkZ3=0k*A E iMU RbJFg`">"e2y7VCǃ(i%BDq3LJQAc{sJ@J1d l yo8bL8YPm$p]r*%e.z3j&00\RBO~ ig|p ҇ "0>U' ze`)t&OPR !|fq,Dkm } v .Xk^i-MRɞMy},2j&g@y ˜&Ӄ1wBMy鱺ńre+F6\ǜLaE("-j8JQ9xB"ŘǐRDbQaBP-lqUB{vVR@afcȬgCmcemY4 Yk5ZRQ4&ZQ YXX;eEεm˜\M&;{G`pud2FRof"z82ߓ1ƲQ|08/u{G]9ׅtkBPuZ| !xч 16us;{NS7yQ{ǎmͶիWەk;'/1Z+eeLwN$Wr\.ŲA{{{Zi0B|X8甑'T\,ۦܚf[8FZGeY8hHo#u]/KN DZVC!~.֘d2NŲa/ u @z'}]<0Nx<.*1_~}>9Re]ׂ8ȩ6la*-3,T51F"(EkTGiUUhSZeEbJ)fl)!!c8ȋkV&̶gXur̉ Cśoyxx( #˲On.K!/Xk`{< KhM4q5M\.r1}H>ġ뺎cxa0\.WEUUrjw˥*ز,?.<`!ݝcǎN& mJk-+Mtm׵]UVƘZT X;OrH0<}f\NUYXB@jzw/l?EQMS B;/bJlkG޻nիm*92I)D|ׯ^[.1Pi#clQ bKJch4y`J!1(z\D24M,N^hww׹p>_,.b\-x<ݚeɉU-DLfNߞmOSY&ɨvvw>pqaA>:"RJݑR !*W%]$|R ZȖE`\^~'F{Uɓ^;q!Ykۮ+칔bn]0o,?s5ipJ27 >ĉ% >T!8{VΜ>F$@<!?hϝ{Ο?7cJ?,,Vb 1j gΜ6Ɛ%H13hSV#&NĹyEΞ=`5|Cm@0G.\@D)k (@bz'$@ <=ܓO>H)Cwd.J5X)}o=жMZbBcؓ& ֩Hps獵(t|T?uij=FL)ԉrZF+KJ)mH . 
{{ "zp1O2aO&JkuQZie1}'A8G|RFvuX+r\+.$9/OycQ?ޗe)wT3-TQJ`CJ)Y{'ZHLUJ&OY#4fy, RJiFĘ,́"DB"Qt?J|׆ ,^"ЯޔXc⎱s..ubq1u-F2'yLQ(|",ټDEY( Jy=[Qi)aCdN*1F)"&$:7fL1${}•VZ} aXTJA YC=!e Zc z2DܒzД'#E`H)20x.ns֖ 0P r!]} !ʙ ly3bqnw" ta_־*u"ϐeTfE$Os æMlm(GjRIuD.*m43bc #!M |a, ̂}OY; L{@$[k )vvRZYم)wf(\H,پi֖}1pHc(_kT9*EA>WBzgiB@T!rI59F۶i[}\LXc IyB𲃣Q9='c c!l9jZॶjeP[cl6K)5u ǣl{LŪZ#)2|zfVZUUEI<9, 1(sMB#qժ^j@*ʛλih0n)VcQ1+/;vtk69v|>4Mo[u%BҚ`ԵsE"~c׮;"R4x+=zt6ɗ9rDmjĠd4QF.ƮGM)ǁEHcTZAăq=C\cVѣq]'ᶊ.ZJIx oWιjN<bf#k Avvw:Y,ǎ+a>3ECz4ӶQ*z)d:Pd]9˦] R됹[J]6MS7uQڦo7]@t0'moƳ1,˶m=qUJ1;S}رmYdYZe)m0ZIŀƖHZ[|pw~オVk8u4OK#DZOej3o{jkɲ~ Bvw/tc#Gn@5JF]wUc.ZpkZ׹rz`isA\{o|#w޹bY޶֊e?v|85Y`v]J]Q%[-׹|.T/\\]&PD׍cABCL;H9\|}9WVm݈;x4NF]+5߿qj:rdw:vuG|C=\2#QYu]gcZĀƐZɕrNǤ3g6y@Oh#Ժv!CQk[/E5:$a(4T(>ueþm[cL^(ñ8Ji)9usdWOln~TUTOXk}ϿLI3ng~j6k~m @1Xz{iWJUBGJ)R2.ZFCvRֳW~nO㨵fc BRžﮮ6J)4M޻mO_޾q7#7͇T>h&Ci!ƠuR"g.Ta8߬W z8>À@(/CZ볳3f"R8?䣾C'![++sy& E7ww pVuj٬"+ʤP'~if8ԍuuryvr$Bc5+ii,X '׮Zh< +Vg|׹yBܤ0v4PݡѣrsZT,/&{UU pVJIznn_=|_~ɋ\_7sqyRsȯouS R$RI%Pb6t-Q8 hkH/>Oc?o_wyoϿ}|;]\=3FFﭔ:Ԏ=ZjlwbZnڶ/..O?>{`% 6nDZcwdvf7T|dZQLay+9*gHZ J%A,M^e.FiHlAj btւu(j...6-s E8d; )A )B&&6{"D@JHZk($g'-{Ka54dl햦{{?ۙco3Ij'E2'$ĔeM8/LJqY]"HJdYՍ6lҚKpJ.TjD"=(ygS< :=ňvd b-cxQ2݆-?$dUUB=4@!RgB0lN[D%%.hTlЩHN̉➳;Ppa'o#-4˿71dhuWz)+8˲hT{a@ildA6US;=I QH PZ)HI+s@QWw$r|XBT  Ie4sm`\Ht"0 3ᔫ QC=S Q*Y7F)b1Fmd\FP T` ,NqJ4Ǿ~`)Mj/U3~ǺH6b΅ys+DZ@Դl[W;@A  wnz7Z nxlW/?e9t]WURI7أ8ܦk!7񍫫몪>3#lPUl>Lv 8Dq.yT(O)oZ)V$E}x~TJ*0qvQJWvmBj+]].Q`U7ژ~jE\H BQ dCBtǮREUҍVg;Q@Ra6qdX뵪U px(@5ٹHTMQňуf_ggHvg!^>mF4ȃx|8BRnq۝oכW7z6zA'WU]{罷t^ RA)k{֘ ;B]MfIP^ Ip$I N*-i0Zk!}؞Q"*0qW)a9;ͶS7riHQ٘!@F' G ɀX6(!T)ךS'i̳ifk,gb;;0cu4+)lik;)XR1F@cLU31HH,($Tjety;vݪD5Qtβ1izv0쬝' Nl{J%NaYDt_Wۭs~8=At:(D`cT缐jXo/^Z*Y5uۘpST* nЀj:0]\_ov *ڶn7ƨiΎtaB -qW1F"bt#bݴuVUB@!0U0m;f C?MDILOȓ?@9)Yoͦnڦ])(2/G6!$*" >QW(9ߝ C/P8gCg4j6EHכ@y*SB亮_ݞ=\J5#E<SbCyA idZgOWn{q~.pmϺwwl?/'O<9oH4hc ~OZ) VRhYcYv?F0ƨkx_~f-2F/?RLXZ)!) RfJY.<#}mTZvkSBi^#P@3e7Xc&-R<)ͩJ"NRS-uyJNe DyR=E=^\"%fk>^${K1Ĩ8&&Ez\H!pPRZOC/*eFF>=8.I+@<\e5W5wT!0I~a%DY&Wi'A A$yf,>RIn ~(b"t`ЈiJIDQ/ؙdF *NDw}!J RAPmI] %~(t$U N @ arNp *+ "#mDJLSRj gIHĘHaSI =O,eZL좋 ET: moNR$ .I3DH >0lZA4M)] hoGgnwޅP׵6t^(E{Ӥ9 3+KRY 1 [=[L")vM, !17,咥a&檁`KMH% !*[RU@1z \|Ry)bĥ`mN4y#+؜w!/B',wXZLI"!bē[T0G=+Hex |#呩 Py#  a\u^.z:kK9RV`ו/v}(hE(Pp!D*]ޠJ=Zv$)TKؒ_d1>Rd&1Ҝ/_?7Eqr{8}|:.s'*SԼKV&ckh{T1qapei `fn)qs!E]W$ iB8N'OyCh@ 9d?rV|أBd K9!Det(8b<|e|n{///;y:vǺGU֏[kO@9|ӮyyJHYUKG~FTS7O>(=LcuCbGI-r={m[3a(M IT"#1$Cr1ڕ.nv^WuI<3ǾbNXf=B&-%wRvժbrY#q֛ug( *C& '9h0 /_R^]]*?Z_P`Z4MնF? <8v;D1l6T}4 )g;[縢̅L P4r4M볳qiBp4ݏdiWM(s֦)y,ܫ뺮zu!<[g;OplN9oWA޹zӫ˫oUCҖ Ն"(8<&"L5EJdr?yD Z%yY ^ows6傼,۴*h3y~on^8?uɳ˫v[UU2f1RG%!Qdwi_r*KMRDu]#PZm7["?sFx{{wuRˋ/~qVn"Wd(W.;4M!ƭ: jv%:mWjUU5!~iǪjv&v%A**%%yaZ=.ُ1{pssswwǹWއ:{!1JC(\rl6mcfT<ϛ@03d*DE*0<)~GލU"j۟pӿdw*%1eN,= $EQu I.)03oLh1EDR|HK!ˤk,`a^Aˈq]Uw)V)n]A7$D +!T$:(J)"h9boSt IdB "D) F,QD/Z\ 1)1ipJ,+[ЩJ"%w9N4x8Qlz|y;DU%dH KH!ZqCN~Ef$T#oO9/cYˣIc b8 aFIKM8aYIB[ <][ )TQuTXlTRH!` Ĭ>՚e#'γi4.zR/N9 K2نsOq#jXP40ϩ((LQH3k ,CPR(.tDž`x)"ң EptQb٬H[[ 95G uvFJdeZ[ o0á$o< |;( c cyC 1}ӵ:/NhRv:oF> 1$' d&(|#AJ0\ Bl)$eX0TŻG2,O ˮ"tD9hq- t(@LfA.d߾hSPBI DSi6S<jS]BZKy l3}@Z\{"E@I҉@J-MeC`n;(VBs!x/Ҧ\tx6!u]sRجW X.0Mk-n7["w|1˭/.6 pK߶ӧO=چ_C)2GMBRmu8wMSb=0Jzն"PJbDQ Gq rCEq./RS0DFrB{p̫ܿ"S*m%G'd+2T8iH\R!5TJw_7G I,IZO)1>?>H\AzE6%enfT1,!)W)f5[i,ڱR"J*\U_G㓰 LFJ-mMe"P )AYc b 鉠Jٷ)Tq(\VmX>$;[k$ATں䑔%虈s{KR-0n.0pOD8@Re:4c5.~V_~ﺮmzsL>3X$j*܇asӮXȯ`vD/rήk^=YlTF(i# `tSYIZWV1 8< p?\08뫫vNIvZf7g^12RHfmc.ڶgYfӴ"P,)уq77/_ WWfnWu4zqW}p yYDuyO9|? כӧO*?di(f*4L9f;)m|p\)}~PRnכj'W77}j"C۶lpKD^bv"z1.xBVWUU?  _^\]]G!T REFWnw> Cuݱ?OiZDiT & aDiOk 1>߼|%PL]b#9k5Rqזq\\"J*i7WnnV)#) !@:ӡ 1BʹT@Z;đ ` ! 
"5ؿTs()3 H!y _%_i%?%T㸶$%2'G9RPE"j )$ B"\DJlXHQ+%lTU>FR|zeDI7HyvK~((ُGw[Xo鷝v|1}X ~Vo|jAEDtv-J??WoR)m`@Xeyry{|03P$$nO '[RQ R~`cgUԴE^$*,wdpt6J*,3+ BZ)SU FzVsj IDAT@RwFy_ bD(>2(0&K條-⍁Xdʘ|Yi l¹ >**%ɞ]f@m|JCtΩ:c|ݜ"xNιq|eD S?f=/K`U %Z(P %)4y"@.''8eIY-ҕi(KNy4. 1'1ĀH`5Ag'gK!ҕ!B*TBCL'F!:') .g:Q l~1A%لSV֙a@ P+;`e'_W` HDe3p/B"@F8??f1ZpYGB !e!Iz<"KT8xbs5˜HH=HOl"-z-]peɃ!(-y[{-4%(fNJrN"L.hcE"@p\u @pORf Oy{7[nDR@$%71<}?̳EvRRZ<׫mu@6HB+@;/3AnHg 1ƫ+NE6uXd82|4lJIn@4=<i&Pr`(%sqBrny╦ 4}٧[k9H{t*8@y]0~bomϚ]w1Ķm//vVksyqqqa0?>GR+|O-\ڶݬJ蓂Sx1q,Dpގ0Ms;%ee"Ôvo%Y\#%IS# lǑ/U9";7H8p<RMۜy]7}R$' Jiz|+:3f!Fczyus 8 ]ɋ n[ :ʻY/}WJy|x`Ѻ'Oۦ^ʘ ZTU^Uhжf7|qNDAjљ8R!(-vۋ)a蚦vg绋ˋ uۮ xpBlOJ ZPC$>L8^^^Ue=Â9|qbW!#eDai“ib6v[ks|xŋ]?1Fu~2T(P\m7om{eUUG>rbVaBH3 BQ>PѠ1n}ߚg}`i=#S޻a۶e؅ϲ3x}E4JsUR*΢Ӛ)-E%RPBI!964PBAe!O,dv$%}<6}m g_oKp<)o@c F@^ހbd.fe.<~P]<K5΂DT`!%NEbaR(HDiλ@ Z*93AP-kT:cz:xaZtm-Y |D(%+dZ;SjRZ)]Uj}|+!q' ͯӧOykwgggFWlaWUd&zLΩ'>R0:NWrz>EƱK)(i\vs;xwOvgͦmּxi*~)18M˲a;RM}vvT{63` xIkSU]Ue(GrVjFumlm[WU(Ou~_gwXnsϟ Jm..///Vvv;v`!ֺl{e{QGanwv֮V|Iz8c]UIs9}w~v^W0v}uv3SZsw%=9IJ,BO~{NS"{?>|}n|#R*UJ*N7奐BwY;hSFiD!H H"C#H @sd_ZJ*RbfHl%QD%9۱Zo@ea bͩ hOqi(0?T RO݀qFL~<"@2e3c#RQrЃyk5R~1 `Qt`HYSC s7I(H^pgH.iw< >̂ŖZKZE,ty0_#E qe=a,(5uΛ8'`7"|+/ᣰX,a/R`C< χSIF~ʯUAO#KZ>%Qr$H: T[k(-ibKZu"ВL z_av<-S̼RNa} {쬪*pxU~1y{8< ֢ (I8iyB Qk +y׽u 1qǑawO?BZ*;whv-vBDcDUUZC )CLnJ+ N !gzu{k{8VϞ`c>0kBp8<<<0hjه.ٱ!oA fRj=z^\^^\\ŌyiR'Y"U nJ<|wZ|'1SJ'?iC|,Lwo}[,ly!k;/˜ qْmq1ayB@@T @6n4tnL2C T e C7P35D.kjߝ/0<">x>O%b,oIRDD(/"aNs14qL-#O.Q,gߛ03[G>WR}ZR"ɺV1OV3O-l J k2] gl~I7 8KLm>zԑ[b%%ȑBв~{W{HK)1v!C1X|>_NU  Ї0"~1|\.2 ⢮uޖ|S{߶j+]Wt gN kgJVJQwAC3 npa?irm0s8И}gɓ'UkU^ٽc㐫b}/uݛ7o^|yqZ:߼~zS`#J{k-OWUŬmCCWJu$ I@޻y7LKUf\V˦axׯ-Y.|{ィ*? 23t] s^ǟS1qs8O_/r/ Ad{L1}blfZ0ï~~s)" )lnB>$\/l֦}?lV1t]25hUk]UUkvϝ1Zq03b)aU/QqE,̬ !nW7FJBWRJQsyfXV|^Jvswww80!mf#4zCp8բO %Қ.l_gsdm791Lv3cVxVan݅3PaQN(DH@h2 48)UO)?$|<!f7oYc"Zc:%Gk.x"M5 97Y(;vتy󔏘4H,S}Yi76Ѱ)&Y;?jR%''pѷ-f@us?wc63!fO:tҘI ÛOL%{8)8pF.dg NBBe R,t,nejLfDHDˌN#Zݝ#֩:BdR}w}ڀ9Zb~%k(04<E% k3bj0M4a-(0M(;m; %Vsboz;4Q$Pp'Xq s"G@*F +Nќ:X_9GD)q1$/ɱHJZ‘Sc@3!sjQb* TdrI,ZH(2VbsMPBaI11W94 σ9hiZ3ZP 8^<{R>ĠCLA;uF8hi9?@bQ%C UYWՍ1x8Ăzi|ٶ34' bP#RC]U]i  bscPÐKbFD$DL۪vޛ#sbkNݳ'_zϟ۹&wfAkVcrD(b j$)ݗTUUUm44YL˥ϗ*b?1!(^ִD(bT<vw$$jWU\9-QQ8n~u]'X,QSdg=׍!T9b>͗|>羪OŋO)JwaC\gϞ>[-"u7s2W>,c a 1%9[ w4"C1E$@̿jFm.ba aHS8JyLaUż5~CRq:$$h Rb6[-Wur65c2"TH?,vo/eKHɯ>m;uMXWι\^Lg|U;]U5MZ_8H1$1uy&$vZ6mX,yӶwlֶmkaۼ~js8->}X+"; ǡ DB@EؐqPOOfgAa$#!*5M96nϟ~їnn\uEai?q r!gH !qje]m/...ES͖6ž0VΛ9fՋJ"wp~.p4)_Fݮz`y8nn?dh?{r ~Sf$1Cl*bN֐V0cUbXcl6gf>-q؇1D;W-yr~8F!@[0 yGdR0D%DctI Z&&M3 pj6NDݱG^ibn 20!D̉צ5a Zn=4-ӆ.B^H,@!$`."@5hŮsh)JY4ci>cB@Cb`aQy]-*6ќ%&jx+5KQX nbL%(C< .Xg,Z8Da]WowDjљUd9o1Emɴk@|hGGo~Aҳ#auȖRbMYϴ |*WQ1*@`L?2 8q‚ ~d#ǔ0^U֪Sy?A9W? !0Zu>NmAV@Rb'kLIP7by7CbQUu\wrfgE E 8pd W>I)Fb h K-N)L8|P6 *zIO ̌YxOD x/wS!FN)35C@ 1FTb0D)EW˅R$yn qNp5[ouIŊþw]_PWjm1qL1FH ]g\Edrtw sCHCqɸc@hanv,!DZx\.Ɛ;S }%Z ż!11`gVrbd;i]wh,nª|>8<*j6N!$aΊH1z6iqc^1sU|v%ĘR? 
~uDpX"'IorZ.g?\vz=K$*mwn{l6.Wiv#j}v{}}kn RgsX_fV[gYJu1E8g*,ȹk洀jCwue`Jɘ6P U+c1BU45֐5UV8#|W]f>]^߼)!l;kLHK}yx__~˺neLAf*2sz6kg%11DNc ǡ7nOgvf+Cm[=m.Y6"ry'Wm|}'OksRS i8g0=p+ak$ Ho$ ddl֑ct<#1ً;Tjz"#Tmc Z 0:g _^]xwY-0ΐUU׵FH!18c9jx'EC(UU7Ms<_~RњP7:XKh&˧Z۴``;WjbU؏L.r 3J1400 9RΓtyN..˶%$,u]Y뚶UUm7Fr?!㌝L05钞8Tfw.7!wTݒqL)Z몺rֱeE`cm)HIׄhcI*y{n-{ r+AD1'hK!S :o :21iK3W,E' nr ~!?u_Diw&U+T#.y2^=bao"~AO =sKT>o%u%Qmg]9*M>*6R?s 6 4E6SA_INL6Bc@CF itJtU}U:IbJ cB&bIgB.JED$8qIq}bkq40?D8T l i4a "ƁS?BZOH=;pG-a%PZc Y&)I3,ZC&ԘxpիWm/?yz 6 YyUJix<[mdԖ!&F0ƒ!Zu͒D+aɓ'ַ1Za!QՒGnbJ_W^㏭q׿lŋ+vn#sjDDUr|C~Ơn&qMPՕ3ۅx/;k˗vճmf׿b~nnnJZGqJ059=YkgRyu33sbL)bS7bX b:N\UrjYC "o kP)cSNȐ#*Ә M|4b<.1;3-Iq)2m͓V&nx?!0zBkNנbwj_q~'Tnprڔ3QEbA@sP0*gUz#mGk, 5oԌcursփQWsd 1Gl3b1$;H4ٷQα9mC}YR"`y3]e;zC7??2I?Hg-Y\',Ҟ"S5VUeQɠm;KV@4A1D *"RA;Elh&fE kR> ʌO&k10j%P,1cL$kܤ>*^ +Ř:u/O^P|Y8:D2{ PU-(A"B%I_M@1ĠTgxlϕFH{Ԁdb㩪i*_5EOJ*p*$HKP(j'̟V{$LE*YjD H q~+q915$4Ժڟ)I}>]9r6R$H'!1eh('"/&<goV/{v;"/D=L${lMPʘDxy=Ԟ7W|{8/L/@Җ)iG8\y3ڼP&i1hz6x_y8^~>@p:VUu<ooZ׹zC!RZtX1b6h)!$HD9@PRbr7 _p{ZFJi~GWWW"rssciŅ3"->sUr)~c*S 8ƘlM4Ҷ-":k \r]zAj13/K}}Vbl6`b)&TUL6"~>HX \on#u=[O0zDkG$2dWnZTKm~ӧO꺾~}}Z>裯}k_שb< ]3 gH N7e켇!Q6UWm)4 ,ƸĊ=9j.윷 c<8D<1%%&Q 1ŤXNS^1B톅]WO.Vx77w!xuu58i6="%l^zuY Ef90HDJlS0 o޼~CX/W}?ŅsNXS׵!wꗿ/~ma Vg}ZU՗>lԜ9`JW9Ǫ}4uZ*_uUվl3!ȩr^iàg4O>}䉂Ku|<9>i?/#wu:_^\n˦mJ.'W_׾4M0Z2ZdCW AԚc8*o۶muUz}#E}8lcuA1t]BBa {[-V͛7o޼Ajuuq]%t~y j[\W^b툂P&[q^`":sc=!}ac7S;ZS̥zSSJ!ȳO@iTS( HE$wn{ܘX,...lRJmۮO_C7 m\OJJI0h)C7 acOyUos0yV(Q?|>_E{s}3:&L2,ktx~'Oh\Eٴz=tg-KcLT@l6ǩSB !qA@1Z1!= jZפG{|OR>MdP IDAT3ejGҋYU۝s[kcR$‰:NoeR0EBx_,K%KqGNn..Xsuu%T+Aڷ DZm۫b=cF^Û7o?dnw{{կ~xo޼cgO_<{bX/-2^~ٛ7!quU5Utwn<W˕JM]Lo?~K)V˪j:gN;@4,D+u^o[vr\fsM2X,*U)9k۶uZkp8v6{ދ?Ycs8Ϟ_Uu_oifxhmHDiꤶֶMci,[~6v;%(!%V !YG(ͬ5"B^W$j^~z\/^z> CaAe^O4!('tz>jᚢnnnf4 u,z}qq\. 3݈vPDMX$duC3ʘ)*"ǧ1YzOÙi>tc(y's@bʧe"2y+_#ݤ ?D$b2?OO ?BcPԩ>P!@} !yTANx,^]/jn·yOh=b`J,4 s.Q^*8s D UUsȝ2+0~YNJ 1WL$p&d7K" UGV3d;"35qM*j+.mHNۼsZMG!WZ,6Z(p!PT8C^1:KeL!.v*2HE i0fR"wC* %SZ@Pc0Ԉ\%Qj ;dTi# qJ)1}OAvl(%`nQCJat XyގpYd)Y|d"SH'NH,5I <r]1#&ND9 grȽ(欗(EEv$J2 !Eݗ_3 sL(a4'۸ϑ>ݝ5X .&"OYgS gw[oK3>V>Q6h8+6ґ_gey\7Ƒ&5N__:zٟi w Cy3 aU(e=@8ajĬ7j}F YUc%<]no-qbEclszZ(r2"a,H\ p@X-WBh!F1"K)&!+g2l>_׫"1owfcCXyÈUeWOJB̛|LS E!KdM{VQ)Ħi/SZ{J{NUWv8 z%r(Z)WO^umUkt7WůS0 1'Ou]kDsaJ9GZGRj={4M},Ksj#z:1aSbLSHoK9;A c8P /O@DkSa}ryv)c+|Օ^Y@HjqՉsnujF2PǙ:ZXOj1wڧ!wp<E@s%>k8qLy0#Nt37cajN1XG{DX0oO>|=oMٺ,Wֺq wwwȬ..a7o;jWW}˕oj[n6O>{Gq71&1%nۍBv\,0pkHW/^,nٌCWFTJ1=Iы*{bM]\_xg~a40m7 aPX[몪%ufb 0иX$kW+=j;sN$0p̵JY8jLcS䒲@amǾCC= R.f兺 2%ƨJM&錣23CWHC#9UD{OAdL4y/jC)⑩#3#!3Bc '5!#DđPMJjԴ=k1(kVwxn:8&,ܤ!zQ D%S jw8fv̈}ePFAP3bFf^P !HƈE:ZP>2XIo;!c@BJPKufF54 R8]nՙG_~2Hrߔu>e\*wEf=O?5t>缳"-;R )S<茝/b6*f>00 WUp F+o~R]w]wۦtއ0&y]iB 8jL1}&ҟ͆%AZjEUmۭvWϟ_g|r3___',Ӻiib6w^kcH~l7|X^\inжrU%*UKb/۪W|X0x<noʉiڶ@07mE*//_|+)]^^^׫{CA$ c+qTUe ޹i̬#L&2/dt<r&cG'F4F !,XoٺfZ}J{v׾x4s8 rc[ѦBѯP*ףumۮV+D%f)g*ò4`GY1ƼUK Őqڱ^))E-D H8b&Y5F㹟)L4C8BkL45HقyuSܔcY~edi;&=$4EDF}g@~rPDc`av?= _o̹&R*;:N< :k32ꁓP%IϾ~Lޘr0opzyQ)-SAJ|/z폪('~x[w{D@#]'=Y7;!֌|"f d7B&> ?'&Ň5_#) p?ϵZKd&!_??,C$wO6׿{4eȚX픃6t|A:qrL/'%0~S<h!cS% 3gqM n{Pc̴nJxizGxɣ$8Mi  z:j1ٛ6J%֐% LaW~l?u-M {! 
%Ug1V:fSwdI}N1~_|w;&ݔŠ/>1%eZybr|~P(<[Oߍ }H: b9cBvSw؉4a )mq?Xx"Y_8 ctҮZ`NɝE|9Քu⽫~zȱPY{utX8<:֊@QǏT>C?,/Wi;=Zu{w]7nqJ" q8&Kcb HɧVfO#l6zL1ji" a֭;}UqbO6Ng19^qqNŔv;fm!2Qinjwa28k_|yb08!iܨZl6$V,E"-Y>A- >PH#M-rPsNtFϘCb`nxx?|ZuJcl7u%~6 T՞ YgUvz9ѱi|}fPx^cbi^ ͛jeY׺J(9bHRYY2 $Qw*GzJ4Mz(SkR۶vNn}?v{q?v01AH77w1x|'O,'WWo&21뛛1 Sk>;m4ucCff,zX4sgY;fux}XLIOB C*"/J&Dfr\v>l&A 8}Ն&u5c]b8T58~Rg?'WOʓ8v;8vJ{݊30̢1aPR$l,"RoW:$!SUΘXu֌a]7MUו}_׵AHJgf}5M;Z9$9oLBD3kSzS*]wRsd,f:t\c8ƐR 1חrnI[$1?dpknTը?= vypgjFKd10Q_7FZ*PNzN+4)J K(9'.$e1%;2 iD5-e1"ObQ$"8 4:VYxM֮[FhY86%+ %M)%dn$l#DJ30|"RFQsOڍ8e)kN1Xnr]'Nn #\-" %s 7y!7ɉ d"q( a' (n+QM(Οzz!'P*w|琏iĀ3g'#w&rkz)ql>A0u}i.iݙc_G-aϘ<ǀDj,ocǾgOoa0P (g<(P=zzhk# !)M 2ɧg8X!DIqڝSOV"A,4nvoBa1V+B8[VgݶM) bYtj]ytbt $,XTJDVJ v; .( A;b'')H1xV` +QpjffEd2Z?99!rZ)UVRS'i*|AȼLCjwm۵]Ŭ &9/ݣP`P?d@anPUYD̍V4TraQۭ]קm۶m+% \v A/IT3'3Ys85,X>ifF fuggEU-Ƕk}0 )1Xkݝ;wPPUA]Udw}w}Wbt+"UU5YW$9Zs|\JҤs 1c!'9Sl= YiUe]]=2d`+QUM]EQ]/l{՚m> >cTZ&mL3ߐIQb4cJ6^kf7 lUYDv{|?yC׮}Czڥ fv駟xi!E><<޽CZV‘9߯ʲ.K"޽VY,GH( &]f!Z 8 1RQ;99 >ŋL(!>&] ݻʕ+/^\.,7l6x"B0,.\CuS0pJޅ-|i}(&Rq$X2z ĒV|G$L{HLHHE Hym#S=(} 112sAHbQBsVp+~DJGQdȔI#d#ݥlQ'7"$L1DhqVhR0őYadbqd$-]eJ `!J1=bJ*TtqIL*4 N[}Wx&S'Д@= IRǀ!K6EQܼ7/| !L?)f\/p IN|PD1>&1tFiiPf—("X BwNj"D^?ګ*ɟ^Q{”p4<ɐ/{ 5fc4t#H&Q9CY/H뻘Y b WҌӨ2c0CȓfJ?rH R:kPҋZ+KG:qύ9W^N$ߴ#@D4!&͓-_D'qI)c6Vi]VeJ ,)])س,I)qJ֣q*!(6l'3'b`=4~eڛnGk8whdG9MvlyX#SYdK3Ɵy 5bz2AJVP *dR2ڈGxlBC:3ޒ!aA|#}F?^yL'+Hy?0 `p~"n&|>œ5vMC+!K#01$|,FEP{GcG J+g'fn8>(#4@H,invJ)i˲Tbe<ɣL&L/+Fk)ڢa"'c )1| "*ޒ;NJUeY6R Cf޸W꺚/UUݿvkZ#au )-YS+3VJHZjR׶JUQjStnXv5RMdߔ`C!KpJ@e]h&Ia.(#Q ǐ=(%v-˺9>!Ѿ*J!u>jz'*v1Eq[7k{{ nr)hƶRYX!LszV"pnhmۖBcm*s5CV9I.$ ~RnzI"JeQJ"ZôL#Qy''h+UUUr>OJap}?8w1w9cdc0 1.W<5g!4ͬ*+o#"^b~ppp͝;w߻7߼wn4 Qi@kklQ(˴J|JR$ ? r)D̋|OOOESm;h駯)Bq,B ޽$5|> G\jaܦ91EW )Q:0("4 ðmڌιAC1iܽ{O_|…{{KMS׶*ʢ( ۶W fMy^ݻww{X.bTUggӳSk9JKhTJI >(>L=.WU=Zbh6.w^b^,M~w}'o7gbxх |s{NOOJcroUs~6 Ï~gͦYcx!sFv{'|]U岮kzQ*ي)IdUuTUS9^m5c .bT $fC8>99991.EM2ČcDH$+c<VusSž kT%Ŧ8FT/ H 9ZHN2%!LBykecD'sLc.&8hAbALASR&P)RdE4J4bem1M]ru*9!0R"$/YhRmuΧL&}S %i;ܗGw~?8__v[_ETȉ A"zt \7/|e|JTy$E,_q$` ȥjܼyS^{m/ _O~c;[[nR0蝿qƭ[n޼d6Afh!^&o48G_F+/צG#͛7_7eH,(ܼ^~zv$!}[0T#N8|F'!ʽ *Evg@J!lεSl ,(R,R)Ҥd ԠDJz(CDI.$SbE(\D,'E b3!$1kι|۶RkἋB`> Fʲ,Z3ܣppg =F*]X;C "`mbI>)x3j0#W@@Q:]ȸA`1) M&X) " 2 B&|| Vl\' AcBlm=EC2' ;0d83ojsx,uQ.Xd7W^!]j%"8=,xDs_tZ)HCR("E* G'X ̂b"Ҋ2r\<bTZ;kfj{w"yUeI! ֚P׹Lh)֫H®ڶ]ZYVif]m;UYХN̾m{u=,+;,FΝY;w\̅j 1qbUUZfWn]}?U{uU ;qpr~i1[6s5)3};?'EbԳ" mw]B* n3KtZ̖!G,KSؘb)F qVw}ln7X}t̔!dee)+>=PY{˪jpl)NM]u-͛¥iMpvmߕEuw-jwX,tNOOWggo߾#"zꩧP0+(P1""u-+%)]o-fau.>1M!QV7FkMGbnbV H#W %4M#yȈE1en"N("jmuEQXKJCVՅͬ.Ysj+5|ho۷0$bUU^3+ۮ7bֶb`y'ލݦVU΍[*=/ K8"[CUp^/108oNyEQs}d%x6flo͂9w'F.҅>g\rtxTE}8hjsΝ;w$J85}m۾1*tYDiBN"bH1v}^QrWc^"49iiWm߿zf\~ZŲ> /_^,^~/_~g]fm]Nvm?ܵkWꪀNܹݻm^y^Q1!iߴCw핳hsN3+6 AZ9x߅ҳY\hnZ[-L3YkF>88:6o֝;/"!/C.=x@+G鳟rqv:=>}XhEλj)ˋł?{* ;!Ʋ.^ttttpt30PYRd 9fҝ-/]ܵ c!ATVD6[7MA,y1DŽޅ]){wdu^TYUU9v8 ~`N{c&R"(R8 A2f)"C(9}0!8f;n 1T,8Qv{eJ;BS01(C x  51fѤ#AktI*5'ĈuI5cNE!,OI.c-@9,,ds2K ٴ-`@ QtGŝCyU!vӨW!t]Jx FQѕLSEcEfg)K#/C)c$J 5EY(i |WSL]}|]Wtn^7{I o?\!N)5nfhtɲ>f?7H?I;>8,Dyʓ @IxN0;&k#7OxI;MSijlw&xnw5'21Y_MF( S?슕֤C l!iNw)wi*љ L**1vS"C ϗL.I)U>ZAnWkQ0 ie[z缱}ǶSJ RP5{1@dAE]-GGG‘=;;{wO0֛"WU@M)5Mc2u>Oyܹrr0 Ͷ뺙G Pu]RQx12@TfH֖ER]#12bUUu Q<ಯu ﻮpkL:Ck @jfBYVM35Bك}vnCD4fٶm7vmWa+J[\.xvէN?>yprvz*%( kC8+"|ȂD 9{QE"Gm^EONONF&fb9_Eb\V.hwmTfٞݻvvW\Y,s" ӶmEN"eB_-0 )D`o[/כya3jD"P `ޓ듳z}I]U{{{Ga_{aHw1>8joypt4/m]X4szzz||,",8u[[(CQG>r63ZY*JZe\"kAL &(ۺ^V*7 m۶rYEap6޻g 4󪪵t2fzzw1(©TUUB)1bs8 K)F$EbaTsnfGX[1ȬHEQ7nzn(ֆ+R@HF'crG1FHf3vԛeINe,ͶtdpW+-s)c. ZL\y;9sϙ1ER8$pg6 "b !9FJX9%GoEsFwh|J7Z)V@a[IX+'aR(KBÈ}wOWO85RTތd+@J⾁g"S'x ER"(l#o|ƍ gR߸qc@ӊ|ysJ _W~F'cc~EXPS&3s6DZw|MPu-oo?~EѐB)֒ZRTn\! 
90f0.0CP&W^}~W4⤅ބD2u?Jss+H3Z1t,(3r¨:Wsɉk(l 9"l\L "x LE. E3_A1vR9hBQ+d\uͻA!$i$IGM JStCԚlQe.~}%$R 0(MEݴlIzk+grQMŔ0^yo R) 1pLcRJl諸_,Jk%!aλ#eJY^obJJi֪(*˲, c.!6I۶}?hz{.\pxxHjXV#  A"EcJZkKԸEU1!NN$W^O*K.M HIw1V$MlΥ|뭷]3,;wIUUUL4Dž(NEQ%NFoZXIJ.1U]RbSLC"LyɳK~B{Ȇf\-kkM?;SD@HlSY֘˺^0lFz0l67ڦSeY.jܥ)|ln 07~W2tR 9'dҔcj+ I*kݮ7l IDAT/ͬF@yvnBPr8"t B(MA@Zi1KSC'ֵ\S3cl60XSaoljJ mTIkmcoj/MS]|dh ''apH4ock"E+eINq1"KeYU2jNOVr1vz '.˜{m{Ã|?V6uѥl.< W._nmvۺae)‹/ 9H(2-椨wj2iVk=Y]4fs i# ں,lsvmOONWUuUUau˴#'"$.20/B EWjj6kyj18b:'S=w~`7f@A7"S+ߡ&])6]U]WumlAtn4G=0BoB"NAi/wPX+^wdV7(3 O5h3PV DЪR !$2cT+]n"!e@J1%)@M:wJ'Sk/q V6fĥ Z\&(AHH 3)a(ѝ'"e9 ;<Ĕt&6Z*h-ۭY7 Kv3 ?z};]2ʫe nClHq|_c [/K0P\ nR )'eBɛ}G>քs__ /0B\S.ND/Q Fҗ ~7ncNzR˓?}IY =! %IJ)!tNkjFN4]g~LX1@.jJ)&T?rN;H60( acܜ(?d(tb6s`{LWN vkB5a<\TX~vЇn #V.u @@爞̩0G4s4R4ؕ/28yzp4@PVEi àsrk\/egv4;}FqOٍ}h1oɌGx(dOL N 1r ^V&ȘeJK !4iTI&7?ƘRY]g 7`q!']Iʀ`bȶۭiDcPB+b uYJټ\߷λoSJ!xY< *C)} ;yX."EeYm/l)R(hUUϮ\65}M̤U۶ۮ)Ixf|Zb3s]7l])#8F~i}h]+czz/ ]n7]9?O>4U/.DM!%[;CLIieě{f|X,]8AYV{˽"w{gm|4>xJ)}߶i~qLs6UU14A(.6Zcap2K$Bɉ)r7A1Cs^Ԅϔ1W ;hy7J޶;SJeY٢šHTV *Rݶ^!BJeQ6)+WV82s׶)@ X5ro @E=cXeUz>; P_p򅋘+FR9 /~Z 1ZZn+C?]ulެS ><8X-RRCaupvzve{LUrOQM,S ($J1#pB|a^MqX,EQ0t}?ӶmS5غ冣BbJܘ&dݞfga4ZCdYc~1B1#s۶ݹeY1&!mwݰ:[  "#RcKՋ9jUNRuҼ]?<{pr$Ť|sifrS/,ɇf{]߻1\-dQ~lvS8?r婣kR9*2bLYT=qUJuCl+RZlNHIf+xz^)MabH)$?8"RU]Ol5 K˲( !I#F̉e"#YO- ?؎⇬[@@&EE9QFi > 1)&Вsϓ\@f SJ 1E(ǹAƒ9_O8)80IgoIN*mBD0]TZ71&dP83F|/GyTsIT]Hu#c*QN)4̚=lfBy, q(v~C@J)n[D&gz̪p6ʳX3xRRjRР1Zo4M'O} nJK_SҎ"?Py/IO`_HcR/Zk2~b}{߻~6:7D1!y eQm>OMHyf.B˿4och}x笵Q$KC1(XikU|/]G Phw~򷦺w~ߋ bN6?͟KJ^1fjRJ;ZD}8UEJ["8CҊK$GaswYIQ¡RbiЈ%SԬ\ɾ<")ETq8%F$VMk-@c a#!)*R0Kf䘦l]k3S1< VɉYBX !14G^$=030 B&g#Rd$ObNp( L]Ynh}%{:7B'Io4h$倊;FffJ<y'# -͜ϭ1wKHOCT'GHeO"3R7 gς<IxzpOr~G& 3?*ŮKlwn2 /_GPZބ` tErd`: cਯމXk)C"KV"%Ok,}CD/ a ijYYtȤq;7tv1F@|.E/7.%.h}2bYUU]҃bLJ+ٜemq.D_ǀVn7v1F<2{綛fFkBݎAZѮڶ >(NJ:Ƙ$([,c1j)ma9\S.IgO|\nmYDwzrڵ- ΚY]*Ζ FD ݴ{n۶f\t…#kA.24>\ƍ3(P`q4uU63Iw"(Kyn]]ZeiSJHf@jVJs$Y3k$m 1)*o'DT^! "X  hfMSt (Jk٥"(dXB&thT*Sr>e9E(c,i` r>drf@(Q鉁1z^ɟͯy\ 1u*PpZg NHC4<+h)a<~H1@䤕/rT]J6NPH7|~h 2y+ Ҏqb*6~\;<y?:yµNjUH,aƏqhy~2GHyO;G98XCcP˄1E" Xz3,Ki%Y)4 CuP7r8:::88lcz!|&Be%#k=D sCaV:ۮ۶5ƈ6%~.jX5&:?.Q!z}JQk41,*RUa1|궵l6DNιz=PUQUpq`c;8-'|HK!^ZJ!JQRddUU1]ӵ3]`$[Ƣ*}1EUEmΧ˺8t} 膁>}IۭUaXVBF'1e/٬( cK>=D@Z)Otzv*';wv-)*94~H[k1m6zRX."ן 1Wl6{\f^,O]h]ד9e爛 Qk#փ$`l;њ$]V|],jRц~2gjiDzɉhO"5 {gboof).U5"xmk,J`ĐA*(dds1\1cS }9ic\!3_٬9<]߯7kDv]]QVyn۷O?pC\'>}l`Yk9w9HV*+sq3j?ħ!1dia*L^ID 'crn+}V̮9R hTUVU? `K/y{&QX"?wy?/?oON ̫n/ѿxiÿ}7SZc jy VlG69 N_o~:")fSc̬ ֛oG!oH/Rgo_OOzs~u6kyoˈ_fͯ뾪1TMCJλ?I?TU}7Cw~;c ~wog;"(6JG Z <*gUp,:9ƧXD"ͻ5ɸ T)LD*㝷)&9ҍ$g5.х{ECb$Q]H18e*ɐ%Y$[1yR 笹꺉1%HIDdZWU߾ƛ\Lh#M% C@S aa.h=_\^K 23km!&UU]\߿oZc~6!zo5RD4,s0,3 !>{~uK(`l3fY펅ARVĒ{?ϔR>dUQp8; LyF%AU@n3 ^VHµvuCBLIwY,z *!>$]׳R[eJbn={&"$45VUUkCRb嗗˥%𣏷믿X,4dђv6R|8, }fV#D&8U*N5#ɥuj.fmwjնmU7*Mι3lq뺜CJ5s:g//.~n˯"HHVʚ%HQ|yKkԫuʞ㘂( ID`I)ھꪲIpFa0H%viaẗlAcLS7cϙUQ8C;E)g="PYh{]+;gxRaY)A B.񄇺Z7Ʀ \ 9rIÙ;o1CU]DL4lmΜXm%IqG"@:N1! 
n{ᆱ%Рne1,;bN9o~ʨʾλy L_bzʟa8~=}`# ~?oM}[oeywb?OzͪU{/osPIE΅J9s@9[SL I tַ4Pa{}'X3d*WBԔ'#6s*yӻI%W+}# .58ǔv&Gdn#kɤ2@T  M!mE"9l- Z-Hyj"M:gȤ%'9u}gi7܈Ƞ8Q-9[cqDYE+y!tmjE?g8%hy}/Fh/zǗˀǜSN$8OBhb3+j˕?%Fhc _jR`T e/WOFȗsoKdGVU );zGy,rG:)_SrGIy.,ŠsV'}u q\^[G\4$уY ǡǺ{eXQ~/9gCfZB J^P)%O5Ws?sS#5,j=Sǐ͙mۦiV^u]Ou6Dr$m'0"h!1JN @)4 1lw7a e0vV͚vPGάE0_l> @S*[TGH)aUULoM1F4/:wZ93{Q}sا;,0!r b=(|~6?Xvλ??yy\Zo'vl65hm)Ec֞?yӧƘmS޽{ju޽wvXKMӴm,˹ħ~i*zК9aiMT$O4MyUpb#wtOU7cHE뵢djrm۶MmD3orA !>~k_Z[7l6ߴW}ŽKe*Em 9&=dž:XV`pd+gCgn\S9V21n5hJ`1"ٯk !hZ;js_y㫯=z嗝z cZGkޔ7ZBq|?O__Ӂꚜm:\.nWggK@4TaRUE4M)A1gqT]1d*fSGAXH2V1 Qb胏ocH fc:nWo ?!ݻgM3"Rtrzc :P>&r0ԯmvj"D{AllWjލu3ZUAl:X8ln6vRjn?{W+4c(1g1zm[YLZ 7dI))>~ CЊ$G>9KYt ʎ-DB18g k3b;uVzXR9[ )O:Ym#rQIzǓ}n%C֔Rօ縶t.I]:~7XrA7[&ۭbJ{JR2{m8N9NT 2;yksNUjqsԘR֦gCF -%  1hͅ~.4v\mi8H2]wXoa`9=2Ɵ GC/9X_UHrN9V87=#1}k!SdEt1B=Z|'RUUԞCc?zM=TZ)7|a fiр>#q 4OO}'w)bs[o)r1W}Qq\`rɨ2hib9 )TmBRHr1ŵ!*V,VGɰ 0$NR 1 p"Z9/ʢtyVHl}L[x1S=ĘzH]Y$Ĕ9&"Xc^uAc2產A9cmjܧ–-<B3>!+O_`)'.c-=^.W*.mAs(;M1cĈү|[j J۴d`P; Q0HdE5.X~劷h'lhd[G(_,~a8!wȬ/rd8{ 1/Ck=#_j'уΩ1%wNN/Ϥ,SanDF͡f9œRRd椕hZ| 9jsW%0 Aϯm69 fw04tZM߬0úMSYZ"< }4ĨlBxE4Tl3: r͔MB%Gشêvhz! LPb=ԶLE1d1FU|AL;=^\5Mۙ}zCT9Clb Dd9iGӶT W Cp@|l匯|}\{ۭYzTUmމsBB2Ĝ wk|>PޗtCJ)&CUu4uU[7 rm8kp1KUw֞5M h_~^wZμ^'VsN9gVR]^^~_:0 ^-m6mۂ~Ub? Yuz_b^\\TUu}u^!ra:I3ǔLFcy*& 1lw;$ PЄաN?-?^-۶UW2g~۩7Ϯ9[̛Iitq>9CkG/tqqᣏ>ZV+=j! "P7ufnRÙ7,:A>aUGGl60TU30fVγ8Z@Иi\s} #O>!t]֞_~_kfB4d#K!qs>'}ryݑ5e]׊0HƘ8Cw{|6M9mSr]׆5:O+oR:]?8_1_mhB}iyUUUwiu5 24Jι_}wɏ~o6PX[ogɓ'~rr}|D aXl(nhN$[ǡjט2: .>5r ꆒ%%8Dœu->3"Y FDBFr@W&e BXVb̹LvI#S,&Bm-dL9w#*3rA=EHrf6Ԃ0;Z9gR E0E@P%$R2  .&~荥H!dQgD)3ccht1]Qh fKkorV2p2dmq)X"Z(GQv#,cR: r} sƱqs,w@+z}Gڟ窪(/T88*w{+=KsVMmASkfx*"lJoFW"wm5"6z|PN#Wz$kC'Kl bĢ3ƂUJ9LΜމD $ O@,lp41H\te  KPә%duֱ1ŤU=2ƞL,ǖ };eҷDMݜYrDA2 &DS0dy]?td:csj`_y@*v Y (9@)UZrVh%@NL50s͜r r3"2±xVDGz,F+V5 0mT=1qLV(b5(waJǷvb8)- {il?G8Q^XK[(b>_@З _X}sY.O w]9w%cx M;1NNKi.]|'/)Н׷E.3ԙ]֓q*v\Nb0]sk:!}3"8'q=O [6,lu;֙r91]L"R ['39)~CnB۶0epX}J2n Ki$"]Ue[R9,OruXv}$̆;Y $rCHa?0L8w 9a^iLemXXNdY!5Xhrf!SƔ2}@ϒCau$G5u1.i}3u0h6mX,8KrUy;k^"ª.ǤuΩiŢ}Ԅ0!c^{fs|Q..\ aJgC5m`̙!uH9苈C!"ι! Ɛw>U4sn{ȉES7>g XͧD)ENq4C[v+\w"@1E$tgBUda2Iyk\H"2YWWHQ>ƻ% 1ř]* }çOo>pu}}us}`􇞈GD)f5hu~~z}8...4aAX9%k5;ǙS(`v=!DN|:KI$8fYw!ƌgg3̌a^1~ͪڳ?77갻wEXv뫫F 84uC}껾c,Y9J @ԑG 0Gq9b5߿=x8oXBRH9p #2dOٓmru=VnYf{?F"!K@Pd)i ("G@1ơԙ aڲu/pF53lG|E"Y[7|6sR!XCggfo>$>}?z틳6)༷h!>(t4Eg*KUkhwR,{g7h+buv٧>ծk+nyQӴp{f9sDH)~lCrr.lZ 3\k Qc)m_y! 
)xB6;밧큞]=҈W˳y,56e 3T@.e50N!@B"IZAer^GDAR*@S\e[ SǍNr7T>s*MSDWYk UsN5 2nsuSO~K9Ih1|3oά'—06YW-왙S 4@"SD DV"3Z2!E#|_j Jlu;9f[0HPu0ԏQbB@Dgӕ8u딯A,mSo4:g^qs"5p ~UET:(`Fc֑ }֎:$l 0v^zݸ,K1`T-  DKptJ"4V1ȀOJݰsfN} IDATh,gff>-WUS.> C 1"ʒc90 (`|t9?8C rK^y˪Gё=s&g`;[.=ztv ..uͳg϶mw1ggg1a8(Ԧm[}41&ZctBBBH9/fMSWRZS> #ߔs1%cJ^3ά.V_oGOgݡB^7~RUmb 9!Q2Xڶ.tLSwU4"ĀD,H-9e'`aqv6[a AKSHBRfa̯8a $!d*4WuFV) fs11Nvkyrs9' UB@cM۶ukYgl:ÁE:ʱ ȑ=L|ϏB0夃!DPl‚㹶jN޲?9B>2W;]EEuKȣ}.Ql8zZ띗qsIW1VaRet:AXXj)TԱFFBsU.SYV4aטOQ @P+5O=Si^ qKJ'#ó4 1I_cgG8ʪe!AC2QD⽫JQX$aV/)s>&dd(F3Falv*cl L^}:5bfW39}?^45kTЎ #bL& dd=cf}wT4ܡPKOoʦCw>W#ʍ6 ]{:gPL,X#rQ:姩ą43'$E1VU7!A"Z_UuUk0ۯ+ncrM*\Xa1VQ̠/^Twƻ2)Rndt)i=vx܂V?3"T.FYYN4Ġ@*X#['cRSr[#u6UN{oU(̗ȗHƜ͗t^/GER<͖zN+&OԨ2i78 4!( NΝjl"Q|_OgN+t1*LzOuW0E2Rb vJXy_{l#rs zuMp쌊o:RZVZMhMqkAٳ٬mgӟ{xצ^E*vι *=[CWr 6Rs1G9g@*0ƌ_!%.t+M:?xɓ'~d|uy$C?yDY ¦i1]}M*5mU0b@C/#Yc(TVLZ6ɘb鼭w-䜇n;AgvmcL9WU"\u "Ж-n)O>}^]!ү_za9J688%|M\ck_ygss$!(Yc3|ď]3^%,[Mc2=85mABޝqtAы.*(8E5jPt7u=UN҃B#by0rG]p:uMEmdq Bt 1c61m͜Op;<'3E}CBFjRʙ٢z(:D0~G"=bQ;$ȷz(\'G~d҉3PH',tBOr]01eˌ'fͦ[g 9YN` "Rr+8VKɚr CM@bk Y馋_|zsG:2MdBLIX pG'RJغ5:ߜT"̒ bF2j*Jїt  Y&_ O'H΍(eU&FH /YSc)xlkS$1oYk~03!)bI510>k[ŠH/bʗ`?#~_@ȩ_%9A+9=ޓ/_*|NcΩ&r(E⧢ȋk&;Am'X;BN+bʄ|ytcO~`O!b؀^[w#R8$}4 r‡U 8﫺&pW?TTۙ٬in&xer1 p8R !2g"J)}~؋|ն"BL}S{S`L1^ݔ"h 䓏on:жK/Զ-3a+D2"zGzo\! `kɈʰ`fDjjWJErvuU[G%MO>}uo*g>)>-:%4]V/r߈*=Ld~^3!l9cJ"4>צwDgii!ZCuq<[U-V&"YQ$KT:sF"fzyfz^8.SΜYCU|+׶|gCmzP׵vX,mhZcXys]ex Ɛ{!뫛CcNY:艼Fz1޽꣗z|sN9S)e%Uq{M8'Щ^zѿˇv? 奲z 9Wו+_{J<`eO_vc Ĕ4z/SP(l;ATd_y_y>.pgÇ߿_G}ԇO?5,gg˳K~=S"ضmNqzI `` Qitsso:@i4w]7 >4jc\:AM& RITDZ۶-uf'UU_7Rv6;[-af{zެHM4|>o۶뺮sM몪q~o~޹~[֛or\FoIN)?|5FHʣ{k [9ŤCEwM]ue%3 )3DRTr[-62_&lFBd<8 7|:ٔm*vtr gEqST}Z- Ji"иEB\EBnÔ@f:]o}dЭ{9͢2vLie$0§Q!b䑪8= 9'c*T&beAobMHq>_ D=;pѫB ,F,g&Jk8=ZNL˰,a9ϫU>c(H0Uy|R =I!mSL$r^qdRK",ǣQy*:5H}c1GE  b\ _w}ϜC,6l[ŕfę%-rܙy!3?%Wn]*nf `t]AS0EE.B*RRUc ArX Eƽ bxڠL.: ϩ8DG4FFZ8Upfm)]N#,ت}2D $U%Aayp SB %/ja< ~n_7'."atY/ɍwtMBffw*"ƫSW -oaYn @;,N8nmfz;LS'S9qqtIIF޹ۯ}yFu14Q r9G̩ ʓ9PvTM+(Wav{+] O˦i@l~vwm:k¡rdfH୫quHqFDM۴vn nrZY[cUQrPov>/UUal#BLfgA~~ssY~vrWPvKU-3a;Ӵ\rcw2TT ^ K4 1`,@yzDZkgjw]7³3c hIeA=cctj۶ʼ" XsCdvWSku OLo/yҜC?nZrΡ5)SL#ax<;[,z}}YX̉~a{uN쥗V˺sL! Zu}}}86M4cY]Uڋf}{_ˬYCjQVjUUUqpY?\bX=zn]]]>|ض-MS} 볳3")7"rvv7V0mZ!P? n~ܻwW^S#x8t1, CnL)vv"9 X̉W)O zY cuIoq]Mnzz^.*}q"`Esdh_&h|^_,|ħs_RҦ 9poׁRˤL.d`b2|; xU%ڜ~?S;s' OP$̡jACDm[՝yIiB ϪiiΜ1prJIㄐuϞ< )2l6wb_޴ɒ8sr\jA5@1"5$HG4&O}~2 Q"9!WwWUV]",Uh3Ҁ{F}w_#}]F73?XA޽C^ Jн4^< =M@ӞSZbl0nXcw]; )-pz1(TȇϿ+9MSy6]ׇy^´qC1sLi !, lyNyч`J9b?ەp>uPiGkn;No|z S^.mѸ<Ϟ={Yy=D,fӤ j7Zku:o6D ,pΥ.:%snz1fooRJa  X~W9.:kvgbX~_bWWWө%;~y~=Zz>Sξl60Nׯ_ov4M̧38[$t>nn@9v; CJ)4 gvsv[vÒb\Ylnon 9~wޥhl47~7lo mw͐!2'DN]P|I苤: n/m? 
MongoDB-v1.2.2/t/data/gridfs/input.txt000644 000765 000024 00000000011 12651754051 017751 0ustar00davidstaff000000 000000 abc zyw MongoDB-v1.2.2/t/data/CRUD/read/000755 000765 000024 00000000000 12651754051 016253 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/CRUD/README.rst000644 000765 000024 00000005443 12651754051 017035 0ustar00davidstaff000000 000000 ========== CRUD Tests ========== The YAML and JSON files in this directory tree are platform-independent tests meant to exercise the translation from the API to underlying commands that MongoDB understands. Given the variety of languages and implementations and limited nature of a description of a test, there are a number of things that aren't testable. For instance, none of these tests assert that maxTimeMS was properly sent to the server. This would involve a lot of infrastructure to define and setup. Therefore, these YAML tests are in no way a replacement for more thorough testing. 
However, they can provide an initial verification of your implementation. Converting to JSON ================== The tests are written in YAML because it is easier for humans to write and read, and because YAML includes a standard comment format. A JSONified version of each YAML file is included in this repository. Whenever you change the YAML, re-convert to JSON. One method to convert to JSON is using `yamljs `_:: npm install -g yamljs yaml2json -s -p -r . Version ======= Files in the "specifications" repository have no version scheme. They are not tied to a MongoDB server version, and it is our intention that each specification moves from "draft" to "final" with no further revisions; it is superseded by a future spec, not revised. However, implementers must have stable sets of tests to target. As test files evolve they will occasionally be tagged like "crud-tests-YYYY-MM-DD", until the spec is final. Format ====== Each YAML file has the following keys: - data: The data that should exist in the collection under test before each test run. - tests: An array of tests that are to be run independently of each other. Each test will have some or all of the following fields - description: The name of the test - operation: - name: The name of the operation as defined in the specification. - arguments: The names and values of arguments from the specification. - outcome: - result: The return value from the operation. - collection: - name: OPTIONAL: The collection name to verify. If this isn't present then use the collection under test. - data: The data that should exist in the collection after the operation has been run. Use as integration tests ======================== Running these as integration tests will require a running mongod server. Each of these tests is valid against a standalone mongod, a replica set, and a sharded system for server version 3.0.0. Many of them will run against 2.4 and 2.6, but some will require conditional code. For instance, $out is not supported in an aggregation pipeline in server 2.4, so that test must be skipped. 
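The "Format" section above specifies what a fixture contains but not how a driver harness consumes one. The following minimal Perl sketch shows one way these fixtures *could* be driven against this driver's CRUD methods; the YAML::XS loader, the hard-coded fixture path, the 'crud_spec' collection name, and the inline dispatch on the operation name are illustrative assumptions, not the actual harness shipped in t/::

    use strict;
    use warnings;
    use YAML::XS qw(LoadFile);   # assumed loader; the shipped harness may parse fixtures differently
    use MongoDB;

    # Load one fixture (path chosen for illustration)
    my $fixture = LoadFile('t/data/CRUD/write/deleteMany.yml');

    my $client = MongoDB::MongoClient->new( host => 'mongodb://localhost:27017' );
    my $coll   = $client->get_database('test')->get_collection('crud_spec');

    for my $test ( @{ $fixture->{tests} } ) {
        # Reset the collection to the fixture's seed data before each test
        $coll->drop;
        $coll->insert_many( $fixture->{data} );

        my $op   = $test->{operation};
        my $args = $op->{arguments};

        # Map the spec's camelCase operation name onto the driver's snake_case CRUD method
        if ( $op->{name} eq 'deleteMany' ) {
            my $result = $coll->delete_many( $args->{filter} );
            warn "$test->{description}: deletedCount mismatch\n"
                unless $result->deleted_count == $test->{outcome}{result}{deletedCount};
        }
        elsif ( $op->{name} eq 'insertOne' ) {
            $coll->insert_one( $args->{document} );
        }
        # ... updateMany, replaceOne, findOneAndUpdate, etc. follow the same pattern

        # When outcome->{collection} is present, compare $coll->find({})->all against its data
    }

Because a fixture only describes observable outcomes, assertions in such a harness are limited to the operation's result fields and the final collection contents, exactly as noted above.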
MongoDB-v1.2.2/t/data/CRUD/write/000755 000765 000024 00000000000 12651754051 016472 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/t/data/CRUD/write/deleteMany.json000644 000765 000024 00000002336 12651754051 021460 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "DeleteMany when many documents match", "operation": { "name": "deleteMany", "arguments": { "filter": { "_id": { "$gt": 1 } } } }, "outcome": { "result": { "deletedCount": 2 }, "collection": { "data": [ { "_id": 1, "x": 11 } ] } } }, { "description": "DeleteMany when no document matches", "operation": { "name": "deleteMany", "arguments": { "filter": { "_id": 4 } } }, "outcome": { "result": { "deletedCount": 0 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/deleteMany.yml000644 000765 000024 00000001501 12651754051 021301 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "DeleteMany when many documents match" operation: name: "deleteMany" arguments: filter: _id: {$gt: 1} outcome: result: deletedCount: 2 collection: data: - {_id: 1, x: 11} - description: "DeleteMany when no document matches" operation: name: "deleteMany" arguments: filter: {_id: 4} outcome: result: deletedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} MongoDB-v1.2.2/t/data/CRUD/write/deleteOne.json000644 000765 000024 00000003125 12651754051 021272 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "DeleteOne when many documents match", "operation": { "name": "deleteOne", "arguments": { "filter": { "_id": { "$gt": 1 } } } }, "outcome": { "result": { "deletedCount": 1 } } }, { "description": "DeleteOne when one document matches", "operation": { "name": "deleteOne", "arguments": { "filter": { "_id": 2 } } }, "outcome": { "result": { "deletedCount": 1 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 3, "x": 33 } ] } } }, { "description": "DeleteOne when no documents match", "operation": { "name": "deleteOne", "arguments": { "filter": { "_id": 4 } } }, "outcome": { "result": { "deletedCount": 0 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/deleteOne.yml000644 000765 000024 00000002312 12651754051 021117 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "DeleteOne when many documents match" operation: name: "deleteOne" arguments: filter: _id: {$gt: 1} outcome: result: deletedCount: 1 # can't verify collection because we don't have a way # of knowing which document gets deleted. 
- description: "DeleteOne when one document matches" operation: name: "deleteOne" arguments: filter: {_id: 2} outcome: result: deletedCount: 1 collection: data: - {_id: 1, x: 11} - {_id: 3, x: 33} - description: "DeleteOne when no documents match" operation: name: "deleteOne" arguments: filter: {_id: 4} outcome: result: deletedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} MongoDB-v1.2.2/t/data/CRUD/write/findOneAndDelete.json000644 000765 000024 00000004254 12651754051 022522 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "FindOneAndDelete when many documents match", "operation": { "name": "findOneAndDelete", "arguments": { "filter": { "_id": { "$gt": 1 } }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": { "x": 22 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndDelete when one document matches", "operation": { "name": "findOneAndDelete", "arguments": { "filter": { "_id": 2 }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": { "x": 22 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndDelete when no documents match", "operation": { "name": "findOneAndDelete", "arguments": { "filter": { "_id": 4 }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/findOneAndDelete.yml000644 000765 000024 00000002574 12651754051 022355 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "FindOneAndDelete when many documents match" operation: name: findOneAndDelete arguments: filter: _id: {$gt: 1} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 3, x: 33} - description: "FindOneAndDelete when one document matches" operation: name: findOneAndDelete arguments: filter: {_id: 2} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 3, x: 33} - description: "FindOneAndDelete when no documents match" operation: name: findOneAndDelete arguments: filter: {_id: 4} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33}MongoDB-v1.2.2/t/data/CRUD/write/findOneAndReplace.json000644 000765 000024 00000016065 12651754051 022676 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "FindOneAndReplace when many documents match returning the document before modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": { "$gt": 1 } }, "replacement": { "x": 32 }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": { "x": 22 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 32 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndReplace when many documents match returning the document after modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": { "$gt": 1 } }, "replacement": { "x": 32 }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 } } }, 
"outcome": { "result": { "x": 32 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 32 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndReplace when one document matches returning the document before modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": 2 }, "replacement": { "x": 32 }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": { "x": 22 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 32 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndReplace when one document matches returning the document after modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": 2 }, "replacement": { "x": 32 }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 } } }, "outcome": { "result": { "x": 32 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 32 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndReplace when no documents match returning the document before modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": 4 }, "replacement": { "x": 44 }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndReplace when no documents match with upsert returning the document before modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": 4 }, "replacement": { "x": 44 }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 }, "upsert": true } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 44 } ] } } }, { "description": "FindOneAndReplace when no documents match returning the document after modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": 4 }, "replacement": { "x": 44 }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 } } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndReplace when no documents match with upsert returning the document after modification", "operation": { "name": "findOneAndReplace", "arguments": { "filter": { "_id": 4 }, "replacement": { "x": 44 }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 }, "upsert": true } }, "outcome": { "result": { "x": 44 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 44 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/findOneAndReplace.yml000644 000765 000024 00000011224 12651754051 022516 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "FindOneAndReplace when many documents match returning the document before modification" operation: name: findOneAndReplace arguments: filter: _id: {$gt: 1} replacement: {x: 32} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when many documents match returning the document after modification" operation: name: findOneAndReplace arguments: filter: _id: {$gt: 1} replacement: {x: 32} projection: {x: 1, _id: 0} 
returnDocument: After sort: {x: 1} outcome: result: {x: 32} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when one document matches returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 2} replacement: {x: 32} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when one document matches returning the document after modification" operation: name: findOneAndReplace arguments: filter: {_id: 2} replacement: {x: 32} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: {x: 32} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when no documents match returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "FindOneAndReplace when no documents match with upsert returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} sort: {x: 1} upsert: true outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} - description: "FindOneAndReplace when no documents match returning the document after modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "FindOneAndReplace when no documents match with upsert returning the document after modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} upsert: true outcome: result: {x: 44} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44}MongoDB-v1.2.2/t/data/CRUD/write/findOneAndUpdate.json000644 000765 000024 00000016442 12651754051 022544 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "FindOneAndUpdate when many documents match returning the document before modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": { "$gt": 1 } }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": { "x": 22 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 23 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndUpdate when many documents match returning the document after modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": { "$gt": 1 } }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 } } }, "outcome": { "result": { "x": 23 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 23 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndUpdate when one document matches returning the document before modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": 2 }, "update": { "$inc": { "x": 1 } 
}, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": { "x": 22 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 23 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndUpdate when one document matches returning the document after modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": 2 }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 } } }, "outcome": { "result": { "x": 23 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 23 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndUpdate when no documents match returning the document before modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 } } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndUpdate when no documents match with upsert returning the document before modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "sort": { "x": 1 }, "upsert": true } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 1 } ] } } }, { "description": "FindOneAndUpdate when no documents match returning the document after modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 } } }, "outcome": { "result": null, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "FindOneAndUpdate when no documents match with upsert returning the document after modification", "operation": { "name": "findOneAndUpdate", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } }, "projection": { "x": 1, "_id": 0 }, "returnDocument": "After", "sort": { "x": 1 }, "upsert": true } }, "outcome": { "result": { "x": 1 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 1 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/findOneAndUpdate.yml000644 000765 000024 00000011451 12651754051 022367 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "FindOneAndUpdate when many documents match returning the document before modification" operation: name: findOneAndUpdate arguments: filter: _id: {$gt: 1} update: $inc: {x: 1} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 33} - description: "FindOneAndUpdate when many documents match returning the document after modification" operation: name: findOneAndUpdate arguments: filter: _id: {$gt: 1} update: $inc: {x: 1} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: {x: 23} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 33} - description: "FindOneAndUpdate when one document matches returning the document before modification" operation: name: findOneAndUpdate arguments: filter: {_id: 2} update: $inc: {x: 1} projection: {x: 1, _id: 0} 
sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 33} - description: "FindOneAndUpdate when one document matches returning the document after modification" operation: name: findOneAndUpdate arguments: filter: {_id: 2} update: $inc: {x: 1} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: {x: 23} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 33} - description: "FindOneAndUpdate when no documents match returning the document before modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "FindOneAndUpdate when no documents match with upsert returning the document before modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} sort: {x: 1} upsert: true outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} - description: "FindOneAndUpdate when no documents match returning the document after modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "FindOneAndUpdate when no documents match with upsert returning the document after modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} upsert: true outcome: result: {x: 1} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1}MongoDB-v1.2.2/t/data/CRUD/write/insertMany.json000644 000765 000024 00000001521 12651754051 021515 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 } ], "tests": [ { "description": "InsertMany with non-existing documents", "operation": { "name": "insertMany", "arguments": { "documents": [ { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } }, "outcome": { "result": { "insertedIds": [ 2, 3 ] }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/insertMany.yml000644 000765 000024 00000001054 12651754051 021346 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} tests: - description: "InsertMany with non-existing documents" operation: name: "insertMany" arguments: documents: - {_id: 2, x: 22} - {_id: 3, x: 33} outcome: result: insertedIds: - 2 - 3 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33}MongoDB-v1.2.2/t/data/CRUD/write/insertOne.json000644 000765 000024 00000001157 12651754051 021337 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 } ], "tests": [ { "description": "InsertOne with a non-existing document", "operation": { "name": "insertOne", "arguments": { "document": { "_id": 2, "x": 22 } } }, "outcome": { "result": { "insertedId": 2 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/insertOne.yml000644 000765 000024 00000000627 12651754051 021170 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} tests: - description: "InsertOne with a non-existing document" operation: name: "insertOne" arguments: document: {_id: 2, x: 22} outcome: result: insertedId: 2 
collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22}MongoDB-v1.2.2/t/data/CRUD/write/replaceOne.json000644 000765 000024 00000007225 12651754051 021450 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "ReplaceOne when many documents match", "operation": { "name": "replaceOne", "arguments": { "filter": { "_id": { "$gt": 1 } }, "replacement": { "x": 111 } } }, "outcome": { "result": { "matchedCount": 1, "modifiedCount": 1 } } }, { "description": "ReplaceOne when one document matches", "operation": { "name": "replaceOne", "arguments": { "filter": { "_id": 1 }, "replacement": { "_id": 1, "x": 111 } } }, "outcome": { "result": { "matchedCount": 1, "modifiedCount": 1 }, "collection": { "data": [ { "_id": 1, "x": 111 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "ReplaceOne when no documents match", "operation": { "name": "replaceOne", "arguments": { "filter": { "_id": 4 }, "replacement": { "_id": 4, "x": 1 } } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "ReplaceOne with upsert when no documents match without an id specified", "operation": { "name": "replaceOne", "arguments": { "filter": { "_id": 4 }, "replacement": { "x": 1 }, "upsert": true } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0, "upsertedId": 4 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 1 } ] } } }, { "description": "ReplaceOne with upsert when no documents match with an id specified", "operation": { "name": "replaceOne", "arguments": { "filter": { "_id": 4 }, "replacement": { "_id": 4, "x": 1 }, "upsert": true } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0, "upsertedId": 4 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 1 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/replaceOne.yml000644 000765 000024 00000005221 12651754051 021272 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "ReplaceOne when many documents match" operation: name: "replaceOne" arguments: filter: _id: {$gt: 1} replacement: {x: 111} outcome: result: matchedCount: 1 modifiedCount: 1 # can't verify collection because we don't have a way # of knowing which document gets updated. 
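  # Illustrative sketch only, not part of the spec fixture: assuming $coll is a
  # MongoDB::Collection handle bound to the seeded data above, the replaceOne
  # cases in this file correspond to driver calls such as
  #
  #   my $res = $coll->replace_one( { _id => { '$gt' => 1 } }, { x => 111 } );
  #   # $res->matched_count == 1, $res->modified_count == 1
  #
  # with { upsert => 1 } passed as a third options argument for the upsert cases.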
- description: "ReplaceOne when one document matches" operation: name: "replaceOne" arguments: filter: {_id: 1} replacement: {_id: 1, x: 111} outcome: result: matchedCount: 1 modifiedCount: 1 collection: data: - {_id: 1, x: 111} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "ReplaceOne when no documents match" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} outcome: result: matchedCount: 0 modifiedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "ReplaceOne with upsert when no documents match without an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} - description: "ReplaceOne with upsert when no documents match with an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} MongoDB-v1.2.2/t/data/CRUD/write/updateMany.json000644 000765 000024 00000006220 12651754051 021474 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "UpdateMany when many documents match", "operation": { "name": "updateMany", "arguments": { "filter": { "_id": { "$gt": 1 } }, "update": { "$inc": { "x": 1 } } } }, "outcome": { "result": { "matchedCount": 2, "modifiedCount": 2 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 23 }, { "_id": 3, "x": 34 } ] } } }, { "description": "UpdateMany when one document matches", "operation": { "name": "updateMany", "arguments": { "filter": { "_id": 1 }, "update": { "$inc": { "x": 1 } } } }, "outcome": { "result": { "matchedCount": 1, "modifiedCount": 1 }, "collection": { "data": [ { "_id": 1, "x": 12 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "UpdateMany when no documents match", "operation": { "name": "updateMany", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } } } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "UpdateMany with upsert when no documents match", "operation": { "name": "updateMany", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } }, "upsert": true } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0, "upsertedId": 4 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 1 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/updateMany.yml000644 000765 000024 00000004206 12651754051 021326 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "UpdateMany when many documents match" operation: name: "updateMany" arguments: filter: _id: {$gt: 1} update: $inc: {x: 1} outcome: result: matchedCount: 2 modifiedCount: 2 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 34} - description: "UpdateMany when one document matches" operation: name: "updateMany" arguments: filter: {_id: 1} update: $inc: {x: 1} outcome: result: matchedCount: 1 modifiedCount: 1 collection: data: - {_id: 1, x: 12} - {_id: 2, x: 22} - 
{_id: 3, x: 33} - description: "UpdateMany when no documents match" operation: name: "updateMany" arguments: filter: {_id: 4} update: $inc: {x: 1} outcome: result: matchedCount: 0 modifiedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateMany with upsert when no documents match" operation: name: "updateMany" arguments: filter: {_id: 4} update: $inc: {x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} MongoDB-v1.2.2/t/data/CRUD/write/updateOne.json000644 000765 000024 00000005545 12651754051 021322 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "UpdateOne when many documents match", "operation": { "name": "updateOne", "arguments": { "filter": { "_id": { "$gt": 1 } }, "update": { "$inc": { "x": 1 } } } }, "outcome": { "result": { "matchedCount": 1, "modifiedCount": 1 } } }, { "description": "UpdateOne when one document matches", "operation": { "name": "updateOne", "arguments": { "filter": { "_id": 1 }, "update": { "$inc": { "x": 1 } } } }, "outcome": { "result": { "matchedCount": 1, "modifiedCount": 1 }, "collection": { "data": [ { "_id": 1, "x": 12 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "UpdateOne when no documents match", "operation": { "name": "updateOne", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } } } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } }, { "description": "UpdateOne with upsert when no documents match", "operation": { "name": "updateOne", "arguments": { "filter": { "_id": 4 }, "update": { "$inc": { "x": 1 } }, "upsert": true } }, "outcome": { "result": { "matchedCount": 0, "modifiedCount": 0, "upsertedId": 4 }, "collection": { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 1 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/write/updateOne.yml000644 000765 000024 00000004126 12651754051 021144 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "UpdateOne when many documents match" operation: name: "updateOne" arguments: filter: _id: {$gt: 1} update: $inc: {x: 1} outcome: result: matchedCount: 1 modifiedCount: 1 # can't verify collection because we don't have a way # of knowing which document gets updated. 
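  # Illustrative sketch only, not part of the spec fixture: assuming $coll is a
  # MongoDB::Collection handle, these cases exercise update_one, e.g.
  #
  #   my $res = $coll->update_one( { _id => { '$gt' => 1 } }, { '$inc' => { x => 1 } } );
  #   # $res->matched_count == 1, $res->modified_count == 1
  #
  # The upsert case passes { upsert => 1 } as the third (options) argument and
  # checks $res->upserted_id.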
- description: "UpdateOne when one document matches" operation: name: "updateOne" arguments: filter: {_id: 1} update: $inc: {x: 1} outcome: result: matchedCount: 1 modifiedCount: 1 collection: data: - {_id: 1, x: 12} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateOne when no documents match" operation: name: "updateOne" arguments: filter: {_id: 4} update: $inc: {x: 1} outcome: result: matchedCount: 0 modifiedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateOne with upsert when no documents match" operation: name: "updateOne" arguments: filter: {_id: 4} update: $inc: {x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} MongoDB-v1.2.2/t/data/CRUD/read/aggregate.json000644 000765 000024 00000003361 12651754051 021077 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "Aggregate with multiple stages", "operation": { "name": "aggregate", "arguments": { "pipeline": [ { "$sort": { "x": 1 } }, { "$match": { "_id": { "$gt": 1 } } } ], "batchSize": 2 } }, "outcome": { "result": [ { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } }, { "description": "Aggregate with $out", "operation": { "name": "aggregate", "arguments": { "pipeline": [ { "$sort": { "x": 1 } }, { "$match": { "_id": { "$gt": 1 } } }, { "$out": "other_test_collection" } ], "batchSize": 2 } }, "outcome": { "result": [ { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "collection": { "name": "other_test_collection", "data": [ { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ] } } } ] }MongoDB-v1.2.2/t/data/CRUD/read/aggregate.yml000644 000765 000024 00000002070 12651754051 020723 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Aggregate with multiple stages" operation: name: aggregate arguments: pipeline: - $sort: {x: 1} - $match: _id: {$gt: 1} batchSize: 2 outcome: result: - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "Aggregate with $out" operation: name: aggregate arguments: pipeline: - $sort: {x: 1} - $match: _id: {$gt: 1} - $out: "other_test_collection" batchSize: 2 outcome: result: - {_id: 2, x: 22} - {_id: 3, x: 33} collection: name: "other_test_collection" data: - {_id: 2, x: 22} - {_id: 3, x: 33}MongoDB-v1.2.2/t/data/CRUD/read/count.json000644 000765 000024 00000001642 12651754051 020301 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "Count without a filter", "operation": { "name": "count", "arguments": { "filter": {} } }, "outcome": { "result": 3 } }, { "description": "Count with a filter", "operation": { "name": "count", "arguments": { "filter": { "_id": { "$gt": 1 } } } }, "outcome": { "result": 2 } }, { "description": "Count with skip and limit", "operation": { "name": "count", "arguments": { "filter": {}, "skip": 1, "limit": 3 } }, "outcome": { "result": 2 } } ] }MongoDB-v1.2.2/t/data/CRUD/read/count.yml000644 000765 000024 00000001314 12651754051 020125 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Count without a filter" operation: name: count arguments: filter: { } outcome: result: 3 - description: "Count with a filter" operation: name: count arguments: filter: _id: {$gt: 1} outcome: result: 2 - description: "Count with skip 
and limit" operation: name: count arguments: filter: {} skip: 1 limit: 3 outcome: result: 2MongoDB-v1.2.2/t/data/CRUD/read/distinct.json000644 000765 000024 00000001466 12651754051 020776 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 } ], "tests": [ { "description": "Distinct without a filter", "operation": { "name": "distinct", "arguments": { "fieldName": "x", "filter": {} } }, "outcome": { "result": [ 11, 22, 33 ] } }, { "description": "Distinct with a filter", "operation": { "name": "distinct", "arguments": { "fieldName": "x", "filter": { "_id": { "$gt": 1 } } } }, "outcome": { "result": [ 22, 33 ] } } ] }MongoDB-v1.2.2/t/data/CRUD/read/distinct.yml000644 000765 000024 00000001217 12651754051 020620 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Distinct without a filter" operation: name: distinct arguments: fieldName: "x" filter: {} outcome: result: - 11 - 22 - 33 - description: "Distinct with a filter" operation: name: distinct arguments: fieldName: "x" filter: _id: {$gt: 1} outcome: result: - 22 - 33MongoDB-v1.2.2/t/data/CRUD/read/find.json000644 000765 000024 00000003167 12651754051 020075 0ustar00davidstaff000000 000000 { "data": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 44 }, { "_id": 5, "x": 55 } ], "tests": [ { "description": "Find with filter", "operation": { "name": "find", "arguments": { "filter": { "_id": 1 } } }, "outcome": { "result": [ { "_id": 1, "x": 11 } ] } }, { "description": "Find with filter, sort, skip, and limit", "operation": { "name": "find", "arguments": { "filter": { "_id": { "$gt": 2 } }, "sort": { "_id": 1 }, "skip": 2, "limit": 2 } }, "outcome": { "result": [ { "_id": 5, "x": 55 } ] } }, { "description": "Find with limit, sort, and batchsize", "operation": { "name": "find", "arguments": { "filter": {}, "sort": { "_id": 1 }, "limit": 4, "batchSize": 2 } }, "outcome": { "result": [ { "_id": 1, "x": 11 }, { "_id": 2, "x": 22 }, { "_id": 3, "x": 33 }, { "_id": 4, "x": 44 } ] } } ] }MongoDB-v1.2.2/t/data/CRUD/read/find.yml000644 000765 000024 00000002130 12651754051 017712 0ustar00davidstaff000000 000000 data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} - {_id: 5, x: 55} tests: - description: "Find with filter" operation: name: "find" arguments: filter: {_id: 1} outcome: result: - {_id: 1, x: 11} - description: "Find with filter, sort, skip, and limit" operation: name: "find" arguments: filter: _id: {$gt: 2} sort: {_id: 1} skip: 2 limit: 2 outcome: result: - {_id: 5, x: 55} - description: "Find with limit, sort, and batchsize" operation: name: "find" arguments: filter: {} sort: {_id: 1} limit: 4 batchSize: 2 outcome: result: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} MongoDB-v1.2.2/t/bson_codec/booleans.t000644 000765 000024 00000003377 12651754051 017771 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Test::Deep; use MongoDB; use boolean; use lib "t/lib"; use TestBSON; my $class = "MongoDB::BSON"; require_ok($class); my $codec = new_ok( $class, [], "new with no args" ); my @cases = qw( boolean JSON::PP Types::Serialiser Cpanel::JSON::XS Mojo::JSON JSON::Tiny ); for my $c (@cases) { subtest "class $c" => sub { plan skip_all => "requires $c" unless eval "require $c; 1"; my $input = [ true => eval "${c}::true()", false => eval "${c}::false()", ]; my $bson = _doc( BSON_BOOL . _ename("true") . "\x01" . BSON_BOOL . _ename("false") . "\x00" ); my $output = { true => boolean::true, false => boolean::false, }; my $encoded = $codec->encode_one( $input, {} ); is_bin( $encoded, $bson, "encode_one" ); my $decoded = $codec->decode_one( $encoded, {} ); cmp_deeply( $decoded, $output, "decode_one" ) or diag "GOT:", _hexdump( explain($decoded) ), "EXPECTED:", _hexdump( explain($output) ); } } done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/bson_codec/containers.t000644 000765 000024 00000015452 12651754051 020331 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. use strict; use warnings; use Test::More 0.96; use Test::Fatal; use Test::Deep; use Tie::IxHash; use MongoDB; use MongoDB::OID; use lib 't/lib'; use TestBSON; my $oid = MongoDB::OID->new("554ce5e4096df3be01323321"); my $bin_oid = pack( "C*", map hex($_), unpack( "(a2)12", "$oid" ) ); my $class = "MongoDB::BSON"; require_ok($class); my $codec = new_ok( $class, [], "new with no args" ); my @cases = ( { label => "empty doc", opts => {}, input => [], bson => _doc(""), }, { label => "BSON double", opts => {}, input => [ a => 1.23 ], bson => _doc( BSON_DOUBLE . _ename("a") . _double(1.23) ), }, { label => "BSON string", opts => {}, input => [ a => 'b' ], bson => _doc( BSON_STRING . _ename("a") . _string("b") ), }, { label => "BSON OID", opts => {}, input => [ _id => $oid ], bson => _doc( BSON_OID . _ename("_id") . $bin_oid ), }, { label => "add _id", opts => { first_key => '_id', first_value => $oid, }, input => [], bson => _doc( BSON_OID . _ename("_id") . $bin_oid ), }, { label => "add _id, ignore existing", opts => { first_key => '_id', first_value => $oid, }, input => [ _id => "12345" ], bson => _doc( BSON_OID . _ename("_id") . $bin_oid ), }, { label => "add _id with null", opts => { first_key => '_id', }, input => [ _id => "12345" ], bson => _doc( BSON_NULL . _ename("_id") ), }, { label => "empty key is error", opts => {}, input => [ "" => "12345" ], error => qr/empty key name/, }, { label => "dot in key is normally valid", opts => {}, input => [ "a.b" => "c" ], bson => _doc( BSON_STRING . _ename("a.b") . _string("c") ), }, { label => "dot in key fails invalid check", opts => { invalid_chars => '.' }, input => [ "a.b" => "c" ], error => qr/cannot contain the '\.' 
character/, }, { label => "dot in key fails multi invalid chars", opts => { invalid_chars => '_$' }, input => [ '$ab' => "c" ], error => qr/cannot contain the '\$' character/, }, { label => "op_char replacement", opts => { op_char => '-' }, input => [ '-a' => "c" ], bson => _doc( BSON_STRING . _ename('$a') . _string("c") ), }, { label => "op_char change before invalid check", opts => { op_char => '.', invalid_chars => '.' }, input => [ '.a' => "c" ], bson => _doc( BSON_STRING . _ename('$a') . _string("c") ), }, { label => "op_char and invalid check ignore empty string", opts => { op_char => '', invalid_chars => '' }, input => [ '.a' => "c" ], bson => _doc( BSON_STRING . _ename('.a') . _string("c") ), }, { label => "prefer_numeric false", opts => {}, input => [ a => "1.23" ], bson => _doc( BSON_STRING . _ename("a") . _string("1.23") ), }, { label => "prefer_numeric true", opts => { prefer_numeric => 1 }, input => [ a => "1.23" ], bson => _doc( BSON_DOUBLE . _ename("a") . _double(1.23) ), }, { label => "BSON too long", opts => { max_length => 2 }, input => [ 'a' => 'b' ], error => qr/exceeds maximum size 2/, }, { label => "BSON too long", opts => { invalid_chars => '.', error_callback => sub { die "Bad $_[1]: $_[0]" }, }, input => [ 'a.b' => 'b' ], error => qr/Bad (?:[A-Za-z:]+=)?\w+\(0x[a-f0-9]+\):.*the '\.' character/, }, ); for my $c (@cases) { if ( $c->{bson} ) { valid_case($c); } elsif ( $c->{error} ) { error_case($c); } else { die "Unknown case type for '$c->{label}'"; } } # have to check one-off as we won't get this via a round-trip { my $bson = _doc( BSON_STRING . _ename("a") . _string("a"x20) ); like( exception { $codec->decode_one( $bson, { max_length => 5 } ) }, qr/exceeds maximum size 5/, "decode exceeding max_length throws error" ); } # array documents can't have duplicate keys { like( exception { $codec->encode_one( [ x => 1, y => 2, z => 3, y => 4 ] ) }, qr/duplicate key 'y'/, "duplicate key in array document is fatal" ); } #--------------------------------------------------------------------------# # support functions #--------------------------------------------------------------------------# sub valid_case { my $c = shift; my ( $label, $input, $bson, $opts ) = @{$c}{qw/label input bson opts/}; my ( $doc, $got ); subtest $label => sub { # hash style $doc = {@$input}; $got = $codec->encode_one( $doc, $opts ); is_bin( $got, $bson, "encode_one( HASH )" ); cmp_deeply( $doc, {@$input}, "doc unmodified" ); # array style $doc = [@$input]; $got = $codec->encode_one( $doc, $opts ); is_bin( $got, $bson, "encode_one( ARRAY )" ); cmp_deeply( $doc, [@$input], "doc unmodified" ); # IxHash $doc = Tie::IxHash->new(@$input); $got = $codec->encode_one( $doc, $opts ); is_bin( $got, $bson, "encode_one( IxHash )" ); cmp_deeply( $doc, Tie::IxHash->new(@$input), "doc unmodified" ); }; } sub error_case { my $c = shift; my ( $label, $input, $error, $opts ) = @{$c}{qw/label input error opts/}; my ( $doc, $got ); subtest $label => sub { # hash style $doc = {@$input}; like( exception { $got = $codec->encode_one( $doc, $opts ) }, $error, "exception for HASH" ); # array style $doc = [@$input]; like( exception { $got = $codec->encode_one( $doc, $opts ) }, $error, "exception for ARRAY" ); # IxHash $doc = Tie::IxHash->new(@$input); like( exception { $got = $codec->encode_one( $doc, $opts ) }, $error, "exception for Tie::IxHash" ); }; } done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/bson_codec/elements.t000644 000765 000024 00000020240 12651754051 017767 0ustar00davidstaff000000 000000 # # 
Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. use strict; use warnings; use Test::More 0.96; use Test::Deep 0.086; # num() function use Test::Fatal; use Config; use DateTime; use Math::BigInt; use MongoDB; use MongoDB::OID; use MongoDB::DBRef; use lib 't/lib'; use TestBSON; use constant HAS_DATETIME_TINY => eval { require DateTime::Tiny; 1 }; my $oid = MongoDB::OID->new("554ce5e4096df3be01323321"); my $bin_oid = pack( "C*", map hex($_), unpack( "(a2)12", "$oid" ) ); my $regexp = MongoDB::BSON::Regexp->new( pattern => "abcd", flags => "ismx" ); my $dt = DateTime->new( year => 1984, month => 10, day => 16, hour => 16, minute => 12, second => 47, nanosecond => 500_000_000, time_zone => 'UTC', ); my $dt_epoch_fraction = $dt->epoch + $dt->nanosecond / 1e9; my $dtt; $dtt = DateTime::Tiny->new( year => 1984, month => 10, day => 16, hour => 16, minute => 12, second => 47, ) if HAS_DATETIME_TINY; my $dbref = MongoDB::DBRef->new( db => 'test', ref => 'test_coll', id => '123' ); my $dbref_cb = sub { my $hr = shift; return [ map { $_ => $hr->{$_} } sort keys %$hr ]; }; my $class = "MongoDB::BSON"; require_ok($class); my $codec = new_ok( $class, [], "new with no args" ); my @cases = ( { label => "BSON double", input => { a => 1.23 }, bson => _doc( BSON_DOUBLE . _ename("a") . _double(1.23) ), output => { a => num( 1.23, 1e-6 ) }, }, { label => "BSON string", input => { a => 'b' }, bson => _doc( BSON_STRING . _ename("a") . _string("b") ), output => { a => 'b' }, }, { label => "BSON OID", input => { _id => $oid }, bson => _doc( BSON_OID . _ename("_id") . $bin_oid ), output => { _id => $oid }, }, { label => "BSON Regexp (qr to obj)", input => { re => qr/abcd/imsx }, bson => _doc( BSON_REGEXP . _ename("re") . _regexp( 'abcd', 'imsx' ) ), output => { re => $regexp }, }, { label => "BSON Regexp (obj to obj)", input => { re => $regexp }, bson => _doc( BSON_REGEXP . _ename("re") . _regexp( 'abcd', 'imsx' ) ), output => { re => $regexp }, }, { label => "BSON Datetime from DateTime to raw", input => { a => $dt }, bson => _doc( BSON_DATETIME . _ename("a") . _datetime($dt) ), dec_opts => { dt_type => undef }, output => { a => $dt_epoch_fraction }, }, { label => "BSON Datetime from DateTime to DateTime", input => { a => $dt }, bson => _doc( BSON_DATETIME . _ename("a") . _datetime($dt) ), dec_opts => { dt_type => "DateTime" }, output => { a => DateTime->from_epoch( epoch => $dt_epoch_fraction ) }, }, { label => "BSON DBRef to unblessed", input => { a => $dbref }, bson => _doc( BSON_DOC . _ename("a") . _dbref($dbref) ), output => { a => { '$ref' => $dbref->ref, '$id' => $dbref->id, '$db' => $dbref->db } }, }, { label => "BSON DBRef to arrayref", input => { a => $dbref }, bson => _doc( BSON_DOC . _ename("a") . _dbref($dbref) ), dec_opts => { dbref_callback => $dbref_cb }, output => { a => [ '$db' => $dbref->db, '$id' => $dbref->id, '$ref' => $dbref->ref ] }, }, { label => "BSON Int32", input => { a => 66 }, bson => _doc( BSON_INT32 . _ename("a") . 
_int32(66) ), output => { a => 66 }, }, { label => "BSON Int32 (max 32 bit int)", input => { a => MAX_LONG }, bson => _doc( BSON_INT32 . _ename("a") . _int32(MAX_LONG) ), output => { a => MAX_LONG }, }, { label => "BSON Int32 (min 32 bit int)", input => { a => MIN_LONG }, bson => _doc( BSON_INT32 . _ename("a") . _int32(MIN_LONG) ), output => { a => MIN_LONG }, }, ); if (HAS_DATETIME_TINY) { push @cases, { label => "BSON Datetime from DateTime::Tiny to DateTime::Tiny", input => { a => $dtt }, bson => _doc( BSON_DATETIME . _ename("a") . _datetime( $dtt->DateTime ) ), dec_opts => { dt_type => "DateTime::Tiny" }, output => { a => $dtt }, }; } if (HAS_INT64) { my $big = 20 << 40; push @cases, { label => "BSON Int64", input => { a => $big }, bson => _doc( BSON_INT64 . _ename("a") . _int64($big) ), output => { a => $big }, }, { label => "BSON Int64 (small bigint)", input => { a => Math::BigInt->new(66) }, bson => _doc( BSON_INT64 . _ename("a") . _int64(66) ), output => { a => 66 }, }, { label => "BSON Int64 (64 bit pos igint)", input => { a => Math::BigInt->new( MAX_LONG + 1 ) }, bson => _doc( BSON_INT64 . _ename("a") . _int64( MAX_LONG + 1 ) ), output => { a => MAX_LONG + 1 }, }, { label => "BSON Int64 (64 bit neg bigint)", input => { a => Math::BigInt->new( MIN_LONG - 1 ) }, bson => _doc( BSON_INT64 . _ename("a") . _int64( MIN_LONG - 1 ) ), output => { a => MIN_LONG - 1 }, }; } else { my $neg_sml = Math::BigInt->new( -66 ); my $pos_sml = Math::BigInt->new( 66 ); my $neg_big = Math::BigInt->new( MIN_LONG )->bsub ( 1 ); my $pos_big = Math::BigInt->new( MAX_LONG )->badd ( 1 ); push @cases, { label => "BSON Int64 (small pos bigint)", input => { a => $pos_sml }, bson => _doc( BSON_INT64 . _ename("a") . _int64(66) ), output => { a => $pos_sml }, }, { label => "BSON Int64 (small neg bigint)", input => { a => $neg_sml }, bson => _doc( BSON_INT64 . _ename("a") . _int64(-66) ), output => { a => $neg_sml }, }, { label => "BSON Int64 (64 bit pos bigint)", input => { a => $pos_big }, bson => _doc( BSON_INT64 . _ename("a") . _int64( $pos_big ) ), output => { a => $pos_big }, }, { label => "BSON Int64 (64 bit neg bigint)", input => { a => $neg_big }, bson => _doc( BSON_INT64 . _ename("a") . 
_int64( $neg_big ) ), output => { a => $neg_big }, }; } for my $c (@cases) { my ( $label, $input, $bson, $output ) = @{$c}{qw/label input bson output/}; my $encoded = $codec->encode_one( $input, $c->{enc_opts} || {} ); is_bin( $encoded, $bson, "$label: encode_one" ); if ($output) { my $decoded = $codec->decode_one( $encoded, $c->{dec_opts} || {} ); cmp_deeply( $decoded, $output, "$label: decode_one" ) or diag "GOT:", explain($decoded), "EXPECTED:", explain($output); } } subtest "bigint over/underflow" => sub { # these are greater/less than LLONG_MAX/MIN my $too_big = Math::BigInt->new("9223372036854775808"); my $too_small = Math::BigInt->new("-9223372036854775809"); for my $data ( $too_big, $too_small ) { like( exception { $codec->encode_one( { a => $data } ) }, qr/Math::BigInt '-?\d+' can't fit into a 64-bit integer/, "error encoding $data" ); } }; subtest "unhandled types" => sub { my $hashtypeobj = bless { b => 'c' }, "HashTypeObject"; my $scalarobj = bless \(my $n=1), "ScalarTypeObject"; like( exception { $codec->encode_one( { a => $scalarobj } ) }, qr/type \(ScalarTypeObject\) unhandled/, "scalar ref class" ); like( exception { $codec->encode_one( { a => $hashtypeobj } ) }, qr/type \(HashTypeObject\) unhandled/, "hash ref class (as element)" ); }; done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/t/bson_codec/time_moment.t000644 000765 000024 00000004565 12651754051 020504 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. use strict; use warnings; use Test::More 0.96; use Test::Deep 0.086; # num() function use Test::Fatal; use Config; use Math::BigInt; use MongoDB; use lib "t/lib"; use TestBSON; plan skip_all => "Requires Time::Moment" unless eval { require Time::Moment; 1 }; require DateTime; my $dt = DateTime->new( year => 1984, month => 10, day => 16, hour => 16, minute => 12, second => 47, nanosecond => 500_000_000, time_zone => 'UTC', ); my $tm = Time::Moment->from_object($dt); my $dt_epoch_fraction = $dt->epoch + $dt->nanosecond / 1e9; my $class = "MongoDB::BSON"; require_ok($class); my $codec = new_ok( $class, [], "new with no args" ); my @cases = ( { label => "BSON Datetime from DateTime to Time::Moment", input => { a => $dt }, bson => _doc( BSON_DATETIME . _ename("a") . _datetime($dt) ), dec_opts => { dt_type => "Time::Moment" }, output => { a => $tm }, }, { label => "BSON Datetime from Time::Moment to Time::Moment", input => { a => $tm }, bson => _doc( BSON_DATETIME . _ename("a") . 
_datetime($dt) ), dec_opts => { dt_type => "Time::Moment" }, output => { a => $tm }, }, ); for my $c (@cases) { my ( $label, $input, $bson, $output ) = @{$c}{qw/label input bson output/}; my $encoded = $codec->encode_one( $input, $c->{enc_opts} || {} ); is_bin( $encoded, $bson, "$label: encode_one" ); if ($output) { my $decoded = $codec->decode_one( $encoded, $c->{dec_opts} || {} ); cmp_deeply( $decoded, $output, "$label: decode_one" ) or diag "GOT:", _hexdump( explain($decoded) ), "EXPECTED:", _hexdump( explain($output) ); } } done_testing; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/lib/MongoDB/000755 000765 000024 00000000000 12651754051 015462 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/lib/MongoDB.pm000644 000765 000024 00000027273 12651754051 016033 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use 5.008; use strict; use warnings; package MongoDB; # ABSTRACT: Official MongoDB Driver for Perl use version; our $VERSION = 'v1.2.2'; # regexp_pattern was unavailable before 5.10, had to be exported to load the # function implementation on 5.10, and was automatically available in 5.10.1 use if ($] eq '5.010000'), 're', 'regexp_pattern'; use Carp (); use MongoDB::BSON; use MongoDB::MongoClient; use MongoDB::Database; use MongoDB::Collection; use MongoDB::DBRef; use MongoDB::OID; use MongoDB::Timestamp; use MongoDB::BSON::Binary; use MongoDB::BSON::Regexp; use MongoDB::BulkWrite; use MongoDB::_Link; use MongoDB::_Protocol; *read_documents = \&MongoDB::BSON::decode_bson; # regexp_pattern was unavailable before 5.10, had to be exported to load the # function implementation on 5.10, and was automatically available in 5.10.1 if ( $] eq '5.010' ) { require re; re->import('regexp_pattern'); } #pod =method connect #pod #pod $client = MongoDB->connect(); # localhost, port 27107 #pod $client = MongoDB->connect($host_uri); #pod $client = MongoDB->connect($host_uri, $options); #pod #pod This function returns a L object. The first parameter is #pod used as the C argument and must be a host name or L. The second argument is #pod optional. If provided, it must be a hash reference of constructor arguments #pod for L. #pod #pod If an error occurs, a L object will be thrown. #pod #pod B: To connect to a replica set, a replica set name must be provided. 
#pod For example, if the set name is "setA": #pod #pod $client = MongoDB->connect("mongodb://example.com/?replicaSet=setA"); #pod #pod =cut sub connect { my ($class, $host, $options) = @_; $host ||= "mongodb://localhost"; $options ||= {}; $options->{host} = $host; return MongoDB::MongoClient->new( $options ); } sub force_double { if ( ref $_[0] ) { Carp::croak("Can't force a reference into a double"); } return $_[0] = unpack("d",pack("d", $_[0])); } sub force_int { if ( ref $_[0] ) { Carp::croak("Can't force a reference into an int"); } return $_[0] = int($_[0]); } 1; =pod =encoding UTF-8 =head1 NAME MongoDB - Official MongoDB Driver for Perl =head1 VERSION version v1.2.2 =head1 SYNOPSIS use MongoDB; my $client = MongoDB->connect('mongodb://localhost'); my $collection = $client->ns('foo.bar'); # database foo, collection bar my $result = $collection->insert_one({ some => 'data' }); my $data = $collection->find_one({ _id => $result->inserted_id }); =head1 DESCRIPTION This is the official Perl driver for L. MongoDB is an open-source document database that provides high performance, high availability, and easy scalability. A MongoDB server (or multi-server deployment) hosts a number of databases. A database holds a set of collections. A collection holds a set of documents. A document is a set of key-value pairs. Documents have dynamic schema. Using dynamic schema means that documents in the same collection do not need to have the same set of fields or structure, and common fields in a collection's documents may hold different types of data. Here are some resources for learning more about MongoDB: =over 4 =item * L =item * L =item * L =back To get started with the Perl driver, see these pages: =over 4 =item * L =item * L =back Extensive documentation and support resources are available via the L. =head1 USAGE The MongoDB driver is organized into a set of classes representing different levels of abstraction and functionality. As a user, you first create and configure a L object to connect to a MongoDB deployment. From that client object, you can get a L object for interacting with a specific database. From a database object, you can get a L object for CRUD operations on that specific collection, or a L object for working with an abstract file system hosted on the database. Each of those classes may return other objects for specific features or functions. See the documentation of those classes for more details or the L for an example. =head2 Error handling Unless otherwise documented, errors result in fatal exceptions. See L for a list of exception classes and error code constants. =head1 METHODS =head2 connect $client = MongoDB->connect(); # localhost, port 27107 $client = MongoDB->connect($host_uri); $client = MongoDB->connect($host_uri, $options); This function returns a L object. The first parameter is used as the C argument and must be a host name or L. The second argument is optional. If provided, it must be a hash reference of constructor arguments for L. If an error occurs, a L object will be thrown. B: To connect to a replica set, a replica set name must be provided. For example, if the set name is "setA": $client = MongoDB->connect("mongodb://example.com/?replicaSet=setA"); =for Pod::Coverage force_double force_int read_documents =head1 SEMANTIC VERSIONING SCHEME Starting with MongoDB C, the driver reverts to the more familiar three-part version-tuple numbering scheme used by both Perl and MongoDB: C =over 4 =item * C will be incremented for incompatible API changes. 
=item * Even-value increments of C indicate stable releases with new functionality. C will be incremented for bug fixes. =item * Odd-value increments of C indicate unstable ("development") releases that should not be used in production. C increments have no semantic meaning; they indicate only successive development releases. =back See the Changes file included with releases for an indication of the nature of changes involved. =head1 ENVIRONMENT VARIABLES If the C environment variable is true before the MongoDB module is loaded, then its various classes will be generated with internal type assertions enabled. This has a severe performance cost and is not recommended for production use. It may be useful in diagnosing bugs. =for :stopwords cpan testmatrix url annocpan anno bugtracker rt cpants kwalitee diff irc mailto metadata placeholders metacpan =head1 SUPPORT =head2 Bugs / Feature Requests Please report any bugs or feature requests through the issue tracker at L. You will be notified automatically of any progress on your issue. =head2 Source Code This is open source software. The code repository is available for public review and contribution under the terms of the license. L git clone https://github.com/mongodb/mongo-perl-driver.git =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 CONTRIBUTORS =for stopwords Andrew Page Andrey Khozov Ashley Willis Ask Bjørn Hansen Bernard Gorman Brendan W. McAdams Casey Rojas Christian Sturm Walde Colin Cyr Danny Raetzsch David Morrison Nadle Steinbrunner Storch D. Ilmari Mannsåker Eric Daniels Gerard Goossen Glenn Fowler Graham Barr Hao Wu Jason Carey Toffaletti Johann Rolschewski Joseph Harnish Josh Matthews Joshua Juran J. Stewart Kamil Slowikowski Ken Williams Matthew Shopsin Michael Langner Rotmanov Mike Dirolf Mohammad S Anwar Nickola Trupcheff Nigel Gregoire Niko Tyni Nuno Carvalho Orlando Vazquez Othello Maurer Pan Fan Rahul Dhodapkar Robin Lee Roman Yerin Ronald J Kimball Ryan Chipman Stephen Oberholtzer Steve Sanbeg Stuart Watt Uwe Voelker Whitney Jackson Xtreak Zhihong Zhang =over 4 =item * Andrew Page =item * Andrey Khozov =item * Ashley Willis =item * Ask Bjørn Hansen =item * Bernard Gorman =item * Brendan W. McAdams =item * Casey Rojas =item * Christian Hansen =item * Christian Sturm =item * Christian Walde =item * Colin Cyr =item * Danny Raetzsch =item * David Morrison =item * David Nadle =item * David Steinbrunner =item * David Storch =item * D. Ilmari Mannsåker =item * Eric Daniels =item * Gerard Goossen =item * Glenn Fowler =item * Graham Barr =item * Hao Wu =item * Jason Carey =item * Jason Toffaletti =item * Johann Rolschewski =item * Joseph Harnish =item * Josh Matthews =item * Joshua Juran =item * J. Stewart =item * Kamil Slowikowski =item * Ken Williams =item * Matthew Shopsin =item * Michael Langner =item * Michael Rotmanov =item * Mike Dirolf =item * Mohammad S Anwar =item * Nickola Trupcheff =item * Nigel Gregoire =item * Niko Tyni =item * Nuno Carvalho =item * Orlando Vazquez =item * Othello Maurer =item * Pan Fan =item * Rahul Dhodapkar =item * Robin Lee =item * Roman Yerin =item * Ronald J Kimball =item * Ryan Chipman =item * Stephen Oberholtzer =item * Steve Sanbeg =item * Stuart Watt =item * Uwe Voelker =item * Whitney Jackson =item * Xtreak =item * Zhihong Zhang =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/lib/MongoDB/_Constants.pm000644 000765 000024 00000003373 12651754051 020141 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use 5.008; use strict; use warnings; package MongoDB::_Constants; # Common MongoDB driver constants use version; our $VERSION = 'v1.2.2'; use Exporter 5.57 qw/import/; use Config; my $CONSTANTS; BEGIN { $CONSTANTS = { COOLDOWN_SECS => 5, CURSOR_ZERO => "\0" x 8, EPOCH => 0, HAS_INT64 => $Config{use64bitint}, MAX_BSON_OBJECT_SIZE => 4_194_304, MAX_BSON_WIRE_SIZE => 16_793_600, # 16MiB + 16KiB MAX_WIRE_VERSION => 3, MAX_WRITE_BATCH_SIZE => 1000, MIN_HEARTBEAT_FREQUENCY_SEC => .5, MIN_HEARTBEAT_FREQUENCY_USEC => 500_000, # 500ms, not configurable MIN_KEYED_DOC_LENGTH => 8, MIN_WIRE_VERSION => 0, NO_JOURNAL_RE => qr/^journaling not enabled/, NO_REPLICATION_RE => qr/^no replication has been enabled/, P_INT32 => $] lt '5.010' ? 'l' : 'l<', WITH_ASSERTS => $ENV{PERL_MONGO_WITH_ASSERTS}, }; } use constant $CONSTANTS; our @EXPORT = keys %$CONSTANTS; 1; MongoDB-v1.2.2/lib/MongoDB/_Credential.pm000644 000765 000024 00000024457 12651754051 020245 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
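#
# MongoDB::_Credential (private class) holds the username, password, source
# database and mechanism settings for authentication, and runs the
# corresponding handshake over a server link.  The mechanisms implemented
# below are MONGODB-CR, MONGODB-X509, PLAIN, GSSAPI and SCRAM-SHA-1; the
# DEFAULT mechanism resolves to SCRAM-SHA-1 or MONGODB-CR depending on the
# server's wire version.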
# package MongoDB::_Credential; use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::Op::_Command; use MongoDB::_Types qw( AuthMechanism NonEmptyStr ); use Digest::MD5 qw/md5_hex/; use Encode qw/encode/; use MIME::Base64 qw/encode_base64 decode_base64/; use Tie::IxHash; use Try::Tiny; use Types::Standard qw( Bool HashRef InstanceOf Str ); use namespace::clean -except => 'meta'; has mechanism => ( is => 'ro', isa => AuthMechanism, required => 1, ); has username => ( is => 'ro', isa => Str, default => '', ); has source => ( is => 'lazy', isa => NonEmptyStr, builder => '_build_source', ); has password => ( is => 'ro', isa => Str, default => '', ); has pw_is_digest => ( is => 'ro', isa => Bool, ); has mechanism_properties => ( is => 'ro', isa => HashRef, default => sub { {} }, ); has _digested_password => ( is => 'lazy', isa => Str, builder => '_build__digested_password', ); has _scram_client => ( is => 'lazy', isa => InstanceOf['Authen::SCRAM::Client'], builder => '_build__scram_client', ); sub _build__scram_client { my ($self) = @_; # loaded only demand as it has a long load time relative to other # modules require Authen::SCRAM::Client; Authen::SCRAM::Client->VERSION(0.003); return Authen::SCRAM::Client->new( username => $self->username, password => $self->_digested_password, skip_saslprep => 1, ); } sub _build__digested_password { my ($self) = @_; return $self->password if $self->pw_is_digest; return md5_hex( encode( "UTF-8", $self->username . ":mongo:" . $self->password ) ); } sub _build_source { my ($self) = @_; my $mech = $self->mechanism; return $mech eq 'DEFAULT' || $mech eq 'MONGODB-CR' || $mech eq 'SCRAM-SHA-1' ? 'admin' : '$external'; } #<<< No perltidy my %CONSTRAINTS = ( 'MONGODB-CR' => { username => sub { length }, password => sub { length }, source => sub { length }, mechanism_properties => sub { !keys %$_ }, }, 'MONGODB-X509' => { username => sub { length }, password => sub { ! length }, source => sub { $_ eq '$external' }, mechanism_properties => sub { !keys %$_ }, }, 'GSSAPI' => { username => sub { length }, source => sub { $_ eq '$external' }, }, 'PLAIN' => { username => sub { length }, password => sub { length }, source => sub { $_ eq '$external' }, mechanism_properties => sub { !keys %$_ }, }, 'SCRAM-SHA-1' => { username => sub { length }, password => sub { length }, source => sub { length }, mechanism_properties => sub { !keys %$_ }, }, 'DEFAULT' => { username => sub { length }, password => sub { length }, source => sub { length }, mechanism_properties => sub { !keys %$_ }, }, ); #>>> sub BUILD { my ($self) = @_; my $mech = $self->mechanism; # validate attributes for given mechanism while ( my ( $key, $validator ) = each %{ $CONSTRAINTS{$mech} } ) { local $_ = $self->$key; unless ( $validator->() ) { MongoDB::UsageError->throw("invalid field $key in $mech credential"); } } # fix up GSSAPI property defaults if not given if ( $mech eq 'GSSAPI' ) { my $mp = $self->mechanism_properties; $mp->{SERVICE_NAME} ||= 'mongodb'; } return; } sub authenticate { my ( $self, $link, $bson_codec ) = @_; my $mech = $self->mechanism; if ( $mech eq 'DEFAULT' ) { $mech = $link->accepts_wire_version(3) ? 
'SCRAM-SHA-1' : 'MONGODB-CR'; } my $method = "_authenticate_$mech"; $method =~ s/-/_/g; return $self->$method($link, $bson_codec); } #--------------------------------------------------------------------------# # authentication mechanisms #--------------------------------------------------------------------------# sub _authenticate_NONE () { 1 } sub _authenticate_MONGODB_CR { my ( $self, $link, $bson_codec ) = @_; my $nonce = $self->_send_command( $link, $bson_codec, 'admin', { getnonce => 1 } )->output->{nonce}; my $key = md5_hex( encode( "UTF-8", $nonce . $self->username . $self->_digested_password ) ); my $command = Tie::IxHash->new( authenticate => 1, user => $self->username, nonce => $nonce, key => $key ); $self->_send_command( $link, $bson_codec, $self->source, $command ); return 1; } sub _authenticate_MONGODB_X509 { my ( $self, $link, $bson_codec ) = @_; my $command = Tie::IxHash->new( authenticate => 1, user => $self->username, mechanism => "MONGODB-X509", ); $self->_send_command( $link, $bson_codec, $self->source, $command ); return 1; } sub _authenticate_PLAIN { my ( $self, $link, $bson_codec ) = @_; my $auth_bytes = encode( "UTF-8", "\x00" . $self->username . "\x00" . $self->password ); $self->_sasl_start( $link, $bson_codec, $auth_bytes, "PLAIN" ); return 1; } sub _authenticate_GSSAPI { my ( $self, $link, $bson_codec ) = @_; eval { require Authen::SASL; 1 } or MongoDB::AuthError->throw( "GSSAPI requires Authen::SASL and GSSAPI or Authen::SASL::XS from CPAN"); my ( $sasl, $client ); try { $sasl = Authen::SASL->new( mechanism => 'GSSAPI', callback => { user => $self->username, authname => $self->username, }, ); $client = $sasl->client_new( $self->mechanism_properties->{SERVICE_NAME}, $link->host ); } catch { MongoDB::AuthError->throw( "Failed to initialize a GSSAPI backend (did you install GSSAPI or Authen::SASL::XS?) Error was: $_" ); }; try { # start conversation my $step = $client->client_start; $self->_assert_gssapi( $client, "Could not start GSSAPI. Did you run kinit? Error was: " ); my ( $sasl_resp, $conv_id, $done ) = $self->_sasl_start( $link, $bson_codec, $step, 'GSSAPI' ); # iterate, but with maximum number of exchanges to prevent endless loop for my $i ( 1 .. 
10 ) { last if $done; $step = $client->client_step($sasl_resp); $self->_assert_gssapi( $client, "GSSAPI step error: " ); ( $sasl_resp, $conv_id, $done ) = $self->_sasl_continue( $link, $bson_codec, $step, $conv_id ); } } catch { MongoDB::AuthError->throw("GSSAPI error: $_"); }; return 1; } sub _authenticate_SCRAM_SHA_1 { my ( $self, $link, $bson_codec ) = @_; my $client = $self->_scram_client; my ( $msg, $sasl_resp, $conv_id, $done ); try { $msg = $client->first_msg; ( $sasl_resp, $conv_id, $done ) = $self->_sasl_start( $link, $bson_codec, $msg, 'SCRAM-SHA-1' ); $msg = $client->final_msg($sasl_resp); ( $sasl_resp, $conv_id, $done ) = $self->_sasl_continue( $link, $bson_codec, $msg, $conv_id ); $client->validate($sasl_resp); # might require an empty payload to complete SASL conversation $self->_sasl_continue( $link, $bson_codec, "", $conv_id ) if !$done; } catch { MongoDB::AuthError->throw("SCRAM-SHA-1 error: $_"); }; return 1; } #--------------------------------------------------------------------------# # GSSAPI/SASL methods #--------------------------------------------------------------------------# # GSSAPI backends report status/errors differently sub _assert_gssapi { my ( $self, $client, $prefix ) = @_; my $type = ref $client; if ( $type =~ m{^Authen::SASL::(?:XS|Cyrus)$} ) { my $code = $client->code; if ( $code != 0 && $code != 1 ) { # not OK or CONTINUE my $error = join( "; ", $client->error ); MongoDB::AuthError->throw("$prefix$error"); } } else { # Authen::SASL::Perl::GSSAPI or some unknown backend if ( my $error = $client->error ) { MongoDB::AuthError->throw("$prefix$error"); } } return 1; } sub _sasl_start { my ( $self, $link, $bson_codec, $payload, $mechanism ) = @_; my $command = Tie::IxHash->new( saslStart => 1, mechanism => $mechanism, payload => $payload ? encode_base64( $payload, "" ) : "", autoAuthorize => 1, ); return $self->_sasl_send( $link, $bson_codec, $command ); } sub _sasl_continue { my ( $self, $link, $bson_codec, $payload, $conv_id ) = @_; my $command = Tie::IxHash->new( saslContinue => 1, conversationId => $conv_id, payload => $payload ? encode_base64( $payload, "" ) : "", ); return $self->_sasl_send( $link, $bson_codec, $command ); } sub _sasl_send { my ( $self, $link, $bson_codec, $command ) = @_; my $output = $self->_send_command( $link, $bson_codec, $self->source, $command )->output; my $sasl_resp = $output->{payload} ? decode_base64( $output->{payload} ) : ""; return ( $sasl_resp, $output->{conversationId}, $output->{done} ); } sub _send_command { my ($self, $link, $bson_codec, $db_name, $command) = @_; my $op = MongoDB::Op::_Command->_new( db_name => $db_name, query => $command, query_flags => {}, bson_codec => $bson_codec, ); my $res = $op->execute( $link ); return $res; } 1; MongoDB-v1.2.2/lib/MongoDB/_Link.pm000644 000765 000024 00000033150 12651754051 017056 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
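#
# MongoDB::_Link (private class) wraps a single TCP (optionally TLS/SSL)
# connection to one server address.  It applies connect and socket timeouts,
# records the server's advertised limits (wire versions, max BSON object and
# message sizes, max write batch size), and provides the low-level,
# timeout-aware read/write of length-prefixed wire protocol messages used by
# the driver's operation classes.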
# # Some portions of this code were copied and adapted from the Perl module # HTTP::Tiny, which is copyright Christian Hansen, David Golden and other # contributors and used with permission under the terms of the Artistic License use v5.8.0; use strict; use warnings; package MongoDB::_Link; use version; our $VERSION = 'v1.2.2'; use Moo; use Errno qw[EINTR EPIPE]; use IO::Socket qw[SOCK_STREAM]; use Scalar::Util qw/refaddr/; use Socket qw/SOL_SOCKET SO_KEEPALIVE SO_RCVBUF IPPROTO_TCP TCP_NODELAY/; use Time::HiRes qw/time/; use MongoDB::Error; use MongoDB::_Constants; use MongoDB::_Types qw( HostAddress NonNegNum ServerDesc ); use Types::Standard qw( Bool HashRef Maybe Num Str Undef ); use namespace::clean; my $SOCKET_CLASS = eval { require IO::Socket::IP; IO::Socket::IP->VERSION(0.25) } ? 'IO::Socket::IP' : 'IO::Socket::INET'; has address => ( is => 'ro', required => 1, isa => HostAddress, ); has connect_timeout => ( is => 'ro', default => 20, isa => Num, ); has socket_timeout => ( is => 'ro', default => 30, isa => Num|Undef, ); has with_ssl => ( is => 'ro', isa => Bool, ); has SSL_options => ( is => 'ro', default => sub { {} }, isa => HashRef, ); has server => ( is => 'rwp', init_arg => undef, isa => Maybe[ServerDesc], ); has host => ( is => 'lazy', init_arg => undef, isa => Str, ); sub _build_host { my ($self) = @_; my ($host, $port) = split /:/, $self->address; return $host; } my @is_master_fields= qw( min_wire_version max_wire_version max_message_size_bytes max_write_batch_size max_bson_object_size ); for my $f ( @is_master_fields ) { has $f => ( is => 'rwp', init_arg => undef, isa => Maybe[NonNegNum], ); } # for caching wire version >= 2 has does_write_commands => ( is => 'rwp', init_arg => undef, isa => Bool, ); my @connection_state_fields = qw( fh connected rcvbuf last_used fdset is_ssl ); for my $f ( @connection_state_fields ) { has $f => ( is => 'rwp', clearer => "_clear_$f", init_arg => undef, ); } around BUILDARGS => sub { my $orig = shift; my $class = shift; my $hr = $class->$orig(@_); # shortcut on missing required field return $hr unless exists $hr->{address}; ($hr->{host}, $hr->{port}) = split /:/, $hr->{address}; return $hr; }; sub connect { @_ == 1 || MongoDB::UsageError->throw( q/Usage: $handle->connect()/ . "\n" ); my ($self) = @_; if ( $self->with_ssl ) { $self->_assert_ssl; # XXX possibly make SOCKET_CLASS an instance variable and set it here to IO::Socket::SSL } my ($host, $port) = split /:/, $self->address; my $fh = $SOCKET_CLASS->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->connect_timeout >= 0 ? 
$self->connect_timeout : undef, ) or MongoDB::NetworkError->throw(qq/Could not connect to '@{[$self->address]}': $@\n/); unless ( binmode($fh) ) { undef $fh; MongoDB::InternalError->throw(qq/Could not binmode() socket: '$!'\n/); } unless ( defined( $fh->setsockopt( IPPROTO_TCP, TCP_NODELAY, 1 ) ) ) { undef $fh; MongoDB::InternalError->throw(qq/Could not set TCP_NODELAY on socket: '$!'\n/); } unless ( defined( $fh->setsockopt( SOL_SOCKET, SO_KEEPALIVE, 1 ) ) ) { undef $fh; MongoDB::InternalError->throw(qq/Could not set SO_KEEPALIVE on socket: '$!'\n/); } $self->_set_fh($fh); $self->_set_connected(1); my $fd = fileno $fh; unless ( defined $fd && $fd >= 0 ) { $self->_close; MongoDB::InternalError->throw(qq/select(2): 'Bad file descriptor'\n/); } vec( my $fdset = '', $fd, 1 ) = 1; $self->_set_fdset( $fdset ); $self->start_ssl($host) if $self->with_ssl; $self->_set_last_used( time ); $self->_set_rcvbuf( $fh->sockopt(SO_RCVBUF) ); # Default max msg size is 2 * max BSON object size (DRIVERS-1) $self->_set_max_message_size_bytes( 2 * MAX_BSON_OBJECT_SIZE ); return $self; } sub set_metadata { my ( $self, $server ) = @_; $self->_set_server($server); $self->_set_min_wire_version( $server->is_master->{minWireVersion} || "0" ); $self->_set_max_wire_version( $server->is_master->{maxWireVersion} || "0" ); $self->_set_max_bson_object_size( $server->is_master->{maxBsonObjectSize} || MAX_BSON_OBJECT_SIZE ); $self->_set_max_write_batch_size( $server->is_master->{maxWriteBatchSize} || MAX_WRITE_BATCH_SIZE ); # Default is 2 * max BSON object size (DRIVERS-1) $self->_set_max_message_size_bytes( $server->is_master->{maxMessageSizeBytes} || 2 * $self->max_bson_object_size ); $self->_set_does_write_commands( $self->accepts_wire_version(2) ); return; } sub accepts_wire_version { my ( $self, $version ) = @_; my $min = $self->min_wire_version || 0; my $max = $self->max_wire_version || 0; return $version >= $min && $version <= $max; } sub start_ssl { my ( $self, $host ) = @_; my $ssl_args = $self->_ssl_args($host); IO::Socket::SSL->start_SSL( $self->fh, %$ssl_args, SSL_create_ctx_callback => sub { my $ctx = shift; Net::SSLeay::CTX_set_mode( $ctx, Net::SSLeay::MODE_AUTO_RETRY() ); }, ); unless ( ref( $self->fh ) eq 'IO::Socket::SSL' ) { my $ssl_err = IO::Socket::SSL->errstr; $self->_close; MongoDB::HandshakeError->throw(qq/SSL connection failed for $host: $ssl_err\n/); } } sub close { my ($self) = @_; $self->_close or MongoDB::NetworkError->throw(qq/Error closing socket: '$!'\n/); } # this is a quiet close so preexisting network errors can be thrown sub _close { my ($self) = @_; $self->_clear_connected; my $ok = 1; if ( $self->fh ) { $ok = CORE::close( $self->fh ); $self->_clear_fh; } return $ok; } sub is_connected { my ($self) = @_; return $self->connected && $self->fh; } sub idle_time_sec { my ($self) = @_; return( time - $self->last_used ); } sub write { my ( $self, $buf ) = @_; my ( $len, $off, $pending, $nfound, $r ) = ( length($buf), 0 ); MongoDB::ProtocolError->throw( qq/Message of size $len exceeds maximum of / . $self->{max_message_size_bytes} ) if $len > $self->max_message_size_bytes; local $SIG{PIPE} = 'IGNORE'; while () { # do timeout ( $pending, $nfound ) = ( $self->socket_timeout, 0 ); TIMEOUT: while () { if ( -1 == ( $nfound = select( undef, $self->fdset, undef, $pending ) ) ) { unless ( $! 
== EINTR ) { $self->_close; MongoDB::NetworkError->throw(qq/select(2): '$!'\n/); } # to avoid overhead tracking monotonic clock times; assume # interrupts occur on average halfway through the timeout period # and restart with half the original time $pending = int( $pending / 2 ); redo TIMEOUT; } last TIMEOUT; } unless ($nfound) { $self->_close; MongoDB::NetworkTimeout->throw( qq/Timed out while waiting for socket to become ready for writing\n/); } # do write if ( defined( $r = syswrite( $self->fh, $buf, $len, $off ) ) ) { ( $len -= $r ), ( $off += $r ); last unless $len > 0; } elsif ( $! == EPIPE ) { $self->_close; MongoDB::NetworkError->throw(qq/Socket closed by remote server: $!\n/); } elsif ( $! != EINTR ) { if ( $self->fh->can('errstr') ) { my $err = $self->fh->errstr(); $self->_close; MongoDB::NetworkError->throw(qq/Could not write to SSL socket: '$err'\n /); } else { $self->_close; MongoDB::NetworkError->throw(qq/Could not write to socket: '$!'\n/); } } } $self->_set_last_used(time); return; } sub read { my ($self) = @_; # len of undef triggers first pass through loop my ( $msg, $len, $pending, $nfound, $r ) = ( '', undef ); while () { # do timeout ( $pending, $nfound ) = ( $self->socket_timeout, 0 ); TIMEOUT: while () { # no need to select if SSL and has pending data from a frame if ( $self->with_ssl ) { ( $nfound = 1 ), last TIMEOUT if $self->fh->pending; } if ( -1 == ( $nfound = select( $self->fdset, undef, undef, $pending ) ) ) { unless ( $! == EINTR ) { $self->_close; MongoDB::NetworkError->throw(qq/select(2): '$!'\n/); } # to avoid overhead tracking monotonic clock times; assume # interrupts occur on average halfway through the timeout period # and restart with half the original time $pending = int( $pending / 2 ); redo TIMEOUT; } last TIMEOUT; } unless ($nfound) { $self->_close; MongoDB::NetworkTimeout->throw( q/Timed out while waiting for socket to become ready for reading/ . "\n" ); } # read up to SO_RCVBUF if we can if ( defined( $r = sysread( $self->fh, $msg, $self->rcvbuf, length $msg ) ) ) { # because select said we're ready to read, if we read 0 then # we got EOF before the full message if ( !$r ) { $self->_close; MongoDB::NetworkError->throw(qq/Unexpected end of stream\n/); } } elsif ( $! != EINTR ) { if ( $self->fh->can('errstr') ) { my $err = $self->fh->errstr(); $self->_close; MongoDB::NetworkError->throw(qq/Could not read from SSL socket: '$err'\n /); } else { $self->_close; MongoDB::NetworkError->throw(qq/Could not read from socket: '$!'\n/); } } if ( !defined $len ) { $len = unpack( P_INT32, $msg ); MongoDB::ProtocolError->throw( qq/Server reply of size $len exceeds maximum of / . 
$self->{max_message_size_bytes} ) if $len > $self->max_message_size_bytes; } last unless length($msg) < $len; } $self->_set_last_used(time); return $msg; } sub _assert_ssl { # Need IO::Socket::SSL 1.42 for SSL_create_ctx_callback MongoDB::UsageError->throw(qq/IO::Socket::SSL 1.42 must be installed for SSL support\n/) unless eval { require IO::Socket::SSL; IO::Socket::SSL->VERSION(1.42) }; # Need Net::SSLeay 1.49 for MODE_AUTO_RETRY MongoDB::UsageError->throw(qq/Net::SSLeay 1.49 must be installed for SSL support\n/) unless eval { require Net::SSLeay; Net::SSLeay->VERSION(1.49) }; } # Try to find a CA bundle to validate the SSL cert, # prefer Mozilla::CA or fallback to a system file sub _find_CA_file { my $self = shift(); return $self->SSL_options->{SSL_ca_file} if $self->SSL_options->{SSL_ca_file} and -e $self->SSL_options->{SSL_ca_file}; return Mozilla::CA::SSL_ca_file() if eval { require Mozilla::CA }; # cert list copied from golang src/crypto/x509/root_unix.go foreach my $ca_bundle ( "/etc/ssl/certs/ca-certificates.crt", # Debian/Ubuntu/Gentoo etc. "/etc/pki/tls/certs/ca-bundle.crt", # Fedora/RHEL "/etc/ssl/ca-bundle.pem", # OpenSUSE "/etc/openssl/certs/ca-certificates.crt", # NetBSD "/etc/ssl/cert.pem", # OpenBSD "/usr/local/share/certs/ca-root-nss.crt", # FreeBSD/DragonFly "/etc/pki/tls/cacert.pem", # OpenELEC "/etc/certs/ca-certificates.crt", # Solaris 11.2+ ) { return $ca_bundle if -e $ca_bundle; } MongoDB::UsageError->throw( qq/Couldn't find a CA bundle with which to verify the SSL certificate.\n/ . qq/Try installing Mozilla::CA from CPAN\n/); } sub _ssl_args { my ( $self, $host ) = @_; my %ssl_args; # This test reimplements IO::Socket::SSL::can_client_sni(), which wasn't # added until IO::Socket::SSL 1.84 if ( Net::SSLeay::OPENSSL_VERSION_NUMBER() >= 0x01000000 ) { $ssl_args{SSL_hostname} = $host, # Sane SNI support } $ssl_args{SSL_verifycn_scheme} = 'http'; # enable CN validation $ssl_args{SSL_verifycn_name} = $host; # set validation hostname $ssl_args{SSL_verify_mode} = 0x01; # enable cert validation $ssl_args{SSL_ca_file} = $self->_find_CA_file; # user options override default settings for my $k ( keys %{ $self->SSL_options } ) { $ssl_args{$k} = $self->SSL_options->{$k} if $k =~ m/^SSL_/; } return \%ssl_args; } 1; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/lib/MongoDB/_Protocol.pm000644 000765 000024 00000031170 12651754051 017762 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # use v5.8.0; use strict; use warnings; package MongoDB::_Protocol; use version; our $VERSION = 'v1.2.2'; use MongoDB::_Constants; use MongoDB::Error; use constant { OP_REPLY => 1, # Reply to a client request. responseTo is set OP_MSG => 1000, # generic msg command followed by a string OP_UPDATE => 2001, # update document OP_INSERT => 2002, # insert new document RESERVED => 2003, # formerly used for OP_GET_BY_OID OP_QUERY => 2004, # query a collection OP_GET_MORE => 2005, # Get more data from a query. 
See Cursors OP_DELETE => 2006, # Delete documents OP_KILL_CURSORS => 2007, # Tell database client is done with a cursor }; use constant { PERL58 => $] lt '5.010', MIN_REPLY_LENGTH => 4 * 5 + 8 + 4 * 2, MAX_REQUEST_ID => 2**31 - 1, }; # Perl < 5.10, pack doesn't have endianness modifiers, and the MongoDB wire # protocol mandates little-endian order. For 5.10, we can use modifiers but # before that we only work on platforms that are natively little-endian. We # die during configuration on big endian platforms on 5.8 use constant { P_HEADER => PERL58 ? "l4" : "l<4", }; # These ops all include P_HEADER already use constant { P_UPDATE => PERL58 ? "l5Z*l" : "l<5Z*l<", P_INSERT => PERL58 ? "l5Z*" : "l<5Z*", P_QUERY => PERL58 ? "l5Z*l2" : "l<5Z*l<2", P_GET_MORE => PERL58 ? "l5Z*la8" : "l<5Z*l<a8", P_DELETE => PERL58 ? "l5Z*l" : "l<5Z*l<", P_KILL_CURSORS => PERL58 ? "l6(a8)*" : "l<6(a8)*", P_REPLY_HEADER => PERL58 ? "l5a8l2" : "l<5a8l<2", }; # struct MsgHeader { # int32 messageLength; // total message size, including this # int32 requestID; // identifier for this message # int32 responseTo; // requestID from the original request # // (used in responses from db) # int32 opCode; // request type - see table below # } # # Approach for MsgHeader is to write a header with 0 for length, then # fix it up after the message is constructed. E.g. # my $msg = pack( P_INSERT, 0, int(rand(2**32-1)), 0, OP_INSERT, 0, $ns ) . $bson_docs; # substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); # struct OP_UPDATE { # MsgHeader header; // standard message header # int32 ZERO; // 0 - reserved for future use # cstring fullCollectionName; // "dbname.collectionname" # int32 flags; // bit vector. see below # document selector; // the query to select the document # document update; // specification of the update to perform # } use constant { U_UPSERT => 0, U_MULTI_UPDATE => 1, }; sub write_update { my ( $ns, $selector, $update, $flags ) = @_; utf8::encode($ns); my $bitflags = 0; if ($flags) { $bitflags = ( $flags->{upsert} ? 1 << U_UPSERT : 0 ) | ( $flags->{multi} ? 1 << U_MULTI_UPDATE : 0 ); } my $msg = pack( P_UPDATE, 0, int( rand( 2**32 - 1 ) ), 0, OP_UPDATE, 0, $ns, $bitflags ) . $selector . $update; substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); return $msg; } # struct OP_INSERT { # MsgHeader header; // standard message header # int32 flags; // bit vector - see below # cstring fullCollectionName; // "dbname.collectionname" # document* documents; // one or more documents to insert into the collection # } use constant { I_CONTINUE_ON_ERROR => 0, }; sub write_insert { my ( $ns, $bson_docs, $flags ) = @_; utf8::encode($ns); my $bitflags = 0; if ($flags) { $bitflags = ( $flags->{continue_on_error} ? 1 << I_CONTINUE_ON_ERROR : 0 ); } my $msg = pack( P_INSERT, 0, int( rand( 2**32 - 1 ) ), 0, OP_INSERT, $bitflags, $ns ) . $bson_docs; substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); return $msg; } # struct OP_QUERY { # MsgHeader header; // standard message header # int32 flags; // bit vector of query options. See below for details. # cstring fullCollectionName ; // "dbname.collectionname" # int32 numberToSkip; // number of documents to skip # int32 numberToReturn; // number of documents to return # // in the first OP_REPLY batch # document query; // query object. See below for details. # [ document returnFieldsSelector; ] // Optional. Selector indicating the fields # // to return. See below for details.
# } use constant { Q_TAILABLE => 1, Q_SLAVE_OK => 2, Q_NO_CURSOR_TIMEOUT => 4, Q_AWAIT_DATA => 5, Q_EXHAUST => 6, # unsupported (PERL-282) Q_PARTIAL => 7, }; sub write_query { my ( $ns, $query, $fields, $skip, $batch_size, $flags ) = @_; utf8::encode($ns); my $bitflags = 0; if ($flags) { $bitflags = ( $flags->{tailable} ? 1 << Q_TAILABLE : 0 ) | ( $flags->{slave_ok} ? 1 << Q_SLAVE_OK : 0 ) | ( $flags->{await_data} ? 1 << Q_AWAIT_DATA : 0 ) | ( $flags->{immortal} ? 1 << Q_NO_CURSOR_TIMEOUT : 0 ) | ( $flags->{partial} ? 1 << Q_PARTIAL : 0 ); } my $request_id = int( rand( MAX_REQUEST_ID ) ); my $msg = pack( P_QUERY, 0, $request_id, 0, OP_QUERY, $bitflags, $ns, $skip, $batch_size ) . $query . ( defined $fields && length $fields ? $fields : '' ); substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); return ( $msg, $request_id ); } # struct { # MsgHeader header; // standard message header # int32 ZERO; // 0 - reserved for future use # cstring fullCollectionName; // "dbname.collectionname" # int32 numberToReturn; // number of documents to return # int64 cursorID; // cursorID from the OP_REPLY # } # We treat cursor_id as an opaque string so we don't have to depend # on 64-bit integer support sub write_get_more { my ( $ns, $cursor_id, $batch_size ) = @_; utf8::encode($ns); my $request_id = int( rand( MAX_REQUEST_ID ) ); my $msg = pack( P_GET_MORE, 0, $request_id, 0, OP_GET_MORE, 0, $ns, $batch_size, _pack_cursor_id($cursor_id) ); substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); return ( $msg, $request_id ); } # struct { # MsgHeader header; // standard message header # int32 ZERO; // 0 - reserved for future use # cstring fullCollectionName; // "dbname.collectionname" # int32 flags; // bit vector - see below for details. # document selector; // query object. See below for details. # } use constant { D_SINGLE_REMOVE => 0, }; sub write_delete { my ( $ns, $selector, $flags ) = @_; utf8::encode($ns); my $bitflags = 0; if ($flags) { $bitflags = ( $flags->{just_one} ? 1 << D_SINGLE_REMOVE : 0 ); } my $msg = pack( P_DELETE, 0, int( rand( 2**32 - 1 ) ), 0, OP_DELETE, 0, $ns, $bitflags ) . 
$selector; substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); return $msg; } # legacy alias { no warnings 'once'; *write_remove = \&write_delete; } # struct { # MsgHeader header; // standard message header # int32 ZERO; // 0 - reserved for future use # int32 numberOfCursorIDs; // number of cursorIDs in message # int64* cursorIDs; // sequence of cursorIDs to close # } sub write_kill_cursors { my (@cursors) = map _pack_cursor_id($_), @_; my $msg = pack( P_KILL_CURSORS, 0, int( rand( 2**32 - 1 ) ), 0, OP_KILL_CURSORS, 0, scalar(@cursors), @cursors ); substr( $msg, 0, 4, pack( P_INT32, length($msg) ) ); return $msg; } # struct { # // MessageHeader # int32 messageLength; // total message size, including this # int32 requestID; // identifier for this message # int32 responseTo; // requestID from the original request # int32 opCode; // request type - see table below # // OP_REPLY fields # int32 responseFlags; // bit vector - see details below # int64 cursorID; // cursor id if client needs to do get more's # int32 startingFrom; // where in the cursor this reply is starting # int32 numberReturned; // number of documents in the reply # document* documents; // documents # } # We treat cursor_id as an opaque string so we don't have to depend # on 64-bit integer support # flag bits relevant to drivers use constant { R_CURSOR_NOT_FOUND => 0, R_QUERY_FAILURE => 1, R_AWAIT_CAPABLE => 3, }; sub parse_reply { my ( $msg, $request_id ) = @_; MongoDB::ProtocolError->throw("response was truncated") if length($msg) < MIN_REPLY_LENGTH; my ( $len, $msg_id, $response_to, $opcode, $bitflags, $cursor_id, $starting_from, $number_returned ) = unpack( P_REPLY_HEADER, $msg ); # pre-check all conditions using a modifier in one statement for speed; # disambiguate afterwards only if an error exists do { if ( length($msg) < $len ) { MongoDB::ProtocolError->throw("response was truncated"); } if ( $opcode != OP_REPLY ) { MongoDB::ProtocolError->throw("response was not OP_REPLY"); } if ( $response_to != $request_id ) { MongoDB::ProtocolError->throw( "response ID ($response_to) did not match request ID ($request_id)"); } } if ( length($msg) < $len ) || ( $opcode != OP_REPLY ) || ( $response_to != $request_id ); # returns non-zero cursor_id as blessed object to identify it as an # 8-byte opaque ID rather than an ambiguous Perl scalar. N.B. cursors # from commands are handled differently: they are perl integers or # else Math::BigInt objects substr( $msg, 0, MIN_REPLY_LENGTH, '' ), return { flags => { cursor_not_found => vec( $bitflags, R_CURSOR_NOT_FOUND, 1 ), query_failure => vec( $bitflags, R_QUERY_FAILURE, 1 ), }, cursor_id => ( ( $cursor_id eq CURSOR_ZERO ) ? 0 : bless( \$cursor_id, "MongoDB::_CursorID" ) ), starting_from => $starting_from, number_returned => $number_returned, docs => $msg, }; } #--------------------------------------------------------------------------# # utility functions #--------------------------------------------------------------------------# # CursorID's can come in 3 forms: # # 1. MongoDB::CursorID object (a blessed reference to an 8-byte string) # 2. A perl scalar (an integer) # 3. 
A Math::BigInt object (64 bit integer on 32-bit perl) # # The _pack_cursor_id function converts any of them to a packed Int64 for # use in OP_GET_MORE or OP_KILL_CURSORS sub _pack_cursor_id { my $cursor_id = shift; if ( ref($cursor_id) eq "MongoDB::_CursorID" ) { $cursor_id = $$cursor_id; } elsif ( ref($cursor_id) eq "Math::BigInt" ) { my $as_hex = $cursor_id->as_hex; # big-endian hex substr( $as_hex, 0, 2, '' ); # remove "0x" my $len = length($as_hex); substr( $as_hex, 0, 0, "0" x ( 16 - $len ) ) if $len < 16; # pad to quad length $cursor_id = pack( "H*", $as_hex ); # packed big-endian $cursor_id = reverse($cursor_id); # reverse to little-endian } elsif (HAS_INT64) { # pack doesn't have endianness modifiers before perl 5.10. # We die during configuration on big-endian platforms on 5.8 $cursor_id = pack( $] lt '5.010' ? "q" : "q<", $cursor_id ); } else { # we on 32-bit perl *and* have a cursor ID that fits in 32 bits, # so pack it as long and pad out to a quad $cursor_id = pack( $] lt '5.010' ? "l" : "l<", $cursor_id ) . ( "\0" x 4 ); } return $cursor_id; } 1; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/lib/MongoDB/_Query.pm000644 000765 000024 00000012251 12651754051 017265 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# package MongoDB::_Query; # Encapsulate query structure and modification use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Types qw( BSONCodec Document ReadPreference ReadConcern CursorType IxHash ); use Types::Standard qw( Str InstanceOf Maybe HashRef Bool Num ); use MongoDB::Op::_Query; use Tie::IxHash; use namespace::clean; #--------------------------------------------------------------------------# # attributes for constructing/conducting the op #--------------------------------------------------------------------------# has db_name => ( is => 'ro', isa => Str, required => 1, ); has coll_name => ( is => 'ro', isa => Str, required => 1, ); has client => ( is => 'ro', isa => InstanceOf ['MongoDB::MongoClient'], required => 1, ); has bson_codec => ( is => 'ro', isa => BSONCodec, required => 1, ); has read_preference => ( is => 'rw', # mutable for Cursor isa => Maybe( [ReadPreference] ), ); has read_concern => ( is => 'ro', # mutable for Cursor isa => ReadConcern, ); #--------------------------------------------------------------------------# # attributes based on the CRUD API spec: filter # # some are mutable so that MongoDB::Cursor methods can manipulate them # until the query is executed #--------------------------------------------------------------------------# has filter => ( is => 'ro', isa => Document, required => 1, ); # various things want to write here, so it must exist has modifiers => ( is => 'ro', isa => HashRef, required => 1, ); has allowPartialResults => ( is => 'rw', isa => Bool, required => 1, ); has batchSize => ( is => 'rw', isa => Num, required => 1, ); has comment => ( is => 'rw', isa => Str, required => 1, ); has cursorType => ( is => 'rw', isa => CursorType, required => 1, ); has limit => ( is => 'rw', isa => Num, required => 1, ); has maxAwaitTimeMS => ( is => 'rw', isa => Num, required => 1, ); has maxTimeMS => ( is => 'rw', isa => Num, required => 1, ); has noCursorTimeout => ( is => 'rw', isa => Bool, required => 1, ); has oplogReplay => ( is => 'rw', isa => Bool, required => 1, ); has projection => ( is => 'rw', isa => Maybe( [Document] ), ); has skip => ( is => 'rw', isa => Num, required => 1, ); has sort => ( is => 'rw', isa => Maybe( [IxHash] ), ); with $_ for qw( MongoDB::Role::_PrivateConstructor ); sub as_query_op { my ( $self, $extra_params ) = @_; return MongoDB::Op::_Query->_new( db_name => $self->db_name, coll_name => $self->coll_name, client => $self->client, bson_codec => $self->bson_codec, filter => $self->filter, projection => $self->projection, batch_size => $self->batchSize, limit => $self->limit, skip => $self->skip, 'sort' => $self->sort, comment => $self->comment, max_await_time_ms => $self->maxAwaitTimeMS, max_time_ms => $self->maxTimeMS, oplog_replay => $self->oplogReplay, no_cursor_timeout => $self->noCursorTimeout, allow_partial_results => $self->allowPartialResults, modifiers => $self->modifiers, cursor_type => $self->cursorType, read_preference => $self->read_preference, read_concern => $self->read_concern, exists $$extra_params{post_filter} ? 
(post_filter => $$extra_params{post_filter}) : (), ); } sub execute { my ($self) = @_; return $self->client->send_read_op( $self->as_query_op ); } sub clone { my ($self) = @_; # shallow copy everything; my %args = %$self; # deep copy any documents for my $k (qw/filter modifiers projection sort/) { my ($orig ) = $args{$k}; next unless $orig; if ( ref($orig) eq 'Tie::IxHash' ) { $args{$k}= Tie::IxHash->new( map { $_ => $orig->FETCH($_) } $orig->Keys ); } elsif ( ref($orig) eq 'ARRAY' ) { $args{$k}= [@$orig]; } else { $args{$k} = { %$orig }; } } return ref($self)->_new(%args); } 1; MongoDB-v1.2.2/lib/MongoDB/_Server.pm000644 000765 000024 00000016267 12651754051 017441 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::_Server; use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Types qw( NonNegNum HostAddress ServerType HostAddressList ); use Types::Standard qw( Bool InstanceOf HashRef Str Num ); use List::Util qw/first/; use Time::HiRes qw/time/; use namespace::clean -except => 'meta'; # address: the hostname or IP, and the port number, that the client connects # to. Note that this is not the server's ismaster.me field, in the case that # the server reports an address different from the address the client uses. has address => ( is => 'ro', isa => HostAddress, coerce => HostAddress->coercion, required => 1, ); # lastUpdateTime: when this server was last checked. Default "infinity ago". has last_update_time => ( is => 'ro', isa => Num, # floating point time required => 1, ); # error: information about the last error related to this server. Default null. has error => ( is => 'ro', isa => Str, default => '', ); # roundTripTime: the duration of the ismaster call. Default null. has rtt_sec => ( is => 'ro', isa => NonNegNum, default => 0, ); # is_master: hashref returned from an is_master command has is_master => ( is => 'ro', isa => HashRef, default => sub { {} }, ); # type: a ServerType enum value. Default Unknown. Definitions from the Server # Discovery and Monitoring Spec: # - Unknown Initial, or after a network error or failed ismaster call, or "ok: 1" # not in ismaster response. # - Standalone No "msg: isdbgrid", no setName, and no "isreplicaset: true". # - Mongos "msg: isdbgrid". # - RSPrimary "ismaster: true", "setName" in response. # - RSSecondary "secondary: true", "setName" in response. # - RSArbiter "arbiterOnly: true", "setName" in response. # - RSOther "setName" in response, "hidden: true" or not primary, secondary, nor arbiter. # - RSGhost "isreplicaset: true" in response. # - PossiblePrimary Not yet checked, but another member thinks it is the primary. 
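# Illustrative only: hypothetical ismaster replies mapped through the rules
# above (and the _build_type logic below); these documents are examples, not
# taken from a real server:
#   { ok => 0 }                                   -> 'Unknown'
#   { ok => 1, msg => 'isdbgrid' }                -> 'Mongos'
#   { ok => 1, isreplicaset => 1 }                -> 'RSGhost'
#   { ok => 1, setName => 'rs0', ismaster => 1 }  -> 'RSPrimary'
#   { ok => 1, setName => 'rs0', secondary => 1 } -> 'RSSecondary'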
has type => ( is => 'lazy', isa => ServerType, builder => '_build_type', writer => '_set_type', ); sub _build_type { my ($self) = @_; my $is_master = $self->is_master; if ( !$is_master->{ok} ) { return 'Unknown'; } elsif ( $is_master->{msg} && $is_master->{msg} eq 'isdbgrid' ) { return 'Mongos'; } elsif ( $is_master->{isreplicaset} ) { return 'RSGhost'; } elsif ( exists $is_master->{setName} ) { return $is_master->{ismaster} ? return 'RSPrimary' : $is_master->{hidden} ? return 'RSOther' : $is_master->{secondary} ? return 'RSSecondary' : $is_master->{arbiterOnly} ? return 'RSArbiter' : 'RSOther'; } else { return 'Standalone'; } } # hosts, passives, arbiters: Sets of addresses. This server's opinion of the # replica set's members, if any. Default empty. The client monitors all three # types of servers in a replica set. for my $s (qw/hosts passives arbiters/) { has $s => ( is => 'lazy', isa => HostAddressList, builder => "_build_$s", coerce => HostAddressList->coercion, ); no strict 'refs'; *{"_build_$s"} = sub { $_[0]->is_master->{$s} || [] }; } # address configured as part of replica set: string or null. Default null. has me => ( is => 'lazy', isa => Str, builder => "_build_me", ); sub _build_me { my ($self) = @_; return $self->is_master->{me} || ''; } # setName: string or null. Default null. has set_name => ( is => 'lazy', isa => Str, builder => "_build_set_name", ); sub _build_set_name { my ($self) = @_; return $self->is_master->{setName} || ''; } # primary: an address. This server's opinion of who the primary is. Default # null. has primary => ( is => 'lazy', isa => Str, # not HostAddress -- might be empty string builder => "_build_primary", ); sub _build_primary { my ($self) = @_; return $self->is_master->{primary} || ''; } # tags: (a tag set) map from string to string. Default empty. has tags => ( is => 'lazy', isa => HashRef, builder => "_build_tags", ); sub _build_tags { my ($self) = @_; return $self->is_master->{tags} || {}; } has is_available => ( is => 'lazy', isa => Bool, builder => "_build_is_available", ); sub _build_is_available { my ($self) = @_; return $self->type ne 'Unknown' && $self->type ne 'PossiblePrimary'; } has is_readable => ( is => 'lazy', isa => Bool, builder => "_build_is_readable", ); # any of these can take reads. Topologies will screen inappropriate # ones out. E.g. "Standalone" won't be found in a replica set topology. sub _build_is_readable { my ($self) = @_; my $type = $self->type; return !! grep { $type eq $_ } qw/Standalone RSPrimary RSSecondary Mongos/; } has is_writable => ( is => 'lazy', isa => Bool, builder => "_build_is_writable", ); # any of these can take writes. Topologies will screen inappropriate # ones out. E.g. "Standalone" won't be found in a replica set topology. sub _build_is_writable { my ($self) = @_; my $type = $self->type; return !! 
grep { $type eq $_ } qw/Standalone RSPrimary Mongos/; } sub updated_since { my ( $self, $time ) = @_; return( ($self->last_update_time - $time) > 0 ); } # check if server matches a single tag set (NOT a tag set list) sub matches_tag_set { my ( $self, $ts ) = @_; no warnings 'uninitialized'; # let undef equal empty string without complaint my $tg = $self->tags; # check if ts is a subset of tg: if any tags in ts that aren't in tg or where # the tag values aren't equal mean ts is NOT a subset if ( !defined first { !exists( $tg->{$_} ) || $tg->{$_} ne $ts->{$_} } keys %$ts ) { return 1; } return; } sub status_string { my ($self) = @_; if ( my $err = $self->error ) { $err =~ tr[\n][ ]; return sprintf( "%s (type: %s, error: %s)", $self->{address}, $self->{type}, $err); } else { return sprintf( "%s (type: %s)", map { $self->$_ } qw/address type/ ); } } sub status_struct { my ($self) = @_; my $info = { address => $self->address, type => $self->type, last_update_time => $self->last_update_time, }; $info->{error} = $self->error if $self->error; $info->{tags} = { %{ $self->tags } } if %{ $self->tags }; return $info; } 1; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/lib/MongoDB/_Topology.pm000644 000765 000024 00000073525 12651754051 020007 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# package MongoDB::_Topology; use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::BSON; use MongoDB::Error; use MongoDB::Op::_Command; use MongoDB::ReadPreference; use MongoDB::_Constants; use MongoDB::_Link; use MongoDB::_Types qw( BSONCodec NonNegNum TopologyType ); use Types::Standard qw( Bool HashRef InstanceOf Num Str Maybe ); use MongoDB::_Server; use Config; use List::Util qw/first/; use Safe::Isa; use Time::HiRes qw/time usleep/; use Try::Tiny; use namespace::clean; #--------------------------------------------------------------------------# # attributes #--------------------------------------------------------------------------# has uri => ( is => 'ro', required => 1, isa => InstanceOf['MongoDB::_URI'], ); has max_wire_version => ( is => 'ro', required => 1, isa => Num, ); has min_wire_version => ( is => 'ro', required => 1, isa => Num, ); has credential => ( is => 'ro', required => 1, isa => InstanceOf['MongoDB::_Credential'], ); has type => ( is => 'ro', writer => '_set_type', default => 'Unknown', isa => TopologyType, ); has replica_set_name => ( is => 'ro', default => '', writer => '_set_replica_set_name', # :-) isa => Str, ); has heartbeat_frequency_sec => ( is => 'ro', default => 60, isa => NonNegNum, ); has last_scan_time => ( is => 'ro', default => EPOCH, writer => '_set_last_scan_time', isa => Num, ); has local_threshold_sec => ( is => 'ro', default => 0.015, isa => Num, ); has socket_check_interval_sec => ( is => 'ro', default => 5, isa => Num, ); has server_selection_timeout_sec => ( is => 'ro', default => 60, isa => Num, ); has server_selection_try_once => ( is => 'ro', default => 1, isa => Bool, ); has ewma_alpha => ( is => 'ro', default => 0.2, isa => Num, ); has link_options => ( is => 'ro', default => sub { {} }, isa => HashRef, ); has bson_codec => ( is => 'ro', default => sub { MongoDB::BSON->new }, isa => BSONCodec, ); has number_of_seeds => ( is => 'lazy', builder => '_build_number_of_seeds', isa => Num, ); has max_election_id => ( is => 'rw', isa => Maybe[ InstanceOf['MongoDB::OID'] ], writer => '_set_max_election_id', ); has max_set_version => ( is => 'rw', isa => Maybe [Num], writer => '_set_max_set_version', ); # compatible wire protocol has is_compatible => ( is => 'ro', writer => '_set_is_compatible', isa => Bool, ); has current_primary => ( is => 'rwp', clearer => '_clear_current_primary', init_arg => undef, ); has stale => ( is => 'rwp', init_arg => undef, default => 1, ); # servers, links and rtt_ewma_sec are all hashes on server address has servers => ( is => 'ro', default => sub { {} }, isa => HashRef[InstanceOf['MongoDB::_Server']], ); has links => ( is => 'ro', default => sub { {} }, isa => HashRef[InstanceOf['MongoDB::_Link']], ); has rtt_ewma_sec => ( is => 'ro', default => sub { {} }, isa => HashRef[Num], ); #--------------------------------------------------------------------------# # builders #--------------------------------------------------------------------------# sub _build_number_of_seeds { my ($self) = @_; return scalar @{ $self->uri->hostpairs }; } sub BUILD { my ($self) = @_; my $type = $self->type; my @addresses = @{ $self->uri->hostpairs }; if ( my $set_name = $self->replica_set_name ) { if ( $type eq 'Single' || $type eq 'ReplicaSetNoPrimary' ) { # these are valid, so nothing to do here } elsif ( $type eq 'Unknown' ) { $self->_set_type('ReplicaSetNoPrimary'); } else { MongoDB::InternalError->throw( "deployment with set name '$set_name' may not be initialized as type '$type'"); } } if ( $type eq 'Single' && @addresses > 1 ) 
{ MongoDB::InternalError->throw( "topology type 'Single' cannot be used with multiple addresses: @addresses"); } $self->_add_address_as_unknown($_) for @addresses; return; } #--------------------------------------------------------------------------# # public methods #--------------------------------------------------------------------------# sub all_servers { return values %{ $_[0]->servers } } sub check_address { my ( $self, $address ) = @_; my $link = $self->links->{$address}; if ( $link && $link->is_connected ) { $self->_update_topology_from_link($link); } else { # initialize_link will call update_topology_from_link $self->_initialize_link($address); } return; } sub close_all_links { my ($self) = @_; delete $self->links->{ $_->address } for $self->all_servers; return; } sub get_readable_link { my ( $self, $read_pref ) = @_; my $mode = $read_pref ? lc $read_pref->mode : 'primary'; my $method = ( $self->type eq "Single" || $self->type eq "Sharded" ) ? '_find_any_server' : "_find_${mode}_server"; if ($mode eq 'primary' && $self->current_primary) { my $link = $self->_get_server_link( $self->current_primary, $method ); return $link if $link; } while ( my $server = $self->_selection_timeout( $method, $read_pref ) ) { my $link = $self->_get_server_link( $server, $method, $read_pref ); if ($link) { $self->_set_current_primary($server) if $mode eq 'primary' && ( $self->type eq "ReplicaSetWithPrimary" || 1 == keys %{ $self->servers } ); return $link; } } my $rp = $read_pref ? $read_pref->as_string : 'primary'; MongoDB::SelectionError->throw( "No readable server available for matching read preference $rp. MongoDB server status:\n" . $self->_status_string ); } sub get_specific_link { my ( $self, $address ) = @_; my $server = $self->servers->{$address}; if ( $server && ( my $link = $self->_get_server_link($server) ) ) { return $link; } else { MongoDB::SelectionError->throw("Server $address is no longer available"); } } sub get_writable_link { my ($self) = @_; my $method = ( $self->type eq "Single" || $self->type eq "Sharded" ) ? '_find_any_server' : "_find_primary_server"; if ($self->current_primary) { my $link = $self->_get_server_link( $self->current_primary, $method ); return $link if $link; } while ( my $server = $self->_selection_timeout($method) ) { my $link = $self->_get_server_link( $server, $method ); if ($link) { $self->_set_current_primary($server) if $self->type eq "ReplicaSetWithPrimary" || 1 == keys %{ $self->servers }; return $link; } } MongoDB::SelectionError->throw( "No writable server available. MongoDB server status:\n" . $self->_status_string ); } sub mark_server_unknown { my ( $self, $server, $error ) = @_; $self->_reset_address_to_unknown( $server->address, $error ); return; } sub mark_stale { my ($self) = @_; $self->_set_stale(1); return; } sub scan_all_servers { my ($self) = @_; my ( $next, @ordinary, @to_check ); my $start_time = time; my $cooldown_time = $start_time - COOLDOWN_SECS; # anything not updated since scan start is eligible for a check; when all servers # are updated, the loop terminates; Unknown servers aren't checked if # they are in the cooldown window since we don't want to wait the connect # timeout each attempt when they are unlikely to have changed status while (1) { @to_check = grep { $_->type eq 'Unknown' ? 
!$_->updated_since($cooldown_time) : !$_->updated_since($start_time) } $self->all_servers; last unless @to_check; if ( $next = first { $_->type eq 'RSPrimary' } @to_check ) { $self->check_address( $next->address ); } elsif ( $next = first { $_->type eq 'PossiblePrimary' } @to_check ) { $self->check_address( $next->address ); } elsif ( @ordinary = grep { $_->type ne 'Unknown' && $_->type ne 'RSGhost' } @to_check ) { $self->_check_oldest_server(@ordinary); } else { $self->_check_oldest_server(@to_check); } } $self->_set_last_scan_time( time ); $self->_set_stale( 0 ); $self->_check_wire_versions; return; } sub status_struct { my ($self) = @_; my $status = { topology_type => $self->type, }; $status->{replica_set_name} = $self->replica_set_name if $self->replica_set_name; # convert from [sec, microsec] array to floating point $status->{last_scan_time} = $self->last_scan_time; my $rtt_hash = $self->rtt_ewma_sec; my $ss = $status->{servers} = []; for my $server ( $self->all_servers ) { my $addr = $server->address; my $server_struct = $server->status_struct; if ( defined $rtt_hash->{$addr} ) { $server_struct->{ewma_rtt_sec} = $rtt_hash->{$addr}; } push @$ss, $server_struct; } return $status; } #--------------------------------------------------------------------------# # private methods #--------------------------------------------------------------------------# sub _add_address_as_unknown { my ( $self, $address, $last_update, $error ) = @_; $error = $error ? "$error" : ""; $error =~ s/ at \S+ line \d+.*//ms; return $self->servers->{$address} = MongoDB::_Server->new( address => $address, last_update_time => $last_update || EPOCH, error => $error, ); } sub _check_for_primary { my ($self) = @_; if ( 0 == $self->_primaries ) { $self->_set_type('ReplicaSetNoPrimary'); $self->_clear_current_primary; return 0; } return 1; } sub _check_oldest_server { my ( $self, @to_check ) = @_; my @ordered = map { $_->[0] } sort { $a->[1] <=> $b->[1] || rand() <=> rand() } # random if equal map { [ $_, $_->last_update_time ] } # ignore partial secs @to_check; $self->check_address( $ordered[0]->address ); return; } sub _check_wire_versions { my ($self) = @_; my $compat = 1; for my $server ( grep { $_->is_available } $self->all_servers ) { my ( $server_min_wire_version, $server_max_wire_version ) = @{ $server->is_master }{qw/minWireVersion maxWireVersion/}; if ( ( $server_min_wire_version || 0 ) > $self->max_wire_version || ( $server_max_wire_version || 0 ) < $self->min_wire_version ) { $compat = 0; } } $self->_set_is_compatible($compat); return; } sub _dump { my ($self) = @_; print $self->_status_string . 
"\n"; } sub _eligible { my ( $self, $read_pref, @candidates ) = @_; return @candidates if $read_pref->has_empty_tag_sets; # given a tag set list, if a tag set matches at least one # candidate, then all candidates matching that tag set are eligible for my $ts ( @{ $read_pref->tag_sets } ) { my @eligible = grep { $_->matches_tag_set($ts) } @candidates; return @eligible if @eligible; } return; } sub _find_any_server { my ( $self, undef, @candidates ) = @_; push @candidates, $self->all_servers unless @candidates; return $self->_get_server_in_latency_window( [ grep { $_->is_available } @candidates ] ); } sub _find_nearest_server { my ( $self, $read_pref, @candidates ) = @_; push @candidates, ( $self->_primaries, $self->_secondaries ) unless @candidates; my @suitable = $self->_eligible( $read_pref, @candidates ); return $self->_get_server_in_latency_window( \@suitable ); } sub _find_primary_server { my ( $self, undef, @candidates ) = @_; return $self->current_primary if $self->current_primary; push @candidates, $self->all_servers unless @candidates; return first { $_->is_writable } @candidates; } sub _find_primarypreferred_server { my ( $self, $read_pref, @candidates ) = @_; return $self->_find_primary_server(@candidates) || $self->_find_secondary_server( $read_pref, @candidates ); } sub _find_secondary_server { my ( $self, $read_pref, @candidates ) = @_; push @candidates, $self->_secondaries unless @candidates; my @suitable = $self->_eligible( $read_pref, @candidates ); return $self->_get_server_in_latency_window( \@suitable ); } sub _find_secondarypreferred_server { my ( $self, $read_pref, @candidates ) = @_; return $self->_find_secondary_server( $read_pref, @candidates ) || $self->_find_primary_server(@candidates); } sub _get_server_in_latency_window { my ( $self, $servers ) = @_; return unless @$servers; return $servers->[0] if @$servers == 1; # order servers by RTT EWMA my $rtt_hash = $self->rtt_ewma_sec; my @sorted = sort { $a->{rtt} <=> $b->{rtt} } map { { server => $_, rtt => $rtt_hash->{ $_->address } } } @$servers; # lowest RTT is always in the windows my @in_window = shift @sorted; # add any other servers in window and return a random one my $max_rtt = $in_window[0]->{rtt} + $self->local_threshold_sec; push @in_window, grep { $_->{rtt} <= $max_rtt } @sorted; return $in_window[ int( rand(@in_window) ) ]->{server}; } sub _get_server_link { my ( $self, $server, $method, $read_pref ) = @_; my $address = $server->address; my $link = $self->links->{$address}; # if no link, make a new connection or give up $link = $self->_initialize_link($address) unless $link && $link->connected; return unless $link; # for idle links, refresh the server and verify validity if ( $link->idle_time_sec > $self->socket_check_interval_sec ) { $self->check_address($address); # topology might have dropped the server $server = $self->servers->{$address} or return; my $fresh_link = $self->links->{$address}; return $fresh_link if !$method; # verify selection criteria return $self->$method( $read_pref, $server ) ? 
$fresh_link : undef; } return $link; } sub _initialize_link { my ( $self, $address ) = @_; my $link = try { MongoDB::_Link->new( %{$self->link_options}, address => $address )->connect; } catch { # if connection failed, update topology with Unknown description $self->_reset_address_to_unknown( $address, $_ ); return; }; return unless $link; # connection succeeded, so register link and get a server description $self->links->{$address} = $link; $self->_update_topology_from_link($link); # after update, server might or might not exist in the topology; # if not, return nothing return unless my $server = $self->servers->{$address}; # we have a link and the server is a valid member, so # try to authenticate; if authentication fails, all # servers are considered invalid and we throw an error if ( first { $_ eq $server->type } qw/Standalone Mongos RSPrimary RSSecondary/ ) { try { $self->credential->authenticate($link, $self->bson_codec); } catch { my $err = $_; $self->_reset_address_to_unknown( $_->address, $err ) for $self->all_servers; MongoDB::AuthError->throw("Authentication to $address failed: $err"); }; } return $link; } sub _primaries { return grep { $_->type eq 'RSPrimary' } $_[0]->all_servers; } sub _remove_address { my ( $self, $address ) = @_; if ( $self->current_primary && $self->current_primary->address eq $address ) { $self->_clear_current_primary; } delete $self->$_->{$address} for qw/servers links rtt_ewma_sec/; return; } sub _remove_server { my ( $self, $server ) = @_; $self->_remove_address( $server->address ); return; } sub _reset_address_to_unknown { my ( $self, $address, $error, $update_time ) = @_; $update_time ||= time; $self->_remove_address($address); my $desc = $self->_add_address_as_unknown( $address, $update_time, $error ); $self->_update_topology_from_server_desc($address, $desc); return; } sub _secondaries { return grep { $_->type eq 'RSSecondary' } $_[0]->all_servers; } sub _status_string { my ($self) = @_; my $status = ''; if ( $self->type =~ /^Replica/ ) { $status .= sprintf( "Topology type: %s; Set name: %s, Member status:\n", $self->type, $self->replica_set_name ); } else { $status .= sprintf( "Topology type: %s; Member status:\n", $self->type ); } $status .= join( "\n", map { " $_" } map { $_->status_string } $self->all_servers ) . "\n"; return $status; } # this implements the server selection timeout around whatever actual method # is used for returning a link sub _selection_timeout { my ( $self, $method, $read_pref ) = @_; my $start_time = my $loop_end_time = time(); my $max_time = $start_time + $self->server_selection_timeout_sec; if ( $self->last_scan_time + $self->heartbeat_frequency_sec < $start_time ) { $self->_set_stale(1); } while (1) { if ( $self->stale ) { my $scan_ready_time = $self->last_scan_time + MIN_HEARTBEAT_FREQUENCY_SEC; # if not enough time left to wait to check; then caller throws error return if !$self->server_selection_try_once && $scan_ready_time > $max_time; # loop_end_time is a proxy for time() to avoid overhead my $sleep_time = $scan_ready_time - $loop_end_time; usleep( 1e6 * $sleep_time ) if $sleep_time > 0; $self->scan_all_servers; } unless ( $self->is_compatible ) { $self->_set_stale(1); MongoDB::ProtocolError->throw( "Incompatible wire protocol version. This version of the MongoDB driver is not compatible with the server. You probably need to upgrade this library." 
); } my $server = $self->$method($read_pref); return $server if $server; $self->_set_stale(1); $loop_end_time = time(); if ( $self->server_selection_try_once ) { # if already tried once; then caller throws error return if $self->last_scan_time > $start_time; } else { # if selection timed out; then caller throws error return if $loop_end_time > $max_time; } } } my $PRIMARY = MongoDB::ReadPreference->new; sub _update_topology_from_link { my ( $self, $link ) = @_; my $start_time = time; my $is_master = eval { my $op = MongoDB::Op::_Command->_new( db_name => 'admin', query => [ ismaster => 1 ], query_flags => {}, bson_codec => $self->bson_codec, read_preference => $PRIMARY, ); # just for this command, use connect timeout as socket timeout; # this violates encapsulation, but requires less API modification # to support this specific exception to the socket timeout local $link->{socket_timeout} = $link->{connect_timeout}; $op->execute( $link )->output; }; if ( $@ ) { local $_ = $@; warn "During MongoDB topology update for @{[$link->address]}: $_" if WITH_ASSERTS; $self->_reset_address_to_unknown( $link->address, $_ ); # retry a network error if server was previously known to us if ( $_->$_isa("MongoDB::NetworkError") and $link->server and $link->server->type ne 'Unknown' and $link->server->type ne 'PossiblePrimary' ) { # the earlier reset to unknown avoids us reaching this branch again # and recursing forever $self->check_address( $link->address ); } return; }; return unless $is_master; my $end_time = time; my $rtt_sec = $end_time - $start_time; my $new_server = MongoDB::_Server->new( address => $link->address, last_update_time => $end_time, rtt_sec => $rtt_sec, is_master => $is_master, ); $self->_update_topology_from_server_desc( $link->address, $new_server ); return; } sub _update_topology_from_server_desc { my ( $self, $address, $new_server ) = @_; # ignore spurious result not in the set; this isn't strictly necessary # for single-threaded operation, but spec tests expect it and if we # have async monitoring in the future, late responses could come back # after a server has been removed return unless $self->servers->{$address}; $self->_update_ewma( $address, $new_server ); # must come after ewma update $self->servers->{$address} = $new_server; my $method = "_update_" . $self->type; $self->$method( $address, $new_server ); # if link is still around, tag it with server specifics $self->_update_link_metadata( $address, $new_server ); return $new_server; } sub _update_ewma { my ( $self, $address, $new_server ) = @_; if ( $new_server->type eq 'Unknown' ) { delete $self->rtt_ewma_sec->{$address}; } else { my $old_avg = $self->rtt_ewma_sec->{$address}; my $alpha = $self->ewma_alpha; my $rtt_sec = $new_server->rtt_sec; $self->rtt_ewma_sec->{$address} = defined($old_avg) ? 
( $alpha * $rtt_sec + ( 1 - $alpha ) * $old_avg ) : $rtt_sec; } return; } sub _update_link_metadata { my ( $self, $address, $server ) = @_; # if the link didn't get dropped from the topology during the update, we # attach the server so the link knows where it came from if ( $self->links->{$address} ) { $self->links->{$address}->set_metadata($server); } return; } sub _update_rs_with_primary_from_member { my ( $self, $new_server ) = @_; if ( !$self->servers->{ $new_server->address } || $self->replica_set_name ne $new_server->set_name ) { $self->_remove_server($new_server); } # require 'me' that matches expected address if ( $new_server->me && $new_server->me ne $new_server->address ) { $self->_remove_server($new_server); $self->_check_for_primary; return; } if ( ! $self->_check_for_primary ) { # flag possible primary to amend scanning order my $primary = $new_server->primary; if ( length($primary) && $self->servers->{$primary} && $self->servers->{$primary}->type eq 'Unknown' ) { $self->servers->{$primary}->_set_type('PossiblePrimary'); } } return; } sub _update_rs_with_primary_from_primary { my ( $self, $new_server ) = @_; if ( !length $self->replica_set_name ) { $self->_set_replica_set_name( $new_server->set_name ); } elsif ( $self->replica_set_name ne $new_server->set_name ) { # We found a primary but it doesn't have the setName # provided by the user or previously discovered $self->_remove_server($new_server); return; } my $election_id = $new_server->is_master->{electionId}; my $set_version = $new_server->is_master->{setVersion}; my $max_election_id = $self->max_election_id; my $max_set_version = $self->max_set_version; if ( defined $set_version && defined $election_id ) { if ( defined $max_election_id && defined $max_set_version && ( $max_set_version > $set_version || ( $max_set_version == $set_version && $max_election_id->value gt $election_id->value ) ) ) { # stale primary $self->_remove_address( $new_server->address ); $self->_add_address_as_unknown( $new_server->address ); $self->_check_for_primary; return; } $self->_set_max_election_id( $election_id ); } if ( defined $set_version && ( !defined $max_set_version || $set_version > $max_set_version ) ) { $self->_set_max_set_version($set_version); } # possibly invalidate an old primary (even if more than one!) 
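# (A second apparent primary can exist briefly, e.g. during an election before
# a deposed primary notices it has stepped down, so any other server still
# marked RSPrimary is reset to Unknown and re-checked.)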
for my $old_primary ( $self->_primaries ) { if ( $old_primary->address ne $new_server->address ) { $self->_reset_address_to_unknown( $old_primary->address, "no longer primary; update needed", $old_primary->last_update_time ); } } # unknown set members need to be added to the topology my %set_members = map { $_ => undef } map { @{ $new_server->$_ } } qw/hosts passives arbiters/; $self->_add_address_as_unknown($_) for grep { !exists $self->servers->{$_} } keys %set_members; # topology servers no longer in the set need to be removed $self->_remove_address($_) for grep { !exists $set_members{$_} } keys %{ $self->servers }; return; } sub _update_rs_without_primary { my ( $self, $new_server ) = @_; if ( !length $self->replica_set_name ) { $self->_set_replica_set_name( $new_server->set_name ); } elsif ( $self->replica_set_name ne $new_server->set_name ) { $self->_remove_server($new_server); return; } # unknown set members need to be added to the topology my %set_members = map { $_ => undef } map { @{ $new_server->$_ } } qw/hosts passives arbiters/; $self->_add_address_as_unknown($_) for grep { !exists $self->servers->{$_} } keys %set_members; # require 'me' that matches expected address if ( $new_server->me && $new_server->me ne $new_server->address ) { $self->_remove_server($new_server); return; } # flag possible primary to amend scanning order my $primary = $new_server->primary; if ( length($primary) && $self->servers->{$primary} && $self->servers->{$primary}->type eq 'Unknown' ) { $self->servers->{$primary}->_set_type('PossiblePrimary'); } return; } #--------------------------------------------------------------------------# # update methods by topology types: behavior in each depends on new server # type received #--------------------------------------------------------------------------# sub _update_ReplicaSetNoPrimary { my ( $self, $address, $new_server ) = @_; my $server_type = $new_server->type; if ( $server_type eq 'RSPrimary' ) { $self->_set_type('ReplicaSetWithPrimary'); $self->_update_rs_with_primary_from_primary($new_server); # topology changes might have removed all primaries $self->_check_for_primary; } elsif ( grep { $server_type eq $_ } qw/RSSecondary RSArbiter RSOther/ ) { $self->_update_rs_without_primary($new_server); } elsif ( grep { $server_type eq $_ } qw/Standalone Mongos/ ) { $self->_remove_server($new_server); } else { # Unknown or RSGhost are no-ops } return; } sub _update_ReplicaSetWithPrimary { my ( $self, $address, $new_server ) = @_; my $server_type = $new_server->type; if ( $server_type eq 'RSPrimary' ) { $self->_update_rs_with_primary_from_primary($new_server); } elsif ( grep { $server_type eq $_ } qw/RSSecondary RSArbiter RSOther/ ) { $self->_update_rs_with_primary_from_member($new_server); } elsif ( grep { $server_type eq $_ } qw/Unknown Standalone Mongos/ ) { $self->_remove_server($new_server) unless $server_type eq 'Unknown'; } else { # RSGhost is no-op } # topology changes might have removed all primaries $self->_check_for_primary; return; } sub _update_Sharded { my ( $self, $address, $new_server ) = @_; my $server_type = $new_server->type; if ( grep { $server_type eq $_ } qw/Unknown Mongos/ ) { # no-op } else { $self->_remove_server($new_server); } return; } sub _update_Single { my ( $self, $address, $new_server ) = @_; # Per the spec, TopologyType Single never changes type or membership return; } sub _update_Unknown { my ( $self, $address, $new_server ) = @_; my $server_type = $new_server->type; if ( $server_type eq 'Standalone' ) { if ( 
$self->number_of_seeds == 1 ) { $self->_set_type('Single'); } else { # a standalone server with multiple seeds is a replica set member # in maintenance mode; we drop it and may pick it up later if it # rejoins the replica set. $self->_remove_address($address); } } elsif ( $server_type eq 'Mongos' ) { $self->_set_type('Sharded'); } elsif ( $server_type eq 'RSPrimary' ) { $self->_set_type('ReplicaSetWithPrimary'); $self->_update_rs_with_primary_from_primary($new_server); # topology changes might have removed all primaries $self->_check_for_primary; } elsif ( grep { $server_type eq $_ } qw/RSSecondary RSArbiter RSOther/ ) { $self->_set_type('ReplicaSetNoPrimary'); $self->_update_rs_without_primary($new_server); } else { # Unknown or RSGhost are no-ops } return; } 1; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/lib/MongoDB/_Types.pm000644 000765 000024 00000013421 12651754051 017264 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::_Types; # MongoDB type definitions use version; our $VERSION = 'v1.2.2'; use Type::Library -base, -declare => qw( ArrayOfHashRef AuthMechanism Booleanpm BSONCodec ConnectType CursorType DBRefColl DBRefDB Document ErrorStr HashLike HostAddress HostAddressList IndexModel IndexModelList IxHash MongoDBCollection MongoDBDatabase MongoDBQuery NonEmptyStr NonNegNum OID OrderedDoc PairArrayRef ReadPrefMode ReadConcern ReadPreference ServerDesc ServerType SingleChar SingleKeyHash TopologyType WriteConcern ); use Type::Utils -all; use Types::Standard qw( Any ArrayRef Dict HashRef Maybe Num Optional Ref Str Undef ); use Scalar::Util qw/reftype/; use boolean 0.25; require Tie::IxHash; #--------------------------------------------------------------------------# # Type declarations (without inherited coercions) #--------------------------------------------------------------------------# declare ArrayOfHashRef, as ArrayRef [HashRef]; enum AuthMechanism, [qw/NONE DEFAULT MONGODB-CR MONGODB-X509 GSSAPI PLAIN SCRAM-SHA-1/]; class_type Booleanpm, { class => 'boolean' }; duck_type BSONCodec, [ qw/encode_one decode_one/ ]; enum ConnectType, [qw/replicaSet direct none/]; enum CursorType, [qw/non_tailable tailable tailable_await/]; declare ErrorStr, as Str, where { $_ }; # needs a true value declare HashLike, as Ref, where { reftype($_) eq 'HASH' }; # XXX loose address validation for now. 
Host part should really be hostname or # IPv4/IPv6 literals declare HostAddress, as Str, where { $_ =~ /^[^:]+:[0-9]+$/ and lc($_) eq $_ }, message { "Address '$_' not formatted as 'hostname:port'" }; declare HostAddressList, as ArrayRef [HostAddress], message { "Address list <@$_> is not all hostname:port pairs" }; class_type IxHash, { class => 'Tie::IxHash' }; declare MaybeHashRef, as Maybe[ HashRef ]; class_type MongoDBCollection, { class => 'MongoDB::Collection' }; class_type MongoDBDatabase, { class => 'MongoDB::Database' }; class_type MongoDBQuery, { class => 'MongoDB::_Query' }; declare NonEmptyStr, as Str, where { defined $_ && length $_ }; declare NonNegNum, as Num, where { defined($_) && $_ >= 0 }, message { "value must be a non-negative number" }; declare OID, as Str, where { /\A[0-9a-f]{24}\z/ }, message { "Value '$_' is not a valid OID" }; declare PairArrayRef, as ArrayRef, where { @$_ % 2 == 0 }; enum ReadPrefMode, [qw/primary primaryPreferred secondary secondaryPreferred nearest/]; class_type ReadPreference, { class => 'MongoDB::ReadPreference' }; class_type ReadConcern, { class => 'MongoDB::ReadConcern' }; class_type ServerDesc, { class => 'MongoDB::_Server' }; enum ServerType, [ qw/Standalone Mongos PossiblePrimary RSPrimary RSSecondary RSArbiter RSOther RSGhost Unknown/ ]; declare SingleChar, as Str, where { length $_ eq 1 }; declare SingleKeyHash, as HashRef, where { 1 == scalar keys %$_ }; enum TopologyType, [qw/Single ReplicaSetNoPrimary ReplicaSetWithPrimary Sharded Unknown/]; class_type WriteConcern, { class => 'MongoDB::WriteConcern' }; # after SingleKeyHash, PairArrayRef and IxHash declare OrderedDoc, as PairArrayRef|IxHash|SingleKeyHash; declare Document, as HashRef|PairArrayRef|IxHash|HashLike; # after NonEmptyStr declare DBRefColl, as NonEmptyStr; declare DBRefDB, as NonEmptyStr|Undef; # after OrderedDoc declare IndexModel, as Dict [ keys => OrderedDoc, options => Optional [HashRef] ]; declare IndexModelList, as ArrayRef [IndexModel]; #--------------------------------------------------------------------------# # Coercions #--------------------------------------------------------------------------# coerce ArrayOfHashRef, from HashRef, via { [$_] }; coerce BSONCodec, from HashRef, via { require MongoDB::BSON; MongoDB::BSON->new($_) }; coerce Booleanpm, from Any, via { boolean($_) }; coerce DBRefColl, from MongoDBCollection, via { $_->name }; coerce DBRefDB, from MongoDBDatabase, via { $_->name }; coerce ErrorStr, from Str, via { $_ || "unspecified error" }; coerce HostAddress, from Str, via { /:/ ? lc $_ : lc "$_:27017" }; coerce HostAddressList, from ArrayRef, via { [ map { /:/ ? 
lc $_ : lc "$_:27017" } @$_ ] }; coerce ReadPrefMode, from Str, via { $_ = lc $_; s/_?preferred/Preferred/; $_ }; coerce IxHash, from HashRef, via { Tie::IxHash->new(%$_) }; coerce IxHash, from ArrayRef, via { Tie::IxHash->new(@$_) }; coerce IxHash, from HashLike, via { Tie::IxHash->new(%$_) }; coerce OID, from Str, via { lc $_ }; coerce ReadPreference, from HashRef, via { require MongoDB::ReadPreference; MongoDB::ReadPreference->new($_) }; coerce ReadPreference, from Str, via { require MongoDB::ReadPreference; MongoDB::ReadPreference->new( mode => $_ ) }; coerce ReadPreference, from ArrayRef, via { require MongoDB::ReadPreference; MongoDB::ReadPreference->new( mode => $_->[0], tag_sets => $_->[1] ) }; coerce ReadConcern, from Str, via { require MongoDB::ReadConcern; MongoDB::ReadConcern->new( level => $_ ) }; coerce ReadConcern, from HashRef, via { require MongoDB::ReadConcern; MongoDB::ReadConcern->new($_) }; coerce WriteConcern, from HashRef, via { require MongoDB::WriteConcern; MongoDB::WriteConcern->new($_) }; 1; MongoDB-v1.2.2/lib/MongoDB/_URI.pm000644 000765 000024 00000014176 12651754051 016627 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::_URI; use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use Types::Standard qw( Any ArrayRef HashRef Str ); use namespace::clean -except => 'meta'; my $uri_re = qr{ mongodb:// (?: ([^:]*) (?: : ([^@]*) )? @ )? # [username(:password)?@] ([^/]*) # host1[:port1][,host2[:port2],...[,hostN[:portN]]] (?: / ([^?]*) # /[database] (?: [?] (.*) )? # [?options] )? 
}x; has uri => ( is => 'ro', isa => Str, required => 1, ); has username => ( is => 'ro', isa => Any, writer => '_set_username', ); has password => ( is => 'ro', isa => Any, writer => '_set_password', ); has db_name => ( is => 'ro', isa => Str, writer => '_set_db_name', default => '', ); has options => ( is => 'ro', isa => HashRef, writer => '_set_options', default => sub { {} }, ); has hostpairs => ( is => 'ro', isa => ArrayRef, writer => '_set_hostpairs', default => sub { [] }, ); has valid_options => ( is => 'ro', isa => HashRef, builder => '_build_valid_options', ); sub _build_valid_options { return { map { lc($_) => 1 } qw( authMechanism authMechanismProperties connectTimeoutMS connect heartbeatFrequencyMS journal localThresholdMS maxTimeMS readPreference readPreferenceTags replicaSet serverSelectionTimeoutMS serverSelectionTryOnce socketCheckIntervalMS socketTimeoutMS ssl w wTimeoutMS readConcernLevel ) }; } sub _unescape_all { my $str = shift; return '' unless defined $str; $str =~ s/%([0-9a-f]{2})/chr(hex($1))/ieg; return $str; } sub _parse_doc { my ($name, $string) = @_; my $set = {}; for my $tag ( split /,/, $string ) { if ( $tag =~ /\S/ ) { my @kv = map { s{^\s*}{}; s{\s*$}{}; $_ } split /:/, $tag, 2; MongoDB::UsageError->throw("in option '$name', '$tag' is not a key:value pair") unless @kv == 2; $set->{$kv[0]} = $kv[1]; } } return $set; } sub BUILD { my ($self) = @_; my $uri = $self->uri; my %result; if ($uri =~ m{^$uri_re$}) { ($result{username}, $result{password}, $result{hostpairs}, $result{db_name}, $result{options}) = ($1, $2, $3, $4, $5); # Decode components for my $subcomponent ( qw/username password db_name/ ) { $result{$subcomponent} = _unescape_all($result{$subcomponent}) unless !(defined $result{$subcomponent}); } $result{hostpairs} = 'localhost' unless $result{hostpairs}; $result{hostpairs} = [ map { lc $_ } map { @_ = split ':', $_; _unescape_all($_[0]).":"._unescape_all($_[1]) } map { $_ .= ':27017' unless $_ =~ /:/ ; $_ } split ',', $result{hostpairs} ]; if ( defined $result{options} ) { my $valid = $self->valid_options; my %parsed; for my $opt ( split '&', $result{options} ) { my @kv = split '=', $opt; push @kv, '' if @kv == 1; MongoDB::UsageError->throw("expected key value pair") unless @kv == 2; my ($k, $v) = map { _unescape_all($_) } @kv; # connection string spec calls for case normalization (my $lc_k = $k) =~ tr[A-Z][a-z]; if ( !$valid->{$lc_k} ) { warn "Unsupported option '$k' in URI $self\n"; next; } if ( $lc_k eq 'authmechanismproperties' ) { $parsed{$lc_k} = _parse_doc($k,$v); } elsif ( $lc_k eq 'readpreferencetags' ) { $parsed{$lc_k} ||= []; push @{$parsed{$lc_k}}, _parse_doc($k,$v); } elsif ( $lc_k eq 'ssl' || $lc_k eq 'journal' || $lc_k eq 'serverselectiontryonce' ) { $parsed{$lc_k} = __str_to_bool($k, $v); } else { $parsed{$lc_k} = $v; } } $result{options} = \%parsed; } delete $result{username} unless defined $result{username}; delete $result{password} unless defined $result{password}; # can be empty string delete $result{db_name} unless defined $result{db_name} && length $result{db_name}; } else { # NOT a UsageError to avoid stacktrace revealing credentials MongoDB::Error->throw("URI '$self' could not be parsed"); } for my $attr ( qw/username password db_name options hostpairs/ ) { my $setter = "_set_$attr"; $self->$setter( $result{$attr} ) if defined $result{$attr}; } return; } sub __str_to_bool { my ($k, $str) = @_; MongoDB::UsageError->throw("cannot convert undef to bool for key '$k'") unless defined $str; # check for "true" and "false" 
(case-insensitively) my $ret = $str eq "true" ? 1 : $str eq "false" ? 0 : undef; return $ret if defined $ret; MongoDB::UsageError->throw("expected boolean string 'true' or 'false' for key '$k' but instead received '$str'"); } # redact user credentials when stringifying use overload '""' => sub { (my $s = $_[0]->uri) =~ s{^(\w+)://[^/]+\@}{$1://[**REDACTED**]\@}; return $s }, 'fallback' => 1; 1; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/lib/MongoDB/BSON/000755 000765 000024 00000000000 12651754051 016223 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/lib/MongoDB/BSON.pm000644 000765 000024 00000032626 12651754051 016572 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::BSON; # ABSTRACT: Tools for serializing and deserializing data in BSON form use version; our $VERSION = 'v1.2.2'; use XSLoader; XSLoader::load("MongoDB", $VERSION); use Carp (); use Config; use if ! $Config{use64bitint}, "Math::BigInt"; use DateTime; use MongoDB::Error; use Moo; use MongoDB::_Types qw( NonNegNum SingleChar ); use Types::Standard qw( Bool CodeRef Maybe Str Undef ); use boolean; use namespace::clean -except => 'meta'; # cached for efficiency during decoding our $_boolean_true = true; our $_boolean_false = false; #pod =attr dbref_callback #pod #pod A document with keys C<$ref> and C<$id> is a special MongoDB convention #pod representing a #pod L. #pod #pod This attribute specifies a function reference that will be called with a hash #pod reference argument representing a DBRef. #pod #pod The hash reference will have keys C<$ref> and C<$id> and may have C<$db> and #pod other keys. The callback must return a scalar value representing the dbref #pod (e.g. a document, an object, etc.) #pod #pod The default C returns the DBRef hash reference without #pod modification. #pod #pod Note: in L, when no L object is #pod provided as the C attribute, L creates a #pod B L object that inflates DBRefs into #pod L objects using a custom C: #pod #pod dbref_callback => sub { return MongoDB::DBRef->new(shift) }, #pod #pod Object-database mappers may wish to implement alternative C #pod attributes to provide whatever semantics they require. #pod #pod =cut has dbref_callback => ( is => 'ro', isa => CodeRef, default => sub { sub { shift } }, ); #pod =attr dt_type #pod #pod Sets the type of object which is returned for BSON DateTime fields. The default #pod is L. Other acceptable values are L, L #pod and C. The latter will give you the raw epoch value (possibly as a #pod floating point value) rather than an object. #pod #pod =cut has dt_type => ( is => 'ro', isa => Str|Undef, default => 'DateTime', ); #pod =attr error_callback #pod #pod This attribute specifies a function reference that will be called with #pod three positional arguments: #pod #pod =for :list #pod * an error string argument describing the error condition #pod * a reference to the problematic document or byte-string #pod * the method in which the error occurred (e.g. 
C or C) #pod #pod Note: for decoding errors, the byte-string is passed as a reference to avoid #pod copying possibly large strings. #pod #pod If not provided, errors messages will be thrown with C. #pod #pod =cut has error_callback => ( is => 'ro', isa => Maybe[CodeRef], ); #pod =attr invalid_chars #pod #pod A string containing ASCII characters that must not appear in keys. The default #pod is the empty string, meaning there are no invalid characters. #pod #pod =cut has invalid_chars => ( is => 'ro', isa => Str, default => '', ); #pod =attr max_length #pod #pod This attribute defines the maximum document size. The default is 0, which #pod disables any maximum. #pod #pod If set to a positive number, it applies to both encoding B decoding (the #pod latter is necessary for prevention of resource consumption attacks). #pod #pod =cut has max_length => ( is => 'ro', isa => NonNegNum, default => 0, ); #pod =attr op_char #pod #pod This is a single character to use for special operators. If a key starts #pod with C, the C character will be replaced with "$". #pod #pod The default is "$". #pod #pod =cut has op_char => ( is => 'ro', isa => Maybe[ SingleChar ], ); #pod =attr prefer_numeric #pod #pod If set to true, scalar values that look like a numeric value will be #pod encoded as a BSON numeric type. When false, if the scalar value was ever #pod used as a string, it will be encoded as a BSON UTF-8 string. #pod #pod The default is false. #pod #pod =cut has prefer_numeric => ( is => 'ro', isa => Bool, ); #--------------------------------------------------------------------------# # public methods #--------------------------------------------------------------------------# #pod =method encode_one #pod #pod $byte_string = $codec->encode_one( $doc ); #pod $byte_string = $codec->encode_one( $doc, \%options ); #pod #pod Takes a "document", typically a hash reference, an array reference, or a #pod Tie::IxHash object and returns a byte string with the BSON representation of #pod the document. #pod #pod An optional hash reference of options may be provided. Valid options include: #pod #pod =for :list #pod * first_key – if C is defined, it and C #pod will be encoded first in the output BSON; any matching key found in the #pod document will be ignored. #pod * first_value - value to assign to C; will encode as Null if omitted #pod * error_callback – overrides codec default #pod * invalid_chars – overrides codec default #pod * max_length – overrides codec default #pod * op_char – overrides codec default #pod * prefer_numeric – overrides codec default #pod #pod =cut sub encode_one { my ( $self, $document, $options ) = @_; my $merged_opts = { %$self, ( $options ? %$options : () ) }; my $bson = eval { MongoDB::BSON::_encode_bson( $document, $merged_opts ) }; if ( $@ or ( $merged_opts->{max_length} && length($bson) > $merged_opts->{max_length} ) ) { my $msg = $@ || "Document exceeds maximum size $merged_opts->{max_length}"; if ( $merged_opts->{error_callback} ) { $merged_opts->{error_callback}->( $msg, $document, 'encode_one' ); } else { Carp::croak("During encode_one, $msg"); } } return $bson; } #pod =method decode_one #pod #pod $doc = $codec->decode_one( $byte_string ); #pod $doc = $codec->decode_one( $byte_string, \%options ); #pod #pod Takes a byte string with a BSON-encoded document and returns a #pod hash reference representin the decoded document. #pod #pod An optional hash reference of options may be provided. 
Valid options include: #pod #pod =for :list #pod * dbref_callback – overrides codec default #pod * dt_type – overrides codec default #pod * error_callback – overrides codec default #pod * max_length – overrides codec default #pod #pod =cut sub decode_one { my ( $self, $string, $options ) = @_; my $merged_opts = { %$self, ( $options ? %$options : () ) }; if ( $merged_opts->{max_length} && length($string) > $merged_opts->{max_length} ) { my $msg = "Document exceeds maximum size $merged_opts->{max_length}"; if ( $merged_opts->{error_callback} ) { $merged_opts->{error_callback}->( $msg, \$string, 'decode_one' ); } else { Carp::croak("During decode_one, $msg"); } } my $document = eval { MongoDB::BSON::_decode_bson( $string, $merged_opts ) }; if ( $@ ) { if ( $merged_opts->{error_callback} ) { $merged_opts->{error_callback}->( $@, \$string, 'decode_one' ); } else { Carp::croak("During decode_one, $@"); } } return $document; } #pod =method clone #pod #pod $codec->clone( dt_type => 'Time::Moment' ); #pod #pod Constructs a copy of the original codec, but allows changing #pod attributes in the copy. #pod #pod =cut sub clone { my ($self, @args) = @_; my $class = ref($self); if ( @args == 1 && ref( $args[0] ) eq 'HASH' ) { return $class->new( %$self, %{$args[0]} ); } return $class->new( %$self, @args ); } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::BSON - Tools for serializing and deserializing data in BSON form =head1 VERSION version v1.2.2 =head1 SYNOPSIS my $codec = MongoDB::BSON->new; my $bson = $codec->encode_one( $document ); my $doc = $codec->decode_one( $bson ); =head1 DESCRIPTION This class implements a BSON encoder/decoder ("codec"). It consumes documents and emits BSON strings and vice versa. =head1 ATTRIBUTES =head2 dbref_callback A document with keys C<$ref> and C<$id> is a special MongoDB convention representing a L. This attribute specifies a function reference that will be called with a hash reference argument representing a DBRef. The hash reference will have keys C<$ref> and C<$id> and may have C<$db> and other keys. The callback must return a scalar value representing the dbref (e.g. a document, an object, etc.) The default C returns the DBRef hash reference without modification. Note: in L, when no L object is provided as the C attribute, L creates a B L object that inflates DBRefs into L objects using a custom C: dbref_callback => sub { return MongoDB::DBRef->new(shift) }, Object-database mappers may wish to implement alternative C attributes to provide whatever semantics they require. =head2 dt_type Sets the type of object which is returned for BSON DateTime fields. The default is L. Other acceptable values are L, L and C. The latter will give you the raw epoch value (possibly as a floating point value) rather than an object. =head2 error_callback This attribute specifies a function reference that will be called with three positional arguments: =over 4 =item * an error string argument describing the error condition =item * a reference to the problematic document or byte-string =item * the method in which the error occurred (e.g. C or C) =back Note: for decoding errors, the byte-string is passed as a reference to avoid copying possibly large strings. If not provided, errors messages will be thrown with C. =head2 invalid_chars A string containing ASCII characters that must not appear in keys. The default is the empty string, meaning there are no invalid characters. =head2 max_length This attribute defines the maximum document size. 
The default is 0, which disables any maximum. If set to a positive number, it applies to both encoding B decoding (the latter is necessary for prevention of resource consumption attacks). =head2 op_char This is a single character to use for special operators. If a key starts with C, the C character will be replaced with "$". The default is "$". =head2 prefer_numeric If set to true, scalar values that look like a numeric value will be encoded as a BSON numeric type. When false, if the scalar value was ever used as a string, it will be encoded as a BSON UTF-8 string. The default is false. =head1 METHODS =head2 encode_one $byte_string = $codec->encode_one( $doc ); $byte_string = $codec->encode_one( $doc, \%options ); Takes a "document", typically a hash reference, an array reference, or a Tie::IxHash object and returns a byte string with the BSON representation of the document. An optional hash reference of options may be provided. Valid options include: =over 4 =item * first_key – if C is defined, it and C will be encoded first in the output BSON; any matching key found in the document will be ignored. =item * first_value - value to assign to C; will encode as Null if omitted =item * error_callback – overrides codec default =item * invalid_chars – overrides codec default =item * max_length – overrides codec default =item * op_char – overrides codec default =item * prefer_numeric – overrides codec default =back =head2 decode_one $doc = $codec->decode_one( $byte_string ); $doc = $codec->decode_one( $byte_string, \%options ); Takes a byte string with a BSON-encoded document and returns a hash reference representin the decoded document. An optional hash reference of options may be provided. Valid options include: =over 4 =item * dbref_callback – overrides codec default =item * dt_type – overrides codec default =item * error_callback – overrides codec default =item * max_length – overrides codec default =back =head2 clone $codec->clone( dt_type => 'Time::Moment' ); Constructs a copy of the original codec, but allows changing attributes in the copy. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/BulkWrite.pm000644 000765 000024 00000027605 12651754051 017742 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::BulkWrite; # ABSTRACT: MongoDB bulk write interface use version; our $VERSION = 'v1.2.2'; use MongoDB::Error; use MongoDB::OID; use MongoDB::Op::_BulkWrite; use MongoDB::BulkWriteResult; use MongoDB::BulkWriteView; use Moo; use MongoDB::_Types qw( to_WriteConcern ); use Types::Standard qw( ArrayRef Bool InstanceOf ); use namespace::clean -except => 'meta'; #pod =attr collection (required) #pod #pod The L where the operations are to be performed. 
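#pod
#pod A bulk object is normally obtained from the collection it will write to
#pod rather than constructed directly.  A minimal sketch (assuming
#pod C<$collection> is an existing L<MongoDB::Collection> object):
#pod
#pod     my $bulk = $collection->initialize_ordered_bulk_op;
#pod     $bulk->insert_one( { x => 1 } );
#pod     my $result = $bulk->execute;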
#pod #pod =cut has 'collection' => ( is => 'ro', isa => InstanceOf['MongoDB::Collection'], required => 1, ); #pod =attr ordered (required) #pod #pod A boolean for whether or not operations should be ordered (true) or #pod unordered (false). #pod #pod =cut has 'ordered' => ( is => 'ro', isa => Bool, required => 1, ); #pod =attr bypassDocumentValidation #pod #pod A boolean for whether or not operations should bypass document validation. #pod Default is false. #pod #pod =cut has 'bypassDocumentValidation' => ( is => 'ro', isa => Bool, ); has '_executed' => ( is => 'rw', isa => Bool, init_arg => undef, default => 0, ); has '_queue' => ( is => 'rw', isa => ArrayRef[ArrayRef], init_arg => undef, default => sub { [] }, ); sub _enqueue_write { my $self = shift; push @{$self->{_queue}}, @_; } sub _all_writes { return @{$_[0]->{_queue}} } sub _count_writes { return scalar @{$_[0]->{_queue}} } sub _clear_writes { @{$_[0]->{_queue}} = (); return; } has '_database' => ( is => 'lazy', isa => InstanceOf['MongoDB::Database'], ); sub _build__database { my ($self) = @_; return $self->collection->database; } has '_client' => ( is => 'lazy', isa => InstanceOf['MongoDB::MongoClient'], ); sub _build__client { my ($self) = @_; return $self->_database->_client; } #pod =method find #pod #pod $view = $bulk->find( $query_document ); #pod #pod The C method returns a L object that allows #pod write operations like C or C, constrained by a query document. #pod #pod A query document is required. Use an empty hashref for no criteria: #pod #pod $bulk->find( {} )->remove; # remove all documents! #pod #pod An exception will be thrown on error. #pod #pod =cut sub find { my ( $self, $doc ) = @_; MongoDB::UsageError->throw("find requires a criteria document. Use an empty hashref for no criteria.") unless defined $doc; my $type = ref $doc; unless ( @_ == 2 && grep { $type eq $_ } qw/HASH ARRAY Tie::IxHash/ ) { MongoDB::UsageError->throw("argument to find must be a single hashref, arrayref or Tie::IxHash"); } if ( ref $doc eq 'ARRAY' ) { MongoDB::UsageError->throw("array reference to find must have key/value pairs") if @$doc % 2; $doc = {@$doc}; } return MongoDB::BulkWriteView->new( _query => $doc, _bulk => $self, ); } #pod =method insert_one #pod #pod $bulk->insert_one( $doc ); #pod #pod Queues a document for insertion when L is called. The document may #pod be a hash reference, an array reference (with balanced key/value pairs) or a #pod L object. If the document does not have an C<_id> field, one will #pod be added to the original. #pod #pod The method has an empty return on success; an exception will be thrown on error. #pod #pod =cut sub insert_one { MongoDB::UsageError->throw("insert_one requires a single document reference as an argument") unless @_ == 2 && ref( $_[1] ); my ( $self, $doc ) = @_; if ( ref $doc eq 'ARRAY' ) { MongoDB::UsageError->throw("array reference to find must have key/value pairs") if @$doc % 2; $doc = {@$doc}; } $self->_enqueue_write( [ insert => $doc ] ); return; } #pod =method execute #pod #pod my $result = $bulk->execute; #pod #pod Executes the queued operations. The order and semantics depend on #pod whether the bulk object is ordered or unordered: #pod #pod =for :list #pod * ordered — operations are executed in order, but operations of the same type #pod (e.g. multiple inserts) may be grouped together and sent to the server. If #pod the server returns an error, the bulk operation will stop and an error will #pod be thrown. 
#pod * unordered — operations are grouped by type and sent to the server in an #pod unpredictable order. After all operations are sent, if any errors occurred, #pod an error will be thrown. #pod #pod When grouping operations of a type, operations will be sent to the server in #pod batches not exceeding 16MiB or 1000 items (for a version 2.6 or later server) #pod or individually (for legacy servers without write command support). #pod #pod This method returns a L object if the bulk operation #pod executes successfully. #pod #pod Typical errors might include: #pod #pod =for :list #pod * C — one or more write operations failed #pod * C - all writes were accepted by a primary, but #pod the write concern failed #pod * C — a command to the database failed entirely #pod #pod See L for more on error handling. #pod #pod B: it is an error to call C without any operations or #pod to call C more than once on the same bulk object. #pod #pod =cut sub execute { my ( $self, $write_concern ) = @_; $write_concern = to_WriteConcern($write_concern) if defined($write_concern) && ref($write_concern) ne 'MongoDB::WriteConcern'; if ( $self->_executed ) { MongoDB::UsageError->throw("bulk op execute called more than once"); } else { $self->_executed(1); } unless ( $self->_count_writes ) { MongoDB::UsageError->throw("no bulk ops to execute"); } $write_concern ||= $self->collection->write_concern; my $op = MongoDB::Op::_BulkWrite->_new( db_name => $self->_database->name, coll_name => $self->collection->name, queue => $self->_queue, ordered => $self->ordered, bypassDocumentValidation => $self->bypassDocumentValidation, bson_codec => $self->collection->bson_codec, write_concern => $write_concern, ); return $self->_client->send_write_op( $op ); } #--------------------------------------------------------------------------# # Deprecated methods #--------------------------------------------------------------------------# BEGIN { no warnings 'once'; *insert = \&insert_one; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::BulkWrite - MongoDB bulk write interface =head1 VERSION version v1.2.2 =head1 SYNOPSIS use Safe::Isa; use Try::Tiny; my $bulk = $collection->initialize_ordered_bulk_op; $bulk->insert_one( $doc ); $bulk->find( $query )->upsert->replace_one( $doc ) $bulk->find( $query )->update( $modification ) my $result = try { $bulk->execute; } catch { if ( $_->$isa("MongoDB::WriteConcernError") ) { warn "Write concern failed"; } else { die $_; } }; =head1 DESCRIPTION This class constructs a list of write operations to perform in bulk for a single collection. On a MongoDB 2.6 or later server with write command support this allow grouping similar operations together for transit to the database, minimizing network round-trips. To begin a bulk operation, use one these methods from L: =over 4 =item * L =item * L =back =head2 Ordered Operations With an ordered operations list, MongoDB executes the write operations in the list serially. If an error occurs during the processing of one of the write operations, MongoDB will return without processing any remaining write operations in the list. =head2 Unordered Operations With an unordered operations list, MongoDB can execute in parallel, as well as in a nondeterministic order, the write operations in the list. If an error occurs during the processing of one of the write operations, MongoDB will continue to process remaining write operations in the list. =head1 ATTRIBUTES =head2 collection (required) The L where the operations are to be performed. 
=head2 ordered (required) A boolean for whether or not operations should be ordered (true) or unordered (false). =head2 bypassDocumentValidation A boolean for whether or not operations should bypass document validation. Default is false. =head1 METHODS =head2 find $view = $bulk->find( $query_document ); The C method returns a L object that allows write operations like C or C, constrained by a query document. A query document is required. Use an empty hashref for no criteria: $bulk->find( {} )->remove; # remove all documents! An exception will be thrown on error. =head2 insert_one $bulk->insert_one( $doc ); Queues a document for insertion when L is called. The document may be a hash reference, an array reference (with balanced key/value pairs) or a L object. If the document does not have an C<_id> field, one will be added to the original. The method has an empty return on success; an exception will be thrown on error. =head2 execute my $result = $bulk->execute; Executes the queued operations. The order and semantics depend on whether the bulk object is ordered or unordered: =over 4 =item * ordered — operations are executed in order, but operations of the same type (e.g. multiple inserts) may be grouped together and sent to the server. If the server returns an error, the bulk operation will stop and an error will be thrown. =item * unordered — operations are grouped by type and sent to the server in an unpredictable order. After all operations are sent, if any errors occurred, an error will be thrown. =back When grouping operations of a type, operations will be sent to the server in batches not exceeding 16MiB or 1000 items (for a version 2.6 or later server) or individually (for legacy servers without write command support). This method returns a L object if the bulk operation executes successfully. Typical errors might include: =over 4 =item * C — one or more write operations failed =item * C - all writes were accepted by a primary, but the write concern failed =item * C — a command to the database failed entirely =back See L for more on error handling. B: it is an error to call C without any operations or to call C more than once on the same bulk object. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/BulkWriteResult.pm000644 000765 000024 00000033167 12651754051 021141 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
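# A rough usage sketch for the result class defined below (assuming $bulk
# is a MongoDB::BulkWrite object that already has queued operations):
#
#   my $result = $bulk->execute;
#   if ( $result->acknowledged ) {
#       printf "inserted %d, deleted %d\n",
#           $result->inserted_count, $result->deleted_count;
#   }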
# package MongoDB::BulkWriteResult; # ABSTRACT: MongoDB bulk write result document use version; our $VERSION = 'v1.2.2'; # empty superclass for backcompatibility; add a variable to the # package namespace so Perl thinks it's a real package $MongoDB::WriteResult::VERSION = $VERSION; use Moo; use MongoDB::Error; use MongoDB::_Constants; use MongoDB::_Types qw( ArrayOfHashRef ); use Types::Standard qw( HashRef Num Undef ); use namespace::clean; # fake empty superclass for backcompat our @ISA; push @ISA, 'MongoDB::WriteResult'; with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteResult ); has [qw/upserted inserted/] => ( is => 'ro', required => 1, isa => ArrayOfHashRef, ); has inserted_ids => ( is => 'lazy', builder => '_build_inserted_ids', init_arg => undef, isa => HashRef, ); sub _build_inserted_ids { my ($self) = @_; return { map { $_->{index}, $_->{_id} } @{ $self->inserted } }; } has upserted_ids => ( is => 'lazy', builder => '_build_upserted_ids', init_arg => undef, isa => HashRef, ); sub _build_upserted_ids { my ($self) = @_; return { map { $_->{index}, $_->{_id} } @{ $self->upserted } }; } for my $attr (qw/inserted_count upserted_count matched_count deleted_count/) { has $attr => ( is => 'ro', writer => "_set_$attr", required => 1, isa => Num, ); } # This should always be initialized either as a number or as undef so that # merges accumulate correctly. It should be undef if talking to a server < 2.6 # or if talking to a mongos and not getting the field back from an update. The # default is undef, which will be sticky and ensure this field stays undef. has modified_count => ( is => 'ro', writer => '_set_modified_count', required => 1, isa => (Num|Undef), ); sub has_modified_count { my ($self) = @_; return defined( $self->modified_count ); } has op_count => ( is => 'ro', writer => '_set_op_count', required => 1, isa => Num, ); has batch_count => ( is => 'ro', writer => '_set_batch_count', required => 1, isa => Num, ); #--------------------------------------------------------------------------# # emulate old API #--------------------------------------------------------------------------# my %OLD_API_ALIASING = ( nInserted => 'inserted_count', nUpserted => 'upserted_count', nMatched => 'matched_count', nModified => 'modified_count', nRemoved => 'deleted_count', writeErrors => 'write_errors', writeConcernErrors => 'write_concern_errors', count_writeErrors => 'count_write_errors', count_writeConcernErrors => 'count_write_concern_errors', ); while ( my ( $old, $new ) = each %OLD_API_ALIASING ) { no strict 'refs'; *{$old} = \&{$new}; } #--------------------------------------------------------------------------# # private functions #--------------------------------------------------------------------------# # defines how an logical operation type gets mapped to a result # field from the actual command result my %op_map = ( insert => [ inserted_count => sub { $_[0]->{n} } ], delete => [ deleted_count => sub { $_[0]->{n} } ], update => [ matched_count => sub { $_[0]->{n} } ], upsert => [ matched_count => sub { $_[0]->{n} - @{ $_[0]->{upserted} || [] } } ], ); my @op_map_keys = sort keys %op_map; sub _parse_cmd_result { my $class = shift; my $args = ref $_[0] eq 'HASH' ? 
shift : {@_}; unless ( 2 == grep { exists $args->{$_} } qw/op result/ ) { MongoDB::UsageError->throw("parse requires 'op' and 'result' arguments"); } my ( $op, $op_count, $batch_count, $result, $cmd_doc ) = @{$args}{qw/op op_count batch_count result cmd_doc/}; $result = $result->output if eval { $result->isa("MongoDB::CommandResult") }; MongoDB::UsageError->throw("op argument to parse must be one of: @op_map_keys") unless grep { $op eq $_ } @op_map_keys; MongoDB::UsageError->throw("results argument to parse must be a hash reference") unless ref $result eq 'HASH'; my %attrs = ( batch_count => $batch_count || 1, $op_count ? ( op_count => $op_count ) : (), inserted_count => 0, upserted_count => 0, matched_count => 0, deleted_count => 0, upserted => [], inserted => [], ); $attrs{write_errors} = $result->{writeErrors} ? $result->{writeErrors} : []; # rename writeConcernError -> write_concern_errors; coerce it to arrayref $attrs{write_concern_errors} = $result->{writeConcernError} ? [ $result->{writeConcernError} ] : []; # if we have upserts, change type to calculate differently if ( $result->{upserted} ) { $op = 'upsert'; $attrs{upserted} = $result->{upserted}; $attrs{upserted_count} = @{ $result->{upserted} }; } # recover _ids from documents if ( exists($result->{n}) && $op eq 'insert' ) { my @pairs; my $docs = {@$cmd_doc}->{documents}; for my $i ( 0 .. $result->{n}-1 ) { push @pairs, { index => $i, _id => $docs->[$i]{metadata}{_id} }; } $attrs{inserted} = \@pairs; } # change 'n' into an op-specific count if ( exists $result->{n} ) { my ( $key, $builder ) = @{ $op_map{$op} }; $attrs{$key} = $builder->($result); } # for an update/upsert we want the exact response whether numeric or undef # so that new undef responses become sticky; for all other updates, we # consider it 0 and let it get sorted out in the merging $attrs{modified_count} = ( $op eq 'update' || $op eq 'upsert' ) ? 
$result->{nModified} : 0; return $class->_new(%attrs); } # these are for single results only sub _parse_write_op { my $class = shift; my $op = shift; my %attrs = ( batch_count => 1, op_count => 1, write_errors => $op->write_errors, write_concern_errors => $op->write_concern_errors, inserted_count => 0, upserted_count => 0, matched_count => 0, modified_count => undef, deleted_count => 0, upserted => [], inserted => [], ); my $has_write_error = @{ $attrs{write_errors} }; # parse by type my $type = ref($op); if ( $type eq 'MongoDB::InsertOneResult' ) { if ( $has_write_error ) { $attrs{inserted_count} = 0; $attrs{inserted} = []; } else { $attrs{inserted_count} = 1; $attrs{inserted} = [ { index => 0, _id => $op->inserted_id } ]; } } elsif ( $type eq 'MongoDB::DeleteResult' ) { $attrs{deleted_count} = $op->deleted_count; } elsif ( $type eq 'MongoDB::UpdateResult' ) { if ( defined $op->upserted_id ) { my $upsert = { index => 0, _id => $op->upserted_id }; $attrs{upserted} = [$upsert]; $attrs{upserted_count} = 1; # modified_count *must* always be defined for 2.6+ servers # matched_count is here for clarity and consistency $attrs{matched_count} = 0; $attrs{modified_count} = 0; } else { $attrs{matched_count} = $op->matched_count; $attrs{modified_count} = $op->modified_count; } } else { MongoDB::InternalError->throw("can't parse unknown result class $op"); } return $class->_new(%attrs); } sub _merge_result { my ( $self, $result ) = @_; # Add simple counters for my $attr (qw/inserted_count upserted_count matched_count deleted_count/) { my $setter = "_set_$attr"; $self->$setter( $self->$attr + $result->$attr ); } # If modified_count is defined in both results we're merging, then we're # talking to a 2.6+ mongod or we're talking to a 2.6+ mongos and have only # seen responses with modified_count. In any other case, we set # modified_count to undef, which then becomes "sticky" if ( defined $self->modified_count && defined $result->modified_count ) { $self->_set_modified_count( $self->modified_count + $result->modified_count ); } else { $self->_set_modified_count(undef); } # Append error and upsert docs, but modify index based on op count my $op_count = $self->op_count; for my $attr (qw/write_errors upserted inserted/) { for my $doc ( @{ $result->$attr } ) { $doc->{index} += $op_count; } push @{ $self->$attr }, @{ $result->$attr }; } # Append write concern errors without modification (they have no index) push @{ $self->write_concern_errors }, @{ $result->write_concern_errors }; $self->_set_op_count( $op_count + $result->op_count ); $self->_set_batch_count( $self->batch_count + $result->batch_count ); return 1; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::BulkWriteResult - MongoDB bulk write result document =head1 VERSION version v1.2.2 =head1 SYNOPSIS # returned directly my $result = $bulk->execute; # from a WriteError or WriteConcernError my $result = $error->result; if ( $result->acknowledged ) { ... } =head1 DESCRIPTION This class encapsulates the results from a bulk write operation. It may be returned directly from C or it may be in the C attribute of a C subclass like C or C. =head1 ATTRIBUTES =head2 inserted_count Number of documents inserted =head2 upserted_count Number of documents upserted =head2 matched_count Number of documents matched for an update or replace operation. =head2 deleted_count Number of documents removed =head2 modified_count Number of documents actually modified by an update operation. 
This is not necessarily the same as L if the document was not actually modified as a result of the update. This field is not available from legacy servers before version 2.6. If results are seen from a legacy server (or from a mongos proxying for a legacy server) this attribute will be C. You can call C to find out if this attribute is defined or not. =head2 upserted An array reference containing information about upserted documents (if any). Each document will have the following fields: =over 4 =item * index — 0-based index indicating which operation failed =item * _id — the object ID of the upserted document =back =head2 upserted_ids A hash reference built lazily from C mapping indexes to object IDs. =head2 inserted An array reference containing information about inserted documents (if any). Documents are just as in C. =head2 inserted_ids A hash reference built lazily from C mapping indexes to object IDs. =head2 write_errors An array reference containing write errors (if any). Each error document will have the following fields: =over 4 =item * index — 0-based index indicating which operation failed =item * code — numeric error code =item * errmsg — textual error string =item * op — a representation of the actual operation sent to the server =back =head2 write_concern_errors An array reference containing write concern errors (if any). Each error document will have the following fields: =over 4 =item * index — 0-based index indicating which operation failed =item * code — numeric error code =back =head2 op_count The number of operations sent to the database. =head2 batch_count The number of database commands issued to the server. This will be less than the C if multiple operations were grouped together. =head1 METHODS =head2 assert Throws an error if write errors or write concern errors occurred. =head2 assert_no_write_error Throws a MongoDB::WriteError if C is non-zero; otherwise returns 1. =head2 assert_no_write_concern_error Throws a MongoDB::WriteConcernError if C is non-zero; otherwise returns 1. =head2 count_write_errors Returns the number of write errors =head2 count_write_concern_errors Returns the number of write errors =head2 last_code Returns the last C field from either the list of C or C or 0 if there are no errors. =head2 last_errmsg Returns the last C field from either the list of C or C or the empty string if there are no errors. =head2 last_wtimeout True if a write concern timed out or false otherwise. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/BulkWriteView.pm000644 000765 000024 00000014257 12651754051 020574 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
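# A rough usage sketch for the view class defined below (assuming
# $collection is an existing MongoDB::Collection object):
#
#   my $bulk = $collection->initialize_ordered_bulk_op;
#   # find() returns a MongoDB::BulkWriteView scoped to the query document
#   $bulk->find( { a => 1 } )->upsert->update_one( { '$inc' => { x => 1 } } );
#   $bulk->execute;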
# package MongoDB::BulkWriteView; # ABSTRACT: Bulk write operations against a query document use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::_Types qw( IxHash Booleanpm ); use Types::Standard qw( InstanceOf ); use boolean; use namespace::clean -except => 'meta'; # A hash reference containing a MongoDB query document has _query => ( is => 'ro', isa => IxHash, coerce => IxHash->coercion, required => 1 ); # Originating bulk write object for executing write operations. has _bulk => ( is => 'ro', isa => InstanceOf['MongoDB::BulkWrite'], required => 1, handles => [qw/_enqueue_write/] ); has _upsert => ( is => 'ro', isa => Booleanpm, default => sub { false }, ); sub upsert { my ($self) = @_; unless ( @_ == 1 ) { MongoDB::UsageError->throw("the upsert method takes no arguments"); } return $self->new( %$self, _upsert => true ); } sub update_many { push @_, "update_many"; goto &_update; } sub update_one { push @_, "update_one"; goto &_update; } sub replace_one { push @_, "replace_one"; goto &_update; } sub _update { my $method = pop @_; my ( $self, $doc ) = @_; my $type = ref $doc; unless ( @_ == 2 && grep { $type eq $_ } qw/HASH ARRAY Tie::IxHash/ ) { MongoDB::UsageError->throw("argument to $method must be a single hashref, arrayref or Tie::IxHash"); } if ( ref $doc eq 'ARRAY' ) { MongoDB::UsageError->throw("array reference to $method must have key/value pairs") if @$doc % 2; $doc = Tie::IxHash->new(@$doc); } elsif ( ref $doc eq 'HASH' ) { $doc = Tie::IxHash->new(%$doc); } my $update = { q => $self->_query, u => $doc, multi => $method eq 'update_many' ? true : false, upsert => boolean( $self->_upsert ), is_replace => $method eq 'replace_one', }; $self->_enqueue_write( [ update => $update ] ); return; } sub delete_many { my ($self) = @_; $self->_enqueue_write( [ delete => { q => $self->_query, limit => 0 } ] ); return; } sub delete_one { my ($self) = @_; $self->_enqueue_write( [ delete => { q => $self->_query, limit => 1 } ] ); return; } #--------------------------------------------------------------------------# # Deprecated methods #--------------------------------------------------------------------------# BEGIN { no warnings 'once'; *update = \&update_many; *remove = \&delete_many; *remove_one = \&delete_one; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::BulkWriteView - Bulk write operations against a query document =head1 VERSION version v1.2.2 =head1 SYNOPSIS my $bulk = $collection->initialize_ordered_bulk_op; # Update one document matching the selector bulk->find( { a => 1 } )->update_one( { '$inc' => { x => 1 } } ); # Update all documents matching the selector bulk->find( { a => 2 } )->update_many( { '$inc' => { x => 2 } } ); # Update all documents bulk->find( {} )->update_many( { '$inc' => { x => 2 } } ); # Replace entire document (update with whole doc replace) bulk->find( { a => 3 } )->replace_one( { x => 3 } ); # Update one document matching the selector or upsert bulk->find( { a => 1 } )->upsert()->update_one( { '$inc' => { x => 1 } } ); # Update all documents matching the selector or upsert bulk->find( { a => 2 } )->upsert()->update_many( { '$inc' => { x => 2 } } ); # Replaces a single document matching the selector or upsert bulk->find( { a => 3 } )->upsert()->replace_one( { x => 3 } ); # Remove a single document matching the selector bulk->find( { a => 4 } )->delete_one(); # Remove all documents matching the selector bulk->find( { a => 5 } )->delete_many(); # Remove all documents bulk->find( {} )->delete_many(); =head1 DESCRIPTION This class 
provides means to specify write operations constrained by a query document. To instantiate a C, use the L method from L or the L method described below. Except for L, all methods have an empty return on success; an exception will be thrown on error. =head1 METHODS =head2 delete_many $bulk->delete_many; Removes all documents matching the query document. =head2 delete_one $bulk->delete_one; Removes a single document matching the query document. =head2 replace_one $bulk->replace_one( $doc ); Replaces the document matching the query document. The document to replace must not have any keys that begin with a dollar sign, C<$>. =head2 update_many $bulk->update_many( $modification ); Updates all documents matching the query document. The modification document must have all its keys begin with a dollar sign, C<$>. =head2 update_one $bulk->update_one( $modification ); Updates a single document matching the query document. The modification document must have all its keys begin with a dollar sign, C<$>. =head2 upsert $bulk->upsert->replace_one( $doc ); Returns a new C object that will treat every update, update_one or replace_one operation as an upsert operation. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Code.pm000644 000765 000024 00000003556 12651754051 016703 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Code; # ABSTRACT: JavaScript Code use version; our $VERSION = 'v1.2.2'; #pod =head1 NAME #pod #pod MongoDB::Code - JavaScript code #pod #pod =cut use Moo; use Types::Standard qw( HashRef Str ); use namespace::clean -except => 'meta'; #pod =head1 ATTRIBUTES #pod #pod =head2 code #pod #pod A string of JavaScript code. #pod #pod =cut has code => ( is => 'ro', isa => Str, required => 1, ); #pod =head2 scope #pod #pod An optional hash of variables to pass as the scope. #pod #pod =cut has scope => ( is => 'ro', isa => HashRef, required => 0, ); 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::Code - JavaScript Code =head1 VERSION version v1.2.2 =head1 NAME MongoDB::Code - JavaScript code =head1 ATTRIBUTES =head2 code A string of JavaScript code. =head2 scope An optional hash of variables to pass as the scope. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Collection.pm000644 000765 000024 00000253457 12651754051 020133 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Collection; # ABSTRACT: A MongoDB Collection use version; our $VERSION = 'v1.2.2'; use MongoDB::Error; use MongoDB::IndexView; use MongoDB::InsertManyResult; use MongoDB::QueryResult; use MongoDB::WriteConcern; use MongoDB::_Query; use MongoDB::Op::_Aggregate; use MongoDB::Op::_BatchInsert; use MongoDB::Op::_BulkWrite; use MongoDB::Op::_Count; use MongoDB::Op::_CreateIndexes; use MongoDB::Op::_Delete; use MongoDB::Op::_Distinct; use MongoDB::Op::_FindAndDelete; use MongoDB::Op::_FindAndUpdate; use MongoDB::Op::_InsertOne; use MongoDB::Op::_ListIndexes; use MongoDB::Op::_ParallelScan; use MongoDB::Op::_Update; use MongoDB::_Types qw( BSONCodec NonNegNum ReadPreference ReadConcern WriteConcern ); use Types::Standard qw( HashRef InstanceOf Str ); use Tie::IxHash; use Carp 'carp'; use boolean; use Safe::Isa; use Scalar::Util qw/blessed reftype/; use Try::Tiny; use Moo; use namespace::clean -except => 'meta'; #--------------------------------------------------------------------------# # constructor attributes #--------------------------------------------------------------------------# #pod =attr database #pod #pod The L representing the database that contains #pod the collection. #pod #pod =cut has database => ( is => 'ro', isa => InstanceOf['MongoDB::Database'], required => 1, ); #pod =attr name #pod #pod The name of the collection. #pod #pod =cut has name => ( is => 'ro', isa => Str, required => 1, ); #pod =attr read_preference #pod #pod A L object. It may be initialized with a string #pod corresponding to one of the valid read preference modes or a hash reference #pod that will be coerced into a new MongoDB::ReadPreference object. #pod By default it will be inherited from a L object. #pod #pod =cut has read_preference => ( is => 'ro', isa => ReadPreference, required => 1, coerce => ReadPreference->coercion, ); #pod =attr write_concern #pod #pod A L object. It may be initialized with a hash #pod reference that will be coerced into a new MongoDB::WriteConcern object. #pod By default it will be inherited from a L object. #pod #pod =cut has write_concern => ( is => 'ro', isa => WriteConcern, required => 1, coerce => WriteConcern->coercion, ); #pod =attr read_concern #pod #pod A L object. May be initialized with a hash #pod reference or a string that will be coerced into the level of read #pod concern. #pod #pod By default it will be inherited from a L object. #pod #pod =cut has read_concern => ( is => 'ro', isa => ReadConcern, required => 1, coerce => ReadConcern->coercion, ); #pod =attr max_time_ms #pod #pod Specifies the default maximum amount of time in milliseconds that the #pod server should use for working on a query. #pod #pod B: this will only be used for server versions 2.6 or greater, as that #pod was when the C<$maxTimeMS> meta-operator was introduced. #pod #pod =cut has max_time_ms => ( is => 'ro', isa => NonNegNum, required => 1, ); #pod =attr bson_codec #pod #pod An object that provides the C and C methods, such #pod as from L. 
It may be initialized with a hash reference that #pod will be coerced into a new MongoDB::BSON object. By default it will be #pod inherited from a L object. #pod #pod =cut has bson_codec => ( is => 'ro', isa => BSONCodec, coerce => BSONCodec->coercion, required => 1, ); #--------------------------------------------------------------------------# # computed attributes #--------------------------------------------------------------------------# #pod =method client #pod #pod $client = $coll->client; #pod #pod Returns the L object associated with this #pod object. #pod #pod =cut has _client => ( is => 'lazy', isa => InstanceOf['MongoDB::MongoClient'], reader => 'client', init_arg => undef, builder => '_build__client', ); sub _build__client { my ($self) = @_; return $self->database->_client; } #pod =method full_name #pod #pod $full_name = $coll->full_name; #pod #pod Returns the full name of the collection, including the namespace of the #pod database it's in prefixed with a dot character. E.g. collection "foo" in #pod database "test" would result in a C of "test.foo". #pod #pod =cut has _full_name => ( is => 'lazy', isa => Str, reader => 'full_name', init_arg => undef, builder => '_build__full_name', ); sub _build__full_name { my ($self) = @_; my $name = $self->name; my $db_name = $self->database->name; return "${db_name}.${name}"; } #pod =method indexes #pod #pod $indexes = $collection->indexes; #pod #pod $collection->indexes->create_one( [ x => 1 ], { unique => 1 } ); #pod $collection->indexes->drop_all; #pod #pod Returns a L object for managing the indexes associated #pod with the collection. #pod #pod =cut has _indexes => ( is => 'lazy', isa => InstanceOf['MongoDB::IndexView'], reader => 'indexes', init_arg => undef, builder => '_build__indexes', ); sub _build__indexes { my ($self) = @_; return MongoDB::IndexView->new( collection => $self ); } # these are constant, so we cache them has _op_args => ( is => 'lazy', isa => HashRef, init_arg => undef, builder => '_build__op_args', ); sub _build__op_args { my ($self) = @_; return { client => $self->client, db_name => $self->database->name, bson_codec => $self->bson_codec, coll_name => $self->name, write_concern => $self->write_concern, read_concern => $self->read_concern, read_preference => $self->read_preference, full_name => join( ".", $self->database->name, $self->name ), }; } #--------------------------------------------------------------------------# # public methods #--------------------------------------------------------------------------# #pod =method clone #pod #pod $coll2 = $coll1->clone( write_concern => { w => 2 } ); #pod #pod Constructs a copy of the original collection, but allows changing #pod attributes in the copy. #pod #pod =cut sub clone { my ($self, @args) = @_; my $class = ref($self); if ( @args == 1 && ref( $args[0] ) eq 'HASH' ) { return $class->new( %$self, %{$args[0]} ); } return $class->new( %$self, @args ); } #pod =method with_codec #pod #pod $coll2 = $coll1->with_codec( $new_codec ); #pod $coll2 = $coll1->with_codec( prefer_numeric => 1 ); #pod #pod Constructs a copy of the original collection, but clones the C. 
#pod If given an object that does C and C, it is #pod equivalent to: #pod #pod $coll2 = $coll1->clone( bson_codec => $new_codec ); #pod #pod If given a hash reference or a list of key/value pairs, it is equivalent #pod to: #pod #pod $coll2 = $coll1->clone( #pod bson_codec => $coll1->bson_codec->clone( @list ) #pod ); #pod #pod =cut sub with_codec { my ( $self, @args ) = @_; if ( @args == 1 ) { my $arg = $args[0]; if ( eval { $arg->can('encode_bson') && $arg->can('decode_bson') } ) { return $self->clone( bson_codec => $arg ); } elsif ( ref $arg eq 'HASH' ) { return $self->clone( bson_codec => $self->bson_codec->clone(%$arg) ); } } elsif ( @args % 2 == 0 ) { return $self->clone( bson_codec => $self->bson_codec->clone(@args) ); } # fallthrough is argument error MongoDB::UsageError->throw( "argument to with_codec must be new codec, hashref or key/value pairs" ); } #pod =method insert_one #pod #pod $res = $coll->insert_one( $document ); #pod $res = $coll->insert_one( $document, $options ); #pod $id = $res->inserted_id; #pod #pod Inserts a single L into the database and returns a #pod L or L object. #pod #pod If no C<_id> field is present, one will be added when a document is #pod serialized for the database without modifying the original document. #pod The generated C<_id> may be retrieved from the result object. #pod #pod An optional hash reference of options may be given. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod #pod =cut # args not unpacked for efficiency; args are self, document sub insert_one { MongoDB::UsageError->throw("document argument must be a reference") unless ref( $_[1] ); return $_[0]->client->send_write_op( MongoDB::Op::_InsertOne->_new( ( defined $_[2] ? (%{$_[2]}) : () ), document => $_[1], %{ $_[0]->_op_args }, ) ); } #pod =method insert_many #pod #pod $res = $coll->insert_many( [ @documents ] ); #pod $res = $coll->insert_many( [ @documents ], { ordered => 0 } ); #pod #pod Inserts each of the L in an array reference into the #pod database and returns a L or #pod L. This is syntactic sugar for doing a #pod L operation. #pod #pod If no C<_id> field is present, one will be added when a document is #pod serialized for the database without modifying the original document. #pod The generated C<_id> may be retrieved from the result object. #pod #pod An optional hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – when true, the server will halt insertions after the first #pod error (if any). When false, all documents will be processed and any #pod error will only be thrown after all insertions are attempted. The #pod default is true. #pod #pod On MongoDB servers before version 2.6, C bulk operations are #pod emulated with individual inserts to capture error information. On 2.6 or #pod later, this method will be significantly faster than individual C #pod calls. 
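#pod
#pod A minimal sketch (assuming C<$coll> is an existing L<MongoDB::Collection>;
#pod the C<inserted_ids> accessor comes from L<MongoDB::InsertManyResult>):
#pod
#pod     my $res = $coll->insert_many(
#pod         [ { name => "Alice" }, { name => "Bob" } ],
#pod         { ordered => 0 },
#pod     );
#pod     my $ids = $res->inserted_ids;    # hash ref: input index => generated _id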
#pod #pod =cut # args not unpacked for efficiency; args are self, document, options sub insert_many { MongoDB::UsageError->throw("documents argument must be an array reference") unless ref( $_[1] ) eq 'ARRAY'; return MongoDB::InsertManyResult->_new( acknowledged => $_[0]->write_concern->is_acknowledged, inserted => $_[0]->client->send_write_op( MongoDB::Op::_BulkWrite->_new( # default ordered => 1, # user overrides ( defined $_[2] ? ( %{ $_[2] } ) : () ), # un-overridable queue => [ map { [ insert => $_ ] } @{ $_[1] } ], %{ $_[0]->_op_args }, ) )->inserted, write_errors => [], write_concern_errors => [], ); } #pod =method delete_one #pod #pod $res = $coll->delete_one( $filter ); #pod $res = $coll->delete_one( { _id => $id } ); #pod #pod Deletes a single document that matches a L and returns a #pod L or L object. #pod #pod =cut # args not unpacked for efficiency; args are self, filter sub delete_one { MongoDB::UsageError->throw("filter argument must be a reference") unless ref( $_[1] ); return $_[0]->client->send_write_op( MongoDB::Op::_Delete->_new( filter => $_[1], just_one => 1, %{ $_[0]->_op_args }, ) ); } #pod =method delete_many #pod #pod $res = $coll->delete_many( $filter ); #pod $res = $coll->delete_many( { name => "Larry" } ); #pod #pod Deletes all documents that match a L #pod and returns a L or L #pod object. #pod #pod =cut # args not unpacked for efficiency; args are self, filter sub delete_many { MongoDB::UsageError->throw("filter argument must be a reference") unless ref( $_[1] ); return $_[0]->client->send_write_op( MongoDB::Op::_Delete->_new( filter => $_[1], just_one => 0, %{ $_[0]->_op_args }, ) ); } #pod =method replace_one #pod #pod $res = $coll->replace_one( $filter, $replacement ); #pod $res = $coll->replace_one( $filter, $replacement, { upsert => 1 } ); #pod #pod Replaces one document that matches a L and returns a L or #pod L object. #pod #pod The replacement document must not have any field-update operators in it (e.g. #pod C<$set>). #pod #pod A hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – defaults to false; if true, a new document will be added if one #pod is not found #pod #pod =cut # args not unpacked for efficiency; args are self, filter, update, options sub replace_one { MongoDB::UsageError->throw("filter and replace arguments must be references") unless ref( $_[1] ) && ref( $_[2] ); return $_[0]->client->send_write_op( MongoDB::Op::_Update->_new( ( defined $_[3] ? (%{$_[3]}) : () ), filter => $_[1], update => $_[2], multi => false, is_replace => 1, %{ $_[0]->_op_args }, ) ); } #pod =method update_one #pod #pod $res = $coll->update_one( $filter, $update ); #pod $res = $coll->update_one( $filter, $update, { upsert => 1 } ); #pod #pod Updates one document that matches a L #pod and returns a L or L #pod object. #pod #pod The update document must have only field-update operators in it (e.g. #pod C<$set>). #pod #pod A hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – defaults to false; if true, a new document will be added if #pod one is not found by taking the filter expression and applying the update #pod document operations to it prior to insertion. 
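#pod
#pod As a brief illustration of the upsert behavior described above (field
#pod names are illustrative; C<matched_count> and C<upserted_id> are the result
#pod accessors also used by the legacy code in this module):
#pod
#pod     my $res = $coll->update_one(
#pod         { name => "Larry" },
#pod         { '$inc' => { logins => 1 } },
#pod         { upsert => 1 },
#pod     );
#pod
#pod     if ( defined $res->upserted_id ) {
#pod         # nothing matched, so a new document was created
#pod     }
#pod     elsif ( $res->matched_count ) {
#pod         # an existing document was updated
#pod     }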
#pod #pod =cut # args not unpacked for efficiency; args are self, filter, update, options sub update_one { MongoDB::UsageError->throw("filter and update arguments must be references") unless ref( $_[1] ) && ref( $_[2] ); return $_[0]->client->send_write_op( MongoDB::Op::_Update->_new( ( defined $_[3] ? (%{$_[3]}) : () ), filter => $_[1], update => $_[2], multi => false, is_replace => 0, %{ $_[0]->_op_args }, ) ); } #pod =method update_many #pod #pod $res = $coll->update_many( $filter, $update ); #pod $res = $coll->update_many( $filter, $update, { upsert => 1 } ); #pod #pod Updates one or more documents that match a L and returns a L or #pod L object. #pod #pod The update document must have only field-update operators in it (e.g. #pod C<$set>). #pod #pod A hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – defaults to false; if true, a new document will be added if #pod one is not found by taking the filter expression and applying the update #pod document operations to it prior to insertion. #pod #pod =cut # args not unpacked for efficiency; args are self, filter, update, options sub update_many { MongoDB::UsageError->throw("filter and update arguments must be references") unless ref( $_[1] ) && ref( $_[2] ); return $_[0]->client->send_write_op( MongoDB::Op::_Update->_new( ( defined $_[3] ? (%{$_[3]}) : () ), filter => $_[1], update => $_[2], multi => true, is_replace => 0, %{ $_[0]->_op_args }, ) ); } #pod =method find #pod #pod $cursor = $coll->find( $filter ); #pod $cursor = $coll->find( $filter, $options ); #pod #pod $cursor = $coll->find({ i => { '$gt' => 42 } }, {limit => 20}); #pod #pod Executes a query with a L and returns a #pod C object. #pod #pod The query can be customized using L methods, or with an #pod optional hash reference of options. #pod #pod Valid options include: #pod #pod =for :list #pod * C - get partial results from a mongos if some shards are #pod down (instead of throwing an error). #pod * C – the number of documents to return per batch. #pod * C – attaches a comment to the query. If C<$comment> also exists in #pod the C document, the comment field overwrites C<$comment>. #pod * C – indicates the type of cursor to use. It must be one of three #pod string values: C<'non_tailable'> (the default), C<'tailable'>, and #pod C<'tailable_await'>. #pod * C – the maximum number of documents to return. #pod * C – the maximum amount of time for the server to wait on #pod new documents to satisfy a tailable cursor query. This only applies #pod to a C of 'tailable_await'; the option is otherwise ignored. #pod (Note, this will be ignored for servers before version 3.2.) #pod * C – the maximum amount of time to allow the query to run. If #pod C<$maxTimeMS> also exists in the modifiers document, the C field #pod overwrites C<$maxTimeMS>. (Note, this will be ignored for servers before #pod version 2.6.) #pod * C – a hash reference of L #pod modifying the output or behavior of a query. #pod * C – if true, prevents the server from timing out a cursor #pod after a period of inactivity #pod * C - a hash reference defining fields to return. See "L" #pod in the MongoDB documentation for details. #pod * C – the number of documents to skip before returning. #pod * C – an L defining the order in which #pod to return matching documents. 
If C<$orderby> also exists in the modifiers #pod document, the sort field overwrites C<$orderby>. See docs for #pod L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. #pod #pod For more information, see the L in #pod the MongoDB documentation. #pod #pod B, a L object holds the query and does not issue the #pod query to the server until the L method is #pod called on it or until an iterator method like L #pod is called. Performance will be better directly on a #pod L object: #pod #pod my $query_result = $coll->find( $filter )->result; #pod #pod while ( my $next = $query_result->next ) { #pod ... #pod } #pod #pod =cut sub find { my ( $self, $filter, $options ) = @_; $options ||= {}; # backwards compatible sort option for deprecated 'query' alias $options->{sort} = delete $options->{sort_by} if $options->{sort_by}; # possibly fallback to default maxTimeMS if ( !exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } # coerce to IxHash __ixhash( $options, 'sort' ); return MongoDB::Cursor->new( query => MongoDB::_Query->_new( modifiers => {}, allowPartialResults => 0, batchSize => 0, comment => '', cursorType => 'non_tailable', limit => 0, maxAwaitTimeMS => 0, maxTimeMS => 0, noCursorTimeout => 0, oplogReplay => 0, projection => undef, skip => 0, sort => undef, %$options, filter => $filter || {}, %{ $self->_op_args }, ) ); } #pod =method find_one #pod #pod $doc = $collection->find_one( $filter, $projection ); #pod $doc = $collection->find_one( $filter, $projection, $options ); #pod #pod Executes a query with a L and returns a #pod single document. #pod #pod If a projection argument is provided, it must be a hash reference specifying #pod fields to return. See L #pod in the MongoDB documentation for details. #pod #pod If only a filter is provided or if the projection document is an empty hash #pod reference, all fields will be returned. #pod #pod my $doc = $collection->find_one( $filter ); #pod my $doc = $collection->find_one( $filter, {}, $options ); #pod #pod A hash reference of options may be provided as a third argument. Valid keys #pod include: #pod #pod =for :list #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod * C – an L defining the order in which #pod to return matching documents. If C<$orderby> also exists in the modifiers #pod document, the sort field overwrites C<$orderby>. See docs for #pod L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. #pod #pod See also core documentation on querying: #pod L. 
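#pod
#pod For example, a minimal sketch (field names are illustrative) that fetches
#pod the matching document with the greatest C<_id>; the C<sort> option is
#pod passed as an array reference of pairs because it must be an ordered
#pod document:
#pod
#pod     my $doc = $coll->find_one(
#pod         { status => "active" },
#pod         {},                          # empty projection returns all fields
#pod         { sort => [ _id => -1 ] },
#pod     );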
#pod #pod =cut sub find_one { my ( $self, $filter, $projection, $options ) = @_; $options ||= {}; # possibly fallback to default maxTimeMS if ( !exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } # coerce to IxHash __ixhash( $options, 'sort' ); return MongoDB::_Query->_new( modifiers => {}, allowPartialResults => 0, batchSize => 0, comment => '', cursorType => 'non_tailable', limit => 0, maxAwaitTimeMS => 0, maxTimeMS => 0, noCursorTimeout => 0, oplogReplay => 0, skip => 0, sort => undef, %$options, filter => $filter || {}, projection => $projection || {}, limit => -1, %{ $self->_op_args }, )->execute->next; } #pod =method find_id #pod #pod $doc = $collection->find_id( $id ); #pod $doc = $collection->find_id( $id, $projection ); #pod $doc = $collection->find_id( $id, $projection, $options ); #pod #pod Executes a query with a L of C<< { _id #pod => $id } >> and returns a single document. #pod #pod See the L documentation for details on the $projection and $options parameters. #pod #pod See also core documentation on querying: #pod L. #pod #pod =cut sub find_id { my $self = shift; my $id = shift; return $self->find_one({ _id => $id }, @_); } #pod =method find_one_and_delete #pod #pod $doc = $coll->find_one_and_delete( $filter ); #pod $doc = $coll->find_one_and_delete( $filter, $options ); #pod #pod Given a L, this deletes a document from #pod the database and returns it as it appeared before it was deleted. #pod #pod A hash reference of options may be provided. Valid keys include: #pod #pod =for :list #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod * C - a hash reference defining fields to return. See "L" #pod in the MongoDB documentation for details. #pod * C – an L defining the order in #pod which to return matching documents. See docs for #pod L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. #pod #pod =cut sub find_one_and_delete { MongoDB::UsageError->throw("filter argument must be a reference") unless ref( $_[1] ); my ( $self, $filter, $options ) = @_; $options ||= {}; # rename projection -> fields $options->{fields} = delete $options->{projection} if exists $options->{projection}; # possibly fallback to default maxTimeMS if ( ! exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } # coerce to IxHash __ixhash($options, 'sort'); my $op = MongoDB::Op::_FindAndDelete->_new( %{ $_[0]->_op_args }, filter => $filter, options => $options, ); return $self->client->send_write_op($op); } #pod =method find_one_and_replace #pod #pod $doc = $coll->find_one_and_replace( $filter, $replacement ); #pod $doc = $coll->find_one_and_replace( $filter, $replacement, $options ); #pod #pod Given a L and a replacement document, #pod this replaces a document from the database and returns it as it was either #pod right before or right after the replacement. The default is 'before'. #pod #pod The replacement document must not have any field-update operators in it (e.g. #pod C<$set>). #pod #pod A hash reference of options may be provided. Valid keys include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. #pod * C - a hash reference defining fields to return. See "L" #pod in the MongoDB documentation for details. 
#pod * C – either the string C<'before'> or C<'after'>, to indicate #pod whether the returned document should be the one before or after replacement. #pod The default is C<'before'>. #pod * C – an L defining the order in #pod which to return matching documents. See docs for #pod L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. #pod * C – defaults to false; if true, a new document will be added if one #pod is not found #pod #pod =cut sub find_one_and_replace { MongoDB::UsageError->throw("filter and replace arguments must be references") unless ref( $_[1] ) && ref( $_[2] ); my ( $self, $filter, $replacement, $options ) = @_; return $self->_find_one_and_update_or_replace($filter, $replacement, $options); } #pod =method find_one_and_update #pod #pod $doc = $coll->find_one_and_update( $filter, $update ); #pod $doc = $coll->find_one_and_update( $filter, $update, $options ); #pod #pod Given a L and a document of update #pod operators, this updates a single document and returns it as it was either right #pod before or right after the update. The default is 'before'. #pod #pod The update document must contain only field-update operators (e.g. C<$set>). #pod #pod A hash reference of options may be provided. Valid keys include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod * C - a hash reference defining fields to return. See "L" #pod in the MongoDB documentation for details. #pod * C – either the string C<'before'> or C<'after'>, to indicate #pod whether the returned document should be the one before or after replacement. #pod The default is C<'before'>. #pod * C – an L defining the order in #pod which to return matching documents. See docs for #pod L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. #pod * C – defaults to false; if true, a new document will be added if one #pod is not found #pod #pod =cut my $foau_args; sub find_one_and_update { MongoDB::UsageError->throw("filter and update arguments must be references") unless ref( $_[1] ) && ref( $_[2] ); my ( $self, $filter, $update, $options ) = @_; return $self->_find_one_and_update_or_replace($filter, $update, $options); } #pod =method aggregate #pod #pod @pipeline = ( #pod { '$group' => { _id => '$state,' totalPop => { '$sum' => '$pop' } } }, #pod { '$match' => { totalPop => { '$gte' => 10 * 1000 * 1000 } } } #pod ); #pod #pod $result = $collection->aggregate( \@pipeline ); #pod $result = $collection->aggregate( \@pipeline, $options ); #pod #pod Runs a query using the MongoDB 2.2+ aggregation framework and returns a #pod L object. #pod #pod The first argument must be an array-ref of L documents. #pod Each pipeline document must be a hash reference. #pod #pod A hash reference of options may be provided. Valid keys include: #pod #pod =for :list #pod * C – if, true enables writing to temporary files. #pod * C – the number of documents to return per batch. #pod * C - skips document validation, if enabled. #pod (Note, this will be ignored for servers before version 3.2.) #pod * C – if true, return a single document with execution information. #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod #pod B MongoDB 2.6+ added the '$out' pipeline operator. 
If this operator is #pod used to write aggregation results directly to a collection, an empty result #pod will be returned. Create a new collection> object to query the generated result #pod collection. When C<$out> is used, the command is treated as a write operation #pod and read preference is ignored. #pod #pod See L in the MongoDB manual #pod for more information on how to construct aggregation queries. #pod #pod B The use of aggregation cursors is automatic based on your server #pod version. However, if migrating a sharded cluster from MongoDB 2.4 to 2.6 #pod or later, you must upgrade your mongod servers first before your mongos #pod routers or aggregation queries will fail. As a workaround, you may #pod pass C<< cursor => undef >> as an option. #pod #pod =cut my $aggregate_args; sub aggregate { MongoDB::UsageError->throw("pipeline argument must be an array reference") unless ref( $_[1] ) eq 'ARRAY'; my ( $self, $pipeline, $options ) = @_; $options ||= {}; # boolify some options for my $k (qw/allowDiskUse explain/) { $options->{$k} = ( $options->{$k} ? true : false ) if exists $options->{$k}; } # possibly fallback to default maxTimeMS if ( ! exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } # read preferences are ignored if the last stage is $out my ($last_op) = keys %{ $pipeline->[-1] }; my $read_pref = $last_op eq '$out' ? undef : $self->read_preference; my $op = MongoDB::Op::_Aggregate->_new( db_name => $self->database->name, coll_name => $self->name, client => $self->client, bson_codec => $self->bson_codec, pipeline => $pipeline, options => $options, ( $read_pref ? ( read_preference => $read_pref ) : () ), read_concern => $self->read_concern, ); return $self->client->send_read_op($op); } #pod =method count #pod #pod $count = $coll->count( $filter ); #pod $count = $coll->count( $filter, $options ); #pod #pod Returns a count of documents matching a L. #pod #pod A hash reference of options may be provided. Valid keys include: #pod #pod =for :list #pod * C – L; #pod must be a string, array reference, hash reference or L object. #pod * C – the maximum number of documents to count. #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod * C – the number of documents to skip before counting documents. #pod #pod B: On a sharded cluster, C can result in an inaccurate count if #pod orphaned documents exist or if a chunk migration is in progress. See L #pod for details and a work-around using L. #pod #pod =cut sub count { my ( $self, $filter, $options ) = @_; $filter ||= {}; $options ||= {}; # possibly fallback to default maxTimeMS if ( ! exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } # string is OK so we check ref, not just exists __ixhash($options, 'hint') if ref $options->{hint}; my $op = MongoDB::Op::_Count->_new( options => $options, filter => $filter, %{ $self->_op_args }, ); my $res = $self->client->send_read_op($op); return $res->{n}; } #pod =method distinct #pod #pod $result = $coll->distinct( $fieldname ); #pod $result = $coll->distinct( $fieldname, $filter ); #pod $result = $coll->distinct( $fieldname, $filter, $options ); #pod #pod Returns a L object that will provide distinct values for #pod a specified field name. #pod #pod The query may be limited by an optional L. #pod #pod A hash reference of options may be provided. 
Valid keys include: #pod #pod =for :list #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod #pod See documentation for the L for #pod details. #pod #pod =cut my $distinct_args; sub distinct { MongoDB::UsageError->throw("fieldname argument is required") unless defined( $_[1] ); my ( $self, $fieldname, $filter, $options ) = @_; $filter ||= {}; $options ||= {}; # possibly fallback to default maxTimeMS if ( ! exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } my $op = MongoDB::Op::_Distinct->_new( db_name => $self->database->name, coll_name => $self->name, client => $self->client, bson_codec => $self->bson_codec, fieldname => $fieldname, filter => $filter, options => $options, read_preference => $self->read_preference, read_concern => $self->read_concern, ); return $self->client->send_read_op($op); } #pod =method parallel_scan #pod #pod @result_objs = $collection->parallel_scan(10); #pod #pod Returns one or more L objects to scan the collection in #pod parallel. The argument is the maximum number of L objects #pod to return and must be a positive integer between 1 and 10,000. #pod #pod As long as the collection is not modified during scanning, each document will #pod appear only once in one of the cursors' result sets. #pod #pod B: the server may return fewer cursors than requested, depending on the #pod underlying storage engine and resource availability. #pod #pod =cut sub parallel_scan { my ( $self, $num_cursors, $opts ) = @_; unless (defined $num_cursors && $num_cursors == int($num_cursors) && $num_cursors > 0 && $num_cursors <= 10000 ) { MongoDB::UsageError->throw( "first argument to parallel_scan must be a positive integer between 1 and 10000" ) } $opts = ref $opts eq 'HASH' ? $opts : { }; my $db = $self->database; my $op = MongoDB::Op::_ParallelScan->_new( %{ $self->_op_args }, num_cursors => $num_cursors, ); my $result = $self->client->send_read_op( $op ); my $response = $result->output; MongoDB::UsageError->throw("No cursors returned") unless $response->{cursors} && ref $response->{cursors} eq 'ARRAY'; my @cursors; for my $c ( map { $_->{cursor} } @{$response->{cursors}} ) { my $batch = $c->{firstBatch}; my $qr = MongoDB::QueryResult->_new( _client => $self->client, _address => $result->address, _ns => $c->{ns}, _bson_codec => $self->bson_codec, _batch_size => scalar @$batch, _cursor_at => 0, _limit => 0, _cursor_id => $c->{id}, _cursor_start => 0, _cursor_flags => {}, _cursor_num => scalar @$batch, _docs => $batch, ); push @cursors, $qr; } return @cursors; } #pod =method rename #pod #pod $newcollection = $collection->rename("mynewcollection"); #pod #pod Renames the collection. If a collection already exists with the new collection #pod name, this method will throw an exception. #pod #pod It returns a new L object corresponding to the renamed #pod collection. 
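#pod
#pod Note that the original collection object is not modified; keep using the
#pod returned object, as sketched below (the new name is illustrative):
#pod
#pod     my $archive = $coll->rename("people_archive");
#pod     $archive->insert_one( { name => "Carol" } );  # writes to "people_archive"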
#pod #pod =cut sub rename { my ($self, $collectionname) = @_; my $conn = $self->client; my $database = $conn->get_database( 'admin' ); my $fullname = $self->full_name; my ($db, @collection_bits) = split(/\./, $fullname); my $collection = join('.', @collection_bits); # this does NOT use our private _run_command method as it needs to run # against a totally different database my $obj = $database->run_command([ 'renameCollection' => "$db.$collection", 'to' => "$db.$collectionname" ]); return $conn->get_database( $db )->get_collection( $collectionname ); } #pod =method drop #pod #pod $collection->drop; #pod #pod Deletes a collection as well as all of its indexes. #pod #pod =cut sub drop { my ($self) = @_; try { $self->_run_command({ drop => $self->name }); } catch { die $_ unless /ns not found/; }; return; } #pod =method ordered_bulk #pod #pod $bulk = $coll->ordered_bulk; #pod $bulk->insert( $doc1 ); #pod $bulk->insert( $doc2 ); #pod ... #pod $result = $bulk->execute; #pod #pod Returns a L object to group write operations into fewer network #pod round-trips. This method creates an B operation, where operations halt after #pod the first error. See L for more details. #pod #pod The method C may be used as an alias. #pod #pod A hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod #pod =cut sub initialize_ordered_bulk_op { my ($self, $args) = @_; $args ||= {}; return MongoDB::BulkWrite->new( %$args, collection => $self, ordered => 1, ); } #pod =method unordered_bulk #pod #pod This method works just like L except that the order that #pod operations are sent to the database is not guaranteed and errors do not halt processing. #pod See L for more details. #pod #pod The method C may be used as an alias. #pod #pod A hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod #pod =cut sub initialize_unordered_bulk_op { my ($self, $args) = @_; $args ||= {}; return MongoDB::BulkWrite->new( %$args, collection => $self, ordered => 0 ); } #pod =method bulk_write #pod #pod $res = $coll->bulk_write( [ @requests ], $options ) #pod #pod This method provides syntactic sugar to construct and execute a bulk operation #pod directly, without using C or #pod C to generate a L object and #pod then calling methods on it. It returns a L object #pod just like the L method. #pod #pod The first argument must be an array reference of requests. Requests consist #pod of pairs of a MongoDB::Collection write method name (e.g. C, #pod C) and an array reference of arguments to the corresponding #pod method name. 
They may be given as pairs, or as hash or array #pod references: #pod #pod # pairs -- most efficient #pod @requests = ( #pod insert_one => [ { x => 1 } ], #pod replace_one => [ { x => 1 }, { x => 4 } ], #pod delete_one => [ { x => 4 } ], #pod update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ], #pod ); #pod #pod # hash references #pod @requests = ( #pod { insert_one => [ { x => 1 } ] }, #pod { replace_one => [ { x => 1 }, { x => 4 } ] }, #pod { delete_one => [ { x => 4 } ] }, #pod { update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ] }, #pod ); #pod #pod # array references #pod @requests = ( #pod [ insert_one => [ { x => 1 } ] ], #pod [ replace_one => [ { x => 1 }, { x => 4 } ] ], #pod [ delete_one => [ { x => 4 } ] ], #pod [ update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ] ], #pod ); #pod #pod Valid method names include C, C, C, #pod C C, C, C. #pod #pod An optional hash reference of options may be provided. #pod #pod Valid options include: #pod #pod =for :list #pod * C - skips document validation, if enabled; this #pod is ignored for MongoDB servers older than version 3.2. #pod * C – when true, the bulk operation is executed like #pod L. When false, the bulk operation is executed #pod like L. The default is true. #pod #pod See L for more details on bulk writes. Be advised that #pod the legacy Bulk API method names differ slightly from MongoDB::Collection #pod method names. #pod #pod =cut sub bulk_write { my ( $self, $requests, $options ) = @_; MongoDB::UsageError->throw("requests not an array reference") unless ref $requests eq 'ARRAY'; MongoDB::UsageError->throw("empty request list") unless @$requests; MongoDB::UsageError->throw("options not a hash reference") if defined($options) && ref($options) ne 'HASH'; $options ||= {}; my $ordered = exists $options->{ordered} ? delete $options->{ordered} : 1; my $bulk = $ordered ? 
$self->ordered_bulk($options) : $self->unordered_bulk($options); my $i = 0; while ( $i <= $#$requests ) { my ( $method, $args ); # pull off document or pair if ( my $type = ref $requests->[$i] ) { if ( $type eq 'ARRAY' ) { ( $method, $args ) = @{ $requests->[$i] }; } elsif ( $type eq 'HASH' ) { ( $method, $args ) = %{ $requests->[$i] }; } else { MongoDB::UsageError->throw("$requests->[$i] is not a hash or array reference"); } $i++; } else { ( $method, $args ) = @{$requests}[ $i, $i + 1 ]; $i += 2; } MongoDB::UsageError->throw("'$method' requires an array reference of arguments") unless ref($args) eq 'ARRAY'; # handle inserts if ( $method eq 'insert_one' || $method eq 'insert_many' ) { $bulk->insert_one($_) for @$args; } else { my ($filter, $doc, $opts) = @$args; my $view = $bulk->find($filter); # handle deletes if ( $method eq 'delete_one' ) { $view->delete_one; next; } elsif ( $method eq 'delete_many' ) { $view->delete_many; next; } # updates might be upserts $view = $view->upsert if $opts && $opts->{upsert}; # handle updates if ( $method eq 'replace_one' ) { $view->replace_one($doc); } elsif ( $method eq 'update_one' ) { $view->update_one($doc); } elsif ( $method eq 'update_many' ) { $view->update_many($doc); } else { MongoDB::UsageError->throw("unknown bulk operation '$method'"); } } } return $bulk->execute; } BEGIN { # aliases no warnings 'once'; *query = \&find; *ordered_bulk = \&initialize_ordered_bulk_op; *unordered_bulk = \&initialize_unordered_bulk_op; } #--------------------------------------------------------------------------# # private methods #--------------------------------------------------------------------------# sub _dynamic_write_concern { my ( $self, $opts ) = @_; if ( !exists( $opts->{safe} ) || $opts->{safe} ) { return $self->write_concern; } else { return MongoDB::WriteConcern->new( w => 0 ); } } sub _find_one_and_update_or_replace { my ($self, $filter, $modifier, $options) = @_; $options ||= {}; # rename projection -> fields $options->{fields} = delete $options->{projection} if exists $options->{projection}; # possibly fallback to default maxTimeMS if ( ! exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } # coerce to IxHash __ixhash($options, 'sort'); # returnDocument ('before'|'after') maps to field 'new' if ( exists $options->{returnDocument} ) { MongoDB::UsageError->throw("Invalid returnDocument parameter '$options->{returnDocument}'") unless $options->{returnDocument} =~ /^(?:before|after)$/; $options->{new} = delete( $options->{returnDocument} ) eq 'after' ? true : false; } # pass separately for MongoDB::Role::_BypassValidation my $bypass = delete $options->{bypassDocumentValidation}; my $op = MongoDB::Op::_FindAndUpdate->_new( filter => $filter, modifier => $modifier, options => $options, bypassDocumentValidation => $bypass, %{ $self->_op_args }, ); return $self->client->send_write_op($op); } # we have a private _run_command rather than using the 'database' attribute # so that we're using our BSON codec and not the source database one sub _run_command { my ( $self, $command, $read_pref ) = @_; if ( $read_pref && ref($read_pref) eq 'HASH' ) { $read_pref = MongoDB::ReadPreference->new($read_pref); } my $op = MongoDB::Op::_Command->_new( db_name => $self->database->name, query => $command, query_flags => {}, bson_codec => $self->bson_codec, ( $read_pref ? 
( read_preference => $read_pref ) : () ), ); my $obj = $self->client->send_read_op($op); return $obj->output; } #--------------------------------------------------------------------------# # utility function #--------------------------------------------------------------------------# # utility function to coerce array/hashref to Tie::Ixhash sub __ixhash { my ($hash, $key) = @_; return unless exists $hash->{$key}; my $ref = $hash->{$key}; my $type = ref($ref); return if $type eq 'Tie::IxHash'; if ( $type eq 'HASH' ) { $hash->{$key} = Tie::IxHash->new( %$ref ); } elsif ( $type eq 'ARRAY' ) { $hash->{$key} = Tie::IxHash->new( @$ref ); } else { MongoDB::UsageError->throw("Can't convert $type to a Tie::IxHash"); } return; } #--------------------------------------------------------------------------# # Deprecated legacy methods #--------------------------------------------------------------------------# my $legacy_insert_args; sub insert { MongoDB::UsageError->throw("document argument must be a reference") unless ref( $_[1] ); my ( $self, $document, $opts ) = @_; my $op = MongoDB::Op::_InsertOne->_new( document => $document, %{ $self->_op_args }, write_concern => $self->_dynamic_write_concern($opts), ); my $result = $self->client->send_write_op($op); return $result->inserted_id; } sub batch_insert { MongoDB::UsageError->throw("documents argument must be an array reference") unless ref( $_[1] ) eq 'ARRAY'; my ( $self, $documents, $opts ) = @_; my $op = MongoDB::Op::_BatchInsert->_new( db_name => $self->database->name, coll_name => $self->name, bson_codec => $self->bson_codec, documents => $documents, write_concern => $self->_dynamic_write_concern($opts), check_keys => 0, ordered => 1, ); my $result = $self->client->send_write_op($op); my @ids; my $inserted_ids = $result->inserted_ids; for my $k ( sort { $a <=> $b } keys %$inserted_ids ) { push @ids, $inserted_ids->{$k}; } return @ids; } sub remove { my ($self, $query, $opts) = @_; $opts ||= {}; my $op = MongoDB::Op::_Delete->_new( filter => $query || {}, just_one => !! $opts->{just_one}, %{ $self->_op_args }, write_concern => $self->_dynamic_write_concern($opts), ); my $result = $self->client->send_write_op( $op ); # emulate key fields of legacy GLE result return { ok => 1, n => $result->deleted_count, }; } my $legacy_update_args; sub update { my ( $self, $query, $object, $opts ) = @_; $opts ||= {}; if ( exists $opts->{multiple} ) { if ( exists( $opts->{multi} ) && !!$opts->{multi} ne !!$opts->{multiple} ) { MongoDB::UsageError->throw( "can't use conflicting values of 'multiple' and 'multi' in 'update'"); } $opts->{multi} = delete $opts->{multiple}; } # figure out if first key based on op_char or '$' my $type = ref($object); my $fk = ( $type eq 'HASH' ? each(%$object) : $type eq 'ARRAY' ? $object->[0] : $type eq 'Tie::IxHash' ? $object->FIRSTKEY : each (%$object) ); $fk = defined($fk) ? substr($fk,0,1) : ''; my $op_char = eval { $self->bson_codec->op_char } || ''; my $is_replace = $fk ne '$' && $fk ne $op_char; my $op = MongoDB::Op::_Update->_new( filter => $query || {}, update => $object || {}, multi => $opts->{multi}, upsert => $opts->{upsert}, is_replace => $is_replace, %{ $_[0]->_op_args }, write_concern => $self->_dynamic_write_concern($opts), ); my $result = $self->client->send_write_op( $op ); if ( $result->acknowledged ) { # emulate key fields of legacy GLE result return { ok => 1, n => $result->matched_count, ( $result->upserted_id ? 
( upserted => $result->upserted_id ) : () ), }; } else { return { ok => 1 }; } } sub save { MongoDB::UsageError->throw("document argument must be a reference") unless ref( $_[1] ); my ($self, $doc, $options) = @_; my $type = ref($doc); my $id = ( $type eq 'HASH' ? $doc->{_id} : $type eq 'ARRAY' ? do { my $i; for ( $i = 0; $i < @$doc; $i++ ) { last if $doc->[$i] eq '_id' } $i < $#$doc ? $doc->[ $i + 1 ] : undef; } : $type eq 'Tie::IxHash' ? $doc->FETCH('_id') : $doc->{_id} # hashlike? ); if ( defined($id) ) { $options ||= {}; $options->{'upsert'} = boolean::true; return $self->update( { _id => $id }, $doc, $options ); } else { return $self->insert( $doc, ( $options ? $options : () ) ); } } sub find_and_modify { my ( $self, $opts ) = @_; $opts ||= {}; MongoDB::UsageError->throw("find_and_modify requires a 'query' option") unless $opts->{query}; MongoDB::UsageError->throw("find_and_modify requires a 'remove' or 'update' option") unless $opts->{remove} || $opts->{update}; my $query = delete $opts->{query}; my $remove = delete $opts->{remove}; my $update = delete $opts->{update}; return $remove ? $self->find_one_and_delete($query, $opts) : $self->find_one_and_update($query, $update, $opts); } sub get_collection { my $self = shift @_; my $coll = shift @_; return $self->database->get_collection($self->name.'.'.$coll); } sub ensure_index { my ( $self, $keys, $opts ) = @_; MongoDB::UsageError->throw("ensure_index options must be a hash reference") if $opts && !ref($opts) eq 'HASH'; $keys = Tie::IxHash->new(@$keys) if ref $keys eq 'ARRAY'; $opts = $self->_clean_index_options( $opts, $keys ); # always use safe write concern for index creation my $wc = $self->write_concern->is_acknowledged ? $self->write_concern : MongoDB::WriteConcern->new; my $op = MongoDB::Op::_CreateIndexes->_new( db_name => $self->database->name, coll_name => $self->name, bson_codec => $self->bson_codec, indexes => [ { key => $keys, %$opts } ], write_concern => $wc, ); $self->client->send_write_op($op); return 1; } sub _clean_index_options { my ( $self, $orig, $keys ) = @_; # copy the original so we don't modify it my $opts = { $orig ? 
%$orig : () }; # add name if not provided $opts->{name} = __to_index_string($keys) unless defined $opts->{name}; # safe is no more delete $opts->{safe} if exists $opts->{safe}; # convert snake case if ( exists $opts->{drop_dups} ) { $opts->{dropDups} = delete $opts->{drop_dups}; } # convert snake case and turn into an integer if ( exists $opts->{expire_after_seconds} ) { $opts->{expireAfterSeconds} = int( delete $opts->{expire_after_seconds} ); } # convert some things to booleans for my $k (qw/unique background sparse dropDups/) { next unless exists $opts->{$k}; $opts->{$k} = boolean( $opts->{$k} ); } return $opts; }
sub __to_index_string { my $keys = shift; my @name; if (ref $keys eq 'ARRAY') { @name = @$keys; } elsif (ref $keys eq 'HASH' ) { @name = %$keys } elsif (ref $keys eq 'Tie::IxHash') { my @ks = $keys->Keys; my @vs = $keys->Values; for (my $i=0; $i<$keys->Length; $i++) { push @name, $ks[$i]; push @name, $vs[$i]; } } else { MongoDB::UsageError->throw("expected Tie::IxHash, hash, or array reference for keys"); } return join("_", @name); }
sub get_indexes { my ($self) = @_; my $op = MongoDB::Op::_ListIndexes->_new( db_name => $self->database->name, coll_name => $self->name, client => $self->client, bson_codec => $self->bson_codec, ); my $res = $self->client->send_read_op($op); return $res->all; } sub drop_indexes { my ($self) = @_; return $self->drop_index('*'); } sub drop_index { my ($self, $index_name) = @_; return $self->_run_command([ dropIndexes => $self->name, index => $index_name, ]); } sub validate { my ($self, $scan_data) = @_; $scan_data = 0 unless defined $scan_data; my $obj = $self->_run_command({ validate => $self->name }); } 1;
=pod =encoding UTF-8 =head1 NAME MongoDB::Collection - A MongoDB Collection =head1 VERSION version v1.2.2 =head1 SYNOPSIS # get a Collection via the Database object $coll = $db->get_collection("people"); # insert a document $coll->insert_one( { name => "John Doe", age => 42 } ); # insert one or more documents $coll->insert_many( \@documents ); # delete a document $coll->delete_one( { name => "John Doe" } ); # update a document $coll->update_one( { name => "John Doe" }, { '$inc' => { age => 1 } } ); # find a single document $doc = $coll->find_one( { name => "John Doe" } ); # Get a MongoDB::Cursor for a query $cursor = $coll->find( { age => 42 } ); # Cursor iteration while ( my $doc = $cursor->next ) { ... }
=head1 DESCRIPTION This class models a MongoDB collection and provides an API for interacting with it. Generally, you never construct one of these directly with C<new>. Instead, you call C<get_collection> on a L<MongoDB::Database> object.
=head1 USAGE =head2 Error handling Unless otherwise explicitly documented, all methods throw exceptions if an error occurs. The error types are documented in L<MongoDB::Error>. To catch and handle errors, the L<Try::Tiny> and L<Safe::Isa> modules are recommended: use Try::Tiny; use Safe::Isa; # provides $_isa try { $coll->insert( $doc ) } catch { if ( $_->$_isa("MongoDB::DuplicateKeyError") ) { ... } else { ... } }; To retry failures automatically, consider using L<Try::Tiny::Retry>. =head2 Terminology =head3 Document A collection of key-value pairs. A Perl hash is a document. Array references with an even number of elements and L<Tie::IxHash> objects may also be used as documents. =head3 Ordered document Many MongoDB::Collection method parameters or options require an B<ordered document>: an ordered list of key/value pairs. Perl's hashes are B<not> ordered and since Perl v5.18 are guaranteed to have random order. Therefore, when an ordered document is called for, you may use an array reference of pairs or a L<Tie::IxHash> object.
You may use a hash reference if there is only one key/value pair. =head3 Filter expression A filter expression provides the L to select a document for deletion. It must be an L. =head1 ATTRIBUTES =head2 database The L representing the database that contains the collection. =head2 name The name of the collection. =head2 read_preference A L object. It may be initialized with a string corresponding to one of the valid read preference modes or a hash reference that will be coerced into a new MongoDB::ReadPreference object. By default it will be inherited from a L object. =head2 write_concern A L object. It may be initialized with a hash reference that will be coerced into a new MongoDB::WriteConcern object. By default it will be inherited from a L object. =head2 read_concern A L object. May be initialized with a hash reference or a string that will be coerced into the level of read concern. By default it will be inherited from a L object. =head2 max_time_ms Specifies the default maximum amount of time in milliseconds that the server should use for working on a query. B: this will only be used for server versions 2.6 or greater, as that was when the C<$maxTimeMS> meta-operator was introduced. =head2 bson_codec An object that provides the C and C methods, such as from L. It may be initialized with a hash reference that will be coerced into a new MongoDB::BSON object. By default it will be inherited from a L object. =head1 METHODS =head2 client $client = $coll->client; Returns the L object associated with this object. =head2 full_name $full_name = $coll->full_name; Returns the full name of the collection, including the namespace of the database it's in prefixed with a dot character. E.g. collection "foo" in database "test" would result in a C of "test.foo". =head2 indexes $indexes = $collection->indexes; $collection->indexes->create_one( [ x => 1 ], { unique => 1 } ); $collection->indexes->drop_all; Returns a L object for managing the indexes associated with the collection. =head2 clone $coll2 = $coll1->clone( write_concern => { w => 2 } ); Constructs a copy of the original collection, but allows changing attributes in the copy. =head2 with_codec $coll2 = $coll1->with_codec( $new_codec ); $coll2 = $coll1->with_codec( prefer_numeric => 1 ); Constructs a copy of the original collection, but clones the C. If given an object that does C and C, it is equivalent to: $coll2 = $coll1->clone( bson_codec => $new_codec ); If given a hash reference or a list of key/value pairs, it is equivalent to: $coll2 = $coll1->clone( bson_codec => $coll1->bson_codec->clone( @list ) ); =head2 insert_one $res = $coll->insert_one( $document ); $res = $coll->insert_one( $document, $options ); $id = $res->inserted_id; Inserts a single L into the database and returns a L or L object. If no C<_id> field is present, one will be added when a document is serialized for the database without modifying the original document. The generated C<_id> may be retrieved from the result object. An optional hash reference of options may be given. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =back =head2 insert_many $res = $coll->insert_many( [ @documents ] ); $res = $coll->insert_many( [ @documents ], { ordered => 0 } ); Inserts each of the L in an array reference into the database and returns a L or L. This is syntactic sugar for doing a L operation. 
If no C<_id> field is present, one will be added when a document is serialized for the database without modifying the original document. The generated C<_id> may be retrieved from the result object. An optional hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – when true, the server will halt insertions after the first error (if any). When false, all documents will be processed and any error will only be thrown after all insertions are attempted. The default is true. =back On MongoDB servers before version 2.6, C bulk operations are emulated with individual inserts to capture error information. On 2.6 or later, this method will be significantly faster than individual C calls. =head2 delete_one $res = $coll->delete_one( $filter ); $res = $coll->delete_one( { _id => $id } ); Deletes a single document that matches a L and returns a L or L object. =head2 delete_many $res = $coll->delete_many( $filter ); $res = $coll->delete_many( { name => "Larry" } ); Deletes all documents that match a L and returns a L or L object. =head2 replace_one $res = $coll->replace_one( $filter, $replacement ); $res = $coll->replace_one( $filter, $replacement, { upsert => 1 } ); Replaces one document that matches a L and returns a L or L object. The replacement document must not have any field-update operators in it (e.g. C<$set>). A hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – defaults to false; if true, a new document will be added if one is not found =back =head2 update_one $res = $coll->update_one( $filter, $update ); $res = $coll->update_one( $filter, $update, { upsert => 1 } ); Updates one document that matches a L and returns a L or L object. The update document must have only field-update operators in it (e.g. C<$set>). A hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – defaults to false; if true, a new document will be added if one is not found by taking the filter expression and applying the update document operations to it prior to insertion. =back =head2 update_many $res = $coll->update_many( $filter, $update ); $res = $coll->update_many( $filter, $update, { upsert => 1 } ); Updates one or more documents that match a L and returns a L or L object. The update document must have only field-update operators in it (e.g. C<$set>). A hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – defaults to false; if true, a new document will be added if one is not found by taking the filter expression and applying the update document operations to it prior to insertion. =back =head2 find $cursor = $coll->find( $filter ); $cursor = $coll->find( $filter, $options ); $cursor = $coll->find({ i => { '$gt' => 42 } }, {limit => 20}); Executes a query with a L and returns a C object. The query can be customized using L methods, or with an optional hash reference of options. Valid options include: =over 4 =item * C - get partial results from a mongos if some shards are down (instead of throwing an error). 
=item * C – the number of documents to return per batch. =item * C – attaches a comment to the query. If C<$comment> also exists in the C document, the comment field overwrites C<$comment>. =item * C – indicates the type of cursor to use. It must be one of three string values: C<'non_tailable'> (the default), C<'tailable'>, and C<'tailable_await'>. =item * C – the maximum number of documents to return. =item * C – the maximum amount of time for the server to wait on new documents to satisfy a tailable cursor query. This only applies to a C of 'tailable_await'; the option is otherwise ignored. (Note, this will be ignored for servers before version 3.2.) =item * C – the maximum amount of time to allow the query to run. If C<$maxTimeMS> also exists in the modifiers document, the C field overwrites C<$maxTimeMS>. (Note, this will be ignored for servers before version 2.6.) =item * C – a hash reference of L modifying the output or behavior of a query. =item * C – if true, prevents the server from timing out a cursor after a period of inactivity =item * C - a hash reference defining fields to return. See "L" in the MongoDB documentation for details. =item * C – the number of documents to skip before returning. =item * C – an L defining the order in which to return matching documents. If C<$orderby> also exists in the modifiers document, the sort field overwrites C<$orderby>. See docs for L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. =back For more information, see the L in the MongoDB documentation. B, a L object holds the query and does not issue the query to the server until the L method is called on it or until an iterator method like L is called. Performance will be better directly on a L object: my $query_result = $coll->find( $filter )->result; while ( my $next = $query_result->next ) { ... } =head2 find_one $doc = $collection->find_one( $filter, $projection ); $doc = $collection->find_one( $filter, $projection, $options ); Executes a query with a L and returns a single document. If a projection argument is provided, it must be a hash reference specifying fields to return. See L in the MongoDB documentation for details. If only a filter is provided or if the projection document is an empty hash reference, all fields will be returned. my $doc = $collection->find_one( $filter ); my $doc = $collection->find_one( $filter, {}, $options ); A hash reference of options may be provided as a third argument. Valid keys include: =over 4 =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =item * C – an L defining the order in which to return matching documents. If C<$orderby> also exists in the modifiers document, the sort field overwrites C<$orderby>. See docs for L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. =back See also core documentation on querying: L. =head2 find_id $doc = $collection->find_id( $id ); $doc = $collection->find_id( $id, $projection ); $doc = $collection->find_id( $id, $projection, $options ); Executes a query with a L of C<< { _id => $id } >> and returns a single document. See the L documentation for details on the $projection and $options parameters. See also core documentation on querying: L. =head2 find_one_and_delete $doc = $coll->find_one_and_delete( $filter ); $doc = $coll->find_one_and_delete( $filter, $options ); Given a L, this deletes a document from the database and returns it as it appeared before it was deleted. 
A hash reference of options may be provided. Valid keys include: =over 4 =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =item * C - a hash reference defining fields to return. See "L" in the MongoDB documentation for details. =item * C – an L defining the order in which to return matching documents. See docs for L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. =back =head2 find_one_and_replace $doc = $coll->find_one_and_replace( $filter, $replacement ); $doc = $coll->find_one_and_replace( $filter, $replacement, $options ); Given a L and a replacement document, this replaces a document from the database and returns it as it was either right before or right after the replacement. The default is 'before'. The replacement document must not have any field-update operators in it (e.g. C<$set>). A hash reference of options may be provided. Valid keys include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – the maximum amount of time in milliseconds to allow the command to run. =item * C - a hash reference defining fields to return. See "L" in the MongoDB documentation for details. =item * C – either the string C<'before'> or C<'after'>, to indicate whether the returned document should be the one before or after replacement. The default is C<'before'>. =item * C – an L defining the order in which to return matching documents. See docs for L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. =item * C – defaults to false; if true, a new document will be added if one is not found =back =head2 find_one_and_update $doc = $coll->find_one_and_update( $filter, $update ); $doc = $coll->find_one_and_update( $filter, $update, $options ); Given a L and a document of update operators, this updates a single document and returns it as it was either right before or right after the update. The default is 'before'. The update document must contain only field-update operators (e.g. C<$set>). A hash reference of options may be provided. Valid keys include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =item * C - a hash reference defining fields to return. See "L" in the MongoDB documentation for details. =item * C – either the string C<'before'> or C<'after'>, to indicate whether the returned document should be the one before or after replacement. The default is C<'before'>. =item * C – an L defining the order in which to return matching documents. See docs for L<$orderby|http://docs.mongodb.org/manual/reference/operator/meta/orderby/>. =item * C – defaults to false; if true, a new document will be added if one is not found =back =head2 aggregate @pipeline = ( { '$group' => { _id => '$state,' totalPop => { '$sum' => '$pop' } } }, { '$match' => { totalPop => { '$gte' => 10 * 1000 * 1000 } } } ); $result = $collection->aggregate( \@pipeline ); $result = $collection->aggregate( \@pipeline, $options ); Runs a query using the MongoDB 2.2+ aggregation framework and returns a L object. The first argument must be an array-ref of L documents. Each pipeline document must be a hash reference. A hash reference of options may be provided. 
Valid keys include: =over 4 =item * C – if, true enables writing to temporary files. =item * C – the number of documents to return per batch. =item * C - skips document validation, if enabled. (Note, this will be ignored for servers before version 3.2.) =item * C – if true, return a single document with execution information. =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =back B MongoDB 2.6+ added the '$out' pipeline operator. If this operator is used to write aggregation results directly to a collection, an empty result will be returned. Create a new collection> object to query the generated result collection. When C<$out> is used, the command is treated as a write operation and read preference is ignored. See L in the MongoDB manual for more information on how to construct aggregation queries. B The use of aggregation cursors is automatic based on your server version. However, if migrating a sharded cluster from MongoDB 2.4 to 2.6 or later, you must upgrade your mongod servers first before your mongos routers or aggregation queries will fail. As a workaround, you may pass C<< cursor => undef >> as an option. =head2 count $count = $coll->count( $filter ); $count = $coll->count( $filter, $options ); Returns a count of documents matching a L. A hash reference of options may be provided. Valid keys include: =over 4 =item * C – L; must be a string, array reference, hash reference or L object. =item * C – the maximum number of documents to count. =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =item * C – the number of documents to skip before counting documents. =back B: On a sharded cluster, C can result in an inaccurate count if orphaned documents exist or if a chunk migration is in progress. See L for details and a work-around using L. =head2 distinct $result = $coll->distinct( $fieldname ); $result = $coll->distinct( $fieldname, $filter ); $result = $coll->distinct( $fieldname, $filter, $options ); Returns a L object that will provide distinct values for a specified field name. The query may be limited by an optional L. A hash reference of options may be provided. Valid keys include: =over 4 =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =back See documentation for the L for details. =head2 parallel_scan @result_objs = $collection->parallel_scan(10); Returns one or more L objects to scan the collection in parallel. The argument is the maximum number of L objects to return and must be a positive integer between 1 and 10,000. As long as the collection is not modified during scanning, each document will appear only once in one of the cursors' result sets. B: the server may return fewer cursors than requested, depending on the underlying storage engine and resource availability. =head2 rename $newcollection = $collection->rename("mynewcollection"); Renames the collection. If a collection already exists with the new collection name, this method will throw an exception. It returns a new L object corresponding to the renamed collection. =head2 drop $collection->drop; Deletes a collection as well as all of its indexes. =head2 ordered_bulk $bulk = $coll->ordered_bulk; $bulk->insert( $doc1 ); $bulk->insert( $doc2 ); ... 
$result = $bulk->execute; Returns a L object to group write operations into fewer network round-trips. This method creates an B operation, where operations halt after the first error. See L for more details. The method C may be used as an alias. A hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =back =head2 unordered_bulk This method works just like L except that the order in which operations are sent to the database is not guaranteed and errors do not halt processing. See L for more details. The method C may be used as an alias. A hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =back =head2 bulk_write $res = $coll->bulk_write( [ @requests ], $options ) This method provides syntactic sugar to construct and execute a bulk operation directly, without using C or C to generate a L object and then calling methods on it. It returns a L object just like the L method. The first argument must be an array reference of requests. Requests consist of pairs of a MongoDB::Collection write method name (e.g. C, C) and an array reference of arguments to the corresponding method name. They may be given as pairs, or as hash or array references: # pairs -- most efficient @requests = ( insert_one => [ { x => 1 } ], replace_one => [ { x => 1 }, { x => 4 } ], delete_one => [ { x => 4 } ], update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ], ); # hash references @requests = ( { insert_one => [ { x => 1 } ] }, { replace_one => [ { x => 1 }, { x => 4 } ] }, { delete_one => [ { x => 4 } ] }, { update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ] }, ); # array references @requests = ( [ insert_one => [ { x => 1 } ] ], [ replace_one => [ { x => 1 }, { x => 4 } ] ], [ delete_one => [ { x => 4 } ] ], [ update_many => [ { x => { '$gt' => 5 } }, { '$inc' => { x => 1 } } ] ], ); Valid method names include C, C, C, C, C, C, C. An optional hash reference of options may be provided. Valid options include: =over 4 =item * C - skips document validation, if enabled; this is ignored for MongoDB servers older than version 3.2. =item * C – when true, the bulk operation is executed like L. When false, the bulk operation is executed like L. The default is true. =back See L for more details on bulk writes. Be advised that the legacy Bulk API method names differ slightly from MongoDB::Collection method names. =for Pod::Coverage initialize_ordered_bulk_op initialize_unordered_bulk_op batch_insert find_and_modify insert query remove update =head1 DEPRECATIONS With the introduction of the common driver CRUD API, these legacy methods have been deprecated: =over 4 =item * batch_insert =item * find_and_modify =item * insert =item * query =item * remove =item * update =item * save =back The C method is deprecated; it implied a 'subcollection' relationship that is purely notional. The C, C, C, and C methods are deprecated. The new L class is accessible through the C method, and offers greater consistency in behavior across drivers. The C method is deprecated as the return value was inconsistent over time. Users who need it should execute it via C instead. The methods still exist, but are no longer documented. In a future version they will warn when used, then will eventually be removed.
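For example, code that used the legacy write helpers can be migrated to the CRUD API along these lines (a minimal sketch; the documents are illustrative only, and note that the legacy C<remove> deletes all matching documents by default, so its closest equivalent is C<delete_many>):

    # deprecated legacy helpers
    $coll->insert( { x => 1 } );
    $coll->update( { x => 1 }, { '$set' => { x => 2 } } );
    $coll->remove( { x => 2 } );

    # rough CRUD API equivalents
    $coll->insert_one( { x => 1 } );
    $coll->update_one( { x => 1 }, { '$set' => { x => 2 } } );
    $coll->delete_many( { x => 2 } );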
=head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/lib/MongoDB/CommandResult.pm000644 000765 000024 00000007453 12651754051 020606 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::CommandResult; # ABSTRACT: MongoDB generic command result document use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::_Constants; use MongoDB::_Types qw( HostAddress ); use Types::Standard qw( HashRef ); use namespace::clean; with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_LastError ); #pod =attr output #pod #pod Hash reference with the output document of a database command #pod #pod =cut has output => ( is => 'ro', required => 1, isa => HashRef, ); #pod =attr address #pod #pod Address ("host:port") of server that ran the command #pod #pod =cut has address => ( is => 'ro', required => 1, isa => HostAddress, ); #pod =method last_code #pod #pod Error code (if any) or 0 if there was no error. #pod #pod =cut sub last_code { my ($self) = @_; my $output = $self->output; if ( $output->{code} ) { return $output->{code}; } elsif ( $output->{lastErrorObject} ) { return $output->{lastErrorObject}{code} || 0; } else { return 0; } } #pod =method last_errmsg #pod #pod Error string (if any) or the empty string if there was no error. #pod #pod =cut sub last_errmsg { my ($self) = @_; for my $err_key (qw/$err err errmsg/) { return $self->output->{$err_key} if exists $self->output->{$err_key}; } return ""; } #pod =method last_wtimeout #pod #pod True if a write concern timed out or false otherwise. #pod #pod =cut sub last_wtimeout { my ($self) = @_; return !!$self->output->{wtimeout}; } #pod =method assert #pod #pod Throws an exception if the command failed. #pod #pod =cut sub assert { my ($self, $default_class) = @_; $self->_throw_database_error( $default_class ) if ! $self->output->{ok}; return 1; } # deprecated sub result { shift->output } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::CommandResult - MongoDB generic command result document =head1 VERSION version v1.2.2 =head1 DESCRIPTION This class encapsulates the results from a database command. Currently, it is only available from the C attribute of C. =head1 ATTRIBUTES =head2 output Hash reference with the output document of a database command =head2 address Address ("host:port") of server that ran the command =head1 METHODS =head2 last_code Error code (if any) or 0 if there was no error. =head2 last_errmsg Error string (if any) or the empty string if there was no error. =head2 last_wtimeout True if a write concern timed out or false otherwise. =head2 assert Throws an exception if the command failed. 
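Taken together, these methods support defensive inspection of a command result. A minimal sketch, assuming C<$result> is an object of this class obtained as described above:

    if ( !$result->output->{ok} ) {
        warn sprintf( "command on %s failed: %s (code %s)",
            $result->address, $result->last_errmsg, $result->last_code );
    }

    # or simply throw the appropriate exception on failure
    $result->assert;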
=for Pod::Coverage result =head1 DEPRECATIONS The methods still exist, but are no longer documented. In a future version they will warn when used, then will eventually be removed. =over 4 =item * result =back =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Cursor.pm000644 000765 000024 00000067150 12651754051 017306 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Cursor; # ABSTRACT: A lazy cursor for Mongo query results use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB; use MongoDB::BSON; use MongoDB::Error; use MongoDB::QueryResult; use MongoDB::ReadPreference; use MongoDB::_Protocol; use MongoDB::Op::_Explain; use MongoDB::_Types -types, 'to_IxHash'; use Types::Standard qw( InstanceOf ); use boolean; use Tie::IxHash; use Try::Tiny; use namespace::clean -except => 'meta'; #pod =attr started_iterating #pod #pod A boolean indicating if this cursor has queried the database yet. Methods #pod modifying the query will complain if they are called after the database is #pod queried. #pod #pod =cut with 'MongoDB::Role::_Cursor'; # attributes for sending a query has query => ( is => 'ro', isa => InstanceOf['MongoDB::_Query'], required => 1, ); # lazy result attribute has result => ( is => 'lazy', isa => InstanceOf['MongoDB::QueryResult'], builder => '_build_result', predicate => 'started_iterating', clearer => '_clear_result', ); # this does the query if it hasn't been done yet sub _build_result { my ($self) = @_; $self->query->execute; } #--------------------------------------------------------------------------# # methods that modify the query #--------------------------------------------------------------------------# #pod =head1 QUERY MODIFIERS #pod #pod These methods modify the query to be run. An exception will be thrown if #pod they are called after results are iterated. #pod #pod =head2 immortal #pod #pod $cursor->immortal(1); #pod #pod Ordinarily, a cursor "dies" on the database server after a certain length of #pod time (approximately 10 minutes), to prevent inactive cursors from hogging #pod resources. This option indicates that a cursor should not die until all of its #pod results have been fetched or it goes out of scope in Perl. #pod #pod Boolean value, defaults to 0. #pod #pod Note: C only affects the server-side timeout. If you are getting #pod client-side timeouts you will need to change your client configuration. #pod See L and #pod L. #pod #pod Returns this cursor for chaining operations. 
#pod #pod =cut sub immortal { my ( $self, $bool ) = @_; MongoDB::UsageError->throw("cannot set immortal after querying") if $self->started_iterating; $self->query->noCursorTimeout(!!$bool); return $self; } #pod =head2 fields #pod #pod $coll->insert({name => "Fred", age => 20}); #pod my $cursor = $coll->find->fields({ name => 1 }); #pod my $obj = $cursor->next; #pod $obj->{name}; # "Fred" #pod $obj->{age}; # undef #pod #pod Selects which fields are returned. The default is all fields. When fields #pod are specified, _id is returned by default, but this can be disabled by #pod explicitly setting it to "0". E.g. C<< _id => 0 >>. Argument must be either a #pod hash reference or a L object. #pod #pod See L #pod in the MongoDB documentation for details. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub fields { my ($self, $f) = @_; MongoDB::UsageError->throw("cannot set fields after querying") if $self->started_iterating; MongoDB::UsageError->throw("not a hash reference") unless ref $f eq 'HASH' || ref $f eq 'Tie::IxHash'; $self->query->projection($f); return $self; } #pod =head2 sort #pod #pod # sort by name, descending #pod $cursor->sort([name => -1]); #pod #pod Adds a sort to the query. Argument is either a hash reference or a #pod L or an array reference of key/value pairs. Because hash #pod references are not ordered, do not use them for more than one key. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub sort { my ( $self, $order ) = @_; MongoDB::UsageError->throw("cannot set sort after querying") if $self->started_iterating; $self->query->sort( to_IxHash($order) ); return $self; } #pod =head2 limit #pod #pod $cursor->limit(20); #pod #pod Sets cursor to return a maximum of N results. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub limit { my ( $self, $num ) = @_; MongoDB::UsageError->throw("cannot set limit after querying") if $self->started_iterating; $self->query->limit($num); return $self; } #pod =head2 max_await_time_ms #pod #pod $cursor->max_await_time_ms( 500 ); #pod #pod The maximum amount of time in milliseconds for the server to wait on new #pod documents to satisfy a tailable cursor query. This only applies to a #pod cursor of type 'tailable_await'. This is ignored if the cursor is not #pod a 'tailable_await' cursor or the server version is less than version 3.2. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub max_await_time_ms { my ( $self, $num ) = @_; $num = 0 unless defined $num; MongoDB::UsageError->throw("max_await_time_ms must be non-negative") if $num < 0; MongoDB::UsageError->throw("can not set max_await_time_ms after querying") if $self->started_iterating; $self->query->maxAwaitTimeMS( $num ); return $self; } #pod =head2 max_time_ms #pod #pod $cursor->max_time_ms( 500 ); #pod #pod Causes the server to abort the operation if the specified time in milliseconds #pod is exceeded. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub max_time_ms { my ( $self, $num ) = @_; $num = 0 unless defined $num; MongoDB::UsageError->throw("max_time_ms must be non-negative") if $num < 0; MongoDB::UsageError->throw("can not set max_time_ms after querying") if $self->started_iterating; $self->query->maxTimeMS( $num ); return $self; } #pod =head2 tailable #pod #pod $cursor->tailable(1); #pod #pod If a cursor should be tailable.
Tailable cursors can only be used on capped #pod collections and are similar to the C command: they never die and keep #pod returning new results as more is added to a collection. #pod #pod They are often used for getting log messages. #pod #pod Boolean value, defaults to 0. #pod #pod If you want the tailable cursor to block for a few seconds, use #pod L instead. B calling this with a false value #pod disables tailing, even if C was previously called. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub tailable { my ( $self, $bool ) = @_; MongoDB::UsageError->throw("cannot set tailable after querying") if $self->started_iterating; $self->query->cursorType($bool ? 'tailable' : 'non_tailable'); return $self; } #pod =head2 tailable_await #pod #pod $cursor->tailable_await(1); #pod #pod Sets a cursor to be tailable and block for a few seconds if no data #pod is immediately available. #pod #pod Boolean value, defaults to 0. #pod #pod If you want the tailable cursor without blocking, use L instead. #pod B calling this with a false value disables tailing, even if C #pod was previously called. #pod #pod =cut sub tailable_await { my ( $self, $bool ) = @_; MongoDB::UsageError->throw("cannot set tailable_await after querying") if $self->started_iterating; $self->query->cursorType($bool ? 'tailable_await' : 'non_tailable'); return $self; } #pod =head2 skip #pod #pod $cursor->skip( 50 ); #pod #pod Skips the first N results. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub skip { my ( $self, $num ) = @_; MongoDB::UsageError->throw("skip must be non-negative") if $num < 0; MongoDB::UsageError->throw("cannot set skip after querying") if $self->started_iterating; $self->query->skip($num); return $self; } #pod =head2 snapshot #pod #pod $cursor->snapshot(1); #pod #pod Uses snapshot mode for the query. Snapshot mode assures no duplicates are #pod returned due an intervening write relocating a document. Note that if an #pod object is inserted, updated or deleted during the query, it may or may not #pod be returned when snapshot mode is enabled. Short query responses (less than #pod 1MB) are always effectively snapshotted. Currently, snapshot mode may not #pod be used with sorting or explicit hints. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub snapshot { my ($self, $bool) = @_; MongoDB::UsageError->throw("cannot set snapshot after querying") if $self->started_iterating; MongoDB::UsageError->throw("snapshot requires a defined, boolean argument") unless defined $bool; $self->query->modifiers->{'$snapshot'} = $bool; return $self; } #pod =head2 hint #pod #pod $cursor->hint({'x' => 1}); #pod $cursor->hint(['x', 1]); #pod $cursor->hint('x_1'); #pod #pod Force Mongo to use a specific index for a query. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub hint { my ( $self, $index ) = @_; MongoDB::UsageError->throw("cannot set hint after querying") if $self->started_iterating; # $index must either be a string or a reference to an array, hash, or IxHash if ( ref $index eq 'ARRAY' ) { $index = Tie::IxHash->new(@$index); } elsif ( ref $index && !( ref $index eq 'HASH' || ref $index eq 'Tie::IxHash' ) ) { MongoDB::UsageError->throw("not a hash reference"); } $self->query->modifiers->{'$hint'} = $index; return $self; } #pod =head2 partial #pod #pod $cursor->partial(1); #pod #pod If a shard is down, mongos will return an error when it tries to query that #pod shard. If this is set, mongos will just skip that shard, instead. 
#pod #pod Boolean value, defaults to 0. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub partial { my ($self, $value) = @_; MongoDB::UsageError->throw("cannot set partial after querying") if $self->started_iterating; $self->query->allowPartialResults( !! $value ); # returning self is an API change but more consistent with other cursor methods return $self; } #pod =head2 read_preference #pod #pod $cursor->read_preference($read_preference_object); #pod $cursor->read_preference('secondary', [{foo => 'bar'}]); #pod #pod Sets read preference for the cursor's connection. #pod #pod If given a single argument that is a L object, the #pod read preference is set to that object. Otherwise, it takes positional #pod arguments: the read preference mode and a tag set list, which must be a valid #pod mode and tag set list as described in the L #pod documentation. #pod #pod Returns this cursor for chaining operations. #pod #pod =cut sub read_preference { my $self = shift; MongoDB::UsageError->throw("cannot set read preference after querying") if $self->started_iterating; my $type = ref $_[0]; if ( $type eq 'MongoDB::ReadPreference' ) { $self->query->read_preference( $_[0] ); } else { my $mode = shift || 'primary'; my $tag_sets = shift; my $rp = MongoDB::ReadPreference->new( mode => $mode, ( $tag_sets ? ( tag_sets => $tag_sets ) : () ) ); $self->query->read_preference($rp); } return $self; } #pod =head1 QUERY INTROSPECTION AND RESET #pod #pod These methods run introspection methods on the query conditions and modifiers #pod stored within the cursor object. #pod #pod =head2 explain #pod #pod my $explanation = $cursor->explain; #pod #pod This will tell you the type of cursor used, the number of records the DB had to #pod examine as part of this query, the number of records returned by the query, and #pod the time in milliseconds the query took to execute. #pod #pod See also core documentation on explain: #pod L. #pod #pod =cut sub explain { my ($self) = @_; my $explain_op = MongoDB::Op::_Explain->_new( db_name => $self->query->db_name, coll_name => $self->query->coll_name, bson_codec => $self->query->bson_codec, query => $self->query->clone, read_preference => $self->query->read_preference, read_concern => $self->query->read_concern, ); return $self->query->client->send_read_op($explain_op); } #pod =head1 QUERY ITERATION #pod #pod These methods allow you to iterate over results. #pod #pod =head2 result #pod #pod my $result = $cursor->result; #pod #pod This method will execute the query and return a L object #pod with the results. #pod #pod The C, C, and C methods call C internally, #pod which executes the query "on demand". #pod #pod Iterating with a MongoDB::QueryResult object directly instead of a #pod L will be slightly faster, since the L #pod methods below just internally call the corresponding method on the result #pod object. #pod #pod =cut #--------------------------------------------------------------------------# # methods delgated to result object #--------------------------------------------------------------------------# #pod =head2 has_next #pod #pod while ($cursor->has_next) { #pod ... #pod } #pod #pod Checks if there is another result to fetch. Will automatically fetch more #pod data from the server if necessary. #pod #pod =cut sub has_next { $_[0]->result->has_next } #pod =head2 next #pod #pod while (my $object = $cursor->next) { #pod ... #pod } #pod #pod Returns the next object in the cursor. Will automatically fetch more data from #pod the server if necessary. 
Returns undef if no more data is available. #pod #pod =cut sub next { $_[0]->result->next } #pod =head2 batch #pod #pod while (my @batch = $cursor->batch) { #pod ... #pod } #pod #pod Returns the next batch of data from the cursor. Will automatically fetch more #pod data from the server if necessary. Returns an empty list if no more data is available. #pod #pod =cut sub batch { $_[0]->result->batch } #pod =head2 all #pod #pod my @objects = $cursor->all; #pod #pod Returns a list of all objects in the result. #pod #pod =cut sub all { $_[0]->result->all } #pod =head2 reset #pod #pod Resets the cursor. After being reset, pre-query methods can be #pod called on the cursor (sort, limit, etc.) and subsequent calls to #pod result, next, has_next, or all will re-query the database. #pod #pod =cut sub reset { my ($self) = @_; $self->_clear_result; return $self; } #pod =head2 info #pod #pod Returns a hash of information about this cursor. This is intended for #pod debugging purposes and users should not rely on the contents of this method for #pod production use. Currently the fields are: #pod #pod =for :list #pod * C -- the server-side id for this cursor. See below for details. #pod * C -- the number of results received from the server so far #pod * C -- the (zero-based) index of the document that will be returned next from L #pod * C -- if the database could not find the cursor or another error occurred, C may #pod contain a hash reference of flags set in the response (depending on the error). See #pod L #pod for a full list of flag values. #pod * C -- the index of the result that the current batch of results starts at. #pod #pod If the cursor has not yet executed, only the C field will be returned with #pod a value of 0. #pod #pod The C could appear in one of three forms: #pod #pod =for :list #pod * MongoDB::CursorID object (a blessed reference to an 8-byte string) #pod * A perl scalar (an integer) #pod * A Math::BigInt object (64 bit integer on 32-bit perl) #pod #pod When the C is zero, there are no more results to fetch. #pod #pod =cut sub info { my $self = shift; if ( $self->started_iterating ) { return $self->result->_info; } else { return { num => 0 }; } } #--------------------------------------------------------------------------# # Deprecated methods #--------------------------------------------------------------------------# sub count { my ($self, $limit_skip) = @_; my $cmd = new Tie::IxHash(count => $self->query->coll_name); $cmd->Push(query => $self->query->filter); if ($limit_skip) { $cmd->Push(limit => $self->query->limit) if $self->query->limit; $cmd->Push(skip => $self->query->skip) if $self->query->skip; } if (my $hint = $self->query->modifiers->{'$hint'}) { $cmd->Push(hint => $hint); } my $result = try { my $db = $self->query->client->get_database( $self->query->db_name ); $db->run_command( $cmd, $self->query->read_preference ); } catch { # if there was an error, check if it was the "ns missing" one that means the # collection hasn't been created or a real error. die $_ unless /^ns missing/; }; return $result ? 
$result->{n} : 0; } my $PRIMARY = MongoDB::ReadPreference->new; my $SEC_PREFERRED = MongoDB::ReadPreference->new( mode => 'secondaryPreferred' ); sub slave_okay { my ($self, $value) = @_; MongoDB::UsageError->throw("cannot set slave_ok after querying") if $self->started_iterating; if ($value) { # if not 'primary', then slave_ok is already true, so leave alone if ( $self->query->read_preference->mode eq 'primary' ) { # secondaryPreferred is how mongos interprets slave_ok $self->query->read_preference( $SEC_PREFERRED ); } } else { $self->query->read_preference( $PRIMARY ); } # returning self is an API change but more consistent with other cursor methods return $self; } 1; # vim: ts=4 sts=4 sw=4 et tw=75: __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::Cursor - A lazy cursor for Mongo query results =head1 VERSION version v1.2.2 =head1 SYNOPSIS while (my $object = $cursor->next) { ... } my @objects = $cursor->all; =head1 USAGE =head2 Multithreading Cursors are cloned in threads, but not reset. Iterating the same cursor from multiple threads will give unpredictable results. Only iterate from a single thread. =head1 ATTRIBUTES =head2 started_iterating A boolean indicating if this cursor has queried the database yet. Methods modifying the query will complain if they are called after the database is queried. =head1 QUERY MODIFIERS These methods modify the query to be run. An exception will be thrown if they are called after results are iterated. =head2 immortal $cursor->immortal(1); Ordinarily, a cursor "dies" on the database server after a certain length of time (approximately 10 minutes), to prevent inactive cursors from hogging resources. This option indicates that a cursor should not die until all of its results have been fetched or it goes out of scope in Perl. Boolean value, defaults to 0. Note: C only affects the server-side timeout. If you are getting client-side timeouts you will need to change your client configuration. See L and L. Returns this cursor for chaining operations. =head2 fields $coll->insert({name => "Fred", age => 20}); my $cursor = $coll->find->fields({ name => 1 }); my $obj = $cursor->next; $obj->{name}; # "Fred" $obj->{age}; # undef Selects which fields are returned. The default is all fields. When fields are specified, _id is returned by default, but this can be disabled by explicitly setting it to "0". E.g. C<< _id => 0 >>. Argument must be either a hash reference or a L object. See L in the MongoDB documentation for details. Returns this cursor for chaining operations. =head2 sort # sort by name, descending $cursor->sort([name => -1]); Adds a sort to the query. Argument is either a hash reference or a L or an array reference of key/value pairs. Because hash references are not ordered, do not use them for more than one key. Returns this cursor for chaining operations. =head2 limit $cursor->limit(20); Sets cursor to return a maximum of N results. Returns this cursor for chaining operations. =head2 max_await_time_ms $cursor->max_await_time_ms( 500 ); The maximum amount of time in milliseconds for the server to wait on new documents to satisfy a tailable cursor query. This only applies to a cursor of type 'tailable_await'. This is ignored if the cursor is not a 'tailable_await' cursor or the server version is less than version 3.2. Returns this cursor for chaining operations. =head2 max_time_ms $cursor->max_time_ms( 500 ); Causes the server to abort the operation if the specified time in milliseconds is exceeded. Returns this cursor for chaining operations.
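Because each query modifier returns the cursor itself, modifiers are usually chained before iteration begins. A minimal sketch, assuming $coll is a L<MongoDB::Collection> and using an illustrative filter and field name:

    my $cursor = $coll->find( { status => 'active' } )
        ->sort( [ created => -1 ] )
        ->limit(20)
        ->max_time_ms(500);

    while ( my $doc = $cursor->next ) {
        # process $doc
    }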
=head2 tailable $cursor->tailable(1); If a cursor should be tailable. Tailable cursors can only be used on capped collections and are similar to the C command: they never die and keep returning new results as more is added to a collection. They are often used for getting log messages. Boolean value, defaults to 0. If you want the tailable cursor to block for a few seconds, use L instead. B calling this with a false value disables tailing, even if C was previously called. Returns this cursor for chaining operations. =head2 tailable_await $cursor->tailable_await(1); Sets a cursor to be tailable and block for a few seconds if no data is immediately available. Boolean value, defaults to 0. If you want the tailable cursor without blocking, use L instead. B calling this with a false value disables tailing, even if C was previously called. =head2 skip $cursor->skip( 50 ); Skips the first N results. Returns this cursor for chaining operations. =head2 snapshot $cursor->snapshot(1); Uses snapshot mode for the query. Snapshot mode assures no duplicates are returned due an intervening write relocating a document. Note that if an object is inserted, updated or deleted during the query, it may or may not be returned when snapshot mode is enabled. Short query responses (less than 1MB) are always effectively snapshotted. Currently, snapshot mode may not be used with sorting or explicit hints. Returns this cursor for chaining operations. =head2 hint $cursor->hint({'x' => 1}); $cursor->hint(['x', 1]); $cursor->hint('x_1'); Force Mongo to use a specific index for a query. Returns this cursor for chaining operations. =head2 partial $cursor->partial(1); If a shard is down, mongos will return an error when it tries to query that shard. If this is set, mongos will just skip that shard, instead. Boolean value, defaults to 0. Returns this cursor for chaining operations. =head2 read_preference $cursor->read_preference($read_preference_object); $cursor->read_preference('secondary', [{foo => 'bar'}]); Sets read preference for the cursor's connection. If given a single argument that is a L object, the read preference is set to that object. Otherwise, it takes positional arguments: the read preference mode and a tag set list, which must be a valid mode and tag set list as described in the L documentation. Returns this cursor for chaining operations. =head1 QUERY INTROSPECTION AND RESET These methods run introspection methods on the query conditions and modifiers stored within the cursor object. =head2 explain my $explanation = $cursor->explain; This will tell you the type of cursor used, the number of records the DB had to examine as part of this query, the number of records returned by the query, and the time in milliseconds the query took to execute. See also core documentation on explain: L. =head1 QUERY ITERATION These methods allow you to iterate over results. =head2 result my $result = $cursor->result; This method will execute the query and return a L object with the results. The C, C, and C methods call C internally, which executes the query "on demand". Iterating with a MongoDB::QueryResult object directly instead of a L will be slightly faster, since the L methods below just internally call the corresponding method on the result object. =head2 has_next while ($cursor->has_next) { ... } Checks if there is another result to fetch. Will automatically fetch more data from the server if necessary. =head2 next while (my $object = $cursor->next) { ... } Returns the next object in the cursor. 
Will automatically fetch more data from the server if necessary. Returns undef if no more data is available. =head2 batch while (my @batch = $cursor->batch) { ... } Returns the next batch of data from the cursor. Will automatically fetch more data from the server if necessary. Returns an empty list if no more data is available. =head2 all my @objects = $cursor->all; Returns a list of all objects in the result. =head2 reset Resets the cursor. After being reset, pre-query methods can be called on the cursor (sort, limit, etc.) and subsequent calls to result, next, has_next, or all will re-query the database. =head2 info Returns a hash of information about this cursor. This is intended for debugging purposes and users should not rely on the contents of this method for production use. Currently the fields are: =over 4 =item * C -- the server-side id for this cursor. See below for details. =item * C -- the number of results received from the server so far =item * C -- the (zero-based) index of the document that will be returned next from L =item * C -- if the database could not find the cursor or another error occurred, C may contain a hash reference of flags set in the response (depending on the error). See L for a full list of flag values. =item * C -- the index of the result that the current batch of results starts at. =back If the cursor has not yet executed, only the C field will be returned with a value of 0. The C could appear in one of three forms: =over 4 =item * MongoDB::CursorID object (a blessed reference to an 8-byte string) =item * A perl scalar (an integer) =item * A Math::BigInt object (64 bit integer on 32-bit perl) =back When the C is zero, there are no more results to fetch. =head1 SEE ALSO Core documentation on cursors: L. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Database.pm000644 000765 000024 00000043562 12651754051 017536 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Database; # ABSTRACT: A MongoDB Database use version; our $VERSION = 'v1.2.2'; use MongoDB::CommandResult; use MongoDB::Error; use MongoDB::GridFS; use MongoDB::Op::_ListCollections; use MongoDB::ReadPreference; use MongoDB::_Query; use MongoDB::_Types qw( BSONCodec NonNegNum ReadPreference ReadConcern WriteConcern ); use Types::Standard qw( InstanceOf Str ); use Carp 'carp'; use boolean; use Moo; use Try::Tiny; use namespace::clean -except => 'meta'; has _client => ( is => 'ro', isa => InstanceOf['MongoDB::MongoClient'], required => 1, ); #pod =attr name #pod #pod The name of the database. #pod #pod =cut has name => ( is => 'ro', isa => Str, required => 1, ); #pod =attr read_preference #pod #pod A L object. 
It may be initialized with a string #pod corresponding to one of the valid read preference modes or a hash reference #pod that will be coerced into a new MongoDB::ReadPreference object. #pod By default it will be inherited from a L object. #pod #pod =cut has read_preference => ( is => 'ro', isa => ReadPreference, required => 1, coerce => ReadPreference->coercion, ); #pod =attr write_concern #pod #pod A L object. It may be initialized with a hash #pod reference that will be coerced into a new MongoDB::WriteConcern object. #pod By default it will be inherited from a L object. #pod #pod =cut has write_concern => ( is => 'ro', isa => WriteConcern, required => 1, coerce => WriteConcern->coercion, ); #pod =attr read_concern #pod #pod A L object. May be initialized with a hash #pod reference or a string that will be coerced into the level of read #pod concern. #pod #pod By default it will be inherited from a L object. #pod #pod =cut has read_concern => ( is => 'ro', isa => ReadConcern, required => 1, coerce => ReadConcern->coercion, ); #pod =attr max_time_ms #pod #pod Specifies the maximum amount of time in milliseconds that the server should use #pod for working on a query. #pod #pod B: this will only be used for server versions 2.6 or greater, as that #pod was when the C<$maxTimeMS> meta-operator was introduced. #pod #pod =cut has max_time_ms => ( is => 'ro', isa => NonNegNum, required => 1, ); #pod =attr bson_codec #pod #pod An object that provides the C and C methods, such as #pod from L. It may be initialized with a hash reference that will #pod be coerced into a new MongoDB::BSON object. By default it will be inherited #pod from a L object. #pod #pod =cut has bson_codec => ( is => 'ro', isa => BSONCodec, coerce => BSONCodec->coercion, required => 1, ); #--------------------------------------------------------------------------# # methods #--------------------------------------------------------------------------# #pod =method list_collections #pod #pod $result = $coll->list_collections( $filter ); #pod $result = $coll->list_collections( $filter, $options ); #pod #pod Returns a L object to iterate over collection description #pod documents. These will contain C and C keys like so: #pod #pod use boolean; #pod #pod { #pod name => "my_capped_collection", #pod options => { #pod capped => true, #pod size => 10485760, #pod } #pod }, #pod #pod An optional filter document may be provided, which cause only collection #pod description documents matching a filter expression to be returned. See the #pod L #pod for more details on filtering for specific collections. #pod #pod A hash reference of options may be provided. Valid keys include: #pod #pod =for :list #pod * C – the number of documents to return per batch. #pod * C – the maximum amount of time in milliseconds to allow the #pod command to run. (Note, this will be ignored for servers before version 2.6.) #pod #pod =cut my $list_collections_args; sub list_collections { my ( $self, $filter, $options ) = @_; $filter ||= {}; $options ||= {}; # possibly fallback to default maxTimeMS if ( ! exists $options->{maxTimeMS} && $self->max_time_ms ) { $options->{maxTimeMS} = $self->max_time_ms; } my $op = MongoDB::Op::_ListCollections->_new( db_name => $self->name, client => $self->_client, bson_codec => $self->bson_codec, filter => $filter, options => $options, ); return $self->_client->send_read_op($op); } #pod =method collection_names #pod #pod my @collections = $database->collection_names; #pod #pod Returns the list of collections in this database. 
#pod #pod B if the number of collections is very large, this will return #pod a very large result. Use L to iterate over collections #pod instead. #pod #pod =cut sub collection_names { my ($self) = @_; my $op = MongoDB::Op::_ListCollections->_new( db_name => $self->name, client => $self->_client, bson_codec => $self->bson_codec, filter => {}, options => {}, ); my $res = $self->_client->send_read_op($op); return map { $_->{name} } $res->all; } #pod =method get_collection, coll #pod #pod my $collection = $database->get_collection('foo'); #pod my $collection = $database->get_collection('foo', $options); #pod my $collection = $database->coll('foo', $options); #pod #pod Returns a L for the given collection name within this #pod database. #pod #pod It takes an optional hash reference of options that are passed to the #pod L constructor. #pod #pod The C method is an alias for C. #pod #pod =cut sub get_collection { my ( $self, $collection_name, $options ) = @_; return MongoDB::Collection->new( read_preference => $self->read_preference, write_concern => $self->write_concern, read_concern => $self->read_concern, bson_codec => $self->bson_codec, max_time_ms => $self->max_time_ms, ( $options ? %$options : () ), # not allowed to be overridden by options database => $self, name => $collection_name, ); } { no warnings 'once'; *coll = \&get_collection } #pod =method get_gridfs #pod #pod my $grid = $database->get_gridfs; #pod my $grid = $database->get_gridfs("fs"); #pod my $grid = $database->get_gridfs("fs", $options); #pod #pod Returns a L for storing and retrieving files from the database. #pod Default prefix is "fs", making C<$grid-Efiles> "fs.files" and C<$grid-Echunks> #pod "fs.chunks". #pod #pod It takes an optional hash reference of options that are passed to the #pod L constructor. #pod #pod See L for more information. #pod #pod =cut sub get_gridfs { my ($self, $prefix, $options) = @_; $prefix = "fs" unless $prefix; return MongoDB::GridFS->new( read_preference => $self->read_preference, write_concern => $self->write_concern, max_time_ms => $self->max_time_ms, bson_codec => $self->bson_codec, ( $options ? %$options : () ), # not allowed to be overridden by options _database => $self, prefix => $prefix ); } #pod =method drop #pod #pod $database->drop; #pod #pod Deletes the database. #pod #pod =cut sub drop { my ($self) = @_; return $self->run_command({ 'dropDatabase' => 1 }); } #pod =method run_command #pod #pod my $output = $database->run_command([ some_command => 1 ]); #pod #pod my $output = $database->run_command( #pod [ some_command => 1 ], #pod { mode => 'secondaryPreferred' } #pod ); #pod #pod This method runs a database command. The first argument must be a document #pod with the command and its arguments. It should be given as an array reference #pod of key-value pairs or a L object with the command name as the #pod first key. The use of a hash reference will only reliably work for commands #pod without additional parameters. #pod #pod By default, commands are run with a read preference of 'primary'. An optional #pod second argument may specify an alternative read preference. If given, it must #pod be a L object or a hash reference that can be used to #pod construct one. #pod #pod It returns the output of the command (a hash reference) on success or throws a #pod L exception if #pod the command fails. 
#pod #pod For a list of possible database commands, run: #pod #pod my $commands = $db->run_command([listCommands => 1]); #pod #pod There are a few examples of database commands in the #pod L section. See also core documentation #pod on database commands: L. #pod #pod =cut sub run_command { my ( $self, $command, $read_pref ) = @_; $read_pref = MongoDB::ReadPreference->new( ref($read_pref) ? $read_pref : ( mode => $read_pref ) ) if $read_pref && ref($read_pref) ne 'MongoDB::ReadPreference'; my $op = MongoDB::Op::_Command->_new( db_name => $self->name, query => $command, query_flags => {}, bson_codec => $self->bson_codec, read_preference => $read_pref, ); my $obj = $self->_client->send_read_op($op); return $obj->output; } #--------------------------------------------------------------------------# # deprecated methods #--------------------------------------------------------------------------# sub eval { my ($self, $code, $args, $nolock) = @_; $nolock = boolean::false unless defined $nolock; my $cmd = tie(my %hash, 'Tie::IxHash'); %hash = ('$eval' => $code, 'args' => $args, 'nolock' => $nolock); my $output = $self->run_command($cmd); if (ref $output eq 'HASH' && exists $output->{'retval'}) { return $output->{'retval'}; } else { return $output; } } sub last_error { my ( $self, $opt ) = @_; return $self->run_command( [ getlasterror => 1, ( $opt ? %$opt : () ) ] ); } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::Database - A MongoDB Database =head1 VERSION version v1.2.2 =head1 SYNOPSIS # get a Database object via MongoDB::MongoClient my $db = $client->get_database("foo"); # get a Collection via the Database object my $coll = $db->get_collection("people"); # run a command on a database my $res = $db->run_command([ismaster => 1]); =head1 DESCRIPTION This class models a MongoDB database. Use it to construct L objects. It also provides the L method and some convenience methods that use it. Generally, you never construct one of these directly with C. Instead, you call C on a L object. =head1 USAGE =head2 Error handling Unless otherwise explicitly documented, all methods throw exceptions if an error occurs. The error types are documented in L. To catch and handle errors, the L and L modules are recommended: use Try::Tiny; use Safe::Isa; # provides $_isa try { $db->run_command( @command ) } catch { if ( $_->$_isa("MongoDB::DuplicateKeyError") ) { ... } else { ... } }; To retry failures automatically, consider using L. =head1 ATTRIBUTES =head2 name The name of the database. =head2 read_preference A L object. It may be initialized with a string corresponding to one of the valid read preference modes or a hash reference that will be coerced into a new MongoDB::ReadPreference object. By default it will be inherited from a L object. =head2 write_concern A L object. It may be initialized with a hash reference that will be coerced into a new MongoDB::WriteConcern object. By default it will be inherited from a L object. =head2 read_concern A L object. May be initialized with a hash reference or a string that will be coerced into the level of read concern. By default it will be inherited from a L object. =head2 max_time_ms Specifies the maximum amount of time in milliseconds that the server should use for working on a query. B: this will only be used for server versions 2.6 or greater, as that was when the C<$maxTimeMS> meta-operator was introduced. =head2 bson_codec An object that provides the C and C methods, such as from L.
It may be initialized with a hash reference that will be coerced into a new MongoDB::BSON object. By default it will be inherited from a L object. =head1 METHODS =head2 list_collections $result = $coll->list_collections( $filter ); $result = $coll->list_collections( $filter, $options ); Returns a L object to iterate over collection description documents. These will contain C and C keys like so: use boolean; { name => "my_capped_collection", options => { capped => true, size => 10485760, } }, An optional filter document may be provided, which cause only collection description documents matching a filter expression to be returned. See the L for more details on filtering for specific collections. A hash reference of options may be provided. Valid keys include: =over 4 =item * C – the number of documents to return per batch. =item * C – the maximum amount of time in milliseconds to allow the command to run. (Note, this will be ignored for servers before version 2.6.) =back =head2 collection_names my @collections = $database->collection_names; Returns the list of collections in this database. B if the number of collections is very large, this will return a very large result. Use L to iterate over collections instead. =head2 get_collection, coll my $collection = $database->get_collection('foo'); my $collection = $database->get_collection('foo', $options); my $collection = $database->coll('foo', $options); Returns a L for the given collection name within this database. It takes an optional hash reference of options that are passed to the L constructor. The C method is an alias for C. =head2 get_gridfs my $grid = $database->get_gridfs; my $grid = $database->get_gridfs("fs"); my $grid = $database->get_gridfs("fs", $options); Returns a L for storing and retrieving files from the database. Default prefix is "fs", making C<$grid-Efiles> "fs.files" and C<$grid-Echunks> "fs.chunks". It takes an optional hash reference of options that are passed to the L constructor. See L for more information. =head2 drop $database->drop; Deletes the database. =head2 run_command my $output = $database->run_command([ some_command => 1 ]); my $output = $database->run_command( [ some_command => 1 ], { mode => 'secondaryPreferred' } ); This method runs a database command. The first argument must be a document with the command and its arguments. It should be given as an array reference of key-value pairs or a L object with the command name as the first key. The use of a hash reference will only reliably work for commands without additional parameters. By default, commands are run with a read preference of 'primary'. An optional second argument may specify an alternative read preference. If given, it must be a L object or a hash reference that can be used to construct one. It returns the output of the command (a hash reference) on success or throws a L exception if the command fails. For a list of possible database commands, run: my $commands = $db->run_command([listCommands => 1]); There are a few examples of database commands in the L section. See also core documentation on database commands: L. =for Pod::Coverage last_error =head1 DEPRECATIONS The methods still exist, but are no longer documented. In a future version they will warn when used, then will eventually be removed. =over 4 =item * last_error =back =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/DataTypes.pod000644 000765 000024 00000036143 12651754051 020073 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 10gen, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # PODNAME: MongoDB::DataTypes # ABSTRACT: The data types used with MongoDB __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::DataTypes - The data types used with MongoDB =head1 VERSION version v1.2.2 =head1 DESCRIPTION This goes over the types you can save to the database and use for queries in the Perl driver. If you are using another language, please refer to that language's documentation (L). =head1 NOTES FOR SQL PROGRAMMERS =head2 You must query for data using the correct type. For example, it is perfectly valid to have some records where the field "foo" is 123 (integer) and other records where "foo" is "123" (string). Thus, you must query for the correct type. If you save C<{"foo" =E "123"}>, you cannot query for it with C<{"foo" =E 123}>. MongoDB is strict about types. If the type of a field is ambiguous and important to your application, you should document what you expect the application to send to the database and convert your data to those types before sending. There are some object-document mappers that will enforce certain types for certain fields for you. You generally shouldn't save numbers as strings, as they will behave like strings (e.g., range queries won't work correctly) and the data will take up more space. If you set L, the driver will automatically convert everything that looks like a number to a number before sending it to the database. Numbers are the only exception to the strict typing: all number types stored by MongoDB (32-bit integers, 64-bit integers, 64-bit floating point numbers) will match each other. =head1 TYPES =head2 Numbers By default, numbers with a decimal point will be saved as doubles (64-bit). B: On a perl compiled with long-double support, floating point number precision will be lost when sending data to MongoDB. =head3 32-bit Platforms Numbers without decimal points will be saved as 32-bit integers. To save a number as a 64-bit integer, use bigint (i.e. L): use bigint; $collection->insert({"user_id" => 28347197234178}) The driver will die if you try to insert a number beyond the signed 64-bit range: -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807. Numbers that are saved as 64-bit integers will be decoded as L objects. =head3 64-bit Platforms Numbers without a decimal point will be saved and returned as 32-bit integers if they will fit and 64-bit integers otherwise. To force 64-bit encoding, use a L object. =head4 64-bit integers in the shell The Mongo shell has one numeric type: the 8-byte float. This means that it cannot always represent an 8-byte integer exactly. Thus, when you display a 64-bit integer in the shell, it will be wrapped in a subobject that indicates it might be an approximate value. 
For instance, if we run this Perl on a 64-bit machine: $coll->insert({_id => 1}); then look at it in the shell, we see: > db.whatever.findOne() { "_id" : { "floatApprox" : 1 } } This doesn't mean that we saved a float, it just means that the float value of a 64-bit integer may not be exact. =head3 Dealing with numbers and strings in Perl Perl is very flexible about whether something is a number or a string; it generally infers the type from context. Unfortunately, the driver doesn't have any context when it has to choose how to serialize a variable. Therefore, the default behavior is to introspect the internal state of the variable. Any variable that has ever been used in a string context (e.g. printed, compared with 'eq', matched with a regular expression, etc.) will be serialized as a string. my $var = "4"; # stored as the string "4" $collection->insert({myVar => $var}); $var = int($var) if (int($var) eq $var); # stored as the int 4 $collection->insert({myVar => $var}); Because of this, users often end up with more strings than they wanted in their databases. One technique for eliminating the string representation and storing a numeric interpretation is to add 0 to the variable: $collection->insert({myVar => 0 + $var}); If you would like to have everything that looks like a number saved as a number without the C<0+> technique, use a L codec that has the L option set. $coll2 = $collection->with_codec( prefer_numeric => 1 ); $coll2->insert( {myVar => "1.23"} ); # stored as double 1.23 On the other hand, some data looks like a number but should be saved as a string. For example, suppose we are storing zip codes. To ensure a zip code is saved as a string, bless the string as a C type: my $z = "04101"; my $zip = bless(\$z, "MongoDB::BSON::String"); # zip is stored as "04101" $collection->insert({city => "Portland", zip => $zip}); Additionally, there are two utility functions, C and C, that explicitly set Perl's internal type flags to Integer (C) and Double (C) respectively. These flags trigger MongoDB's recognition of the values as Int32/Int64 (depending on the size of the number) or Double: my $x = 1.0; MongoDB::force_int($x); $coll->insert({x => $x}); # Inserts an integer MongoDB::force_double($x); $coll->insert({x => $x}); # Inserts a double =head2 Strings All strings must be valid UTF-8 to be sent to the database. If a string is not valid, it will not be saved. If you need to save a non-UTF-8 string, you can save it as a binary blob (see the Binary Data section below). All strings returned from the database have the UTF-8 flag set. Unfortunately, due to Perl weirdness, UTF-8 is not very pretty. For example, suppose we have a UTF-8 string: my $str = 'Åland Islands'; Now, let's print it: print "$str\n"; You can see in the output: "\x{c5}land Islands" Lovely, isn't it? This is how Perl prints UTF-8. To make it "pretty," there are a couple options: my $pretty_str = $str; utf8::encode($pretty_str); This, unintuitively, clears the UTF-8 flag. You can also just run binmode STDOUT, ':utf8'; and then the string (and all future UTF-8 strings) will print "correctly." =head2 Arrays Arrays must be saved as array references (C<\@foo>, not C<@foo>). =head2 Embedded Documents Embedded documents take the same form as top-level documents: either hash references or Les. =head2 Dates The L, L or L package can be used to insert and query for dates.
Dates stored in the database will be returned as instances of one of these classes, depending on the C setting of the L codec object: $codec = MongoDB::BSON->new( dt_type => 'Time::Moment' ); $client = MongoDB::MongoClient->new( bson_codec => $codec ); An example of storing and retrieving a date: use DateTime; my $now = DateTime->now; $collection->insert({'ts' => $now}); my $obj = $collection->find_one; print "Today is ".$obj->{'ts'}->ymd."\n"; An example of querying for a range of dates: my $start = DateTime->from_epoch( epoch => 100000 ); my $end = DateTime->from_epoch( epoch => 500000 ); my $cursor = $collection->query({event => {'$gt' => $start, '$lt' => $end}}); B objects is extremely slow.> Consider saving dates as epoch seconds and converting the numbers to objects only when needed. A single L field can make deserialization up to 10 times slower. For example, you could use the L object if one is not provided. #pod #pod As this has a one-time effect, it is now read-only to help you detect #pod code that was trying to change after the fact during program execution. #pod #pod For temporary or localized changes, look into overriding the C #pod object for a database or collection object. #pod #pod =cut has dt_type => ( is => 'ro', default => 'DateTime' ); #pod =attr query_timeout (DEPRECATED AND READ-ONLY) #pod #pod # set query timeout to 1 second #pod my $client = MongoDB::MongoClient->new(query_timeout => 1000); #pod #pod This option has been renamed as L. If this option is set #pod and that one is not, this will be used. #pod #pod This value is in milliseconds and defaults to 30000. #pod #pod =cut has query_timeout => ( is => 'ro', isa => Int, default => 30000, ); #pod =attr sasl (DEPRECATED) #pod #pod If true, the driver will set the authentication mechanism based on the #pod C property. #pod #pod =cut has sasl => ( is => 'ro', isa => Bool, default => 0 ); #pod =attr sasl_mechanism (DEPRECATED) #pod #pod This specifies the SASL mechanism to use for authentication with a MongoDB server. #pod It has the same valid values as L. The default is GSSAPI. #pod #pod =cut has sasl_mechanism => ( is => 'ro', isa => AuthMechanism, default => 'GSSAPI', ); #pod =attr timeout (DEPRECATED AND READ-ONLY) #pod #pod This option has been renamed as L. If this option is set #pod and that one is not, this will be used. #pod #pod Connection timeout is in milliseconds. Defaults to C<10000>. #pod #pod =cut has timeout => ( is => 'ro', isa => Int, default => 10000, ); #--------------------------------------------------------------------------# # computed attributes - these are private and can't be set in the # constructor, but have a public accessor #--------------------------------------------------------------------------# #pod =method read_preference #pod #pod Returns a L object constructed from #pod L and L #pod #pod B as a mutator has been removed.> Read #pod preference is read-only. If you need a different read preference for #pod a database or collection, you can specify that in C or #pod C. #pod #pod =cut has _read_preference => ( is => 'lazy', isa => ReadPreference, reader => 'read_preference', init_arg => undef, builder => '_build__read_preference', ); sub _build__read_preference { my ($self) = @_; return MongoDB::ReadPreference->new( ( $self->read_pref_mode ? ( mode => $self->read_pref_mode ) : () ), ( $self->read_pref_tag_sets ? ( tag_sets => $self->read_pref_tag_sets ) : () ), ); } #pod =method write_concern #pod #pod Returns a L object constructed from L, L #pod and L. 
#pod #pod =cut has _write_concern => ( is => 'lazy', isa => InstanceOf['MongoDB::WriteConcern'], reader => 'write_concern', init_arg => undef, builder => '_build__write_concern', ); sub _build__write_concern { my ($self) = @_; return MongoDB::WriteConcern->new( ( $self->w ? ( w => $self->w ) : () ), ( $self->wtimeout ? ( wtimeout => $self->wtimeout ) : () ), ( $self->j ? ( j => $self->j ) : () ), ); } #pod =method read_concern #pod #pod Returns a L object constructed from #pod L. #pod #pod =cut has _read_concern => ( is => 'lazy', isa => InstanceOf['MongoDB::ReadConcern'], reader => 'read_concern', init_arg => undef, builder => '_build__read_concern', ); sub _build__read_concern { my ($self) = @_; return MongoDB::ReadConcern->new( ( $self->read_concern_level ? ( level => $self->read_concern_level ) : () ), ); } #--------------------------------------------------------------------------# # private attributes #--------------------------------------------------------------------------# # collects constructor options and defer them so precedence can be resolved # against the _uri options; unlike other private args, this needs a valid # init argument has _deferred => ( is => 'ro', isa => HashRef, init_arg => '_deferred', default => sub { {} }, ); #pod =method topology_type #pod #pod Returns an enumerated topology type. If the L is #pod set, the value will be either 'ReplicaSetWithPrimary' or 'ReplicaSetNoPrimary' #pod (if the primary is down or not yet discovered). Without L, #pod the type will be 'Single' if there is only one server in the list of hosts, and #pod 'Sharded' if there are more than one. #pod #pod N.B. A single mongos will have a topology type of 'Single', as that mongos will #pod be used for all reads and writes, just like a standalone mongod. The 'Sharded' #pod type indicates a sharded cluster with multiple mongos servers, and reads/writes #pod will be distributed acc #pod #pod =cut has _topology => ( is => 'lazy', isa => InstanceOf ['MongoDB::_Topology'], init_arg => undef, builder => '_build__topology', handles => { topology_type => 'type' }, clearer => '_clear__topology', ); sub _build__topology { my ($self) = @_; my $type = length( $self->replica_set_name ) ? 'ReplicaSetNoPrimary' : @{ $self->_uri->hostpairs } > 1 ? 'Sharded' : 'Single'; MongoDB::_Topology->new( uri => $self->_uri, bson_codec => $self->bson_codec, type => $type, replica_set_name => $self->replica_set_name, server_selection_timeout_sec => $self->server_selection_timeout_ms / 1000, server_selection_try_once => $self->server_selection_try_once, local_threshold_sec => $self->local_threshold_ms / 1000, heartbeat_frequency_sec => $self->heartbeat_frequency_ms / 1000, max_wire_version => MAX_WIRE_VERSION, min_wire_version => MIN_WIRE_VERSION, credential => $self->_credential, link_options => { connect_timeout => $self->connect_timeout_ms >= 0 ? $self->connect_timeout_ms / 1000 : undef, socket_timeout => $self->socket_timeout_ms >= 0 ? $self->socket_timeout_ms / 1000 : undef, with_ssl => !!$self->ssl, ( ref( $self->ssl ) eq 'HASH' ? ( SSL_options => $self->ssl ) : () ), }, ); } has _credential => ( is => 'lazy', isa => InstanceOf ['MongoDB::_Credential'], init_arg => undef, builder => '_build__credential', ); sub _build__credential { my ($self) = @_; my $mechanism = $self->auth_mechanism; my $cred = MongoDB::_Credential->new( mechanism => $mechanism, mechanism_properties => $self->auth_mechanism_properties, ( $self->username ? ( username => $self->username ) : () ), ( $self->password ? 
( password => $self->password ) : () ), ( $self->db_name ? ( source => $self->db_name ) : () ), ); return $cred; } has _uri => ( is => 'lazy', isa => InstanceOf ['MongoDB::_URI'], init_arg => undef, builder => '_build__uri', ); sub _build__uri { my ($self) = @_; if ( $self->host =~ m{^\w+://} ) { return MongoDB::_URI->new( uri => $self->host ); } else { my $uri = $self->host =~ /:\d+$/ ? $self->host : sprintf("%s:%s", map { $self->$_ } qw/host port/ ); return MongoDB::_URI->new( uri => ("mongodb://$uri") ); } } #--------------------------------------------------------------------------# # Constructor customization #--------------------------------------------------------------------------# # these attributes are lazy, built from either _uri->options or from # _config_options captured in BUILDARGS my @deferred_options = qw( auth_mechanism auth_mechanism_properties connect_timeout_ms db_name heartbeat_frequency_ms j local_threshold_ms max_time_ms read_pref_mode read_pref_tag_sets replica_set_name server_selection_timeout_ms server_selection_try_once socket_check_interval_ms socket_timeout_ms ssl username password w wtimeout read_concern_level ); around BUILDARGS => sub { my $orig = shift; my $class = shift; my $hr = $class->$orig(@_); my $deferred = {}; for my $k ( @deferred_options ) { $deferred->{$k} = delete $hr->{$k} if exists $hr->{$k}; } $hr->{_deferred} = $deferred; return $hr; }; sub BUILD { my ($self, $opts) = @_; my $uri = $self->_uri; my @addresses = @{ $uri->hostpairs }; # resolve and validate all deferred attributes $self->$_ for @deferred_options; # resolve and validate read pref and write concern $self->read_preference; $self->write_concern; # Add error handler to codec if user didn't provide their own unless ( $self->bson_codec->error_callback ) { $self->_set_bson_codec( $self->bson_codec->clone( error_callback => sub { my ($msg, $ref, $op) = @_; if ( $op =~ /^encode/ ) { MongoDB::DocumentError->throw( message => $msg, document => $ref ); } else { MongoDB::DecodingError->throw($msg); } }, ) ); } return; } #--------------------------------------------------------------------------# # helper functions #--------------------------------------------------------------------------# sub __uri_or_else { my ( $self, %spec ) = @_; my $uri_options = $self->_uri->options; my $deferred = $self->_deferred; my ( $u, $e, $default ) = @spec{qw/u e d/}; return exists $uri_options->{$u} ? $uri_options->{$u} : exists $deferred->{$e} ? $deferred->{$e} : $default; } sub __string { local $_; my ($first) = grep { defined && length } @_; return $first || ''; } #--------------------------------------------------------------------------# # public methods - network communication #--------------------------------------------------------------------------# #pod =method connect #pod #pod $client->connect; #pod #pod Calling this method is unnecessary, as connections are established #pod automatically as needed. It is kept for backwards compatibility. Calling it #pod will check all servers in the deployment which ensures a connection to any #pod that are available. #pod #pod See L for a method that is useful when using forks or threads. #pod #pod =cut sub connect { my ($self) = @_; $self->_topology->scan_all_servers; return 1; } #pod =method disconnect #pod #pod $client->disconnect; #pod #pod Drops all connections to servers. 
#pod #pod =cut sub disconnect { my ($self) = @_; $self->_topology->close_all_links; return 1; } #pod =method reconnect #pod #pod $client->reconnect; #pod #pod This method closes all connections to the server, as if L were #pod called, and then immediately reconnects. Use this after forking or spawning #pod off a new thread. #pod #pod =cut sub reconnect { my ($self) = @_; $self->_topology->close_all_links; $self->_topology->scan_all_servers; return 1; } #pod =method topology_status #pod #pod $client->topology_status; #pod $client->topology_status( refresh => 1 ); #pod #pod Returns a hash reference with server topology information like this: #pod #pod { #pod 'topology_type' => 'ReplicaSetWithPrimary' #pod 'replica_set_name' => 'foo', #pod 'last_scan_time' => '1433766895.183241', #pod 'servers' => [ #pod { #pod 'address' => 'localhost:50003', #pod 'ewma_rtt_ms' => '0.223462326', #pod 'type' => 'RSSecondary' #pod }, #pod { #pod 'address' => 'localhost:50437', #pod 'ewma_rtt_ms' => '0.268435456', #pod 'type' => 'RSArbiter' #pod }, #pod { #pod 'address' => 'localhost:50829', #pod 'ewma_rtt_ms' => '0.737782272', #pod 'type' => 'RSPrimary' #pod } #pod }, #pod } #pod #pod If the 'refresh' argument is true, then the topology will be scanned #pod to update server data before returning the hash reference. #pod #pod =cut sub topology_status { my ($self, %opts) = @_; $self->_topology->scan_all_servers if $opts{refresh}; return $self->_topology->status_struct; } #--------------------------------------------------------------------------# # semi-private methods; these are public but undocumented and their # semantics might change in future releases #--------------------------------------------------------------------------# # Undocumented in old MongoDB::MongoClient; semantics don't translate, but # best approximation is checking if we can send a command to a server sub connected { my ($self) = @_; return eval { $self->send_admin_command([ismaster => 1]); 1 }; } sub send_admin_command { my ( $self, $command, $read_pref ) = @_; $read_pref = MongoDB::ReadPreference->new( ref($read_pref) ? $read_pref : ( mode => $read_pref ) ) if $read_pref && ref($read_pref) ne 'MongoDB::ReadPreference'; my $op = MongoDB::Op::_Command->_new( db_name => 'admin', query => $command, query_flags => {}, bson_codec => $self->bson_codec, read_preference => $read_pref, ); return $self->send_read_op( $op ); } # op dispatcher written in highly optimized style sub send_direct_op { my ( $self, $op, $address ) = @_; my ( $link, $result ); ( $link = $self->_topology->get_specific_link($address) ), ( eval { ($result) = $op->execute($link); 1 } or do { my $err = length($@) ? $@ : "caught error, but it was lost in eval unwind"; if ( $err->$_isa("MongoDB::ConnectionError") ) { $self->_topology->mark_server_unknown( $link->server, $err ); } elsif ( $err->$_isa("MongoDB::NotMasterError") ) { $self->_topology->mark_server_unknown( $link->server, $err ); $self->_topology->mark_stale; } # regardless of cleanup, rethrow the error WITH_ASSERTS ? ( confess $err ) : ( die $err ); } ), return $result; } # op dispatcher written in highly optimized style sub send_write_op { my ( $self, $op ) = @_; my ( $link, $result ); ( $link = $self->_topology->get_writable_link ), ( eval { ($result) = $op->execute($link); 1 } or do { my $err = length($@) ? 
$@ : "caught error, but it was lost in eval unwind"; if ( $err->$_isa("MongoDB::ConnectionError") ) { $self->_topology->mark_server_unknown( $link->server, $err ); } elsif ( $err->$_isa("MongoDB::NotMasterError") ) { $self->_topology->mark_server_unknown( $link->server, $err ); $self->_topology->mark_stale; } # regardless of cleanup, rethrow the error WITH_ASSERTS ? ( confess $err ) : ( die $err ); } ), return $result; } # op dispatcher written in highly optimized style sub send_read_op { my ( $self, $op ) = @_; my ( $link, $type, $result ); ( $link = $self->_topology->get_readable_link( $op->read_preference ) ), ( $type = $self->_topology->type ), ( eval { ($result) = $op->execute( $link, $type ); 1 } or do { my $err = length($@) ? $@ : "caught error, but it was lost in eval unwind"; if ( $err->$_isa("MongoDB::ConnectionError") ) { $self->_topology->mark_server_unknown( $link->server, $err ); } elsif ( $err->$_isa("MongoDB::NotMasterError") ) { $self->_topology->mark_server_unknown( $link->server, $err ); $self->_topology->mark_stale; } # regardless of cleanup, rethrow the error WITH_ASSERTS ? ( confess $err ) : ( die $err ); } ), return $result; } #--------------------------------------------------------------------------# # database helper methods #--------------------------------------------------------------------------# #pod =method database_names #pod #pod my @dbs = $client->database_names; #pod #pod Lists all databases on the MongoDB server. #pod #pod =cut sub database_names { my ($self) = @_; my @databases; my $max_tries = 3; for my $try ( 1 .. $max_tries ) { last if try { my $output = $self->send_admin_command([ listDatabases => 1 ])->output; if (ref($output) eq 'HASH' && exists $output->{databases}) { @databases = map { $_->{name} } @{ $output->{databases} }; } return 1; } catch { if ( $_->$_isa("MongoDB::DatabaseError" ) ) { return if $_->result->output->{code} == CANT_OPEN_DB_IN_READ_LOCK() || $try < $max_tries; } die $_; }; } return @databases; } #pod =method get_database, db #pod #pod my $database = $client->get_database('foo'); #pod my $database = $client->get_database('foo', $options); #pod my $database = $client->db('foo', $options); #pod #pod Returns a L instance for the database with the given #pod C<$name>. #pod #pod It takes an optional hash reference of options that are passed to the #pod L constructor. #pod #pod The C method is an alias for C. #pod #pod =cut sub get_database { my ( $self, $database_name, $options ) = @_; return MongoDB::Database->new( read_preference => $self->read_preference, write_concern => $self->write_concern, read_concern => $self->read_concern, bson_codec => $self->bson_codec, max_time_ms => $self->max_time_ms, ( $options ? %$options : () ), # not allowed to be overridden by options _client => $self, name => $database_name, ); } { no warnings 'once'; *db = \&get_database } #pod =method get_namespace, ns #pod #pod my $collection = $client->get_namespace('test.foo'); #pod my $collection = $client->get_namespace('test.foo', $options); #pod my $collection = $client->ns('test.foo', $options); #pod #pod Returns a L instance for the given namespace. #pod The namespace has both the database name and the collection name #pod separated with a dot character. #pod #pod This is a quick way to get a collection object if you don't need #pod the database object separately. #pod #pod It takes an optional hash reference of options that are passed to the #pod L constructor. The intermediate L #pod object will be created with default options. 
#pod #pod The C method is an alias for C. #pod #pod =cut sub get_namespace { my ( $self, $ns, $options ) = @_; MongoDB::UsageError->throw("namespace requires a string argument") unless defined($ns) && length($ns); my ( $db, $coll ) = split /\./, $ns, 2; MongoDB::UsageError->throw("$ns is not a valid namespace") unless defined($db) && defined($coll); return $self->db($db)->coll( $coll, $options ); } { no warnings 'once'; *ns = \&get_namespace } #pod =method fsync(\%args) #pod #pod $client->fsync(); #pod #pod A function that will forces the server to flush all pending writes to the storage layer. #pod #pod The fsync operation is synchronous by default, to run fsync asynchronously, use the following form: #pod #pod $client->fsync({async => 1}); #pod #pod The primary use of fsync is to lock the database during backup operations. This will flush all data to the data storage layer and block all write operations until you unlock the database. Note: you can still read while the database is locked. #pod #pod $conn->fsync({lock => 1}); #pod #pod =cut sub fsync { my ($self, $args) = @_; $args ||= {}; # Pass this in as array-ref to ensure that 'fsync => 1' is the first argument. return $self->get_database('admin')->run_command([fsync => 1, %$args]); } #pod =method fsync_unlock #pod #pod $conn->fsync_unlock(); #pod #pod Unlocks a database server to allow writes and reverses the operation of a $conn->fsync({lock => 1}); operation. #pod #pod =cut sub fsync_unlock { my ($self) = @_; my $op = MongoDB::Op::_FSyncUnlock->_new( db_name => 'admin', client => $self, bson_codec => $self->bson_codec, ); return $self->send_read_op($op); } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::MongoClient - A connection to a MongoDB server or multi-server deployment =head1 VERSION version v1.2.2 =head1 SYNOPSIS use MongoDB; # also loads MongoDB::MongoClient # connect to localhost:27017 my $client = MongoDB::MongoClient->new; # connect to specific host and port my $client = MongoDB::MongoClient->new( host => "mongodb://mongo.example.com:27017" ); # connect to a replica set (set name *required*) my $client = MongoDB::MongoClient->new( host => "mongodb://mongo1.example.com,mongo2.example.com", replica_set_name => 'myset', ); # connect to a replica set with URI (set name *required*) my $client = MongoDB::MongoClient->new( host => "mongodb://mongo1.example.com,mongo2.example.com/?replicaSet=myset", ); my $db = $client->get_database("test"); my $coll = $db->get_collection("people"); $coll->insert({ name => "John Doe", age => 42 }); my @people = $coll->find()->all(); =head1 DESCRIPTION The C class represents a client connection to one or more MongoDB servers. By default, it connects to a single server running on the local machine listening on the default port 27017: # connects to localhost:27017 my $client = MongoDB::MongoClient->new; It can connect to a database server running anywhere, though: my $client = MongoDB::MongoClient->new(host => 'example.com:12345'); See the L attribute for more options for connecting to MongoDB. MongoDB can be started in L, which requires clients to log in before manipulating data. By default, MongoDB does not start in this mode, so no username or password is required to make a fully functional connection. To configure the client for authentication, see the L section. The actual socket connections are lazy and created on demand. When the client object goes out of scope, all socket will be closed. Note that L, L and related classes could hold a reference to the client as well. 
Only when all references are out of scope will the sockets be closed. =head1 ATTRIBUTES =head2 host The C attribute specifies either a single server to connect to (as C or C), or else a L with a seed list of one or more servers plus connection options. Defaults to the connection string URI C. For IPv6 support, you must have a recent version of L installed. This module ships with the Perl core since v5.20.0 and is available on CPAN for older Perls. =head2 auth_mechanism This attribute determines how the client authenticates with the server. Valid values are: =over 4 =item * NONE =item * DEFAULT =item * MONGODB-CR =item * MONGODB-X509 =item * GSSAPI =item * PLAIN =item * SCRAM-SHA-1 =back If not specified, then if no username is provided, it defaults to NONE. If a username is provided, it is set to DEFAULT, which chooses SCRAM-SHA-1 if available or MONGODB-CR otherwise. This may be set in a connection string with the C option. =head2 auth_mechanism_properties This is an optional hash reference of authentication mechanism specific properties. See L for details. This may be set in a connection string with the C option. If given, the value must be key/value pairs joined with a ":". Multiple pairs must be separated by a comma. If ": or "," appear in a key or value, they must be URL encoded. =head2 bson_codec An object that provides the C and C methods, such as from L. It may be initialized with a hash reference that will be coerced into a new L object. If not provided, one will be generated as follows: MongoDB::BSON->new( dbref_callback => sub { return MongoDB::DBRef->new(shift) }, dt_type => $client->dt_type, prefer_numeric => $MongoDB::BSON::looks_like_number || 0, ( $MongoDB::BSON::char ne '$' ? ( op_char => $MongoDB::BSON::char ) : () ), ); This will inflate all DBRefs to L objects, set C based on the client's C accessor, and set the C and C attributes based on the deprecated legacy global variables. =head2 connect_timeout_ms This attribute specifies the amount of time in milliseconds to wait for a new connection to a server. The default is 10,000 ms. If set to a negative value, connection operations will block indefinitely until the server replies or until the operating system TCP/IP stack gives up (e.g. if the name can't resolve or there is no process listening on the target host/port). A zero value polls the socket during connection and is thus likely to fail except when talking to a local process (and perhaps even then). This may be set in a connection string with the C option. =head2 db_name Optional. If an L requires a database for authentication, this attribute will be used. Otherwise, it will be ignored. Defaults to "admin". This may be provided in the L as a path between the authority and option parameter sections. For example, to authenticate against the "admin" database (showing a configuration option only for illustration): mongodb://localhost/admin?readPreference=primary =head2 heartbeat_frequency_ms The time in milliseconds (non-negative) between scans of all servers to check if they are up and update their latency. Defaults to 60,000 ms. This may be set in a connection string with the C option. =head2 j If true, the client will block until write operations have been committed to the server's journal. Prior to MongoDB 2.6, this option was ignored if the server was running without journaling. Starting with MongoDB 2.6, write operations will fail if this option is used when the server is running without journaling. 
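For example, a client that waits for journal commit on every write might be constructed like this (a sketch only; the host name is a placeholder):

    my $client = MongoDB::MongoClient->new(
        host => "mongodb://mongo.example.com",
        j    => 1,
    );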
This may be set in a connection string with the C<journal> option as the strings 'true' or 'false'.

=head2 local_threshold_ms

The width of the 'latency window': when choosing between multiple suitable servers for an operation, the acceptable delta in milliseconds (non-negative) between shortest and longest average round-trip times. Servers within the latency window are selected randomly.

Set this to "0" to always select the server with the shortest average round trip time. Set this to a very high value to always randomly choose any known server.

Defaults to 15 ms.

See L</SERVER SELECTION> for more details.

This may be set in a connection string with the C<localThresholdMS> option.

=head2 max_time_ms

Specifies the maximum amount of time in (non-negative) milliseconds that the server should use for working on a database command. Defaults to 0, which disables this feature. Make sure this value is shorter than C<socket_timeout_ms>.

B<Note>: this will only be used for server versions 2.6 or greater, as that was when the C<$maxTimeMS> meta-operator was introduced.

You are B<strongly> encouraged to set this variable if you know your environment has MongoDB 2.6 or later, as getting a definitive error response from the server is vastly preferred over getting a network socket timeout.

This may be set in a connection string with the C<maxTimeMS> option.

=head2 password

If an L</auth_mechanism> requires a password, this attribute will be used. Otherwise, it will be ignored.

This may be provided in the connection string URI as a C<username:password> pair in the leading portion of the authority section before a C<@> character. For example, to authenticate as user "mulder" with password "trustno1":

    mongodb://mulder:trustno1@localhost

If the username or password has a ":" or "@" in it, it must be URL encoded. An empty password still requires a ":" character.

=head2 port

If a network port is not specified as part of the C<host> attribute, this attribute provides the port to use. It defaults to 27017.

=head2 read_pref_mode

The read preference mode determines which server types are candidates for a read operation. Valid values are:

=over 4

=item * primary

=item * primaryPreferred

=item * secondary

=item * secondaryPreferred

=item * nearest

=back

For core documentation on read preference see L.

This may be set in a connection string with the C<readPreference> option.

=head2 read_pref_tag_sets

The C<read_pref_tag_sets> parameter is an ordered list of tag sets used to restrict the eligibility of servers, such as for data center awareness. It must be an array reference of hash references.

The application of C<read_pref_tag_sets> varies depending on the C<read_pref_mode> parameter. If the C<read_pref_mode> is 'primary', then C<read_pref_tag_sets> must not be supplied.

For core documentation on read preference see L.

This may be set in a connection string with the C<readPreferenceTags> option. If given, the value must be key/value pairs joined with a ":". Multiple pairs must be separated by a comma. If ":" or "," appear in a key or value, they must be URL encoded. The C<readPreferenceTags> option may appear more than once, in which case each document will be added to the tag set list.

=head2 replica_set_name

Specifies the replica set name to connect to. If this string is non-empty, then the topology is treated as a replica set and all server replica set names must match this or they will be removed from the topology.

This may be set in a connection string with the C<replicaSet> option.

=head2 server_selection_timeout_ms

This attribute specifies the amount of time in milliseconds to wait for a suitable server to be available for a read or write operation. If no server is available within this time period, an exception will be thrown. The default is 30,000 ms.

See L</SERVER SELECTION> for more details.
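As an illustration (a sketch only; the host is a placeholder, and C<MongoDB::SelectionError> is assumed to be the exception class thrown when no suitable server is found), an unreachable deployment surfaces as an exception once the timeout elapses, which can be trapped with L<Try::Tiny> and L<Safe::Isa>:

    use Try::Tiny;
    use Safe::Isa;

    my $client = MongoDB::MongoClient->new(
        host                        => "mongodb://db1.example.com",
        server_selection_timeout_ms => 5000,
    );

    try {
        $client->connect;    # forces a scan of all servers in the deployment
    }
    catch {
        # assumed exception class for server-selection failures
        warn "no suitable server: $_" if $_->$_isa("MongoDB::SelectionError");
    };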
This may be set in a connection string with the C option. =head2 server_selection_try_once This attribute controls whether the client will make only a single attempt to find a suitable server for a read or write operation. The default is true. When true, the client will B use the C. Instead, if the topology information is stale and needs to be checked or if no suitable server is available, the client will make a single scan of all known servers to try to find a suitable one. When false, the client will continually scan known servers until a suitable server is found or the C is reached. See L for more details. This may be set in a connection string with the C option. =head2 socket_check_interval_ms If a socket to a server has not been used in this many milliseconds, an C command will be issued to check the status of the server before issuing any reads or writes. Must be non-negative. The default is 5,000 ms. This may be set in a connection string with the C option. =head2 socket_timeout_ms This attribute specifies the amount of time in milliseconds to wait for a reply from the server before issuing a network exception. The default is 30,000 ms. If set to a negative value, socket operations will block indefinitely until the server replies or until the operating system TCP/IP stack gives up. A zero value polls the socket for available data and is thus likely to fail except when talking to a local process (and perhaps even then). This may be set in a connection string with the C option. =head2 ssl ssl => 1 ssl => \%ssl_options This tells the driver that you are connecting to an SSL mongodb instance. You must have L 1.42+ and L 1.49+ installed for SSL support. The C attribute takes either a boolean value or a hash reference of options to pass to IO::Socket::SSL. For example, to set a CA file to validate the server certificate and set a client certificate for the server to validate, you could set the attribute like this: ssl => { SSL_ca_file => "/path/to/ca.pem", SSL_cert_file => "/path/to/client.pem", } If C is not provided, server certificates are verified against a default list of CAs, either L or an operating-system-specific default CA file. To disable verification, you can use C<< SSL_verify_mode => 0x00 >>. B. Server hostnames are also validated against the CN name in the server certificate using C<< SSL_verifycn_scheme => 'http' >>. You can use the scheme 'none' to disable this check. B. This may be set to the string 'true' or 'false' in a connection string with the C option, which will enable ssl with default configuration. (A future version of the driver may support customizing ssl via the connection string.) =head2 username Optional username for this client connection. If this field is set, the client will attempt to authenticate when connecting to servers. Depending on the L, the L field or other attributes will need to be set for authentication to succeed. This may be provided in the L as a C pair in the leading portion of the authority section before a C<@> character. For example, to authenticate as user "mulder" with password "trustno1": mongodb://mulder:trustno1@localhost If the username or password have a ":" or "@" in it, they must be URL encoded. An empty password still requires a ":" character. =head2 w The client I. =over 4 =item * C<0> Unacknowledged. MongoClient will B wait for an acknowledgment that the server has received and processed the request. Older documentation may refer to this as "fire-and-forget" mode. This option is not recommended. =item * C<1> Acknowledged. 
This is the default. MongoClient will wait until the primary MongoDB acknowledges the write. =item * C<2> Replica acknowledged. MongoClient will wait until at least two replicas (primary and one secondary) acknowledge the write. You can set a higher number for more replicas. =item * C All replicas acknowledged. =item * C A majority of replicas acknowledged. =back In MongoDB v2.0+, you can "tag" replica members. With "tagging" you can specify a custom write concern For more information see L This may be set in a connection string with the C option. =head2 wtimeout The number of milliseconds an operation should wait for C secondaries to replicate it. Defaults to 1000 (1 second). See C above for more information. This may be set in a connection string with the C option. =head2 read_concern_level The read concern level determines the consistency level required of data being read. The default level is C, which means the server will use its configured default. If the level is set to "local", reads will return the latest data a server has locally. Additional levels are storage engine specific. See L in the MongoDB documentation for more details. This may be set in a connection string with the the C option. =head2 dt_type (DEPRECATED AND READ-ONLY) Sets the type of object which is returned for DateTime fields. The default is L. Other acceptable values are L, L and C. The latter will give you the raw epoch value (possibly as a floating point value) rather than an object. This will be used to construct L object if one is not provided. As this has a one-time effect, it is now read-only to help you detect code that was trying to change after the fact during program execution. For temporary or localized changes, look into overriding the C object for a database or collection object. =head2 query_timeout (DEPRECATED AND READ-ONLY) # set query timeout to 1 second my $client = MongoDB::MongoClient->new(query_timeout => 1000); This option has been renamed as L. If this option is set and that one is not, this will be used. This value is in milliseconds and defaults to 30000. =head2 sasl (DEPRECATED) If true, the driver will set the authentication mechanism based on the C property. =head2 sasl_mechanism (DEPRECATED) This specifies the SASL mechanism to use for authentication with a MongoDB server. It has the same valid values as L. The default is GSSAPI. =head2 timeout (DEPRECATED AND READ-ONLY) This option has been renamed as L. If this option is set and that one is not, this will be used. Connection timeout is in milliseconds. Defaults to C<10000>. =head1 METHODS =head2 read_preference Returns a L object constructed from L and L B as a mutator has been removed.> Read preference is read-only. If you need a different read preference for a database or collection, you can specify that in C or C. =head2 write_concern Returns a L object constructed from L, L and L. =head2 read_concern Returns a L object constructed from L. =head2 topology_type Returns an enumerated topology type. If the L is set, the value will be either 'ReplicaSetWithPrimary' or 'ReplicaSetNoPrimary' (if the primary is down or not yet discovered). Without L, the type will be 'Single' if there is only one server in the list of hosts, and 'Sharded' if there are more than one. N.B. A single mongos will have a topology type of 'Single', as that mongos will be used for all reads and writes, just like a standalone mongod. 
The 'Sharded' type indicates a sharded cluster with multiple mongos servers, and reads/writes will be distributed acc =head2 connect $client->connect; Calling this method is unnecessary, as connections are established automatically as needed. It is kept for backwards compatibility. Calling it will check all servers in the deployment which ensures a connection to any that are available. See L for a method that is useful when using forks or threads. =head2 disconnect $client->disconnect; Drops all connections to servers. =head2 reconnect $client->reconnect; This method closes all connections to the server, as if L were called, and then immediately reconnects. Use this after forking or spawning off a new thread. =head2 topology_status $client->topology_status; $client->topology_status( refresh => 1 ); Returns a hash reference with server topology information like this: { 'topology_type' => 'ReplicaSetWithPrimary' 'replica_set_name' => 'foo', 'last_scan_time' => '1433766895.183241', 'servers' => [ { 'address' => 'localhost:50003', 'ewma_rtt_ms' => '0.223462326', 'type' => 'RSSecondary' }, { 'address' => 'localhost:50437', 'ewma_rtt_ms' => '0.268435456', 'type' => 'RSArbiter' }, { 'address' => 'localhost:50829', 'ewma_rtt_ms' => '0.737782272', 'type' => 'RSPrimary' } }, } If the 'refresh' argument is true, then the topology will be scanned to update server data before returning the hash reference. =head2 database_names my @dbs = $client->database_names; Lists all databases on the MongoDB server. =head2 get_database, db my $database = $client->get_database('foo'); my $database = $client->get_database('foo', $options); my $database = $client->db('foo', $options); Returns a L instance for the database with the given C<$name>. It takes an optional hash reference of options that are passed to the L constructor. The C method is an alias for C. =head2 get_namespace, ns my $collection = $client->get_namespace('test.foo'); my $collection = $client->get_namespace('test.foo', $options); my $collection = $client->ns('test.foo', $options); Returns a L instance for the given namespace. The namespace has both the database name and the collection name separated with a dot character. This is a quick way to get a collection object if you don't need the database object separately. It takes an optional hash reference of options that are passed to the L constructor. The intermediate L object will be created with default options. The C method is an alias for C. =head2 fsync(\%args) $client->fsync(); A function that will forces the server to flush all pending writes to the storage layer. The fsync operation is synchronous by default, to run fsync asynchronously, use the following form: $client->fsync({async => 1}); The primary use of fsync is to lock the database during backup operations. This will flush all data to the data storage layer and block all write operations until you unlock the database. Note: you can still read while the database is locked. $conn->fsync({lock => 1}); =head2 fsync_unlock $conn->fsync_unlock(); Unlocks a database server to allow writes and reverses the operation of a $conn->fsync({lock => 1}); operation. =for Pod::Coverage connected send_admin_command send_direct_op send_read_op send_write_op =head1 DEPLOYMENT TOPOLOGY MongoDB can operate as a single server or as a distributed system. One or more servers that collectively provide access to a single logical set of MongoDB databases are referred to as a "deployment". 
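For instance, once a client has scanned a deployment, the methods described above report what it discovered (a sketch only; the host names and set name are placeholders):

    my $client = MongoDB::MongoClient->new(
        host             => "mongodb://db1.example.com,db2.example.com",
        replica_set_name => 'myset',
    );
    $client->connect;                        # scan all servers now

    print $client->topology_type, "\n";      # e.g. 'ReplicaSetWithPrimary'
    my $status = $client->topology_status;   # hashref with per-server details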
There are three types of deployments: =over 4 =item * Single server – a stand-alone mongod database =item * Replica set – a set of mongod databases with data replication and fail-over capability =item * Sharded cluster – a distributed deployment that spreads data across one or more shards, each of which can be a replica set. Clients communicate with a mongos process that routes operations to the correct share. =back The state of a deployment, including its type, which servers are members, the server types of members and the round-trip network latency to members is referred to as the "topology" of the deployment. To the greatest extent possible, the MongoDB driver abstracts away the details of communicating with different deployment types. It determines the deployment topology through a combination of the connection string, configuration options and direct discovery communicating with servers in the deployment. =head1 CONNECTION STRING URI MongoDB uses a pseudo-URI connection string to specify one or more servers to connect to, along with configuration options. To connect to more than one database server, provide host or host:port pairs as a comma separated list: mongodb://host1[:port1][,host2[:port2],...[,hostN[:portN]]] This list is referred to as the "seed list". An arbitrary number of hosts can be specified. If a port is not specified for a given host, it will default to 27017. If multiple hosts are given in the seed list or discovered by talking to servers in the seed list, they must all be replica set members or must all be mongos servers for a sharded cluster. A replica set B have the C option set to the replica set name. If there is only single host in the seed list and C is not provided, the deployment is treated as a single server deployment and all reads and writes will be sent to that host. Providing a replica set member as a single host without the set name is the way to get a "direct connection" for carrying out administrative activities on that server. The connection string may also have a username and password: mongodb://username:password@host1:port1,host2:port2 The username and password must be URL-escaped. A optional database name for authentication may be given: mongodb://username:password@host1:port1,host2:port2/my_database Finally, connection string options may be given as URI attribute pairs in a query string: mongodb://host1:port1,host2:port2/?ssl=1&wtimeoutMS=1000 mongodb://username:password@host1:port1,host2:port2/my_database?ssl=1&wtimeoutMS=1000 The currently supported connection string options are: =over 4 *authMechanism *authMechanism.SERVICE_NAME *connectTimeoutMS *journal *readPreference *readPreferenceTags *replicaSet *ssl *w *wtimeoutMS =back See the official MongoDB documentation on connection strings for more on the URI format and connection string options: L. =head1 SERVER SELECTION For a single server deployment or a direct connection to a mongod or mongos, all reads and writes and sent to that server. Any read-preference is ignored. When connected to a deployment with multiple servers, such as a replica set or sharded cluster, the driver chooses a server for operations based on the type of operation (read or write), the types of servers available and a read preference. For a replica set deployment, writes are sent to the primary (if available) and reads are sent to a server based on the L attribute, which defaults to sending reads to the primary. See L for more. 
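For example, a client that prefers reads from secondaries tagged for a particular data center could be configured like this (a sketch only; the tag name and value are placeholders, and the trailing empty tag set allows any server as a fallback):

    my $client = MongoDB::MongoClient->new(
        host               => "mongodb://db1.example.com,db2.example.com",
        replica_set_name   => 'myset',
        read_pref_mode     => 'secondaryPreferred',
        read_pref_tag_sets => [ { dc => 'east' }, {} ],
    );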
For a sharded cluster reads and writes are distributed across mongos servers in the seed list. Any read preference is passed through to the mongos and used by it when executing reads against shards. If multiple servers can service an operation (e.g. multiple mongos servers, or multiple replica set members), one is chosen at random from within the "latency window". The server with the shortest average round-trip time (RTT) is always in the window. Any servers with an average round-trip time less than or equal to the shortest RTT plus the L are also in the latency window. If a suitable server is not immediately available, what happens next depends on the L option. If that option is true, a single topology scan will be performed. Afterwards if a suitable server is available, it will be returned; otherwise, an exception is thrown. If that option is false, the driver will do topology scans repeatedly looking for a suitable server. When more than L milliseconds have elapsed since the start of server selection without a suitable server being found, an exception is thrown. B: the actual maximum wait time for server selection could be as long C plus the amount of time required to do a topology scan. =head1 SERVER MONITORING AND FAILOVER When the client first needs to find a server for a database operation, all servers from the L attribute are scanned to determine which servers to monitor. If the deployment is a replica set, additional hosts may be discovered in this process. Invalid hosts are dropped. After the initial scan, whenever the servers have not been checked in L milliseconds, the scan will be repeated. This amortizes monitoring time over many of operations. Additionally, if a socket has been idle for a while, it will be checked before being used for an operation. If a server operation fails because of a "not master" or "node is recovering" error, or if there is a network error or timeout, then the server is flagged as unavailable and exception will be thrown. See L for exception types. If the error is caught and handled, the next operation will rescan all servers immediately to update its view of the topology. The driver can continue to function as long as servers are suitable per L. When catching an exception, users must determine whether or not their application should retry an operation based on the specific operation attempted and other use-case-specific considerations. For automating retries despite exceptions, consider using the L module. =head1 AUTHENTICATION The MongoDB server provides several authentication mechanisms, though some are only available in the Enterprise edition. MongoDB client authentication is controlled via the L attribute, which takes one of the following values: =over 4 =item * MONGODB-CR -- legacy username-password challenge-response =item * SCRAM-SHA-1 -- secure username-password challenge-response (3.0+) =item * MONGODB-X509 -- SSL client certificate authentication (2.6+) =item * PLAIN -- LDAP authentication via SASL PLAIN (Enterprise only) =item * GSSAPI -- Kerberos authentication (Enterprise only) =back The mechanism to use depends on the authentication configuration of the server. See the core documentation on authentication: L. Usage information for each mechanism is given below. =head2 MONGODB-CR and SCRAM-SHA-1 (for username/password) These mechnisms require a username and password, given either as constructor attributes or in the C connection string. 
If a username is provided and an authentication mechanism is not specified, the client will use SCRAM-SHA-1 for version 3.0 or later servers and will fall back to MONGODB-CR for older servers. my $mc = MongoDB::MongoClient->new( host => "mongodb://mongo.example.com/", username => "johndoe", password => "trustno1", ); my $mc = MongoDB::MongoClient->new( host => "mongodb://johndoe:trustno1@mongo.example.com/", ); Usernames and passwords will be UTF-8 encoded before use. The password is never sent over the wire -- only a secure digest is used. The SCRAM-SHA-1 mechanism is the Salted Challenge Response Authentication Mechanism definedin L. The default database for authentication is 'admin'. If another database name should be used, specify it with the C attribute or via the connection string. db_name => auth_db mongodb://johndoe:trustno1@mongo.example.com/auth_db =head2 MONGODB-X509 (for SSL client certificate) X509 authentication requires SSL support (L) and requires that a client certificate be configured and that the username attribute be set to the "Subject" field, formatted according to RFC 2253. To find the correct username, run the C program as follows: $ openssl x509 -in certs/client.pem -inform PEM -subject -nameopt RFC2253 subject= CN=XXXXXXXXXXX,OU=XXXXXXXX,O=XXXXXXX,ST=XXXXXXXXXX,C=XX In this case the C attribute would be C. Configure your client with the correct username and ssl parameters, and specify the "MONGODB-X509" authentication mechanism. my $mc = MongoDB::MongoClient->new( host => "mongodb://sslmongo.example.com/", ssl => { SSL_ca_file => "certs/ca.pem", SSL_cert_file => "certs/client.pem", }, auth_mechanism => "MONGODB-X509", username => "CN=XXXXXXXXXXX,OU=XXXXXXXX,O=XXXXXXX,ST=XXXXXXXXXX,C=XX" ); =head2 PLAIN (for LDAP) This mechanism requires a username and password, which will be UTF-8 encoded before use. The C parameter must be given as a constructor attribute or in the C connection string: my $mc = MongoDB::MongoClient->new( host => "mongodb://mongo.example.com/", username => "johndoe", password => "trustno1", auth_mechanism => "PLAIN", ); my $mc = MongoDB::MongoClient->new( host => "mongodb://johndoe:trustno1@mongo.example.com/authMechanism=PLAIN", ); =head2 GSSAPI (for Kerberos) Kerberos authentication requires the CPAN module L and a GSSAPI-capable backend. On Debian systems, L may be available as C; on RHEL systems, it may be available as C. The L backend comes with L and requires the L CPAN module for GSSAPI support. On Debian systems, this may be available as C; on RHEL systems, it may be available as C. Installing the L module from CPAN rather than an OS package requires C and the C utility (available for Debian/RHEL systems in the C package). Alternatively, the L or L modules may be used. Both rely on Cyrus C. L is preferred, but not yet available as an OS package. L is available on Debian as C and on RHEL as C. Installing L or L from CPAN requires C. On Debian systems, it is available from C; on RHEL, it is available in C. 
To use the GSSAPI mechanism, first run C to authenticate with the ticket granting service: $ kinit johndoe@EXAMPLE.COM Configure MongoDB::MongoClient with the principal name as the C parameter and specify 'GSSAPI' as the C: my $mc = MongoDB::MongoClient->new( host => 'mongodb://mongo.example.com', username => 'johndoe@EXAMPLE.COM', auth_mechanism => 'GSSAPI', ); Both can be specified in the C connection string, keeping in mind that the '@' in the principal name must be encoded as "%40": my $mc = MongoDB::MongoClient->new( host => 'mongodb://johndoe%40EXAMPLE.COM@mongo.examplecom/?authMechanism=GSSAPI', ); The default service name is 'mongodb'. It can be changed with the C attribute or in the connection string. auth_mechanism_properties => { SERVICE_NAME => 'other_service' } mongodb://.../?authMechanism=GSSAPI&authMechanismProperties=SERVICE_NAME:other_service =head1 THREAD-SAFETY AND FORK-SAFETY You B call the L method on any MongoDB::MongoClient objects after forking or spawning a thread. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/OID.pm000644 000765 000024 00000012503 12651754051 016434 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::OID; # ABSTRACT: A Mongo Object ID use version; our $VERSION = 'v1.2.2'; use MongoDB::BSON; use Moo; use MongoDB; use MongoDB::_Constants; use MongoDB::_Types qw( OID ); use Types::Standard qw( Str ); use namespace::clean; #pod =head1 ATTRIBUTES #pod #pod =head2 value #pod #pod The OID value. A random value will be generated if none exists already. #pod It is a 24-character hexidecimal string (12 bytes). #pod #pod Its string representation is the 24-character string. #pod #pod =cut has value => ( is => 'ro', required => 1, builder => '_build_value', isa => OID, coerce => OID->coercion, ); # XXX need to set up typedef with str length # msg: "OIDs need to have a length of 24 bytes" sub _build_value { my ($self) = @_; return MongoDB::BSON::generate_oid(); } around BUILDARGS => sub { my $orig = shift; my $class = shift; if ( @_ == 0 ) { return { value => MongoDB::BSON::generate_oid() }; } if ( @_ == 1 ) { return { value => "$_[0]" }; } return $orig->($class, @_); }; # This private constructor bypasses everything Moo does for us and just # jams an OID into a blessed hashref. This is only for use in super-hot # code paths, like document insertion. sub _new_oid { return bless { value => MongoDB::BSON::generate_oid() }, $_[0]; } #pod =head1 METHODS #pod #pod =head2 to_string #pod #pod my $hex = $oid->to_string; #pod #pod Gets the value of this OID as a 24-digit hexidecimal string. 
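#pod
#pod Because the class overloads stringification (see the C<use overload>
#pod statement below), simply interpolating the object yields the same string:
#pod
#pod     my $hex = "$oid";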
#pod #pod =cut sub to_string { $_[0]->{value} } #pod =head2 get_time #pod #pod my $date = DateTime->from_epoch(epoch => $id->get_time); #pod #pod Each OID contains a 4 bytes timestamp from when it was created. This method #pod extracts the timestamp. #pod #pod =cut sub get_time { my ($self) = @_; return hex(substr($self->value, 0, 8)); } # for testing purposes sub _get_pid { my ($self) = @_; return hex(substr($self->value, 14, 4)); } #pod =head2 TO_JSON #pod #pod my $json = JSON->new; #pod $json->allow_blessed; #pod $json->convert_blessed; #pod #pod $json->encode(MongoDB::OID->new); #pod #pod Returns a JSON string for this OID. This is compatible with the strict JSON #pod representation used by MongoDB, that is, an OID with the value #pod "012345678901234567890123" will be represented as #pod C<{"$oid" : "012345678901234567890123"}>. #pod #pod =cut sub TO_JSON { my ($self) = @_; return {'$oid' => $self->value}; } use overload '""' => \&to_string, 'fallback' => 1; 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::OID - A Mongo Object ID =head1 VERSION version v1.2.2 =head1 SYNOPSIS If no C<_id> field is provided when a document is inserted into the database, an C<_id> field will be added with a new C as its value. my $id = $collection->insert({'name' => 'Alice', age => 20}); C<$id> will be a C that can be used to retrieve or update the saved document: $collection->update({_id => $id}, {'age' => {'$inc' => 1}}); # now Alice is 21 To create a copy of an existing OID, you must set the value attribute in the constructor. For example: my $id1 = MongoDB::OID->new; my $id2 = MongoDB::OID->new(value => $id1->value); my $id3 = MongoDB::OID->new($id1->value); my $id4 = MongoDB::OID->new($id1); Now C<$id1>, C<$id2>, C<$id3> and C<$id4> will have the same value. OID generation is thread safe. =head1 ATTRIBUTES =head2 value The OID value. A random value will be generated if none exists already. It is a 24-character hexidecimal string (12 bytes). Its string representation is the 24-character string. =head1 METHODS =head2 to_string my $hex = $oid->to_string; Gets the value of this OID as a 24-digit hexidecimal string. =head2 get_time my $date = DateTime->from_epoch(epoch => $id->get_time); Each OID contains a 4 bytes timestamp from when it was created. This method extracts the timestamp. =head2 TO_JSON my $json = JSON->new; $json->allow_blessed; $json->convert_blessed; $json->encode(MongoDB::OID->new); Returns a JSON string for this OID. This is compatible with the strict JSON representation used by MongoDB, that is, an OID with the value "012345678901234567890123" will be represented as C<{"$oid" : "012345678901234567890123"}>. =head1 SEE ALSO Core documentation on object ids: L. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Op/000755 000765 000024 00000000000 12651754051 016040 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/lib/MongoDB/QueryResult/000755 000765 000024 00000000000 12651754051 017766 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/lib/MongoDB/QueryResult.pm000644 000765 000024 00000022472 12651754051 020333 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::QueryResult; # ABSTRACT: An iterator for Mongo query results use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::_Constants; use MongoDB::Op::_GetMore; use MongoDB::Op::_KillCursors; use MongoDB::_Types qw( BSONCodec HostAddress ); use Types::Standard qw( Maybe ArrayRef Any InstanceOf Int HashRef Num Str ); use namespace::clean; with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_Cursor ); # attributes needed for get more has _client => ( is => 'rw', required => 1, isa => InstanceOf['MongoDB::MongoClient'], ); has _address => ( is => 'ro', required => 1, isa => HostAddress, ); has _ns => ( is => 'ro', required => 1, isa => Str, ); has _bson_codec => ( is => 'ro', required => 1, isa => BSONCodec, ); has _batch_size => ( is => 'ro', required => 1, isa => Int, ); has _max_time_ms => ( is => 'ro', isa => Maybe[Num], ); # attributes for tracking progress has _cursor_at => ( is => 'ro', required => 1, isa => Num, ); sub _inc_cursor_at { $_[0]{_cursor_at}++ } has _limit => ( is => 'ro', required => 1, isa => Num, ); # attributes from actual results # integer or MongoDB::_CursorID or Math::BigInt has _cursor_id => ( is => 'ro', required => 1, writer => '_set_cursor_id', isa => Any, ); has _cursor_start => ( is => 'ro', required => 1, writer => '_set_cursor_start', isa => Num, ); has _cursor_flags => ( is => 'ro', required => 1, writer => '_set_cursor_flags', isa => HashRef, ); has _cursor_num => ( is => 'ro', required => 1, isa => Num, ); sub _inc_cursor_num { $_[0]{_cursor_num}++ } has _docs => ( is => 'ro', required => 1, isa => ArrayRef, ); sub _drained { ! @{$_[0]{_docs}} } sub _doc_count { scalar @{$_[0]{_docs}} } sub _add_docs { my $self = shift; push @{$self->{_docs}}, @_; } sub _next_doc { shift @{$_[0]{_docs}} } sub _drain_docs { my @docs = @{$_[0]{_docs}}; $_[0]{_cursor_at} += scalar @docs; @{$_[0]{_docs}} = (); return @docs; } # for backwards compatibility sub started_iterating() { 1 } sub _info { my ($self) = @_; return { flag => $self->_cursor_flags, cursor_id => $self->_cursor_id, start => $self->_cursor_start, at => $self->_cursor_at, num => $self->_cursor_num, }; } #pod =method has_next #pod #pod if ( $response->has_next ) { #pod ... #pod } #pod #pod Returns true if additional documents are available. This will #pod attempt to get another batch of documents from the server if #pod necessary. #pod #pod =cut sub has_next { my ($self) = @_; my $limit = $self->_limit; if ( $limit > 0 && ( $self->_cursor_at + 1 ) > $limit ) { $self->_kill_cursor; return 0; } return !$self->_drained || $self->_get_more; } #pod =method next #pod #pod while ( $doc = $result->next ) { #pod process_doc($doc) #pod } #pod #pod Returns the next document or C if the server cursor is exhausted. #pod #pod =cut sub next { my ($self) = @_; return unless $self->has_next; $self->_inc_cursor_at(); return $self->_next_doc; } #pod =method batch #pod #pod while ( @batch = $result->batch ) { #pod for $doc ( @batch ) { #pod process_doc($doc); #pod } #pod } #pod #pod Returns the next batch of documents or an empty list if the server cursor is exhausted. 
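#pod
#pod The size of each batch is governed by the originating query (a sketch
#pod only, assuming the C<batchSize> find option is used to control it):
#pod
#pod     my $result = $coll->find( {}, { batchSize => 500 } )->result;
#pod     while ( my @batch = $result->batch ) {
#pod         process_doc($_) for @batch;
#pod     }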
#pod #pod =cut sub batch { my ($self) = @_; return unless $self->has_next; return $self->_drain_docs; } sub _get_more { my ($self) = @_; return 0 if $self->_cursor_id == 0; my $limit = $self->_limit; my $want = $limit > 0 ? ( $limit - $self->_cursor_at ) : $self->_batch_size; my ($db_name, $coll_name) = split(/\./, $self->_ns, 2); my $op = MongoDB::Op::_GetMore->_new( ns => $self->_ns, db_name => $db_name, coll_name => $coll_name, client => $self->_client, bson_codec => $self->_bson_codec, cursor_id => $self->_cursor_id, batch_size => $want, max_time_ms => $self->_max_time_ms, ); my $result = $self->_client->send_direct_op( $op, $self->_address ); $self->_set_cursor_id( $result->{cursor_id} ); $self->_set_cursor_flags( $result->{flags} ); $self->_set_cursor_start( $result->{starting_from} ); $self->_inc_cursor_num( $result->{number_returned} ); $self->_add_docs( @{ $result->{docs} } ); return scalar @{ $result->{docs} }; } #pod =method all #pod #pod @docs = $result->all; #pod #pod Returns all documents as a list. #pod #pod =cut sub all { my ($self) = @_; my @ret; push @ret, $self->_drain_docs while $self->has_next; return @ret; } sub _kill_cursor { my ($self) = @_; my $cursor_id = $self->_cursor_id; return if !defined $cursor_id || $cursor_id == 0; my $op = MongoDB::Op::_KillCursors->_new( cursor_ids => [ $cursor_id ], ); $self->_client->send_direct_op( $op, $self->_address ); $self->_set_cursor_id(0); } sub DEMOLISH { my ($self) = @_; $self->_kill_cursor; } #pod =head1 SYNOPSIS #pod #pod $cursor = $coll->find( $filter ); #pod $result = $cursor->result; #pod #pod while ( $doc = $result->next ) { #pod process_doc($doc) #pod } #pod #pod =head1 DESCRIPTION #pod #pod This class defines an iterator against a query result. It automatically #pod fetches additional results from the originating mongod/mongos server #pod on demand. #pod #pod For backwards compatibility reasons, L encapsulates query #pod parameters and generates a C object on demand. All #pod iterators on C delegate to C object. #pod #pod Retrieving this object and iterating on it directly will be slightly #pod more efficient. #pod #pod =head1 USAGE #pod #pod =head2 Error handling #pod #pod Unless otherwise explictly documented, all methods throw exceptions if #pod an error occurs. The error types are documented in L. #pod #pod To catch and handle errors, the L and L modules #pod are recommended: #pod #pod =head2 Cursor destruction #pod #pod When a C object is destroyed, a cursor termination #pod request will be sent to the originating server to free server resources. #pod #pod =cut 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::QueryResult - An iterator for Mongo query results =head1 VERSION version v1.2.2 =head1 SYNOPSIS $cursor = $coll->find( $filter ); $result = $cursor->result; while ( $doc = $result->next ) { process_doc($doc) } =head1 DESCRIPTION This class defines an iterator against a query result. It automatically fetches additional results from the originating mongod/mongos server on demand. For backwards compatibility reasons, L encapsulates query parameters and generates a C object on demand. All iterators on C delegate to C object. Retrieving this object and iterating on it directly will be slightly more efficient. =head1 USAGE =head2 Error handling Unless otherwise explictly documented, all methods throw exceptions if an error occurs. The error types are documented in L. 
To catch and handle errors, the L and L modules are recommended: =head2 Cursor destruction When a C object is destroyed, a cursor termination request will be sent to the originating server to free server resources. =head1 METHODS =head2 has_next if ( $response->has_next ) { ... } Returns true if additional documents are available. This will attempt to get another batch of documents from the server if necessary. =head2 next while ( $doc = $result->next ) { process_doc($doc) } Returns the next document or C if the server cursor is exhausted. =head2 batch while ( @batch = $result->batch ) { for $doc ( @batch ) { process_doc($doc); } } Returns the next batch of documents or an empty list if the server cursor is exhausted. =head2 all @docs = $result->all; Returns all documents as a list. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/ReadConcern.pm000644 000765 000024 00000006301 12651754051 020203 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::ReadConcern; # ABSTRACT: Encapsulate and validate a read concern use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use Types::Standard qw( Maybe Str ArrayRef ); use namespace::clean; #pod =attr level #pod #pod The read concern level determines the consistency level required #pod of data being read. #pod #pod The default level is C, which means the server will use its configured #pod default. #pod #pod If the level is set to "local", reads will return the latest data a server has #pod locally. #pod #pod Additional levels are storage engine specific. See L in the MongoDB #pod documentation for more details. #pod #pod This may be set in a connection string with the the C option. #pod #pod =cut has level => ( is => 'ro', isa => Maybe [Str], predicate => 'has_level', ); has _as_args => ( is => 'lazy', isa => ArrayRef, reader => 'as_args', builder => '_build_as_args', ); sub _build_as_args { my ($self) = @_; if ( $self->{level} ) { return [ readConcern => { level => $self->{level} } ]; } else { return []; } } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::ReadConcern - Encapsulate and validate a read concern =head1 VERSION version v1.2.2 =head1 SYNOPSIS $rc = MongoDB::ReadConcern->new(); # no defaults $rc = MongoDB::ReadConcern->new( level => 'local', ); =head1 DESCRIPTION A Read Concern describes the constraints that MongoDB must satisfy when reading data. Read Concern was introduced in MongoDB 3.2. =head1 ATTRIBUTES =head2 level The read concern level determines the consistency level required of data being read. The default level is C, which means the server will use its configured default. If the level is set to "local", reads will return the latest data a server has locally. 
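A brief sketch of constructing the read concern object directly, matching the level attribute and the as_args builder shown in the code above:

    use MongoDB::ReadConcern;

    my $rc = MongoDB::ReadConcern->new( level => 'local' );

    # as_args returns the key/value pair merged into read commands,
    # or a reference to an empty array when no level was given
    my @pair = @{ $rc->as_args };    # ( readConcern => { level => 'local' } )
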
Additional levels are storage engine specific. See L in the MongoDB documentation for more details. This may be set in a connection string with the the C option. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/ReadPreference.pm000644 000765 000024 00000016255 12651754051 020703 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::ReadPreference; # ABSTRACT: Encapsulate and validate read preferences use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::_Types qw( ArrayOfHashRef ReadPrefMode ); use namespace::clean -except => 'meta'; use overload ( q[""] => sub { $_[0]->mode }, fallback => 1, ); #pod =attr mode #pod #pod The read preference mode determines which server types are candidates #pod for a read operation. Valid values are: #pod #pod =for :list #pod * primary #pod * primaryPreferred #pod * secondary #pod * secondaryPreferred #pod * nearest #pod #pod =cut has mode => ( is => 'ro', isa => ReadPrefMode, default => 'primary', coerce => ReadPrefMode->coercion, ); #pod =attr tag_sets #pod #pod The C parameter is an ordered list of tag sets used to restrict the #pod eligibility of servers, such as for data center awareness. #pod #pod The application of C varies depending on the C parameter. If #pod the C is 'primary', then C must not be supplied. #pod #pod =cut has tag_sets => ( is => 'ro', isa => ArrayOfHashRef, default => sub { [ {} ] }, coerce => ArrayOfHashRef->coercion, ); sub BUILD { my ($self) = @_; if ( $self->mode eq 'primary' && !$self->has_empty_tag_sets ) { MongoDB::UsageError->throw("A tag set list is not allowed with read preference mode 'primary'"); } return; } # Returns true if the C array is empty or if it consists only of a # single, empty hash reference. sub has_empty_tag_sets { my ($self) = @_; my $tag_sets = $self->tag_sets; return @$tag_sets == 0 || ( @$tag_sets == 1 && !keys %{ $tag_sets->[0] } ); } # Reformat to the document needed by mongos in $readPreference sub for_mongos { my ($self) = @_; return { mode => $self->mode, tags => $self->tag_sets, }; } # Format as a string for error messages sub as_string { my ($self) = @_; my $string = $self->mode; unless ( $self->has_empty_tag_sets ) { my @ts; for my $set ( @{ $self->tag_sets } ) { push @ts, keys(%$set) ? join( ",", map { "$_\:$set->{$_}" } sort keys %$set ) : ""; } $string .= " (" . join( ",", map { "{$_}" } @ts ) . 
")"; } return $string; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::ReadPreference - Encapsulate and validate read preferences =head1 VERSION version v1.2.2 =head1 SYNOPSIS use MongoDB::ReadPreference; $rp = MongoDB::ReadPreference->new(); # mode: primary $rp = MongoDB::ReadPreference->new( mode => 'primaryPreferred', tag_sets => [ { dc => 'useast' }, {} ], ); =head1 DESCRIPTION A read preference indicates which servers should be used for read operations. For core documentation on read preference see L. =head1 USAGE Read preferences work via two attributes: C and C. The C parameter controls the types of servers that are candidates for a read operation as well as the logic for applying the C attribute to further restrict the list. The following terminology is used in describing read preferences: =over 4 =item * candidates – based on C, servers that could be suitable, based on C and other logic =item * eligible – these are candidates that match C =item * suitable – servers that meet all criteria for a read operation =back =head2 Read preference modes =head3 primary Only an available primary is suitable. C do not apply and must not be provided or an exception is thrown. =head3 secondary All secondaries (and B secondaries) are candidates, but only eligible candidates (i.e. after applying C) are suitable. =head3 primaryPreferred Try to find a server using mode "primary" (with no C). If that fails, try to find one using mode "secondary" and the C attribute. =head3 secondaryPreferred Try to find a server using mode "secondary" and the C attribute. If that fails, try to find a server using mode "primary" (with no C). =head3 nearest The primary and all secondaries are candidates, but only eligible candidates (i.e. after applying C to all candidates) are suitable. B: in retrospect, the name "nearest" is misleading, as it implies a choice based on lowest absolute latency or geographic proximity, neither which are true. The "nearest" mode merely includes both primaries and secondaries without any preference between the two. All are filtered on C. Because of filtering, servers might not be "closest" in any sense. And if multiple servers are suitable, one is randomly chosen based on the rules for L, which again might not be the closest in absolute latency terms. =head2 Tag set matching The C parameter is a list of tag sets (i.e. key/value pairs) to try in order. The first tag set in the list to match B candidate server is used as the filter for all candidate servers. Any subsequent tag sets are ignored. A read preference tag set (C) matches a server tag set (C) – or equivalently a server tag set (C) matches a read preference tag set (C) — if C is a subset of C (i.e. C). For example, the read preference tag set C<< { dc => 'ny', rack => 2 } >> matches a secondary server with tag set C<< { dc => 'ny', rack => 2, size => 'large' } >>. A tag set that is an empty document – C<< {} >> – matches any server, because the empty tag set is a subset of any tag set. =head1 ATTRIBUTES =head2 mode The read preference mode determines which server types are candidates for a read operation. Valid values are: =over 4 =item * primary =item * primaryPreferred =item * secondary =item * secondaryPreferred =item * nearest =back =head2 tag_sets The C parameter is an ordered list of tag sets used to restrict the eligibility of servers, such as for data center awareness. The application of C varies depending on the C parameter. If the C is 'primary', then C must not be supplied. 
=for Pod::Coverage has_empty_tag_sets for_mongos as_string =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Role/000755 000765 000024 00000000000 12651754051 016363 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/lib/MongoDB/Timestamp.pm000644 000765 000024 00000003701 12651754051 017764 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Timestamp; # ABSTRACT: Replication timestamp use version; our $VERSION = 'v1.2.2'; use Moo; use Types::Standard qw( Int ); use namespace::clean -except => 'meta'; #pod =attr sec #pod #pod Seconds since epoch. #pod #pod =cut has sec => ( is => 'ro', isa => Int, required => 1, ); #pod =attr inc #pod #pod Incrementing field. #pod #pod =cut has inc => ( is => 'ro', isa => Int, required => 1, ); 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::Timestamp - Replication timestamp =head1 VERSION version v1.2.2 =head1 DESCRIPTION This is an internal type used for replication. It is not for storing dates, times, or timestamps in the traditional sense. Unless you are looking to mess with MongoDB's replication internals, the class you are probably looking for is L. See L for more information. =head1 ATTRIBUTES =head2 sec Seconds since epoch. =head2 inc Incrementing field. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Tutorial.pod000644 000765 000024 00000022237 12651754051 017777 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 10gen, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # PODNAME: MongoDB::Tutorial # ABSTRACT: Getting started with MongoDB __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::Tutorial - Getting started with MongoDB =head1 VERSION version v1.2.2 =head1 DESCRIPTION The tutorial runs through the basic functionality of the MongoDB package. This is a good starting point if you have never used MongoDB before. The tutorial assumes that you are running a B MongoDB database server (i.e. not a replica set) locally on the default port. 
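If your test server is not running locally on the default port, the examples that follow still apply; simply pass a connection string when connecting (the host and port here are illustrative):

    my $client = MongoDB->connect("mongodb://db.example.com:27018");
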
You can download MongoDB from L. =head1 TERMINOLOGY Document-oriented database terms and their relational equivalents: =over =item Database Database =item Collection Table =item Document Record or row =item L Autoincrementing primary key =back =head1 PREAMBLE To use MongoDB, you'll usually just start with: use MongoDB; The L module loads most of the modules you'll need to interact with MongoDB: =over 4 =item * L =item * L =item * L =item * Query result classes like L and L =item * Write result classes like L and L =back =head1 CONNECTING To get started, we have to connect to the database server. Because it's running locally on the default port, we need not pass any parameters to the L method. my $client = MongoDB->connect(); Now we we have a client connected to the MongoDB server. Next we need a database to work with, we'll call it "tutorial". You need not do anything special to create the database, MongoDB will create it on the fly. my $db = $client->get_database( 'tutorial' ); The last part of the preliminary setup is to choose a collection. We'll be using the "users" collection to start out. my $users = $db->get_collection( 'users' ); Again, there is no need to create the collection in advance, it will be created as needed. The L method is a short cut to get a L object direct from the client. my $users = $client->ns("tutorial.users"); =head1 CRUD =head2 Creating Documents =head3 Inserting To add a document to the collection, we use the L function. It takes a hash reference which is saved to the collection. $users->insert_one( { "name" => "Joe", "age" => 52, "likes" => [qw/skiing math ponies/] }); Now there is a user in the collection. =head3 Ls When a document is inserted, it is given a C<_id> field if one does not already exist. By default, this field is a L, 12 bytes that are guaranteed to be unique. The C<_id> field of the inserted document is returned in a L object by the C method. my $result = $users->insert_one({"name" => "Bill"}); my $id = $result->inserted_id; An efficient way to insert documents is to send many at a time to the database by using L, which returns a L describing the documents inserted. my $result = $users->insert_many(\@many_users); =head2 Retrieving Documents =head3 Queries To retrieve documents that were saved to a collection, we can use the L method. my $all_users = $users->find; To query for certain criteria, say, all users named Joe, pass the query a hash with the key/value pair you wish to match: my $some_users = $users->find({"name" => "Joe"}); You can match array elements in your queries; for example, to find all users who like math: my $geeks = $users->find({"likes" => "math"}); This being Perl, it is important to mention that you can also use regular expressions to search for strings. If you wanted to find all users with the name John and all variations of said name, you could do: my $john = $users->find({"name" => qr/joh?n/i}); See L for more information. =head3 Ranges As queries are hashes, they use a special syntax to express comparisons, such as "x < 4". To make the query a valid hash, MongoDB uses $-prefixed terms. For example, "x < 4" could be expressed by: my $doc321 = $collection->find({'x' => { '$lt' => 4 }}); Comparison operators can be combined to get a range: my $doc32 = $collection->find({'x' => { '$gte' => 2, '$lt' => 4 }}); =head3 Cursors C returns a L, which can be iterated over. It lazily loads results from the database. 
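Putting the earlier query pieces together, here is a small sketch (field values are illustrative) that combines a range with a regular-expression match; like any find, it returns a lazily-iterated cursor:

    my $young_joes = $users->find(
        {
            age  => { '$gte' => 20, '$lt' => 30 },
            name => qr/^jo/i,
        }
    );
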
The following prints all of the users' names: while (my $doc = $all_users->next) { print $doc->{'name'}."\n"; } A cursor can also be converted into an array of hash references. For example, to print the "name" field of the first result: my @arr = $geeks->all; print $arr[0]->{'name'}."\n"; =head2 Updating Documents =head3 C<$>-operators To change a document after it has been saved to the database, you must pass L (or L to change many documents at once) two arguments. The first is a query argument, identical to the previous section, to identify the document you want to change. The second is an argument that describes the change that you wish to make. The change is described by $-prefixed descriptors. For example, to increment a field, we would write: $users->update_one({"_id" => $id}, {'$inc' => {'age' => 1}}); To add an element to an array, we can use C<$push>. So, to add an element to the C<"likes"> array, we write: $users->update_one({"_id" => $id}, {'$push' => {'likes' => 'reading'}}); To add a new field or change the type or value of an existing field, we use C<$set>. For example, to change the _id field to a username, we would say: $users->update_one({"_id" => $id}, {'$set' => {'name' => 'joe_schmoe'}}); =head3 Options C and C do nothing if no document matches the query. Sometimes we may want update to create an element if it does not already exist. This is called an 'upsert' (a combination of an update and an insert). For example, the same code could be used for creating and updating a log document: $pageviews->update_one( {"url" => "www.example.com"}, {'$inc' => {"views" => 1}}, {'upsert' => 1} ); If the pageview counter for www.example.com did not exist yet, it would be created and the "views" field would be set to 1. If it did exist, the "views" field would be incremented. =head2 Deleting Documents To delete documents, we use the L or L methods. They take the same type of hash queries do: $users->delete_many({"name" => "Joe"}); It does not delete the collection, though (in that in that it will still appear if the user lists collections in the database and the indexes will still exist). To remove a collection entirely, call C: $users->drop; C can also be used for whole databases: $db->drop; =head1 MONGODB BASICS =head2 Database Commands There are a large number of useful database commands that can be called directly on C<$db> with the L method. For example, you can use a database command to create a capped collection like so: use boolean; # imports 'true' and 'false' my $cmd = [ create => "posts", capped => true, size => 10240, max => 100 ]; $db->run_command($cmd); This will create a capped collection called "posts" in the current database. It has a maximum size of 10240 bytes and can contain up to 100 documents. The L module must be used whenever the database expects an actual boolean argument (i.e. not "1" or "0"). MongoDB expects commands to have key/value pairs in a certain order, so you must give arguments in an array reference (or L object). =head1 NEXT STEPS Now that you know the basic syntax used by the Perl driver, you should be able to translate the JavaScript examples in the main MongoDB documentation (L) into Perl. Check out L for more examples. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/UnacknowledgedResult.pm000644 000765 000024 00000003652 12651754051 022157 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::UnacknowledgedResult; # ABSTRACT: MongoDB unacknowledged result object use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Constants; use namespace::clean; with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteResult ); #pod =method acknowledged #pod #pod Indicates whether this write result was acknowledged. Always false for #pod this class. #pod #pod =cut sub acknowledged() { 0 }; 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::UnacknowledgedResult - MongoDB unacknowledged result object =head1 VERSION version v1.2.2 =head1 SYNOPSIS if ( $result->acknowledged ) { ... } =head1 DESCRIPTION This class represents an unacknowledged result, i.e. with write concern of C<< w => 0 >>. No additional information is available and no other methods should be called on it. =head1 METHODS =head2 acknowledged Indicates whether this write result was acknowledged. Always false for this class. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/UpdateResult.pm000644 000765 000024 00000006704 12651754051 020450 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::UpdateResult; # ABSTRACT: MongoDB update result object use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Constants; use Types::Standard qw( Num Undef ); use namespace::clean; with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteResult ); #pod =attr matched_count #pod #pod The number of documents that matched the filter. #pod #pod =cut has matched_count => ( is => 'ro', required => 1, isa => Num, ); #pod =attr modified_count #pod #pod The number of documents that were modified. Note: this is only available #pod from MongoDB version 2.6 or later. It will return C from earlier #pod servers. #pod #pod You can call C to find out if this attribute is #pod defined or not. 
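A hedged sketch of inspecting an update result with the accessors described here (the $coll handle, filter and update document are illustrative):

    my $res = $coll->update_one(
        { _id    => $id },
        { '$set' => { status => 'done' } },
    );

    if ( $res->acknowledged ) {
        printf "matched %d, modified %s\n",
            $res->matched_count,
            $res->has_modified_count ? $res->modified_count : 'unknown (pre-2.6 server)';
    }
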
#pod #pod =cut has modified_count => ( is => 'ro', required => 1, isa => (Num|Undef), ); sub has_modified_count { my ($self) = @_; return defined( $self->modified_count ); } #pod =attr upserted_id #pod #pod The identifier of the inserted document if an upsert took place. If #pod no upsert took place, it returns C. #pod #pod =cut has upserted_id => ( is => 'ro', ); 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::UpdateResult - MongoDB update result object =head1 VERSION version v1.2.2 =head1 SYNOPSIS my $result = $coll->update( @parameters ); if ( $result->acknowledged ) { ... } =head1 DESCRIPTION This class encapsulates the results from an update or replace operations. =head1 ATTRIBUTES =head2 matched_count The number of documents that matched the filter. =head2 modified_count The number of documents that were modified. Note: this is only available from MongoDB version 2.6 or later. It will return C from earlier servers. You can call C to find out if this attribute is defined or not. =head2 upserted_id The identifier of the inserted document if an upsert took place. If no upsert took place, it returns C. =head1 METHODS =head2 acknowledged Indicates whether this write result was acknowledged. Always true for this class. =head2 assert Throws an error if write errors or write concern errors occurred. Otherwise, returns the invocant. =head2 assert_no_write_error Throws a MongoDB::WriteError if write errors occurred. Otherwise, returns the invocant. =head2 assert_no_write_concern_error Throws a MongoDB::WriteConcernError if write concern errors occurred. Otherwise, returns the invocant. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Upgrading.pod000644 000765 000024 00000063627 12651754051 020124 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # PODNAME: MongoDB::Upgrading # ABSTRACT: Deprecations and behavior changes from the v0 driver __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::Upgrading - Deprecations and behavior changes from the v0 driver =head1 VERSION version v1.2.2 =head1 DESCRIPTION The v1 driver represents a substantial step forward in functionality and consistency. There are many areas where the old API has been deprecated or changed in a backward breaking way. This document is intended to help developers update their code to take into account API changes from the v0 driver to the v1 driver. =head1 RATIONALE Changes to the driver were deemed necessary to achieve certain goals: =over 4 =item * consistency (intra-driver) – many parts of the v0 API were inconsistent, behaving differently from method to method; the v1 API minimizes developer surprises by improving consistency in return types and exception mechanisms. 
=item * consistency (inter-driver) — "next-generation" MongoDB drivers across all languages are converging on common APIs and common behaviors; this simplifies developer education and support, as cross-language examples will be similar. =item * encapsulation – too many low-level, internal operations were exposed as part of the API, which complicates maintenance work; the v1 API aims to minimize the "public surface" available to developers, allowing faster future development keeping up with MongoDB server enhancements with less risk of breakage. =item * abstraction – many v0 methods returned raw server documents for end-user code to inspect, which is brittle in the face of changes in server responses over time; the v1 API uses result classes to abstract the details behind standardized accessors. =item * server compatibility – some new features and behavior changes in the MongoDB server no longer fit the old driver design; the v1 driver transparently supports both old and new servers. =item * portability – the v0 driver had a large dependency tree and substantial non-portable C code; the v1 driver removes some dependencies and uses widely-used, well-tested CPAN modules in place of custom C code where possible; it lays the groundwork for a future "pure-Perl optional" driver. =item * round-trippable data – the v0 BSON implementation could easily change data types when round-tripping documents; the v1 driver is designed to round-trip data correctly whenever possible (within the limits of Perl's dynamic typing). =back =head1 INSTALLATION AND DEPENDENCY CHANGES =head2 Moo instead of Moose The v1 driver uses L instead of L. This change results in a slightly faster driver and a significanly reduced deep dependency tree. =head2 SSL and SASL The v0 driver required a compiler and OpenSSL and libgsasl for SSL and SASL support, respectively. The v1 driver instead relies on CPAN modules C and C for SSL and SASL support, respectively. SSL configuration is now possible via the L. Authentication configuration is described in L. =head1 BEHAVIOR CHANGES =head2 MongoClient configuration =head3 New configuration options Several configuration options have been added, with particular emphasis on adding more granular control of timings and timeout behaviors. =over 4 =item * C =item * C =item * C =item * C =item * C =item * C =item * C =item * C =item * C =item * C =item * C =item * C =item * C =back =head3 Replica set configuration Connecting to a replica set now requires a replica set name, given either with the C option for L or with the C option in a connection string. For example: $client = MongoDB::MongoClient->new( host => "mongodb://rs1.example.com,rs2.example.com/", replica_set_name => 'the_set', ); $client = MongoDB::MongoClient->new( host => "mongodb://rs1.example.com,rs2.example.com/?replicaSet=the_set" ); =head3 Configuration options changed to read-only Configuration options are changing to be immutable to prevent surprising action-at-a-distance. (E.g. changing an attribute value in some part of the code changes it for other parts of the code that didn't expect it.) Going forward, options may be set at L construction time only. The following options have changed to be read-only: =over 4 =item * C =item * C =item * C =item * C =item * C =item * C =item * C =back Write concern may be overridden at the L and L level during construction of those objects. For more details, see the later section on L. 
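For example, a sketch of overriding the client-level write concern when constructing database and collection handles (the names are illustrative):

    my $db   = $client->get_database( "test",   { write_concern => { w => 'majority' } } );
    my $coll = $db->get_collection(   "events", { write_concern => { w => 1 } } );
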
=head3 Mapping between connection string and configuration options Many configuration options may be set via a connection string URI in the C option. In the v0 driver, the precedence between the connection string and constructor options was completely inconsistent. In the v1 driver, options set via a connection string URI will take precedence over options passed to the constructor. This is consistent with with other MongoDB drivers (as well as how L treats Data Source Names). The list of servers and ports as well as the optional C, C and C options come directly from URI structure. Other options are parsed as key-value parameters at the end of the connection string. The following table shows how connection string keys map to configuration options in the L: Connection String Key MongoClient option --------------------------- ----------------------------- authMechanism auth_mechanism authMechanismProperties auth_mechanism_properties connectTimeoutMS connect_timeout_ms heartbeatFrequencyMS heartbeat_frequency_ms journal j localThresholdMS local_threshold_ms maxTimeMS max_time_ms readPreference read_pref_mode readPreferenceTags read_pref_tag_sets replicaSet replica_set_name serverSelectionTimeoutMS server_selection_timeout_ms socketCheckIntervalMS socket_check_interval_ms socketTimeoutMS socket_timeout_ms ssl ssl w w wTimeoutMS wtimeout The C and C keys take colon-delimited, comma-separated pairs: readPreferenceTags=dc:nyeast,rack:1 authMechanismProperties=SERVICE_NAME:mongodb The C option may be repeated to build up a list of tag set documents: readPreferenceTags=dc:nyc,rack:1&readPreferenceTags=dc:nyc =head3 Deprecated configuration options Several options have been superseded, replaced or renamed for clarity and are thus deprecated and undocumented. They are kept for a limited degree of backwards compatibility. They will be generally be used as fallbacks for other options. If any were read-write, they have also been changed to read-only. =over 4 =item * C — see L for details. =item * C — replaced by C; if set, this will be used as a fallback default for C. =item * C — superseded by C; if set, this will be used along with C as a fallback default for C. =item * C — superseded by C; if set, this will be used as a fallback default for C. =item * C — replaced by C; if set, this will be used as a fallback default for C. =back These will be removed in a future major release. =head3 Configuration options removed Some configuration options have been removed entirely, as they no longer serve any purpose given changes to server discovery, server selection and connection handling: =over 4 =item * C =item * C =item * C =item * C =back As described further below in the L section, these BSON encoding configuration options have been removed as well: =over 4 =item * C =item * C =back Removed configuration options will be ignored if passed to the L constructor. =head2 Lazy connections and reconnections on demand The improved approach to server monitoring and selection allows all connections to be lazy. When the client is constructed, no connections are made until the first network operation is needed. At that time, the client will scan all servers in the seed list and begin regular monitoring. Connections that drop will be re-established when needed. B Code that used to rely on a fatal exception from C<< MongoDB::MongoClient->new >> when no mongod is available will break. Instead, users are advised to just conduct their operations and be prepared to handle errors. 
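For instance, a hedged sketch showing that construction now succeeds even when nothing is listening (the port below is deliberately bogus) and that the failure only surfaces at the first operation:

    # no exception here, even with no server on this port
    my $client = MongoDB::MongoClient->new( host => "mongodb://localhost:59999" );

    # the connection attempt, and any error, happens here instead
    my $ok = eval { $client->db("admin")->run_command( [ ismaster => 1 ] ); 1 };
    warn "no server reachable\n" unless $ok;
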
For testing, users may wish to run a simple command to check that a mongod is ready: use Test::More; # OLD WAY: BROKEN plan skip_all => 'no mongod' unless eval { MongoDB::MongoClient->new }; # NEW WAY 1: with MongoDB::MongoClient plan skip_all => 'no mongod' unless eval { MongoDB::MongoClient->new->db('admin')->run_command( [ ismaster => 1 ] ) }; # NEW WAY 2: with MongoDB and connect plan skip_all => 'no mongod' unless eval { MongoDB->connect->db('admin')->run_command([ ismaster => 1 ]) }; See L and L in L for details. =head2 Exceptions are the preferred error handling approach In the v0 driver, errors could be indicated in various ways: =over 4 =item * boolean return value =item * string return value is an error; hash ref is success =item * document that might contain an 'err', 'errmsg' or '$err' field =item * thrown string exception =back Regardless of the documented error handling, every method that involved a network operation would throw an exception on various network errors. In the v1 driver, exceptions objects are the standard way of indicating errors. The exception hierarchy is described in L. =head2 Cursors and query responses In v0, L objects were used for ordinary queries as well as the query-like commands aggregation and parallel scan. However, only cursor iteration commands worked for aggregation and parallel scan "cursors"; the rest of the L API didn't apply and was fatal. In v1, all result iteration is done via the new L class. L is now just a thin wrapper that holds query parameters, instantiates a L on demand, and passes iteration methods through to the query result object. This significantly simplifies the code base and should have little end-user visibility unless users are specifically checking the return type of queries and query-like methods. The C cursor method no longer resets the cursor. The C cursor method now sets the C to 'secondaryPreferred' or clears it to 'primary'. The C cursor method now requires a boolean argument, allowing it to be turned on or off before executing the query. Calling it without an argument (as it was in v0) is a fatal exception. Parallel scan "cursors" are now L objects, with the same iteration methods as in v0. The C<$MongoDB::Cursor::slave_ok> global variable has been removed as part of the revision to read preference handling. See the L section below for more details. The C<$MongoDB::Cursor::timeout> global variable has also been removed. Timeouts are set during L configuration and are immutable. See the section on L for more. =head2 Aggregation API On MongoDB 2.6 or later, C always uses a cursor to execute the query. The C option has been added (but has no effect prior to 2.6). The C option is deprecated. The return types for the C method are now B L objects, regardless of whether the aggregation uses a cursor internally or is an 'explain'. B: To help users with a 2.6 mongos and mixed version shards with versions before 2.6, passing the deprecated 'cursor' option with a false value will disable the use of a cursor. This workaround is provided for convenience and will be removed when 2.4 is no longer supported. =head2 Read preference objects and the read_preference method A new L class is used to encapsulate read preference attributes. 
In the v1 driver, it is constructed from the C and C attributes on L: MongoDB::MongoClient->new( read_pref_mode => 'primaryPreferred', read_pref_tag_sets => [ { dc => 'useast' }, {} ], ); The old C method to change the read preference has been removed and trying to set a read preference after the client has been created is a fatal error. The old mode constants PRIMARY, SECONDARY, etc. have been removed. The C method now returns the L object generated from C and C. It is inherited by L, L, and L objects unless provided as an option to the relevant factory methods: my $coll = $db->get_collection( "foo", { read_preference => 'secondary' } ); Such C arguments may be a L object, a hash reference of arguments to construct one, or a string that represents the read preference mode. L and L also have C methods that allow easy alteration of a read preference for a limited scope. my $coll2 = $coll->clone( read_preference => 'secondaryPreferred' ); For L, the C method sets a hidden read preference attribute that is used for the query in place of the L default C attribute. This means that calling C on a cursor object no longer changes the read preference globally on the client – the read preference change is scoped to the cursor object only. =head2 Write concern objects and removing the safe argument A new L class is used to encapsulate write concern attributes. In the v1 driver, it is constructed from the C, C and C attributes on L: MongoDB::MongoClient->new( w => 'majority', wtimeout => 1000 ); The C method now returns the L object generated from C, C and C. It is inherited by L, L, and L objects unless provided as an option to the relevant factory methods: $db = $client->get_database( "test", { write_concern => { w => 'majority' } } ); Such C arguments may be a L object, a hash reference of arguments to construct one, or a string that represents the C mode. L and L also have C methods that allow easy alteration of a write concern for a limited scope. my $coll2 = $coll->clone( write_concern => { w => 1 } ); The C argument is no longer used in the new CRUD API. =head2 Authentication based only on configuration options Authentication now happens automatically on connection during the "handshake" with any given server based on the L. The old C method in L has been removed. =head2 Bulk API =head3 Bulk method names changed to match CRUD API Method names match the new CRUD API, e.g. C instead of C and so one. The legacy names are deprecated. =head3 Bulk insertion Insertion via the bulk API will B insert an C<_id> into the original document if one does not exist. Previous documentation was not specific whether this was the case or if the C<_id> was added to the document sent to the server. =head3 Bulk write results The bulk write results class has been renamed to L. It keeps C as an empty superclass for some backwards compatibility so that C<< $result->isa("MongoDB::WriteResult") >> will continue to work as expected. The attributes have been renamed to be consistent with the new CRUD API. The legacy names are deprecated, but are available as aliases. =head2 GridFS The L class now has explicit read preference and write concern attributes inherited from L or L, just like L. This means that GridFS operations now default to an acknowledged write concern, just like collection operations have been doing since v0.502.0 in 2012. The use of C is deprecated. Support for ancient, undocumented positional parameters circa 2010 has been removed. 
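Combining the two, a short sketch of deriving special-purpose collection handles via clone, following the examples above:

    # analytics reads can go to secondaries; audited writes demand majority
    my $reporting = $coll->clone( read_preference => 'secondaryPreferred' );
    my $audited   = $coll->clone( write_concern   => { w => 'majority' } );
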
=head2 Low-level functions removed Low-level driver functions have been removed from the public API. =head2 MongoDB::Connection removed The C module was deprecated in v0.502.0 and has been removed. =head2 BSON encoding changes In the v1 driver, BSON encoding and decoding have been encapsulated into a L codec object. This can be provided at any level, from L to L. If not provided, a default will be created that behaves similarly to the v0 encoding/decoding functions, except for the following changes. =head3 C<$MongoDB::BSON::use_binary> removed Historically, this defaulted to false, which corrupts binary data when round tripping. Retrieving a binary data element and re-inserting it would have resulted in a field with UTF-8 encoded string of binary data. Going forward, binary data will be returned as a L object. A future driver may add the ability to control decoding to allow alternative representations. =head3 C<$MongoDB::BSON::use_boolean> removed This global variable never worked. BSON booleans were always deserialized as L objects. A future driver may add the ability to control boolean representation. =head3 C<$MongoDB::BSON::utf8_flag_on> removed In order to ensure round-tripping of string data, this variable is removed. BSON strings will always be decoded to Perl character strings. Anything else risks double-encoding a round-trip. =head3 C<$MongoDB::BSON::looks_like_number> and C<$MongoDB::BSON::char> deprecated and re-scoped In order to allow a future driver to provide more flexible user-customized encoding and decoding, these global variables are deprecated. If set, they will be examined during C<< MongoDB::MongoClient->new() >> to set the configuration of a default L codec (if one is not provided). Changing them later will B change the behavior of the codec object. =head3 C option C removed Previously, BSON regular expressions decoded to C references by default and the C C option was available to decode instead to Ls. Going forward in the v1.0.0 driver, for safety and consistency with other drivers, BSON regular expressions B decode to L objects. =head3 C option C removed The C configuration option has been removed and replaced with a C option in L. By default, the C will create a L codec that will construct L objects. This ensures that DBRefs properly round-trip. =head3 C option C deprecated and changed to read-only The C option is now only takes effect if C constructs a L codec object. It has been changed to a read-only attribute so that any code that relied on changing C after constructing a C object will fail instead of being silently ignored. =head3 Int32 vs Int64 encoding changes On 64-bit Perls, integers that fit in 32-bits will be encoded as BSON Int32 (whereas previously these were always encoded as BSON Int64). Math::BigInt objects will always be encoded as BSON Int64, which allows users to force 64-bit encoding if desired. =head3 Added support for Time::Moment L is a much faster replacement for the venerable L module. The BSON codec will serialize L objects correctly and can use that module as an argument for the C codec attribute. =head3 Added support for encoding common JSON boolean classes Most JSON libraries on CPAN implement their own boolean classes. The following libraries boolean types will now encode correctly as BSON booleans: =over 4 =item * JSON::XS =item * Cpanel::JSON::XS =item * JSON::PP =item * JSON::Tiny =item * Mojo::JSON =back =head2 DBRef objects The C method and related attributes C, C, and C have been removed from L. 
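As discussed next, dereferencing is now left to application code. A hedged sketch of doing it by hand, assuming the conventional db/ref/id accessors on the DBRef object and a find_one lookup by _id:

    my $referenced = $client
        ->get_database( $dbref->db )
        ->get_collection( $dbref->ref )
        ->find_one( { _id => $dbref->id } );
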
Providing a C method was inconsistent with other MongoDB drivers, which either never provided it, or have dropped it in the next-generation drivers. It requires a C attribute, which tightly couples BSON decoding to the client model, causing circular reference issues and triggering Perl memory bugs under threads. Therefore, the v1.0.0 driver no longer support fetching directly from L; users will need to implement their own methods for dereferencing. Additonally, the C attribute is now optional, consistent with the specification for DBRefs. Also, all attributes (C, C and C) are now read-only, consistent with the move toward immutable objects throughout the driver. To support round-tripping DBRefs with additional fields other than C<$ref>, C<$id> and C<$db>, the DBRef class now has an attribute called C. As not all drivers support this feature, using it for new DBRefs is not recommended. =head1 DEPRECATED METHODS Deprecated options and methods may be removed in a future release. Their documentation has been removed to discourage ongoing use. Unless otherwise stated, they will continue to behave as they previously did, allowing a degree of backwards compatibility until code is updated to the new MongoDB driver API. =head2 MongoDB::Database =over 4 =item * eval – MongoDB 3.0 deprecated the '$eval' command, so this helper method is deprecated as well. =item * last_error — Errors are now indicated via exceptions at the time database commands are executed. =back =head2 MongoDB::Collection =over 4 =item * insert, batch_insert, remove, update, save, query and find_and_modify — A new common driver CRUD API replaces these legacy methods. =item * get_collection — This method implied that collections could be contained inside collection. This doesn't actually happen so it's confusing to have a Collection be a factory for collections. Users who want nested namespaces should be explicit and create them off Database objects instead. =item * ensure_index, drop_indexes, drop_index, get_index — A new L class is accessable through the C method, offering greater consistency in behavior across drivers. =item * validate — The return values have changed over different server versions, so this method is risky to use; it has more use as a one-off tool, which can be accomplished via C. =back =head2 MongoDB::CommandResult =over 4 =item * result — has been renamed to 'output' for clarity =back =head2 MongoDB::Cursor =over 4 =item * slave_ok — this modifier method is superseded by the 'read_preference' modifier method =item * count — this is superseded by the L method. Previously, this ignored skip/limit unless a true argument was passed, which was a bizarre, non-intuitive and inconsistent API. =back =head2 MongoDB::BulkWrite and MongoDB::BulkWriteView =over 4 =item * insert — renamed to 'insert_one' for consistency with CRUD API =item * update — renamed to 'update_many' for consistency with CRUD API =item * remove — renamed to 'delete_many' for consistency with CRUD API =item * remove_one — renamed to 'delete_one' for consistency with CRUD API =back =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/WriteConcern.pm000644 000765 000024 00000010600 12651754051 020417 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::WriteConcern; # ABSTRACT: Encapsulate and validate a write concern use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::_Types qw( Booleanpm ); use Types::Standard qw( Bool ArrayRef Num Str Maybe ); use Scalar::Util qw/looks_like_number/; use namespace::clean -except => 'meta'; #pod =attr w #pod #pod Specifies the desired acknowledgement level. Defaults to '1'. #pod #pod =cut has w => ( is => 'ro', isa => Maybe [Str], predicate => '_has_w', ); #pod =attr wtimeout #pod #pod Specifies how long to wait for the write concern to be satisfied (in #pod milliseonds). Defaults to 1000. #pod #pod =cut has wtimeout => ( is => 'ro', isa => Num, predicate => '_has_wtimeout', default => 1000, ); #pod =attr j #pod #pod The j option confirms that the mongod instance has written the data to the #pod on-disk journal. Defaults to false. #pod #pod B: specifying a write concern that set j to a true value may result in an #pod error with a mongod or mongos running with --nojournal option now errors. #pod #pod =cut has j => ( is => 'ro', isa => Booleanpm, coerce => Booleanpm->coercion, predicate => '_has_j', ); has _is_acknowledged => ( is => 'lazy', isa => Bool, reader => 'is_acknowledged', builder => '_build_is_acknowledged', ); has _as_args => ( is => 'lazy', isa => ArrayRef, reader => 'as_args', builder => '_build_as_args', ); sub _build_is_acknowledged { my ($self) = @_; return !!( $self->j || $self->_w_is_acknowledged ); } sub _build_as_args { my ($self) = @_; my $wc = { ( $self->_has_w ? ( w => $self->w ) : () ), ( $self->_has_wtimeout ? ( wtimeout => 0+ $self->wtimeout ) : () ), ( $self->_has_j ? ( j => $self->j ) : () ), }; return ( (defined $self->w || defined $self->j) ? [writeConcern => $wc] : [] ); } sub BUILD { my ($self) = @_; if ( ! $self->_w_is_acknowledged && $self->j ) { MongoDB::UsageError->throw("can't use write concern w=0 with j=" . $self->j ); } return; } sub _w_is_acknowledged { my ($self) = @_; return ($self->_has_w && ( looks_like_number( $self->w ) ? $self->w > 0 : length $self->w )) || !defined $self->w; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::WriteConcern - Encapsulate and validate a write concern =head1 VERSION version v1.2.2 =head1 SYNOPSIS $rp = MongoDB::WriteConcern->new(); # w:1, wtimeout: 1000 $rp = MongoDB::WriteConcern->new( w => 'majority', wtimeout => 10000, # milliseconds ); =head1 DESCRIPTION A write concern describes the guarantee that MongoDB provides when reporting on the success of a write operation. For core documentation on read preference see L. =head1 ATTRIBUTES =head2 w Specifies the desired acknowledgement level. Defaults to '1'. =head2 wtimeout Specifies how long to wait for the write concern to be satisfied (in milliseonds). Defaults to 1000. =head2 j The j option confirms that the mongod instance has written the data to the on-disk journal. Defaults to false. 
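A compact sketch of constructing write concerns directly and checking acknowledgement, matching the attribute defaults and the BUILD validation in the code above:

    use MongoDB::WriteConcern;

    my $wc = MongoDB::WriteConcern->new( w => 'majority', wtimeout => 5000 );
    print $wc->is_acknowledged ? "acknowledged\n" : "fire-and-forget\n";

    # w => 0 is unacknowledged; combining it with j => 1 throws MongoDB::UsageError
    my $silent = MongoDB::WriteConcern->new( w => 0 );
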
B: specifying a write concern that set j to a true value may result in an error with a mongod or mongos running with --nojournal option now errors. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Role/_BypassValidation.pm000644 000765 000024 00000002304 12651754051 022333 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_BypassValidation; # MongoDB interface for optionally applying bypassDocumentValidation # to a command use version; our $VERSION = 'v1.2.2'; use Moo::Role; use Types::Standard qw( Bool ); use namespace::clean; has bypassDocumentValidation => ( is => 'ro', isa => Bool ); # args not unpacked for efficiency; args are self, link, command; # returns (unmodified) link and command sub _maybe_bypass { push @{ $_[2] }, bypassDocumentValidation => $_[0]->bypassDocumentValidation if defined $_[0]->bypassDocumentValidation && $_[1]->accepts_wire_version(4); return $_[1], $_[2]; } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_CommandCursorOp.pm000644 000765 000024 00000004457 12651754051 022145 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_CommandCursorOp; # MongoDB interface for database commands with cursors use version; our $VERSION = 'v1.2.2'; use MongoDB::Error; use MongoDB::QueryResult; use Moo::Role; use namespace::clean; requires qw/client bson_codec/; sub _build_result_from_cursor { my ( $self, $res ) = @_; my $c = $res->output->{cursor} or MongoDB::DatabaseError->throw( message => "no cursor found in command response", result => $res, ); my $max_time_ms = undef; if ($self->isa('MongoDB::Op::_Query') && $self->cursor_type eq 'tailable_await') { $max_time_ms = $self->max_await_time_ms if defined $self->max_await_time_ms; } my $batch = $c->{firstBatch}; my $qr = MongoDB::QueryResult->_new( _client => $self->client, _address => $res->address, _ns => $c->{ns}, _bson_codec => $self->bson_codec, _batch_size => scalar @$batch, _cursor_at => 0, _limit => 0, _cursor_id => $c->{id}, _cursor_start => 0, _cursor_flags => {}, _cursor_num => scalar @$batch, _docs => $batch, defined $max_time_ms ? 
(_max_time_ms => $max_time_ms) : (), ); } sub _empty_query_result { my ( $self, $link ) = @_; my $qr = MongoDB::QueryResult->_new( _client => $self->client, _address => $link->address, _ns => '', _bson_codec => $self->bson_codec, _batch_size => 1, _cursor_at => 0, _limit => 0, _cursor_id => 0, _cursor_start => 0, _cursor_flags => {}, _cursor_num => 0, _docs => [], ); } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_CommandOp.pm000644 000765 000024 00000003051 12651754051 020734 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_CommandOp; # MongoDB interface for database command operations use version; our $VERSION = 'v1.2.2'; use MongoDB::BSON; use MongoDB::Error; use MongoDB::_Constants; use MongoDB::_Protocol; use Moo::Role; use namespace::clean; with 'MongoDB::Role::_DatabaseOp'; requires qw/db_name bson_codec/; sub _send_command { my ( $self, $link, $doc, $flags ) = @_; my $command = $self->bson_codec->encode_one( $doc ); my ( $op_bson, $request_id ) = MongoDB::_Protocol::write_query( $self->db_name . '.$cmd', $command, undef, 0, -1, $flags ); if ( length($op_bson) > MAX_BSON_WIRE_SIZE ) { # XXX should this become public? MongoDB::_CommandSizeError->throw( message => "database command too large", size => length $op_bson, ); } # return a raw, parsed result, not an object return $self->_query_and_receive( $link, $op_bson, $request_id, undef, 1 ) ->{docs}[0]; } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_Cursor.pm000644 000765 000024 00000001403 12651754051 020333 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_Cursor; # Role providing a cursor interface use version; our $VERSION = 'v1.2.2'; use Moo::Role; use namespace::clean; requires qw/all has_next next/; 1; MongoDB-v1.2.2/lib/MongoDB/Role/_DatabaseOp.pm000644 000765 000024 00000005750 12651754051 021072 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# package MongoDB::Role::_DatabaseOp; # MongoDB interface for database operations use version; our $VERSION = 'v1.2.2'; use MongoDB::BSON; use MongoDB::Error; use MongoDB::_Protocol; use Moo::Role; use MongoDB::_Constants; use MongoDB::_Types qw( BSONCodec ); use namespace::clean; requires 'execute'; has bson_codec => ( is => 'ro', required => 1, isa => BSONCodec, ); # Sends a BSON query string, then read, parse and validate the reply. # Throws various errors if the results indicate a problem. Returns # a "result" structure generated by MongoDB::_Protocol, but with # the 'docs' field replaced with inflated documents. # as this is the hot loop, we do a number of odd things in the name of # optimization, such as chaining lots of operations with ',' to keep them # in a single statement # args are self, link, op_bson, request_id and not unpacked as they are only # used briefly sub _query_and_receive { my ($result, $doc_bson, $bson_codec, $docs, $len, $i); $_[1]->write( $_[2] ), ( $result = MongoDB::_Protocol::parse_reply( $_[1]->read, $_[3] ) ), ( $doc_bson = $result->{docs} ), ( $docs = $result->{docs} = [] ), ( ( $bson_codec, $i ) = ( $_[0]->bson_codec, 0 ) ), ( $#$docs = $result->{number_returned} - 1 ); # XXX should address be added to result here? MongoDB::CursorNotFoundError->throw("cursor not found") if $result->{flags}{cursor_not_found}; # XXX eventually, BSON needs an API to do this efficiently for us without a # loop here. Alternatively, BSON strings could be returned as objects that # inflate lazily while ( length($doc_bson) ) { $len = unpack( P_INT32, $doc_bson ); MongoDB::ProtocolError->throw("document in response at index $i was truncated") if $len > length($doc_bson); $docs->[ $i++ ] = $bson_codec->decode_one( substr( $doc_bson, 0, $len, '' ) ); } MongoDB::ProtocolError->throw( sprintf( "unexpected number of documents: got %s, expected %s", scalar @$docs, $result->{number_returned} ) ) if scalar @$docs != $result->{number_returned}; return $result unless $result->{flags}{query_failure}; # had query_failure, so pretend the query was a command and assert it here MongoDB::CommandResult->_new( output => $result->{docs}[0], address => $_[1]->address )->assert; } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_InsertPreEncoder.pm000644 000765 000024 00000003551 12651754051 022277 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_InsertPreEncoder; # MongoDB interface for pre-encoding and validating docs to insert use version; our $VERSION = 'v1.2.2'; use Moo::Role; use MongoDB::Error; use MongoDB::BSON::_EncodedDoc; use namespace::clean; requires qw/bson_codec/; # takes MongoDB::_Link and ref of type Document; returns # blessed BSON encode doc and the original/generated _id sub _pre_encode_insert { my ( $self, $link, $doc, $invalid_chars ) = @_; my $type = ref($doc); my $id = ( $type eq 'HASH' ? $doc->{_id} : $type eq 'ARRAY' ? do { my $i; for ( $i = 0; $i < @$doc; $i++ ) { last if $doc->[$i] eq '_id' } $i < $#$doc ? 
$doc->[ $i + 1 ] : undef; } : $type eq 'Tie::IxHash' ? $doc->FETCH('_id') : $doc->{_id} # hashlike? ); $id = MongoDB::OID->_new_oid() unless defined $id; my $bson_doc = $self->bson_codec->encode_one( $doc, { invalid_chars => $invalid_chars, max_length => $link->max_bson_object_size, first_key => '_id', first_value => $id, } ); return MongoDB::BSON::_EncodedDoc->_new( bson => $bson_doc, metadata => { _id => $id }, ); } 1; # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/lib/MongoDB/Role/_LastError.pm000644 000765 000024 00000003364 12651754051 021003 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_LastError; # MongoDB interface for providing the last database error use version; our $VERSION = 'v1.2.2'; use Moo::Role; use MongoDB::Error; use namespace::clean; requires qw/last_errmsg last_code last_wtimeout/; my $ANY_DUP_KEY = [ DUPLICATE_KEY, DUPLICATE_KEY_UPDATE, DUPLICATE_KEY_CAPPED ]; my $ANY_NOT_MASTER = [ NOT_MASTER, NOT_MASTER_NO_SLAVE_OK, NOT_MASTER_OR_SECONDARY ]; # analyze last_errmsg and last_code and throw an appropriate # error message. sub _throw_database_error { my ( $self, $error_class ) = @_; $error_class ||= "MongoDB::DatabaseError"; my $err = $self->last_errmsg; my $code = $self->last_code; if ( grep { $code == $_ } @$ANY_NOT_MASTER || $err =~ /^(?:not master|node is recovering)/ ) { $error_class = "MongoDB::NotMasterError"; } elsif ( grep { $code == $_ } @$ANY_DUP_KEY ) { $error_class = "MongoDB::DuplicateKeyError"; } elsif ( $self->last_wtimeout ) { $error_class = "MongoDB::WriteConcernError"; } $error_class->throw( result => $self, code => $code || UNKNOWN_ERROR, ( length($err) ? ( message => $err ) : () ), ); } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_PrivateConstructor.pm000644 000765 000024 00000002200 12651754051 022732 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_PrivateConstructor; # MongoDB interface for a private constructor use version; our $VERSION = 'v1.2.2'; use Moo::Role; use MongoDB::_Constants; use namespace::clean; # When assertions are enabled, the private constructor delegates to the # public one, which checks required/isa assertions. When disabled, # the private constructor blesses args directly to the class for speed. BEGIN { WITH_ASSERTS ? 
eval 'sub _new { my $class = shift; $class->new(@_) }' : eval 'sub _new { my $class = shift; return bless {@_}, $class }'; } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_ReadOp.pm000644 000765 000024 00000002261 12651754051 020233 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_ReadOp; # MongoDB role for read ops that provides read preference use version; our $VERSION = 'v1.2.2'; use Moo::Role; use MongoDB::ReadPreference; use MongoDB::_Constants; use MongoDB::_Types qw( ReadPreference ReadConcern ); use Types::Standard qw( Maybe ); use namespace::clean; with 'MongoDB::Role::_DatabaseOp'; # PERL-573 Would like to refactor to remove Maybe types for # read_preference and read_concern has read_preference => ( is => 'ro', isa => Maybe [ReadPreference], ); has read_concern => ( is => 'ro', isa => Maybe [ReadConcern], ); 1; MongoDB-v1.2.2/lib/MongoDB/Role/_ReadPrefModifier.pm000644 000765 000024 00000005275 12651754051 022240 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_ReadPrefModifier; # MongoDB interface for read ops that respect read preference # Only affects MongoDB::_Op::_Query on the legacy code path use version; our $VERSION = 'v1.2.2'; use Moo::Role; use MongoDB::Error; use namespace::clean; requires qw/read_preference/; sub _apply_read_prefs { my ( $self, $link, $topology_type, $query_flags, $query_ref ) = @_; $topology_type ||= ""; my $read_pref = $self->read_preference; if ( $topology_type eq 'Single' ) { if ( $link->server && $link->server->type eq 'Mongos' ) { $self->_apply_mongos_read_prefs($read_pref); } else { $query_flags->{slave_ok} = 1; } } elsif ( grep { $topology_type eq $_ } qw/ReplicaSetNoPrimary ReplicaSetWithPrimary/ ) { if ( !$read_pref || $read_pref->mode eq 'primary' ) { $query_flags->{slave_ok} = 0; } else { $query_flags->{slave_ok} = 1; } } elsif ( $topology_type eq 'Sharded' ) { $self->_apply_mongos_read_prefs($read_pref, $query_flags, $query_ref); } else { MongoDB::InternalError->throw("can't query topology type '$topology_type'"); } return; } sub _apply_mongos_read_prefs { my ( $self, $read_pref, $query_flags, $query_ref ) = @_; my $mode = $read_pref ? 
$read_pref->mode : 'primary'; my $need_read_pref; if ( $mode eq 'primary' ) { $query_flags->{slave_ok} = 0; } elsif ( grep { $mode eq $_ } qw/secondary primaryPreferred nearest/ ) { $query_flags->{slave_ok} = 1; $need_read_pref = 1; } elsif ( $mode eq 'secondaryPreferred' ) { $query_flags->{slave_ok} = 1; $need_read_pref = 1 unless $read_pref->has_empty_tag_sets; } else { MongoDB::InternalError->throw("invalid read preference mode '$mode'"); } if ($need_read_pref) { if ( !$$query_ref->FETCH('$query') ) { $$query_ref = Tie::IxHash->new( '$query' => $$query_ref ); } $$query_ref->Push( '$readPreference' => $read_pref->for_mongos ); } return; } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_UpdatePreEncoder.pm000644 000765 000024 00000004263 12651754051 022256 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Role::_UpdatePreEncoder; # MongoDB interface for pre-encoding and validating update/replace docs use version; our $VERSION = 'v1.2.2'; use Moo::Role; use MongoDB::Error; use MongoDB::_Constants; use namespace::clean; requires qw/bson_codec/; sub _pre_encode_update { my ( $self, $link, $doc, $is_replace ) = @_; my $bson_doc = $self->bson_codec->encode_one( $doc, { invalid_chars => $is_replace ? '.' : '', max_length => $is_replace ? $link->max_bson_object_size : undef, } ); # must check if first character of first key is valid for replace/update; # do this from BSON to get key *after* op_char replacment; # only need to validate if length is enough for a document with a key my ( $len, undef, $first_char ) = unpack( P_INT32 . "CZ", $bson_doc ); if ( $len >= MIN_KEYED_DOC_LENGTH ) { my $err; if ($is_replace) { $err = "replacement document must not contain update operators" if $first_char eq '$'; } else { $err = "update document must only contain update operators" if $first_char ne '$'; } MongoDB::DocumentError->throw( message => $err, document => $doc, ) if $err; } elsif ( ! $is_replace ) { MongoDB::DocumentError->throw( message => "Update document was empty!", document => $doc, ); } return MongoDB::BSON::_EncodedDoc->_new( bson => $bson_doc, metadata => {}, ); } 1; # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/lib/MongoDB/Role/_WriteOp.pm000644 000765 000024 00000012761 12651754051 020460 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
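#
# The pre-encoding role above (MongoDB::Role::_UpdatePreEncoder) enforces the
# rule that a replacement document must not contain update operators, while an
# update document must contain only update operators.  A hedged illustration,
# assuming $coll is a MongoDB::Collection handle (the documents shown are
# examples, not taken from this distribution):
#
#   $coll->update_one(  { _id => 1 }, { '$set' => { age => 31 } } );     # ok
#   $coll->replace_one( { _id => 1 }, { name => 'Alice', age => 31 } );  # ok
#   $coll->replace_one( { _id => 1 }, { '$set' => { age => 31 } } );
#   # ^ throws MongoDB::DocumentError: replacement must not contain operators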
# package MongoDB::Role::_WriteOp; # MongoDB interface for database write operations use version; our $VERSION = 'v1.2.2'; use MongoDB::BSON; use MongoDB::CommandResult; use MongoDB::Error; use MongoDB::UnacknowledgedResult; use MongoDB::_Constants; use MongoDB::_Protocol; use MongoDB::_Types qw( WriteConcern ); use Moo::Role; use namespace::clean; with qw/MongoDB::Role::_CommandOp/; requires qw/db_name _parse_cmd _parse_gle/; has write_concern => ( is => 'ro', required => 1, isa => WriteConcern, ); sub _send_legacy_op_with_gle { my ( $self, $link, $op_bson, $op_doc, $result_class ) = @_; if ( $self->write_concern->is_acknowledged ) { my $wc_args = $self->write_concern->as_args(); my @write_concern = scalar @$wc_args ? %{ $wc_args->[1] } : (); my $gle = $self->bson_codec->encode_one( [ getlasterror => 1, @write_concern ] ); my ( $gle_bson, $request_id ) = MongoDB::_Protocol::write_query( $self->db_name . '.$cmd', $gle, undef, 0, -1 ); # write op sent as a unit with GLE command to ensure GLE applies to the # operation without other operations in between my $res = $self->_query_and_receive( $link, $op_bson . $gle_bson, $request_id, undef ) ->{docs}[0]; # errors in the command itself get handled as normal CommandResult if ( !$res->{ok} && ( $res->{errmsg} || $res->{'$err'} ) ) { return MongoDB::CommandResult->_new( output => $res, address => $link->address, ); } # 'ok' false means GLE itself failed # usually we shouldn't check wnote or jnote, but the Bulk API QA test says we should # detect no journal or replication not enabled, so we check for special strings. # These strings were checked back to MongoDB 1.8.5. my $got_error = ( exists( $res->{jnote} ) && $res->{jnote} =~ NO_JOURNAL_RE ) ? $res->{jnote} : ( exists( $res->{wnote} ) && $res->{wnote} =~ NO_REPLICATION_RE ) ? $res->{wnote} : undef; if ($got_error) { MongoDB::DatabaseError->throw( message => $got_error, result => MongoDB::CommandResult->_new( output => $res, address => $link->address, ), ); } # otherwise, construct the desired result object, calling back # on class-specific parser to generate additional attributes my ( $write_concern_error, $write_error ); my $errmsg = $res->{err}; my $wtimeout = $res->{wtimeout}; if ($wtimeout) { $write_concern_error = { errmsg => $errmsg, errInfo => { wtimeout => $wtimeout }, code => $res->{code} || WRITE_CONCERN_ERROR, }; } elsif ($errmsg) { $write_error = { errmsg => $errmsg, code => $res->{code} || UNKNOWN_ERROR, index => 0, op => $op_doc, }; } return $result_class->_new( acknowledged => 1, write_errors => ( $write_error ? [$write_error] : [] ), write_concern_errors => ( $write_concern_error ? 
[$write_concern_error] : [] ), $self->_parse_gle( $res, $op_doc ), ); } else { $link->write($op_bson); return $result_class->_new( $self->_parse_gle( {}, $op_doc ), acknowledged => 0, write_errors => [], write_concern_errors => [], ); } } sub _send_write_command { my ( $self, $link, $cmd, $op_doc, $result_class ) = @_; my $res = $self->_send_command( $link, $cmd ); if ( $self->write_concern->is_acknowledged ) { # errors in the command itself get handled as normal CommandResult if ( !$res->{ok} && ( $res->{errmsg} || $res->{'$err'} ) ) { return MongoDB::CommandResult->_new( output => $res, address => $link->address, ); } # if an error occurred, add the op document involved if ( exists($res->{writeErrors}) && @{$res->{writeErrors}} ) { $res->{writeErrors}[0]{op} = $op_doc; } # otherwise, construct the desired result object, calling back # on class-specific parser to generate additional attributes return $result_class->_new( write_errors => ( $res->{writeErrors} ? $res->{writeErrors} : [] ), write_concern_errors => ( $res->{writeConcernError} ? [ $res->{writeConcernError} ] : [] ), $self->_parse_cmd($res), ); } else { return MongoDB::UnacknowledgedResult->_new( write_errors => [], write_concern_errors => [], ); } } 1; MongoDB-v1.2.2/lib/MongoDB/Role/_WriteResult.pm000644 000765 000024 00000005727 12651754051 021364 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
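#
# Write result classes that compose this role expose the assertion and
# error-inspection helpers defined below.  A minimal usage sketch, assuming
# $result is an acknowledged result object returned by a write operation:
#
#   $result->assert;    # throws MongoDB::WriteError or
#                       # MongoDB::WriteConcernError if anything failed
#
#   if ( $result->count_write_errors ) {
#       warn "last write error: " . $result->last_errmsg;
#   }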
# package MongoDB::Role::_WriteResult; # MongoDB interface for common write result attributes and methods use version; our $VERSION = 'v1.2.2'; use MongoDB::Error; use MongoDB::_Constants; use MongoDB::_Types qw( ArrayOfHashRef ); use Moo::Role; use namespace::clean; has [qw/write_errors write_concern_errors/] => ( is => 'ro', required => 1, isa => ArrayOfHashRef, ); with 'MongoDB::Role::_LastError'; sub acknowledged() { 1 }; # override to 0 for MongoDB::UnacknowledgedResult # inline assert_no_write_error and assert_no_write_concern rather # than having to make to additional method calls sub assert { my ($self) = @_; $self->_throw_database_error("MongoDB::WriteError") if scalar @{ $self->write_errors }; MongoDB::WriteConcernError->throw( message => $self->last_errmsg, result => $self, code => WRITE_CONCERN_ERROR, ) if scalar @{ $self->write_concern_errors }; return $self; } sub assert_no_write_error { my ($self) = @_; $self->_throw_database_error("MongoDB::WriteError") if scalar @{ $self->write_errors }; return $self; } sub assert_no_write_concern_error { my ($self) = @_; MongoDB::WriteConcernError->throw( message => $self->last_errmsg, result => $self, code => WRITE_CONCERN_ERROR, ) if scalar @{ $self->write_concern_errors }; return $self; } sub count_write_errors { my ($self) = @_; return scalar @{ $self->write_errors }; } sub count_write_concern_errors { my ($self) = @_; return scalar @{ $self->write_concern_errors }; } sub last_errmsg { my ($self) = @_; if ( $self->count_write_errors ) { return $self->write_errors->[-1]{errmsg}; } elsif ( $self->count_write_concern_errors ) { return $self->write_concern_errors->[-1]{errmsg}; } else { return ""; } } sub last_code { my ($self) = @_; if ( $self->count_write_errors ) { return $self->write_errors->[-1]{code} || UNKNOWN_ERROR; } elsif ( $self->count_write_concern_errors ) { return $self->write_concern_errors->[-1]{code} || UNKNOWN_ERROR; } else { return 0; } } sub last_wtimeout { my ($self) = @_; # if we have actual write errors, we don't want to report a # write concern error return !!( $self->count_write_concern_errors && !$self->count_write_errors ); } 1; MongoDB-v1.2.2/lib/MongoDB/QueryResult/Filtered.pm000644 000765 000024 00000004556 12651754051 022074 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
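#
# This subclass behaves like a plain MongoDB::QueryResult except that every
# document is passed through the _post_filter code reference before being
# returned.  A hedged sketch (the constructor arguments other than
# _post_filter are the usual MongoDB::QueryResult attributes; the filter
# shown is hypothetical):
#
#   my $result = MongoDB::QueryResult::Filtered->_new(
#       %query_result_args,
#       _post_filter => sub { $_[0]->{name} !~ /\$/ },  # e.g. skip '$' names
#   );
#   my @docs = $result->all;   # only documents accepted by the filter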
# package MongoDB::QueryResult::Filtered; # ABSTRACT: An iterator for Mongo query results with client-side filtering use version; our $VERSION = 'v1.2.2'; use Moo; use Types::Standard qw( CodeRef ); extends 'MongoDB::QueryResult'; use namespace::clean; # N.B.: _post_filter may also munge documents in addition to filtering; # it *must* be run on all documents has _post_filter => ( is => 'ro', isa => CodeRef, required => 1, ); sub has_next { my ($self) = @_; my $limit = $self->_limit; if ( $limit > 0 && ( $self->cursor_at + 1 ) > $limit ) { $self->_kill_cursor; return 0; } while ( !$self->_drained || $self->_get_more ) { my $peek = $self->_docs->[0]; if ( $self->_post_filter->($peek) ) { # if meets criteria, has_next is true return 1; } else { # otherwise throw it away and repeat $self->_inc_cursor_at; $self->_next_doc; } } # ran out of docs, so nothing left return 0; } sub all { my ($self) = @_; my @ret; push @ret, grep { $self->_post_filter->($_) } $self->_drain_docs while $self->has_next; return @ret; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::QueryResult::Filtered - An iterator for Mongo query results with client-side filtering =head1 VERSION version v1.2.2 =for Pod::Coverage has_next =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/Op/_Aggregate.pm000644 000765 000024 00000011706 12651754051 020430 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Aggregate; # Encapsulate aggregate operation; return MongoDB::QueryResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::Op::_Command; use MongoDB::_Constants; use MongoDB::_Types qw( ArrayOfHashRef ); use Types::Standard qw( HashRef InstanceOf Str ); use boolean; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); has pipeline => ( is => 'ro', required => 1, isa => ArrayOfHashRef, ); has options => ( is => 'ro', required => 1, isa => HashRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); sub execute { my ( $self, $link, $topology ) = @_; my $options = $self->options; my $is_2_6 = $link->does_write_commands; # maxTimeMS isn't available until 2.6 and the aggregate command # will reject it as unrecognized delete $options->{maxTimeMS} unless $is_2_6; # bypassDocumentValidation isn't available until 3.2 (wire version 4) delete $options->{bypassDocumentValidation} unless $link->accepts_wire_version(4); # If 'cursor' is explicitly false, we disable using cursors, even # for MongoDB 2.6+. 
This allows users operating with a 2.6+ mongos # and pre-2.6 mongod in shards to avoid fatal errors. This # workaround should be removed once MongoDB 2.4 is no longer supported. my $use_cursor = $is_2_6 && ( !exists( $options->{cursor} ) || $options->{cursor} ); # batchSize is not a command parameter itself like other options my $batchSize = delete $options->{batchSize}; # If we're doing cursors, we first respect an explicit batchSize option; # next we fallback to the legacy (deprecated) cursor option batchSize; finally we # just give an empty document. Other than batchSize we ignore any other # legacy cursor options. If we're not doing cursors, don't send any # cursor option at all, as servers will choke on it. if ($use_cursor) { if ( defined $batchSize ) { $options->{cursor} = { batchSize => $batchSize }; } elsif ( ref $options->{cursor} eq 'HASH' ) { $batchSize = $options->{cursor}{batchSize}; $options->{cursor} = defined($batchSize) ? { batchSize => $batchSize } : {}; } else { $options->{cursor} = {}; } } else { delete $options->{cursor}; } # read concerns are ignored if the last stage is $out my ($last_op) = keys %{ $self->pipeline->[-1] }; my @command = ( aggregate => $self->coll_name, pipeline => $self->pipeline, ($last_op eq '$out' ? () : ($link->accepts_wire_version(4) ? @{ $self->read_concern->as_args } : () ) ), %$options, ); my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => Tie::IxHash->new(@command), query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = $op->execute( $link, $topology ); # For explain, we give the whole response as fields have changed in # different server versions if ( $options->{explain} ) { return MongoDB::QueryResult->_new( _client => $self->client, _address => $link->address, _ns => '', _bson_codec => $self->bson_codec, _batch_size => 1, _cursor_at => 0, _limit => 0, _cursor_id => 0, _cursor_start => 0, _cursor_flags => {}, _cursor_num => 1, _docs => [ $res->output ], ); } # Fake up a single-batch cursor if we didn't get a cursor response. # We use the 'results' fields as the first (and only) batch if ( !$res->output->{cursor} ) { $res->output->{cursor} = { ns => '', id => 0, firstBatch => ( delete $res->output->{result} ) || [], }; } return $self->_build_result_from_cursor($res); } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_BatchInsert.pm000644 000765 000024 00000007043 12651754051 020747 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
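#
# The aggregation operation above (MongoDB::Op::_Aggregate) assembles a
# command roughly like the sketch below when cursors are in use; on pre-2.6
# servers the cursor field is omitted entirely.  Collection name, pipeline
# stages and batch size here are examples only:
#
#   ( aggregate => 'my_collection',
#     pipeline  => [ { '$match' => { x => 1 } }, { '$group' => { _id => '$y' } } ],
#     cursor    => { batchSize => 100 },
#   )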
# package MongoDB::Op::_BatchInsert; # Encapsulate a multi-document insert operation; returns a # MongoDB::InsertManyResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::BSON; use MongoDB::Error; use MongoDB::InsertManyResult; use MongoDB::OID; use MongoDB::_Constants; use MongoDB::_Protocol; use Types::Standard qw( Str ArrayRef Bool ); use Scalar::Util qw/blessed reftype/; use Tie::IxHash; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); # may or may not have _id; will be added if check_keys is true has documents => ( is => 'ro', required => 1, isa => ArrayRef, ); has ordered => ( is => 'ro', required => 1, isa => Bool, ); has check_keys => ( is => 'ro', required => 1, isa => Bool, ); # starts empty and gets initialized during operations has _doc_ids => ( is => 'ro', writer => '_set_doc_ids', init_arg => undef, isa => ArrayRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteOp MongoDB::Role::_InsertPreEncoder ); sub execute { my ( $self, $link ) = @_; my $documents = $self->documents; my $invalid_chars = $self->check_keys ? '.' : ''; my (@insert_docs, @ids); my $last_idx = $#$documents; for ( my $i = 0; $i <= $last_idx; $i++ ) { push @insert_docs, $self->_pre_encode_insert( $link, $documents->[$i], $invalid_chars ); push @ids, $insert_docs[-1]{metadata}{_id}; } $self->_set_doc_ids(\@ids); my $res = $link->does_write_commands ? $self->_command_insert( $link, \@insert_docs ) : $self->_legacy_op_insert( $link, \@insert_docs ); $res->assert; return $res; } sub _command_insert { my ( $self, $link, $insert_docs ) = @_; # XXX have to check size of docs to insert here and possibly split it my $cmd = Tie::IxHash->new( insert => $self->coll_name, documents => $insert_docs, @{ $self->write_concern->as_args }, ); return $self->_send_write_command( $link, $cmd, undef, "MongoDB::InsertManyResult" ); } sub _legacy_op_insert { my ( $self, $link, $insert_docs ) = @_; # XXX have to check size of docs to insert here and possibly split it my $ns = $self->db_name . "." . $self->coll_name; my $op_bson = MongoDB::_Protocol::write_insert( $ns, join( "", map { $_->{bson} } @$insert_docs ) ); return $self->_send_legacy_op_with_gle( $link, $op_bson, undef, "MongoDB::InsertManyResult" ); } sub _parse_cmd { my ( $self, $res ) = @_; return unless $res->{ok}; my $inserted = $self->_doc_ids; my $ids = [ map +{ index => $_, _id => $inserted->[$_] }, 0 .. $#{$inserted} ]; return ( inserted_count => scalar @$inserted, inserted => $ids ); } BEGIN { no warnings 'once'; *_parse_gle = \&_parse_cmd; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_BulkWrite.pm000644 000765 000024 00000030347 12651754051 020454 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
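#
# The 'queue' attribute consumed below is an arrayref of [ type => $doc ]
# pairs, where the type is one of insert, update or delete (see %OP_MAP).
# A hedged illustration of a small mixed queue (documents are examples only):
#
#   [
#       [ insert => { x => 1 } ],
#       [ update => { q => { x => 1 }, u => { '$inc' => { x => 1 } },
#                     multi => 0, upsert => 0, is_replace => 0 } ],
#       [ delete => { q => { x => 2 }, limit => 1 } ],
#   ]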
# package MongoDB::Op::_BulkWrite; # Encapsulate a multi-document multi-operation write; returns a # MongoDB::BulkWriteResult object use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::BSON; use MongoDB::Error; use MongoDB::BulkWriteResult; use MongoDB::UnacknowledgedResult; use MongoDB::Op::_InsertOne; use MongoDB::Op::_Update; use MongoDB::Op::_Delete; use MongoDB::_Protocol; use MongoDB::_Constants; use MongoDB::_Types qw( WriteConcern ); use Types::Standard qw( ArrayRef Bool Str ); use Safe::Isa; use Scalar::Util qw/blessed reftype/; use Tie::IxHash; use Try::Tiny; use boolean; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has queue => ( is => 'ro', required => 1, isa => ArrayRef, ); has ordered => ( is => 'ro', required => 1, isa => Bool, ); has write_concern => ( is => 'ro', required => 1, isa => WriteConcern, ); # not _WriteOp because we construct our own result objects with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_CommandOp MongoDB::Role::_UpdatePreEncoder MongoDB::Role::_InsertPreEncoder MongoDB::Role::_BypassValidation ); sub execute { my ( $self, $link ) = @_; Carp::confess("NO LINK") unless $link; my $use_write_cmd = $link->does_write_commands; # If using legacy write ops, then there will never be a valid modified_count # result so we set that to undef in the constructor; otherwise, we set it # to 0 so that results accumulate normally. If a mongos on a mixed topology # later fails to set it, results merging will handle it in that case. # If unacknowledged, we have to accumulate a result to get bulk semantics # right and just throw it away later. my $result = MongoDB::BulkWriteResult->_new( modified_count => ( $use_write_cmd ? 0 : undef ), write_errors => [], write_concern_errors => [], op_count => 0, batch_count => 0, inserted_count => 0, upserted_count => 0, matched_count => 0, deleted_count => 0, upserted => [], inserted => [], ); my @batches = $self->ordered ? 
$self->_batch_ordered( $link, $self->queue ) : $self->_batch_unordered( $link, $self->queue ); for my $batch (@batches) { if ($use_write_cmd) { $self->_execute_write_command_batch( $link, $batch, $result ); } else { $self->_execute_legacy_batch( $link, $batch, $result ); } } return MongoDB::UnacknowledgedResult->_new( write_errors => [], write_concern_errors => [], ) if !$self->write_concern->is_acknowledged; # only reach here with an error for unordered bulk ops $result->assert_no_write_error; # write concern errors are thrown only for the entire batch $result->assert_no_write_concern_error; return $result; } my %OP_MAP = ( insert => [ insert => 'documents' ], update => [ update => 'updates' ], delete => [ delete => 'deletes' ], ); # _execute_write_command_batch may split batches if they are too large and # execute them separately sub _execute_write_command_batch { my ( $self, $link, $batch, $result ) = @_; my ( $type, $docs ) = @$batch; my ( $cmd, $op_key ) = @{ $OP_MAP{$type} }; my $boolean_ordered = boolean( $self->ordered ); my ( $db_name, $coll_name, $wc ) = map { $self->$_ } qw/db_name coll_name write_concern/; my @left_to_send = ($docs); while (@left_to_send) { my $chunk = shift @left_to_send; # for update/insert, pre-encode docs as they need custom BSON handling # that can't be applied to an entire write command at once if ( $cmd eq 'update' ) { # take array of hash, validate and encode each update doc; since this # might be called more than once if chunks are getting split, check if # the update doc is already encoded; this also removes the 'is_replace' # field that needs to not be in the command sent to the server for ( my $i = 0; $i <= $#$chunk; $i++ ) { next if ref( $chunk->[$i]{u} ) eq 'MongoDB::BSON::_EncodedDoc'; my $is_replace = delete $chunk->[$i]{is_replace}; $chunk->[$i]{u} = $self->_pre_encode_update( $link, $chunk->[$i]{u}, $is_replace ); } } elsif ( $cmd eq 'insert' ) { # take array of docs, encode each one while saving original or generated _id # field; since this might be called more than once if chunks are getting # split, check if the doc is already encoded for ( my $i = 0; $i <= $#$chunk; $i++ ) { unless ( ref( $chunk->[$i] ) eq 'MongoDB::BSON::_EncodedDoc' ) { $chunk->[$i] = $self->_pre_encode_insert( $link, $chunk->[$i], '.' 
); }; } } my $cmd_doc = [ $cmd => $coll_name, $op_key => $chunk, ordered => $boolean_ordered, @{ $wc->as_args }, ]; if ( $cmd eq 'insert' || $cmd eq 'update' ) { (undef, $cmd_doc) = $self->_maybe_bypass($link, $cmd_doc); } my $op = MongoDB::Op::_Command->_new( db_name => $db_name, query => $cmd_doc, query_flags => {}, bson_codec => $self->bson_codec, ); my $cmd_result = try { $op->execute($link) } catch { if ( $_->$_isa("MongoDB::_CommandSizeError") ) { if ( @$chunk == 1 ) { MongoDB::DocumentError->throw( message => "document too large", document => $chunk->[0], ); } else { unshift @left_to_send, $self->_split_chunk( $link, $chunk, $_->size ); } } else { die $_; } return; }; redo unless $cmd_result; # restart after a chunk split my $r = MongoDB::BulkWriteResult->_parse_cmd_result( op => $type, op_count => scalar @$chunk, result => $cmd_result, cmd_doc => $cmd_doc, ); # append corresponding ops to errors if ( $r->count_write_errors ) { for my $error ( @{ $r->write_errors } ) { $error->{op} = $chunk->[ $error->{index} ]; } } $result->_merge_result($r); $result->assert_no_write_error if $boolean_ordered; } return; } sub _split_chunk { my ( $self, $link, $chunk, $size ) = @_; my $avg_cmd_size = $size / @$chunk; my $new_cmds_per_chunk = int( MAX_BSON_WIRE_SIZE / $avg_cmd_size ); my @split_chunks; while (@$chunk) { push @split_chunks, [ splice( @$chunk, 0, $new_cmds_per_chunk ) ]; } return @split_chunks; } sub _batch_ordered { my ( $self, $link, $queue ) = @_; my @batches; my $last_type = ''; my $count = 0; my $max_batch_count = $link->max_write_batch_size; for my $op (@$queue) { my ( $type, $doc ) = @$op; if ( $type ne $last_type || $count == $max_batch_count ) { push @batches, [ $type => [$doc] ]; $last_type = $type; $count = 1; } else { push @{ $batches[-1][-1] }, $doc; $count++; } } return @batches; } sub _batch_unordered { my ( $self, $link, $queue ) = @_; my %batches = map { ; $_ => [ [] ] } keys %OP_MAP; my $max_batch_count = $link->max_write_batch_size; for my $op (@$queue) { my ( $type, $doc ) = @$op; if ( @{ $batches{$type}[-1] } == $max_batch_count ) { push @{ $batches{$type} }, [$doc]; } else { push @{ $batches{$type}[-1] }, $doc; } } # insert/update/delete are guaranteed to be in random order on Perl 5.18+ my @batches; for my $type ( grep { scalar @{ $batches{$_}[-1] } } keys %batches ) { push @batches, map { [ $type => $_ ] } @{ $batches{$type} }; } return @batches; } sub _execute_legacy_batch { my ( $self, $link, $batch, $result ) = @_; my ( $type, $docs ) = @$batch; my $ordered = $self->ordered; # if write concern is not safe, we have to proxy with a safe one so that # we can interrupt ordered bulks, even while ignoring the actual error my $wc = $self->write_concern; my $w_0 = !$wc->is_acknowledged; if ($w_0) { my $wc_args = $wc->as_args(); my $wcs = scalar @$wc_args ? $wc->as_args()->[1] : {}; $wcs->{w} = 1; $wc = MongoDB::WriteConcern->new($wcs); } # XXX successive inserts ought to get batched up, up to the max size for # batch, but we have no feedback on max size to know how many to put # together. I wonder if send_insert should return a list of write results, # or if it should just strip out however many docs it can from an arrayref # and leave the rest, and then this code can iterate. for my $doc (@$docs) { my $op; if ( $type eq 'insert' ) { $op = MongoDB::Op::_InsertOne->_new( db_name => $self->db_name, coll_name => $self->coll_name, full_name => $self->db_name . "." . 
$self->coll_name, document => $doc, write_concern => $wc, bson_codec => $self->bson_codec, ); } elsif ( $type eq 'update' ) { $op = MongoDB::Op::_Update->_new( db_name => $self->db_name, coll_name => $self->coll_name, full_name => $self->db_name . "." . $self->coll_name, filter => $doc->{q}, update => $doc->{u}, multi => $doc->{multi}, upsert => $doc->{upsert}, write_concern => $wc, is_replace => $doc->{is_replace}, bson_codec => $self->bson_codec, ); } elsif ( $type eq 'delete' ) { $op = MongoDB::Op::_Delete->_new( db_name => $self->db_name, coll_name => $self->coll_name, full_name => $self->db_name . "." . $self->coll_name, filter => $doc->{q}, just_one => !!$doc->{limit}, write_concern => $wc, bson_codec => $self->bson_codec, ); } my $op_result = try { $op->execute($link); } catch { if ( $_->$_isa("MongoDB::DatabaseError") && $_->result->does("MongoDB::Role::_WriteResult") ) { return $_->result; } die $_ unless $w_0 && /exceeds maximum size/; return undef; }; my $gle_result = $op_result ? MongoDB::BulkWriteResult->_parse_write_op($op_result) : undef; # Even for {w:0}, if the batch is ordered we have to break on the first # error, but we don't throw the error to the user. if ($w_0) { last if $ordered && ( !$gle_result || $gle_result->count_write_errors ); } else { $result->_merge_result($gle_result); $result->assert_no_write_error if $ordered; } } return; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Command.pm000644 000765 000024 00000003440 12651754051 020114 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Command; # Encapsulate running a command and returning a MongoDB::CommandResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Constants; use MongoDB::_Types qw( Document ); use Types::Standard qw( HashRef Str ); use Tie::IxHash; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has query => ( is => 'ro', required => 1, writer => '_set_query', isa => Document, ); has query_flags => ( is => 'ro', required => 1, isa => HashRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_CommandOp MongoDB::Role::_ReadOp MongoDB::Role::_ReadPrefModifier ); sub execute { my ( $self, $link, $topology_type ) = @_; $topology_type ||= 'Single'; # if not specified, assume direct # $query is passed as a reference because it *may* be replaced $self->_apply_read_prefs( $link, $topology_type, $self->query_flags, \$self->query); my $res = MongoDB::CommandResult->_new( output => $self->_send_command( $link, $self->query, $self->query_flags ), address => $link->address ); $res->assert; return $res; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Count.pm000644 000765 000024 00000003616 12651754051 017633 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Count; # Encapsulate code path for count commands use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Op::_Command; use MongoDB::Error; use MongoDB::_Types qw( Document ); use Types::Standard qw( Str InstanceOf HashRef ); use Tie::IxHash; use boolean; use namespace::clean; has filter => ( is => 'ro', required => 1, isa => HashRef, ); has options => ( is => 'ro', required => 1, isa => HashRef, ); has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp ); sub execute { my ( $self, $link, $topology ) = @_; my $command = [ count => $self->coll_name, query => $self->filter, ($link->accepts_wire_version(4) ? @{ $self->read_concern->as_args } : () ), %{ $self->options }, ]; my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $command, query_flags => {}, bson_codec => $self->bson_codec, read_preference => $self->read_preference, ); my $res = $op->execute( $link, $topology ); return $res->output; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_CreateIndexes.pm000644 000765 000024 00000005342 12651754051 021264 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_CreateIndexes; # Encapsulate index creation operations; returns a MongoDB::CommandResult # or a MongoDB::InsertManyResult, depending on the server version use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::CommandResult; use MongoDB::_Constants; use MongoDB::_Types -types; use MongoDB::Op::_BatchInsert; use Types::Standard qw( ArrayRef HashRef Str ); use MongoDB::_Types qw( WriteConcern ); use Tie::IxHash; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has indexes => ( is => 'ro', required => 1, isa => ArrayRef [HashRef], ); has write_concern => ( is => 'ro', required => 1, isa => WriteConcern, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_CommandOp ); sub execute { my ( $self, $link ) = @_; my $res = $link->does_write_commands ? $self->_command_create_indexes($link) : $self->_legacy_index_insert($link); $res->assert; return $res; } sub _command_create_indexes { my ( $self, $link, $op_doc ) = @_; my $cmd = Tie::IxHash->new( createIndexes => $self->coll_name, indexes => $self->indexes, ); my $res = $self->_send_command( $link, $cmd ); return MongoDB::CommandResult->_new( output => $self->write_concern->is_acknowledged ? 
$res : { ok => 1 }, address => $link->address, ); } sub _legacy_index_insert { my ( $self, $link, $op_doc ) = @_; # construct docs for an insert many op my $ns = join( ".", $self->db_name, $self->coll_name ); my $indexes = [ map { { %$_, ns => $ns } } @{ $self->indexes } ]; my $op = MongoDB::Op::_BatchInsert->_new( db_name => $self->db_name, coll_name => "system.indexes", documents => $indexes, write_concern => $self->write_concern, bson_codec => $self->bson_codec, check_keys => 0, ordered => 1, ); return $op->execute($link); } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Delete.pm000644 000765 000024 00000005205 12651754051 017741 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Delete; # Encapsulate a delete operation; returns a MongoDB::DeleteResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::BSON; use MongoDB::DeleteResult; use MongoDB::_Constants; use MongoDB::_Protocol; use MongoDB::_Types qw( Document ); use Types::Standard qw( Bool Str ); use Tie::IxHash; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has full_name => ( is => 'ro', required => 1, isa => Str, ); has filter => ( is => 'ro', required => 1, isa => Document, ); has just_one => ( is => 'ro', required => 1, isa => Bool, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteOp ); sub execute { my ( $self, $link ) = @_; my $filter = ref( $self->filter ) eq 'ARRAY' ? { @{ $self->filter } } : $self->filter; my $op_doc = { q => $filter, limit => $self->just_one ? 1 : 0 }; return ( $link->does_write_commands ? ( $self->_send_write_command( $link, [ delete => $self->coll_name, deletes => [$op_doc], @{ $self->write_concern->as_args }, ], $op_doc, "MongoDB::DeleteResult" )->assert ) : ( $self->_send_legacy_op_with_gle( $link, MongoDB::_Protocol::write_delete( $self->full_name, $self->bson_codec->encode_one( $self->filter ), { just_one => $self->just_one ? 1 : 0 } ), $op_doc, "MongoDB::DeleteResult", )->assert ) ); } sub _parse_cmd { my ( $self, $res ) = @_; return ( deleted_count => $res->{n} || 0 ); } BEGIN { no warnings 'once'; *_parse_gle = \&_parse_cmd; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Distinct.pm000644 000765 000024 00000004724 12651754051 020325 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
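#
# The distinct operation below issues a 'distinct' command and then fakes a
# single-batch cursor from the command's 'values' array so callers receive a
# MongoDB::QueryResult.  A rough sketch of the command document (collection,
# key and filter values are examples only):
#
#   ( distinct => 'my_collection',
#     key      => 'status',
#     query    => { active => 1 },
#   )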
# package MongoDB::Op::_Distinct; # Encapsulate distinct operation; return MongoDB::QueryResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Op::_Command; use MongoDB::_Constants; use MongoDB::_Types qw( Document ); use Types::Standard qw( InstanceOf HashRef Str ); use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); has fieldname=> ( is => 'ro', required => 1, isa => Str, ); has filter => ( is => 'ro', required => 1, isa => Document, ); has options => ( is => 'ro', required => 1, isa => HashRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); sub execute { my ( $self, $link, $topology ) = @_; my $options = $self->options; my $filter = ref( $self->filter ) eq 'ARRAY' ? { @{ $self->filter } } : $self->filter; my @command = ( distinct => $self->coll_name, key => $self->fieldname, query => $filter, ($link->accepts_wire_version(4) ? @{ $self->read_concern->as_args } : ()), %$options ); my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => Tie::IxHash->new(@command), query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = $op->execute( $link, $topology ); $res->output->{cursor} = { ns => '', id => 0, firstBatch => ( delete $res->output->{values} ) || [], }; return $self->_build_result_from_cursor($res); } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Explain.pm000644 000765 000024 00000005523 12651754051 020142 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Explain; # Encapsulate code path for explain commands/queries use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Op::_Command; use MongoDB::Op::_Query; use MongoDB::QueryResult::Filtered; use MongoDB::_Constants; use MongoDB::Error; use MongoDB::CommandResult; use MongoDB::_Types qw( Document ); use Types::Standard qw( HashRef InstanceOf Str ); use Tie::IxHash; use boolean; use namespace::clean; has query => ( is => 'ro', required => 1, isa => InstanceOf['MongoDB::_Query'], ); has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp ); sub execute { my ( $self, $link, $topology ) = @_; my $res = $link->accepts_wire_version(4) ? 
$self->_command_explain( $link, $topology ) : $self->_legacy_explain( $link, $topology ); return $res; } sub _command_explain { my ( $self, $link, $topology ) = @_; my $cmd = Tie::IxHash->new( @{ $self->query->as_query_op->as_command } ); # XXX need to standardize error here if (defined $self->query->modifiers->{hint}) { # cannot use hint on explain, throw error MongoDB::Error->throw( message => "cannot use 'hint' with 'explain'", ); } my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => [ explain => $cmd, @{ $self->read_concern->as_args } ], query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = $op->execute( $link, $topology ); return $res->{output}; } sub _legacy_explain { my ( $self, $link, $topology ) = @_; my $new_query = $self->query->clone; $new_query->modifiers->{'$explain'} = true; # per David Storch, drivers *must* send a negative limit to instruct # the query planner analysis module to add a LIMIT stage. For older # explain implementations, it also ensures a cursor isn't left open. $new_query->limit( -1 * abs( $new_query->limit ) ); return $new_query->execute->next; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_FindAndDelete.pm000644 000765 000024 00000005346 12651754051 021173 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_FindAndDelete; # Encapsulate find_and_delete operation; atomically delete and return doc use version; our $VERSION = 'v1.2.2'; use Moo; use boolean; use MongoDB::Error; use MongoDB::Op::_Command; use Types::Standard qw( InstanceOf Str HashRef Maybe ); use MongoDB::_Types qw( WriteConcern ); use Try::Tiny; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); has filter => ( is => 'ro', required => 1, isa => HashRef, ); has options => ( is => 'ro', required => 1, isa => HashRef, ); has write_concern => ( is => 'ro', required => 1, isa => WriteConcern, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); sub execute { my ( $self, $link, $topology ) = @_; my $command = [ findAndModify => $self->coll_name, query => $self->filter, remove => true, ($link->accepts_wire_version(4) ? 
(@{ $self->write_concern->as_args }) : () ), %{ $self->options }, ]; my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $command, query_flags => {}, bson_codec => $self->bson_codec, ); my $result; try { $result = $op->execute( $link, $topology ); $result = $result->{output}; } catch { die $_ unless $_ eq 'No matching object found'; }; # findAndModify returns ok:1 even for write concern errors, so # we must check and throw explicitly if ( $result->{writeConcernError} ) { MongoDB::WriteConcernError->throw( message => $result->{writeConcernError}{errmsg}, result => $result, code => WRITE_CONCERN_ERROR, ); } return $result->{value} if $result; return; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_FindAndUpdate.pm000644 000765 000024 00000005743 12651754051 021214 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_FindAndUpdate; # Encapsulate find_and_update operation; atomically update and return doc use version; our $VERSION = 'v1.2.2'; use Moo; use boolean; use MongoDB::Error; use MongoDB::Op::_Command; use Types::Standard qw( InstanceOf Str HashRef Maybe ); use MongoDB::_Types qw( WriteConcern ); use Try::Tiny; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); has filter => ( is => 'ro', required => 1, isa => HashRef, ); has modifier => ( is => 'ro', required => 1, isa => HashRef, ); has options => ( is => 'ro', required => 1, isa => HashRef, ); has write_concern => ( is => 'ro', required => 1, isa => WriteConcern, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp MongoDB::Role::_BypassValidation ); sub execute { my ( $self, $link, $topology ) = @_; my ( undef, $command ) = $self->_maybe_bypass( $link, [ findAndModify => $self->coll_name, query => $self->filter, update => $self->modifier, ( $link->accepts_wire_version(4) ? ( @{ $self->write_concern->as_args } ) : () ), %{ $self->options }, ] ); my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $command, query_flags => {}, bson_codec => $self->bson_codec, ); my $result; try { $result = $op->execute( $link, $topology ); $result = $result->{output}; } catch { die $_ unless $_ eq 'No matching object found'; }; # findAndModify returns ok:1 even for write concern errors, so # we must check and throw explicitly if ( $result->{writeConcernError} ) { MongoDB::WriteConcernError->throw( message => $result->{writeConcernError}{errmsg}, result => $result, code => WRITE_CONCERN_ERROR, ); } return $result->{value} if $result; return; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_FSyncUnlock.pm000644 000765 000024 00000005645 12651754051 020745 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_FSyncUnlock; # Encapsulate collection list operations; returns arrayref of collection # names use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Op::_Command; use MongoDB::Op::_Query; use MongoDB::QueryResult::Filtered; use MongoDB::_Constants; use MongoDB::_Types qw( Document ); use Types::Standard qw( HashRef InstanceOf Str ); use Tie::IxHash; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf['MongoDB::MongoClient'], ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); sub execute { my ( $self, $link, $topology ) = @_; my $res = $link->accepts_wire_version(4) ? $self->_command_fsync_unlock( $link, $topology ) : $self->_legacy_fsync_unlock( $link, $topology ); return $res; } sub _command_fsync_unlock { my ( $self, $link, $topology ) = @_; my $cmd = Tie::IxHash->new( fsyncUnlock => 1, ); my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $cmd, query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = $op->execute( $link, $topology ); return $res->{output}; } sub _legacy_fsync_unlock { my ( $self, $link, $topology ) = @_; my $query = MongoDB::_Query->_new( modifiers => {}, filter => {}, allowPartialResults => 0, batchSize => 0, comment => '', cursorType => 'non_tailable', maxAwaitTimeMS => 0, maxTimeMS => 0, noCursorTimeout => 0, oplogReplay => 0, projection => undef, skip => 0, sort => undef, db_name => 'admin', coll_name => '$cmd.sys.unlock', limit => -1, bson_codec => $self->bson_codec, client => $self->client, read_preference => $self->read_preference, ); my $op = $query->as_query_op(); return $op->execute( $link, $topology )->next; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_GetMore.pm000644 000765 000024 00000005703 12651754051 020104 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
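# Illustrative example (a minimal sketch; assumes $client is a connected
# MongoDB::MongoClient with rights on the admin database): locking writes
# and then releasing the lock via the op above.
$client->fsync( { lock => 1 } );   # flush to disk and block writes
# ... take a filesystem-level backup or snapshot here ...
$client->fsync_unlock;             # fsyncUnlock command on 3.2+, otherwise
                                   # the legacy admin '$cmd.sys.unlock' query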
# package MongoDB::Op::_GetMore; # Encapsulate a cursor fetch operation; returns raw results object # (after inflation from BSON) use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Constants; use Types::Standard qw( Maybe Any InstanceOf Num Str ); use MongoDB::_Protocol; use namespace::clean; has ns => ( is => 'ro', required => 1, isa => Str, ); has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); has cursor_id => ( is => 'ro', required => 1, isa => Any, ); has batch_size => ( is => 'ro', required => 1, isa => Num, ); has max_time_ms => ( is => 'ro', isa => Maybe[Num], ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_DatabaseOp MongoDB::Role::_CommandOp ); sub execute { my ( $self, $link ) = @_; my $res = $link->accepts_wire_version(4) ? $self->_command_get_more( $link ) : $self->_legacy_get_more( $link ); return $res; } sub _command_get_more { my ( $self, $link ) = @_; my $cmd = [ getMore => $self->cursor_id, collection => $self->coll_name, $self->batch_size > 0 ? (batchSize => $self->batch_size) : (), defined $self->max_time_ms ? (maxTimeMS => $self->max_time_ms) : (), ]; my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $cmd, query_flags => {}, bson_codec => $self->bson_codec, ); my $c = $op->execute( $link )->output->{cursor}; my $batch = $c->{nextBatch} || []; return { cursor_id => $c->{id} || 0, flags => {}, starting_from => 0, number_returned => scalar @$batch, docs => $batch, }; } sub _legacy_get_more { my ( $self, $link ) = @_; my ( $op_bson, $request_id ) = MongoDB::_Protocol::write_get_more( map { $self->$_ } qw/ns cursor_id batch_size/ ); my $result = $self->_query_and_receive( $link, $op_bson, $request_id, $self->bson_codec ); $result->{address} = $link->address; return $result; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_InsertOne.pm000644 000765 000024 00000005177 12651754051 020455 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
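# Sketch of the reshaping done by _command_get_more above: the command
# reply's {cursor => {id, nextBatch}} is converted into the same hash shape
# the legacy OP_GET_MORE path produces, so downstream cursor code does not
# need to know which wire protocol was used. The helper name is hypothetical.
sub _reshape_getmore_reply {
    my ($cursor) = @_;          # i.e. $op->execute($link)->output->{cursor}
    my $batch = $cursor->{nextBatch} || [];
    return {
        cursor_id       => $cursor->{id} || 0,
        flags           => {},
        starting_from   => 0,
        number_returned => scalar @$batch,
        docs            => $batch,
    };
}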
# package MongoDB::Op::_InsertOne; # Encapsulate a single-document insert operation; returns a # MongoDB::InsertOneResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::BSON; use MongoDB::Error; use MongoDB::InsertOneResult; use MongoDB::OID; use MongoDB::_Constants; use MongoDB::_Protocol; use Types::Standard qw( Str ); use boolean; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has full_name => ( is => 'ro', required => 1, isa => Str, ); has document => ( is => 'ro', required => 1, ); # this starts undef and gets initialized during processing has _doc_id => ( is => 'ro', init_arg => undef, writer => '_set_doc_id', ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteOp MongoDB::Role::_InsertPreEncoder MongoDB::Role::_BypassValidation ); sub execute { my ( $self, $link ) = @_; my ( $orig_doc, $insert_doc ) = ( $self->document ); ( $insert_doc = $self->_pre_encode_insert( $link, $orig_doc, '.' ) ), ( $self->_set_doc_id( $insert_doc->{metadata}{_id} ) ); return $link->does_write_commands ? ( $self->_send_write_command( $self->_maybe_bypass( $link, [ insert => $self->coll_name, documents => [$insert_doc], @{ $self->write_concern->as_args }, ], ), $orig_doc, "MongoDB::InsertOneResult", )->assert ) : ( $self->_send_legacy_op_with_gle( $link, MongoDB::_Protocol::write_insert( $self->full_name, $insert_doc->{bson} ), $orig_doc, "MongoDB::InsertOneResult" )->assert ); } sub _parse_cmd { my ( $self, $res ) = @_; return ( $res->{ok} ? ( inserted_id => $self->_doc_id ) : () ); } BEGIN { no warnings 'once'; *_parse_gle = \&_parse_cmd; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_KillCursors.pm000644 000765 000024 00000002172 12651754051 021013 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_KillCursors; # Encapsulate a cursor kill operation; returns true use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Constants; use Types::Standard qw( ArrayRef Str ); use MongoDB::_Protocol; use namespace::clean; has cursor_ids => ( is => 'ro', required => 1, isa => ArrayRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor ); sub execute { my ( $self, $link ) = @_; $link->write( MongoDB::_Protocol::write_kill_cursors( @{ $self->cursor_ids } ) ); return 1; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_ListCollections.pm000644 000765 000024 00000007676 12651754051 021667 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
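# Illustrative example (a minimal sketch; $coll and the document are
# hypothetical): the user-level call that reaches the insert op above. When
# the document has no _id, an OID is generated during BSON encoding and
# reported via inserted_id without modifying the caller's hash.
my $res = $coll->insert_one( { name => 'Ada', born => 1815 } );
my $id  = $res->inserted_id;    # MongoDB::OID assigned client-side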
# See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_ListCollections; # Encapsulate collection list operations; returns arrayref of collection # names use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Op::_Command; use MongoDB::Op::_Query; use MongoDB::QueryResult::Filtered; use MongoDB::_Constants; use MongoDB::_Types qw( Document ); use Types::Standard qw( HashRef InstanceOf Str ); use Tie::IxHash; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf['MongoDB::MongoClient'], ); has filter => ( is => 'ro', required => 1, isa => Document, ); has options => ( is => 'ro', required => 1, isa => HashRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); sub execute { my ( $self, $link, $topology ) = @_; my $res = $link->accepts_wire_version(3) ? $self->_command_list_colls( $link, $topology ) : $self->_legacy_list_colls( $link, $topology ); return $res; } sub _command_list_colls { my ( $self, $link, $topology ) = @_; my $options = $self->options; # batchSize is not a command parameter itself like other options my $batchSize = delete $options->{batchSize}; if ( defined $batchSize ) { $options->{cursor} = { batchSize => $batchSize }; } else { $options->{cursor} = {}; } my $filter = ref( $self->filter ) eq 'ARRAY' ? { @{ $self->filter } } : $self->filter; my $cmd = Tie::IxHash->new( listCollections => 1, filter => $filter, %{$self->options}, ); my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $cmd, query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = $op->execute( $link, $topology ); return $self->_build_result_from_cursor( $res ); } sub _legacy_list_colls { my ( $self, $link, $topology ) = @_; my $query = MongoDB::_Query->_new( modifiers => {}, allowPartialResults => 0, batchSize => 0, comment => '', cursorType => 'non_tailable', limit => 0, maxAwaitTimeMS => 0, maxTimeMS => 0, noCursorTimeout => 0, oplogReplay => 0, projection => undef, skip => 0, sort => undef, %{$self->options}, db_name => $self->db_name, coll_name => 'system.namespaces', bson_codec => $self->bson_codec, client => $self->client, read_preference => $self->read_preference, filter => $self->filter, ); my $op = $query->as_query_op( { post_filter => \&__filter_legacy_names } ); return $op->execute( $link, $topology ); } # exclude names with '$' except oplog.$ # XXX why do we include oplog.$? sub __filter_legacy_names { my $doc = shift; # remove leading database name for compatibility with listCollections $doc->{name} =~ s/^[^.]+\.//; my $name = $doc->{name}; return !( index( $name, '$' ) >= 0 && index( $name, '.oplog.$' ) < 0 ); } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_ListIndexes.pm000644 000765 000024 00000006404 12651754051 020774 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
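# Illustrative example (a minimal sketch; the database name is hypothetical):
# listing collections from user code. On 3.0+ servers this issues
# listCollections; on older servers it queries system.namespaces, strips the
# database prefix and filters out '$'-special names as shown above.
my @names = $client->get_database('test')->collection_names;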
# See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_ListIndexes; # Encapsulate index list operation; returns array ref of index documents use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use MongoDB::Op::_Command; use MongoDB::Op::_Query; use MongoDB::_Constants; use Types::Standard qw( InstanceOf Str ); use Tie::IxHash; use Try::Tiny; use Safe::Isa; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); sub execute { my ( $self, $link, $topology ) = @_; my $res = $link->accepts_wire_version(3) ? $self->_command_list_indexes( $link, $topology ) : $self->_legacy_list_indexes( $link, $topology ); return $res; } sub _command_list_indexes { my ( $self, $link, $topology ) = @_; my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => Tie::IxHash->new( listIndexes => $self->coll_name, cursor => {} ), query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = try { $op->execute( $link, $topology ); } catch { if ( $_->$_isa("MongoDB::DatabaseError") ) { return undef if $_->code == NAMESPACE_NOT_FOUND(); } die $_; }; return $res ? $self->_build_result_from_cursor($res) : $self->_empty_query_result($link); } sub _legacy_list_indexes { my ( $self, $link, $topology ) = @_; my $ns = $self->db_name . "." . $self->coll_name; my $op = MongoDB::Op::_Query->_new( modifiers => {}, allow_partial_results => 0, batch_size => 0, comment => '', cursor_type => 'non_tailable', limit => 0, max_time_ms => 0, no_cursor_timeout => 0, oplog_replay => 0, projection => undef, skip => 0, sort => undef, db_name => $self->db_name, coll_name => 'system.indexes', bson_codec => $self->bson_codec, client => $self->client, read_preference => $self->read_preference, filter => Tie::IxHash->new( ns => $ns ), ); return $op->execute( $link, $topology ); } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_ParallelScan.pm000644 000765 000024 00000003405 12651754051 021100 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
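# Illustrative example (a minimal sketch; assumes $coll is an existing
# MongoDB::Collection): listing indexes via the index view, which uses the
# listIndexes command on 3.0+ and a system.indexes query otherwise.
for my $idx ( $coll->indexes->list->all ) {
    printf "%s on (%s)\n", $idx->{name}, join ', ', keys %{ $idx->{key} };
}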
# package MongoDB::Op::_ParallelScan; # Encapsulate code path for parallelCollectionScan commands use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Op::_Command; use MongoDB::Error; use Types::Standard qw( Int Str ); use Tie::IxHash; use boolean; use namespace::clean; has num_cursors => ( is => 'ro', required => 1, isa => Int, ); has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp ); sub execute { my ( $self, $link, $topology ) = @_; my $command = [ parallelCollectionScan => $self->coll_name, numCursors => $self->num_cursors, ($link->accepts_wire_version(4) ? @{ $self->read_concern->as_args } : () ), ]; my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $command, query_flags => {}, bson_codec => $self->bson_codec, read_preference => $self->read_preference, ); return $op->execute( $link, $topology ); } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Query.pm000644 000765 000024 00000017345 12651754051 017654 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Query; # Encapsulate a query operation; returns a MongoDB::QueryResult object use version; our $VERSION = 'v1.2.2'; use Moo; use List::Util qw/min/; use MongoDB::BSON; use MongoDB::QueryResult; use MongoDB::_Constants; use MongoDB::_Protocol; use MongoDB::_Types qw( Document CursorType IxHash ); use Types::Standard qw( CodeRef HashRef InstanceOf Maybe Bool Num Str ); use boolean; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has client => ( is => 'ro', required => 1, isa => InstanceOf ['MongoDB::MongoClient'], ); has projection => ( is => 'ro', isa => Maybe [Document], ); has [qw/batch_size limit skip/] => ( is => 'ro', required => 1, isa => Num, ); has sort => ( is => 'ro', isa => Maybe( [IxHash] ), ); has filter => ( is => 'ro', isa => Document, ); has comment => ( is => 'ro', isa => Str, ); has max_await_time_ms => ( is => 'ro', isa => Maybe[Num], ); has max_time_ms => ( is => 'ro', isa => Maybe[Num], ); has no_cursor_timeout => ( is => 'ro', isa => Bool, ); has allow_partial_results => ( is => 'ro', isa => Bool, ); has modifiers => ( is => 'ro', isa => HashRef, ); has cursor_type => ( is => 'ro', isa => CursorType, ); has post_filter => ( is => 'ro', predicate => 'has_post_filter', isa => Maybe [CodeRef], ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_ReadOp MongoDB::Role::_CommandCursorOp ); with 'MongoDB::Role::_ReadPrefModifier'; sub execute { my ( $self, $link, $topology ) = @_; my $res = $link->accepts_wire_version(4) ? 
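# Illustrative example (a minimal sketch; assumes a collection-level
# parallel_scan helper is available in this driver version): each returned
# result set holds an independent server-side cursor that can be drained by
# a separate worker.
my @result_sets = $coll->parallel_scan(4);    # ask for up to 4 cursors
for my $rs (@result_sets) {
    while ( my $doc = $rs->next ) {
        # process $doc ...
    }
}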
$self->_command_query( $link, $topology ) : $self->_legacy_query( $link, $topology ); return $res; } sub _command_query { my ( $self, $link, $topology ) = @_; my $op = MongoDB::Op::_Command->_new( db_name => $self->db_name, query => $self->as_command, query_flags => {}, read_preference => $self->read_preference, bson_codec => $self->bson_codec, ); my $res = $op->execute( $link, $topology ); return $self->_build_result_from_cursor( $res ); } sub _legacy_query { my ( $self, $link, $topology ) = @_; my $query_flags = { tailable => ( $self->cursor_type =~ /^tailable/ ? 1 : 0 ), await_data => $self->cursor_type eq 'tailable_await', immortal => $self->no_cursor_timeout, partial => $self->allow_partial_results, }; # build starting query document; modifiers come first as other parameters # take precedence. my $query = { ( $self->modifiers ? %{ $self->modifiers } : () ), ( $self->comment ? ( '$comment' => $self->comment ) : () ), ( $self->sort ? ( '$orderby' => $self->sort ) : () ), ( ( $self->max_time_ms && $self->coll_name !~ /\A\$cmd/ ) ? ( '$maxTimeMS' => $self->max_time_ms ) : () ), '$query' => ($self->filter || {}), }; # if no modifers were added and there is no 'query' key in '$query' # we remove the extra layer; this is necessary as some special # command queries will choke on '$query' # (see https://jira.mongodb.org/browse/SERVER-14294) $query = $query->{'$query'} if keys %$query == 1 && !( ( ref( $query->{'$query'} ) eq 'Tie::IxHash' ) ? $query->{'$query'}->EXISTS('query') : exists $query->{'$query'}{query} ); my $ns = $self->db_name . "." . $self->coll_name; my $filter = $self->bson_codec->encode_one( $query ); # rules for calculating initial batch size my $limit = $self->limit || 0; my $batch_size = $self->batch_size || 0; my $n_to_return = $limit == 0 ? $batch_size : $batch_size == 0 ? $limit : $limit < 0 ? $limit : min( $limit, $batch_size ); my $proj = $self->projection ? $self->bson_codec->encode_one( $self->projection ) : undef; # $query is passed as a reference because it *may* be replaced $self->_apply_read_prefs( $link, $topology, $query_flags, \$query); my ( $op_bson, $request_id ) = MongoDB::_Protocol::write_query( $ns, $filter, $proj, $self->skip, $n_to_return, $query_flags ); my $result = $self->_query_and_receive( $link, $op_bson, $request_id, $self->bson_codec ); my $class = $self->has_post_filter ? "MongoDB::QueryResult::Filtered" : "MongoDB::QueryResult"; return $class->_new( _client => $self->client, _address => $link->address, _ns => $ns, _bson_codec => $self->bson_codec, _batch_size => $n_to_return, _cursor_at => 0, _limit => $self->limit, _cursor_id => $result->{cursor_id}, _cursor_start => $result->{starting_from}, _cursor_flags => $result->{flags} || {}, _cursor_num => $result->{number_returned}, _docs => $result->{docs}, _post_filter => $self->post_filter, ); } sub as_command { my ($self) = @_; my ($limit, $batch_size, $single_batch) = ($self->limit, $self->batch_size, 0); $single_batch = $limit < 0 || $batch_size < 0; $limit = abs($limit); $batch_size = $limit if $single_batch; my $tailable = $self->cursor_type =~ /^tailable/ ? true : false; my $await_data = $self->cursor_type eq 'tailable_await' ? true : false; my $max_time = $await_data ? $self->max_await_time_ms : $self->max_time_ms ; my $mod = $self->modifiers; return [ find => $self->coll_name, filter => $self->filter, (defined $self->sort ? (sort => $self->sort) : ()), (defined $self->projection ? (projection => $self->projection) : ()), (defined $mod->{'$hint'} ? 
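# Sketch of the "rules for calculating initial batch size" used by the
# legacy query path above (the value becomes OP_QUERY's nToReturn field);
# the helper name is hypothetical.
use List::Util qw( min );

sub _initial_n_to_return {
    my ( $limit, $batch_size ) = map { $_ || 0 } @_;
    return $limit == 0      ? $batch_size                 # no limit set
         : $batch_size == 0 ? $limit                      # no batch size set
         : $limit < 0       ? $limit                      # negative: single batch
         :                    min( $limit, $batch_size );
}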
(hint => $mod->{'$hint'}) : ()), skip => $self->skip, ($limit ? (limit => $limit) : ()), ($batch_size ? (batchSize => $batch_size) : ()), singleBatch => boolean($single_batch), ($self->{comment} ? (comment => $self->comment) : ()), (defined $mod->{maxScan} ? (maxScan => $mod->{maxScan}) : ()), (defined $self->{max_time_ms} ? (maxTimeMS => $self->{max_time_ms}) : ()), (defined $mod->{max} ? (max => $mod->{max}) : ()), (defined $mod->{min} ? (min => $mod->{min}) : ()), (defined $mod->{returnKey} ? (returnKey => $mod->{returnKey}) : ()), (defined $mod->{showDiskLoc} ? (showRecordId => $mod->{showDiskLoc}) : ()), (defined $mod->{snapshot} ? (snapshot => $mod->{snapshot}) : ()), tailable => $tailable, noCursorTimeout => boolean($self->no_cursor_timeout), awaitData => $await_data, allowPartialResults => boolean($self->allow_partial_results), @{$self->read_concern->as_args}, ]; } 1; MongoDB-v1.2.2/lib/MongoDB/Op/_Update.pm000644 000765 000024 00000011177 12651754051 017766 0ustar00davidstaff000000 000000 # # Copyright 2014 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::Op::_Update; # Encapsulate an update operation; returns a MongoDB::UpdateResult use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::BSON; use MongoDB::UpdateResult; use MongoDB::_Constants; use MongoDB::_Protocol; use MongoDB::_Types qw( Document ); use Types::Standard qw( Bool Str ); use Tie::IxHash; use boolean; use namespace::clean; has db_name => ( is => 'ro', required => 1, isa => Str, ); has coll_name => ( is => 'ro', required => 1, isa => Str, ); has full_name => ( is => 'ro', required => 1, isa => Str, ); has filter => ( is => 'ro', required => 1, isa => Document, ); has update => ( is => 'ro', required => 1, ); has is_replace => ( is => 'ro', required => 1, isa => Bool, ); has multi => ( is => 'ro', required => 1, isa => Bool, ); has upsert => ( is => 'ro', ); with $_ for qw( MongoDB::Role::_PrivateConstructor MongoDB::Role::_WriteOp MongoDB::Role::_UpdatePreEncoder MongoDB::Role::_BypassValidation ); # cached my ($true, $false) = (true, false); sub execute { my ( $self, $link ) = @_; my $orig_op = { q => ( ref( $self->filter ) eq 'ARRAY' ? { @{ $self->filter } } : $self->filter ), u => $self->update, multi => $self->multi ? $true : $false, upsert => $self->upsert ? $true : $false, }; return $link->does_write_commands ? 
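# Illustrative example (a minimal sketch; assumes $coll and $id exist): the
# user-level calls that feed this op. update_one and update_many differ only
# in the 'multi' flag built above, replace_one sets is_replace, and the
# upsert option maps directly onto the 'upsert' field.
$coll->update_one(  { _id  => $id },    { '$inc' => { hits => 1 } } );
$coll->update_many( { lang => 'perl' }, { '$set' => { cool => 1 } } );

my $r = $coll->replace_one( { _id => $id }, { name => 'fresh copy' },
                            { upsert => 1 } );
print "upserted as ", $r->upserted_id, "\n" if defined $r->upserted_id;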
( $self->_send_write_command( $self->_maybe_bypass( $link, [ update => $self->coll_name, updates => [ { %$orig_op, u => $self->_pre_encode_update( $link, $orig_op->{u}, $self->is_replace ), } ], @{ $self->write_concern->as_args }, ], ), $orig_op, "MongoDB::UpdateResult" )->assert ) : ( $self->_send_legacy_op_with_gle( $link, MongoDB::_Protocol::write_update( $self->full_name, $self->bson_codec->encode_one( $orig_op->{q}, { invalid_chars => '' } ), $self->_pre_encode_update( $link, $orig_op->{u}, $self->is_replace )->{bson}, { upsert => $orig_op->{upsert}, multi => $orig_op->{multi}, }, ), $orig_op, "MongoDB::UpdateResult" )->assert ); } sub _parse_cmd { my ( $self, $res ) = @_; return ( matched_count => ($res->{n} || 0) - @{ $res->{upserted} || [] }, modified_count => $res->{nModified}, upserted_id => $res->{upserted} ? $res->{upserted}[0]{_id} : undef, ); } sub _parse_gle { my ( $self, $res, $orig_doc ) = @_; # For 2.4 and earlier, 'upserted' has _id only if the _id is an OID. # Otherwise, we have to pick it out of the update document or query # document when we see updateExisting is false but the number of docs # affected is 1 my $upserted = $res->{upserted}; if (! defined( $upserted ) && exists( $res->{updatedExisting} ) && !$res->{updatedExisting} && $res->{n} == 1 ) { $upserted = _find_id( $orig_doc->{u} ); $upserted = _find_id( $orig_doc->{q} ) unless defined $upserted; } return ( matched_count => ($upserted ? 0 : $res->{n} || 0), modified_count => undef, upserted_id => $upserted, ); } sub _find_id { my ($doc) = @_; my $type = ref($doc); return ( $type eq 'HASH' ? $doc->{_id} : $type eq 'ARRAY' ? do { my $i; for ( $i = 0; $i < @$doc; $i++ ) { last if $doc->[$i] eq '_id' } $i < $#$doc ? $doc->[ $i + 1 ] : undef; } : $type eq 'Tie::IxHash' ? $doc->FETCH('_id') : $doc->{_id} # hashlike? ); } 1; MongoDB-v1.2.2/lib/MongoDB/GridFS/File.pm000644 000765 000024 00000014037 12651754051 020022 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::GridFS::File; # ABSTRACT: A Mongo GridFS file use version; our $VERSION = 'v1.2.2'; use MongoDB::Error; use IO::File; use Moo; use Types::Standard qw( HashRef InstanceOf ); use namespace::clean -except => 'meta'; has _grid => ( is => 'ro', isa => InstanceOf['MongoDB::GridFS'], required => 1, ); #pod =attr info #pod #pod A hash reference of metadata saved with this file. #pod #pod =cut has info => ( is => 'ro', isa => HashRef, required => 1, ); #pod =method print #pod #pod $written = $file->print($fh); #pod $written = $file->print($fh, $length); #pod $written = $file->print($fh, $length, $offset) #pod #pod Writes the number of bytes specified from the offset specified #pod to the given file handle. If no C<$length> or C<$offset> are #pod given, the entire file is written to C<$fh>. Returns the number #pod of bytes written. 
#pod #pod =cut sub print { my ($self, $fh, $length, $offset) = @_; $offset ||= 0; $length ||= 0; my ($written, $pos) = (0, 0); my $start_pos = $fh->getpos(); $self->_grid->chunks->ensure_index(Tie::IxHash->new(files_id => 1, n => 1), { safe => 1, unique => 1 }); my $cursor = $self->_grid->chunks->query({"files_id" => $self->info->{"_id"}})->sort({"n" => 1}); if ( $self->info->{length} && !$cursor->has_next ) { MongoDB::GridFSError->throw( sprintf( "GridFS file corrupt: no chunks found for file ID '%s'", $self->info->{_id} ) ); } while ((my $chunk = $cursor->next) && (!$length || $written < $length)) { my $len = length $chunk->{'data'}; # if we are cleanly beyond the offset if (!$offset || $pos >= $offset) { if (!$length || $written + $len < $length) { $fh->print($chunk->{"data"}); $written += $len; $pos += $len; } else { $fh->print(substr($chunk->{'data'}, 0, $length-$written)); $written += $length-$written; $pos += $length-$written; } next; } # if the offset goes to the middle of this chunk elsif ($pos + $len > $offset) { # if the length of this chunk is smaller than the desired length if (!$length || $len <= $length-$written) { $fh->print(substr($chunk->{'data'}, $offset-$pos, $len-($offset-$pos))); $written += $len-($offset-$pos); $pos += $len-($offset-$pos); } else { no warnings 'substr'; $fh->print(substr($chunk->{'data'}, $offset-$pos, $length)); $written += $length; $pos += $length; } next; } # if the offset is larger than this chunk $pos += $len; } $fh->setpos($start_pos); return $written; } #pod =method slurp #pod #pod $all = $file->slurp #pod $bytes = $file->slurp($length); #pod $bytes = $file->slurp($length, $offset); #pod #pod Return the number of bytes specified from the offset specified. If no #pod C<$length> or C<$offset> are given, the entire file is returned. #pod #pod =cut sub slurp { my ($self,$length,$offset) = @_; my $bytes = ''; my $fh = new IO::File \$bytes,'+>'; my $written = $self->print($fh,$length,$offset); # some machines don't set $bytes if ($written and !length($bytes)) { my $retval; read $fh, $retval, $written; return $retval; } return $bytes; } 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::GridFS::File - A Mongo GridFS file =head1 VERSION version v1.2.2 =head1 SYNOPSIS use MongoDB::GridFS::File; $outfile = IO::File->new("outfile", "w"); $file = $grid->find_one; $file->print($outfile); =head1 USAGE =head2 Error handling Unless otherwise explictly documented, all methods throw exceptions if an error occurs. The error types are documented in L. To catch and handle errors, the L and L modules are recommended: use Try::Tiny; use Safe::Isa; # provides $_isa $bytes = try { $file->slurp; } catch { if ( $_->$_isa("MongoDB::TimeoutError" ) { ... } else { ... } }; To retry failures automatically, consider using L. =head1 ATTRIBUTES =head2 info A hash reference of metadata saved with this file. =head1 METHODS =head2 print $written = $file->print($fh); $written = $file->print($fh, $length); $written = $file->print($fh, $length, $offset) Writes the number of bytes specified from the offset specified to the given file handle. If no C<$length> or C<$offset> are given, the entire file is written to C<$fh>. Returns the number of bytes written. =head2 slurp $all = $file->slurp $bytes = $file->slurp($length); $bytes = $file->slurp($length, $offset); Return the number of bytes specified from the offset specified. If no C<$length> or C<$offset> are given, the entire file is returned. 
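A minimal usage sketch (the file name, length and offset values are
illustrative):

    my $file = $grid->find_one( { filename => 'backup.tar' } );

    # copy the first megabyte to a local file handle
    open my $out, '>', 'first-mb.bin' or die $!;
    my $written = $file->print( $out, 1_048_576, 0 );
    close $out;

    # or read the same range into memory
    my $bytes = $file->slurp( 1_048_576, 0 );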
=head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/BSON/_EncodedDoc.pm000644 000765 000024 00000002337 12651754051 020714 0ustar00davidstaff000000 000000 # # Copyright 2015 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::BSON::_EncodedDoc; # Wrapper for pre-encoded BSON documents, with optional metadata use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::_Constants; use Types::Standard qw( HashRef Str ); use namespace::clean; # An encoded document, i.e. a BSON string has bson => ( is => 'ro', required => 1, isa => Str, ); # A hash reference of optional meta data about the document, such as the "_id" has metadata => ( is => 'ro', required => 1, # for speed; lazy accessors don't get optimized isa => HashRef, ); with $_ for qw( MongoDB::Role::_PrivateConstructor ); 1; # vim: set ts=4 sts=4 sw=4 et tw=75: MongoDB-v1.2.2/lib/MongoDB/BSON/Binary.pm000644 000765 000024 00000011137 12651754051 020010 0ustar00davidstaff000000 000000 # # Copyright 2009-2013 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # package MongoDB::BSON::Binary; # ABSTRACT: MongoDB binary type use version; our $VERSION = 'v1.2.2'; use Moo; use Types::Standard qw( Int Str ); use namespace::clean; use constant { SUBTYPE_GENERIC => 0, SUBTYPE_FUNCTION => 1, SUBTYPE_GENERIC_DEPRECATED => 2, SUBTYPE_UUID_DEPRECATED => 3, SUBTYPE_UUID => 4, SUBTYPE_MD5 => 5, SUBTYPE_USER_DEFINED => 128 }; use overload ( q{""} => sub { $_[0]->{data} }, fallback => 1 ); #pod =attr data #pod #pod A string of binary data. #pod #pod =cut has data => ( is => 'ro', isa => Str, required => 1 ); #pod =attr subtype #pod #pod A subtype. Defaults to C. #pod #pod =cut has subtype => ( is => 'ro', isa => Int, required => 0, default => MongoDB::BSON::Binary->SUBTYPE_GENERIC ); 1; __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::BSON::Binary - MongoDB binary type =head1 VERSION version v1.2.2 =head1 SYNOPSIS Creates an instance of binary data with a specific subtype. =head1 USAGE For example, suppose we wanted to store a profile pic. 
my $pic = MongoDB::BSON::Binary->new(data => $pic_bytes); $collection->insert({name => "profile pic", pic => $pic}); You can also, optionally, specify a subtype: my $pic = MongoDB::BSON::Binary->new(data => $pic_bytes, subtype => MongoDB::BSON::Binary->SUBTYPE_GENERIC); $collection->insert({name => "profile pic", pic => $pic}); =head2 Overloading MongoDB::BSON::Binary objects have stringification overloaded to return the binary data. =head1 ATTRIBUTES =head2 data A string of binary data. =head2 subtype A subtype. Defaults to C. =head1 SUBTYPES MongoDB allows you to specify the "flavor" of binary data that you are storing by providing a subtype. The subtypes are purely cosmetic: the database treats them all the same. There are several subtypes defined in the BSON spec: =over 4 =item C (0x00) is the default used by the driver (as of 0.46). =item C (0x01) is for compiled byte code. =item C (0x02) is deprecated. It was used by the driver prior to version 0.46, but this subtype wastes 4 bytes of space so C is preferred. This is the only type that is parsed differently based on type. =item C (0x03) is deprecated. It is for UUIDs. =item C (0x04) is for UUIDs. =item C can be (0x05) is for MD5 hashes. =item C (0x80) is for user-defined binary types. =back =head2 Why is C deprecated? Binary data is stored with the length of the binary data, the subtype, and the actually data. C stores the length of the data a second time, which just wastes four bytes. If you have been using C for binary data, moving to C should be painless: just use the driver normally and all new/resaved data will be stored as C. It gets a little trickier if you've been querying by binary data fields: C won't match C, even if the data itself is the same. =head2 Why is C deprecated? Other languages were using the UUID type to deserialize into their languages' native UUID type. They were doing this in different ways, so to standardize, they decided on a deserialization format for everyone to use and changed the subtype for UUID to the universal format. This should not affect Perl users at all, as Perl does not deserialize it into any native UUID type. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/lib/MongoDB/BSON/Regexp.pm000644 000765 000024 00000005036 12651754051 020017 0ustar00davidstaff000000 000000 package MongoDB::BSON::Regexp; # ABSTRACT: Regular expression type use version; our $VERSION = 'v1.2.2'; use Moo; use MongoDB::Error; use Types::Standard qw( Str ); use namespace::clean -except => 'meta'; has pattern => ( is => 'ro', isa => Str, required => 1, ); has flags => ( is => 'ro', isa => Str, required => 0, predicate => 'has_flags', writer => '_set_flags', ); my %ALLOWED_FLAGS = ( i => 1, m => 1, x => 1, l => 1, s => 1, u => 1 ); sub BUILD { my $self = shift; if ( $self->has_flags ) { my %seen; my @flags = grep { !$seen{$_}++ } split '', $self->flags; foreach my $f( @flags ) { MongoDB::UsageError->throw("Regexp flag $f is not supported by MongoDB") if not exists $ALLOWED_FLAGS{$f}; } $self->_set_flags( join '', sort @flags ); } } #pod =method #pod #pod my $qr = $regexp->try_compile; #pod #pod Tries to compile the C and C into a reference to a regular #pod expression. If the pattern or flags can't be compiled, a #pod C exception will be thrown. 
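# A minimal sketch (pattern, flags, and @lines are illustrative): compile a
# stored regular expression only when its source is trusted; invalid flags
# are rejected at construction time as shown above.
my $bre = MongoDB::BSON::Regexp->new( pattern => '^ab+c', flags => 'im' );
my $qr  = $bre->try_compile;              # qr/(?im:^ab+c)/, or throws
my @hit = grep { /$qr/ } @lines;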
#pod #pod B: Executing a regular expression can evaluate arbitrary #pod code. You are strongly advised never to use untrusted input with #pod C. #pod #pod =cut sub try_compile { my ($self) = @_; my ( $p, $f ) = map { $self->$_ } qw/pattern flags/; my $re = eval { qr/(?$f:$p)/ }; MongoDB::DecodingError->throw("error compiling regex 'qr/$p/$f': $@") if $@; return $re; } 1; # vim: set ts=4 sts=4 sw=4 et tw=75: __END__ =pod =encoding UTF-8 =head1 NAME MongoDB::BSON::Regexp - Regular expression type =head1 VERSION version v1.2.2 =head1 METHODS =head2 my $qr = $regexp->try_compile; Tries to compile the C and C into a reference to a regular expression. If the pattern or flags can't be compiled, a C exception will be thrown. B: Executing a regular expression can evaluate arbitrary code. You are strongly advised never to use untrusted input with C. =head1 AUTHORS =over 4 =item * David Golden =item * Mike Friedman =item * Kristina Chodorow =item * Florian Ragwitz =back =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by MongoDB, Inc.. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut MongoDB-v1.2.2/inc/CheckJiraInChanges.pm000644 000765 000024 00000004135 12651754051 020144 0ustar00davidstaff000000 000000 use 5.008001; use strict; use warnings; package inc::CheckJiraInChanges; our $VERSION = 0.001; use Dist::Zilla 5 (); use Dist::Zilla::File::InMemory; use Moose; use namespace::clean -except => 'meta'; has changelog => ( is => 'ro', isa => 'Str', default => 'Changes' ); with 'Dist::Zilla::Role::FileGatherer'; sub gather_files { my ( $self, $arg ) = @_; my $zilla = $self->zilla; my $newver = $self->zilla->version; my $commits = $self->_extract_jira_commits; my $test_file = <<'TESTFILE'; #!perl use strict; use warnings; # This test was generated by inc::CheckJiraInChanges use Test::More tests => 1; my @commits = split /\n/, <<'EOC'; INSERT_COMMITS_HERE EOC my %ticket_map; for my $commit ( @commits ) { for my $ticket ( $commit =~ /PERL-(\d+)/g ) { next if $ENV{CHECK_JIRA_SKIP} && grep { $ticket eq $_ } split " ", $ENV{CHECK_JIRA_SKIP}; $ticket_map{$ticket} ||= []; push @{$ticket_map{$ticket}}, $commit; } } # grab Changes lines from new version to next un-indented line open my $fh, "<:encoding(UTF-8)", "Changes"; my $changelog = do { local $/; <$fh> }; my @bad; for my $ticket ( keys %ticket_map ) { if ( index( $changelog, "PERL-$ticket" ) < 0 ) { push @bad, $ticket; } } if ( !@commits ) { pass("No commits with Jira tickets"); } else { ok( ! scalar @bad, "Jira tickets in Changes") or diag "Jira tickets missing:\n" . 
join("\n", map { " * $_" } map { @{$ticket_map{$_}} } sort { $a <=> $b } @bad ); } TESTFILE $test_file =~ s/INSERT_VERSION_HERE/$newver/; $test_file =~ s/INSERT_COMMITS_HERE/$commits/; my $file = Dist::Zilla::File::InMemory->new( { name => "xt/release/check-jira-in-changes.t", content => $test_file, } ); $self->add_file($file); return; } sub _extract_jira_commits { my $last_tag = qx/git describe --abbrev=0/; chomp $last_tag; return join( "", grep { /PERL-\d+/ } qx/git log --oneline $last_tag..HEAD/ ); } __PACKAGE__->meta->make_immutable; 1; # vim: ts=4 sts=4 sw=4 et: MongoDB-v1.2.2/inc/Module/000755 000765 000024 00000000000 12651754051 015425 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/inc/Module/AutoInstall.pm000644 000765 000024 00000062162 12651754051 020231 0ustar00davidstaff000000 000000 #line 1 package Module::AutoInstall; use strict; use Cwd (); use File::Spec (); use ExtUtils::MakeMaker (); use vars qw{$VERSION}; BEGIN { $VERSION = '1.06'; } # special map on pre-defined feature sets my %FeatureMap = ( '' => 'Core Features', # XXX: deprecated '-core' => 'Core Features', ); # various lexical flags my ( @Missing, @Existing, %DisabledTests, $UnderCPAN, $InstallDepsTarget, $HasCPANPLUS ); my ( $Config, $CheckOnly, $SkipInstall, $AcceptDefault, $TestOnly, $AllDeps, $UpgradeDeps ); my ( $PostambleActions, $PostambleActionsNoTest, $PostambleActionsUpgradeDeps, $PostambleActionsUpgradeDepsNoTest, $PostambleActionsListDeps, $PostambleActionsListAllDeps, $PostambleUsed, $NoTest); # See if it's a testing or non-interactive session _accept_default( $ENV{AUTOMATED_TESTING} or ! -t STDIN ); _init(); sub _accept_default { $AcceptDefault = shift; } sub _installdeps_target { $InstallDepsTarget = shift; } sub missing_modules { return @Missing; } sub do_install { __PACKAGE__->install( [ $Config ? ( UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config} ) : () ], @Missing, ); } # initialize various flags, and/or perform install sub _init { foreach my $arg ( @ARGV, split( /[\s\t]+/, $ENV{PERL_AUTOINSTALL} || $ENV{PERL_EXTUTILS_AUTOINSTALL} || '' ) ) { if ( $arg =~ /^--config=(.*)$/ ) { $Config = [ split( ',', $1 ) ]; } elsif ( $arg =~ /^--installdeps=(.*)$/ ) { __PACKAGE__->install( $Config, @Missing = split( /,/, $1 ) ); exit 0; } elsif ( $arg =~ /^--upgradedeps=(.*)$/ ) { $UpgradeDeps = 1; __PACKAGE__->install( $Config, @Missing = split( /,/, $1 ) ); exit 0; } elsif ( $arg =~ /^--default(?:deps)?$/ ) { $AcceptDefault = 1; } elsif ( $arg =~ /^--check(?:deps)?$/ ) { $CheckOnly = 1; } elsif ( $arg =~ /^--skip(?:deps)?$/ ) { $SkipInstall = 1; } elsif ( $arg =~ /^--test(?:only)?$/ ) { $TestOnly = 1; } elsif ( $arg =~ /^--all(?:deps)?$/ ) { $AllDeps = 1; } } } # overrides MakeMaker's prompt() to automatically accept the default choice sub _prompt { goto &ExtUtils::MakeMaker::prompt unless $AcceptDefault; my ( $prompt, $default ) = @_; my $y = ( $default =~ /^[Yy]/ ); print $prompt, ' [', ( $y ? 'Y' : 'y' ), '/', ( $y ? 'n' : 'N' ), '] '; print "$default\n"; return $default; } # the workhorse sub import { my $class = shift; my @args = @_ or return; my $core_all; print "*** $class version " . $class->VERSION . "\n"; print "*** Checking for Perl dependencies...\n"; my $cwd = Cwd::cwd(); $Config = []; my $maxlen = length( ( sort { length($b) <=> length($a) } grep { /^[^\-]/ } map { ref($_) ? ( ( ref($_) eq 'HASH' ) ? 
keys(%$_) : @{$_} ) : '' } map { +{@args}->{$_} } grep { /^[^\-]/ or /^-core$/i } keys %{ +{@args} } )[0] ); # We want to know if we're under CPAN early to avoid prompting, but # if we aren't going to try and install anything anyway then skip the # check entirely since we don't want to have to load (and configure) # an old CPAN just for a cosmetic message $UnderCPAN = _check_lock(1) unless $SkipInstall || $InstallDepsTarget; while ( my ( $feature, $modules ) = splice( @args, 0, 2 ) ) { my ( @required, @tests, @skiptests ); my $default = 1; my $conflict = 0; if ( $feature =~ m/^-(\w+)$/ ) { my $option = lc($1); # check for a newer version of myself _update_to( $modules, @_ ) and return if $option eq 'version'; # sets CPAN configuration options $Config = $modules if $option eq 'config'; # promote every features to core status $core_all = ( $modules =~ /^all$/i ) and next if $option eq 'core'; next unless $option eq 'core'; } print "[" . ( $FeatureMap{ lc($feature) } || $feature ) . "]\n"; $modules = [ %{$modules} ] if UNIVERSAL::isa( $modules, 'HASH' ); unshift @$modules, -default => &{ shift(@$modules) } if ( ref( $modules->[0] ) eq 'CODE' ); # XXX: bugward combatability while ( my ( $mod, $arg ) = splice( @$modules, 0, 2 ) ) { if ( $mod =~ m/^-(\w+)$/ ) { my $option = lc($1); $default = $arg if ( $option eq 'default' ); $conflict = $arg if ( $option eq 'conflict' ); @tests = @{$arg} if ( $option eq 'tests' ); @skiptests = @{$arg} if ( $option eq 'skiptests' ); next; } printf( "- %-${maxlen}s ...", $mod ); if ( $arg and $arg =~ /^\D/ ) { unshift @$modules, $arg; $arg = 0; } # XXX: check for conflicts and uninstalls(!) them. my $cur = _version_of($mod); if (_version_cmp ($cur, $arg) >= 0) { print "loaded. ($cur" . ( $arg ? " >= $arg" : '' ) . ")\n"; push @Existing, $mod => $arg; $DisabledTests{$_} = 1 for map { glob($_) } @skiptests; } else { if (not defined $cur) # indeed missing { print "missing." . ( $arg ? " (would need $arg)" : '' ) . "\n"; } else { # no need to check $arg as _version_cmp ($cur, undef) would satisfy >= above print "too old. ($cur < $arg)\n"; } push @required, $mod => $arg; } } next unless @required; my $mandatory = ( $feature eq '-core' or $core_all ); if ( !$SkipInstall and ( $CheckOnly or ($mandatory and $UnderCPAN) or $AllDeps or $InstallDepsTarget or _prompt( qq{==> Auto-install the } . ( @required / 2 ) . ( $mandatory ? ' mandatory' : ' optional' ) . qq{ module(s) from CPAN?}, $default ? 'y' : 'n', ) =~ /^[Yy]/ ) ) { push( @Missing, @required ); $DisabledTests{$_} = 1 for map { glob($_) } @skiptests; } elsif ( !$SkipInstall and $default and $mandatory and _prompt( qq{==> The module(s) are mandatory! Really skip?}, 'n', ) =~ /^[Nn]/ ) { push( @Missing, @required ); $DisabledTests{$_} = 1 for map { glob($_) } @skiptests; } else { $DisabledTests{$_} = 1 for map { glob($_) } @tests; } } if ( @Missing and not( $CheckOnly or $UnderCPAN) ) { require Config; my $make = $Config::Config{make}; if ($InstallDepsTarget) { print "*** To install dependencies type '$make installdeps' or '$make installdeps_notest'.\n"; } else { print "*** Dependencies will be installed the next time you type '$make'.\n"; } # make an educated guess of whether we'll need root permission. 
print " (You may need to do that as the 'root' user.)\n" if eval '$>'; } print "*** $class configuration finished.\n"; chdir $cwd; # import to main:: no strict 'refs'; *{'main::WriteMakefile'} = \&Write if caller(0) eq 'main'; return (@Existing, @Missing); } sub _running_under { my $thing = shift; print <<"END_MESSAGE"; *** Since we're running under ${thing}, I'll just let it take care of the dependency's installation later. END_MESSAGE return 1; } # Check to see if we are currently running under CPAN.pm and/or CPANPLUS; # if we are, then we simply let it taking care of our dependencies sub _check_lock { return unless @Missing or @_; if ($ENV{PERL5_CPANM_IS_RUNNING}) { return _running_under('cpanminus'); } my $cpan_env = $ENV{PERL5_CPAN_IS_RUNNING}; if ($ENV{PERL5_CPANPLUS_IS_RUNNING}) { return _running_under($cpan_env ? 'CPAN' : 'CPANPLUS'); } require CPAN; if ($CPAN::VERSION > '1.89') { if ($cpan_env) { return _running_under('CPAN'); } return; # CPAN.pm new enough, don't need to check further } # last ditch attempt, this -will- configure CPAN, very sorry _load_cpan(1); # force initialize even though it's already loaded # Find the CPAN lock-file my $lock = MM->catfile( $CPAN::Config->{cpan_home}, ".lock" ); return unless -f $lock; # Check the lock local *LOCK; return unless open(LOCK, $lock); if ( ( $^O eq 'MSWin32' ? _under_cpan() : == getppid() ) and ( $CPAN::Config->{prerequisites_policy} || '' ) ne 'ignore' ) { print <<'END_MESSAGE'; *** Since we're running under CPAN, I'll just let it take care of the dependency's installation later. END_MESSAGE return 1; } close LOCK; return; } sub install { my $class = shift; my $i; # used below to strip leading '-' from config keys my @config = ( map { s/^-// if ++$i; $_ } @{ +shift } ); my ( @modules, @installed ); while ( my ( $pkg, $ver ) = splice( @_, 0, 2 ) ) { # grep out those already installed if ( _version_cmp( _version_of($pkg), $ver ) >= 0 ) { push @installed, $pkg; } else { push @modules, $pkg, $ver; } } if ($UpgradeDeps) { push @modules, @installed; @installed = (); } return @installed unless @modules; # nothing to do return @installed if _check_lock(); # defer to the CPAN shell print "*** Installing dependencies...\n"; return unless _connected_to('cpan.org'); my %args = @config; my %failed; local *FAILED; if ( $args{do_once} and open( FAILED, '.#autoinstall.failed' ) ) { while () { chomp; $failed{$_}++ } close FAILED; my @newmod; while ( my ( $k, $v ) = splice( @modules, 0, 2 ) ) { push @newmod, ( $k => $v ) unless $failed{$k}; } @modules = @newmod; } if ( _has_cpanplus() and not $ENV{PERL_AUTOINSTALL_PREFER_CPAN} ) { _install_cpanplus( \@modules, \@config ); } else { _install_cpan( \@modules, \@config ); } print "*** $class installation finished.\n"; # see if we have successfully installed them while ( my ( $pkg, $ver ) = splice( @modules, 0, 2 ) ) { if ( _version_cmp( _version_of($pkg), $ver ) >= 0 ) { push @installed, $pkg; } elsif ( $args{do_once} and open( FAILED, '>> .#autoinstall.failed' ) ) { print FAILED "$pkg\n"; } } close FAILED if $args{do_once}; return @installed; } sub _install_cpanplus { my @modules = @{ +shift }; my @config = _cpanplus_config( @{ +shift } ); my $installed = 0; require CPANPLUS::Backend; my $cp = CPANPLUS::Backend->new; my $conf = $cp->configure_object; return unless $conf->can('conf') # 0.05x+ with "sudo" support or _can_write($conf->_get_build('base')); # 0.04x # if we're root, set UNINST=1 to avoid trouble unless user asked for it. 
my $makeflags = $conf->get_conf('makeflags') || ''; if ( UNIVERSAL::isa( $makeflags, 'HASH' ) ) { # 0.03+ uses a hashref here $makeflags->{UNINST} = 1 unless exists $makeflags->{UNINST}; } else { # 0.02 and below uses a scalar $makeflags = join( ' ', split( ' ', $makeflags ), 'UNINST=1' ) if ( $makeflags !~ /\bUNINST\b/ and eval qq{ $> eq '0' } ); } $conf->set_conf( makeflags => $makeflags ); $conf->set_conf( prereqs => 1 ); while ( my ( $key, $val ) = splice( @config, 0, 2 ) ) { $conf->set_conf( $key, $val ); } my $modtree = $cp->module_tree; while ( my ( $pkg, $ver ) = splice( @modules, 0, 2 ) ) { print "*** Installing $pkg...\n"; MY::preinstall( $pkg, $ver ) or next if defined &MY::preinstall; my $success; my $obj = $modtree->{$pkg}; if ( $obj and _version_cmp( $obj->{version}, $ver ) >= 0 ) { my $pathname = $pkg; $pathname =~ s/::/\\W/; foreach my $inc ( grep { m/$pathname.pm/i } keys(%INC) ) { delete $INC{$inc}; } my $rv = $cp->install( modules => [ $obj->{module} ] ); if ( $rv and ( $rv->{ $obj->{module} } or $rv->{ok} ) ) { print "*** $pkg successfully installed.\n"; $success = 1; } else { print "*** $pkg installation cancelled.\n"; $success = 0; } $installed += $success; } else { print << "."; *** Could not find a version $ver or above for $pkg; skipping. . } MY::postinstall( $pkg, $ver, $success ) if defined &MY::postinstall; } return $installed; } sub _cpanplus_config { my @config = (); while ( @_ ) { my ($key, $value) = (shift(), shift()); if ( $key eq 'prerequisites_policy' ) { if ( $value eq 'follow' ) { $value = CPANPLUS::Internals::Constants::PREREQ_INSTALL(); } elsif ( $value eq 'ask' ) { $value = CPANPLUS::Internals::Constants::PREREQ_ASK(); } elsif ( $value eq 'ignore' ) { $value = CPANPLUS::Internals::Constants::PREREQ_IGNORE(); } else { die "*** Cannot convert option $key = '$value' to CPANPLUS version.\n"; } push @config, 'prereqs', $value; } elsif ( $key eq 'force' ) { push @config, $key, $value; } elsif ( $key eq 'notest' ) { push @config, 'skiptest', $value; } else { die "*** Cannot convert option $key to CPANPLUS version.\n"; } } return @config; } sub _install_cpan { my @modules = @{ +shift }; my @config = @{ +shift }; my $installed = 0; my %args; _load_cpan(); require Config; if (CPAN->VERSION < 1.80) { # no "sudo" support, probe for writableness return unless _can_write( MM->catfile( $CPAN::Config->{cpan_home}, 'sources' ) ) and _can_write( $Config::Config{sitelib} ); } # if we're root, set UNINST=1 to avoid trouble unless user asked for it. 
my $makeflags = $CPAN::Config->{make_install_arg} || ''; $CPAN::Config->{make_install_arg} = join( ' ', split( ' ', $makeflags ), 'UNINST=1' ) if ( $makeflags !~ /\bUNINST\b/ and eval qq{ $> eq '0' } ); # don't show start-up info $CPAN::Config->{inhibit_startup_message} = 1; # set additional options while ( my ( $opt, $arg ) = splice( @config, 0, 2 ) ) { ( $args{$opt} = $arg, next ) if $opt =~ /^(?:force|notest)$/; # pseudo-option $CPAN::Config->{$opt} = $arg; } if ($args{notest} && (not CPAN::Shell->can('notest'))) { die "Your version of CPAN is too old to support the 'notest' pragma"; } local $CPAN::Config->{prerequisites_policy} = 'follow'; while ( my ( $pkg, $ver ) = splice( @modules, 0, 2 ) ) { MY::preinstall( $pkg, $ver ) or next if defined &MY::preinstall; print "*** Installing $pkg...\n"; my $obj = CPAN::Shell->expand( Module => $pkg ); my $success = 0; if ( $obj and _version_cmp( $obj->cpan_version, $ver ) >= 0 ) { my $pathname = $pkg; $pathname =~ s/::/\\W/; foreach my $inc ( grep { m/$pathname.pm/i } keys(%INC) ) { delete $INC{$inc}; } my $rv = do { if ($args{force}) { CPAN::Shell->force( install => $pkg ) } elsif ($args{notest}) { CPAN::Shell->notest( install => $pkg ) } else { CPAN::Shell->install($pkg) } }; $rv ||= eval { $CPAN::META->instance( 'CPAN::Distribution', $obj->cpan_file, ) ->{install} if $CPAN::META; }; if ( $rv eq 'YES' ) { print "*** $pkg successfully installed.\n"; $success = 1; } else { print "*** $pkg installation failed.\n"; $success = 0; } $installed += $success; } else { print << "."; *** Could not find a version $ver or above for $pkg; skipping. . } MY::postinstall( $pkg, $ver, $success ) if defined &MY::postinstall; } return $installed; } sub _has_cpanplus { return ( $HasCPANPLUS = ( $INC{'CPANPLUS/Config.pm'} or _load('CPANPLUS::Shell::Default') ) ); } # make guesses on whether we're under the CPAN installation directory sub _under_cpan { require Cwd; require File::Spec; my $cwd = File::Spec->canonpath( Cwd::cwd() ); my $cpan = File::Spec->canonpath( $CPAN::Config->{cpan_home} ); return ( index( $cwd, $cpan ) > -1 ); } sub _update_to { my $class = __PACKAGE__; my $ver = shift; return if _version_cmp( _version_of($class), $ver ) >= 0; # no need to upgrade if ( _prompt( "==> A newer version of $class ($ver) is required. Install?", 'y' ) =~ /^[Nn]/ ) { die "*** Please install $class $ver manually.\n"; } print << "."; *** Trying to fetch it from CPAN... . # install ourselves _load($class) and return $class->import(@_) if $class->install( [], $class, $ver ); print << '.'; exit 1; *** Cannot bootstrap myself. :-( Installation terminated. . } # check if we're connected to some host, using inet_aton sub _connected_to { my $site = shift; return ( ( _load('Socket') and Socket::inet_aton($site) ) or _prompt( qq( *** Your host cannot resolve the domain name '$site', which probably means the Internet connections are unavailable. ==> Should we try to install the required module(s) anyway?), 'n' ) =~ /^[Yy]/ ); } # check if a directory is writable; may create it on demand sub _can_write { my $path = shift; mkdir( $path, 0755 ) unless -e $path; return 1 if -w $path; print << "."; *** You are not allowed to write to the directory '$path'; the installation may fail due to insufficient permissions. . if ( eval '$>' and lc(`sudo -V`) =~ /version/ and _prompt( qq( ==> Should we try to re-execute the autoinstall process with 'sudo'?), ((-t STDIN) ? 
'y' : 'n') ) =~ /^[Yy]/ ) { # try to bootstrap ourselves from sudo print << "."; *** Trying to re-execute the autoinstall process with 'sudo'... . my $missing = join( ',', @Missing ); my $config = join( ',', UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config} ) if $Config; return unless system( 'sudo', $^X, $0, "--config=$config", "--installdeps=$missing" ); print << "."; *** The 'sudo' command exited with error! Resuming... . } return _prompt( qq( ==> Should we try to install the required module(s) anyway?), 'n' ) =~ /^[Yy]/; } # load a module and return the version it reports sub _load { my $mod = pop; # method/function doesn't matter my $file = $mod; $file =~ s|::|/|g; $file .= '.pm'; local $@; return eval { require $file; $mod->VERSION } || ( $@ ? undef: 0 ); } # report version without loading a module sub _version_of { my $mod = pop; # method/function doesn't matter my $file = $mod; $file =~ s|::|/|g; $file .= '.pm'; foreach my $dir ( @INC ) { next if ref $dir; my $path = File::Spec->catfile($dir, $file); next unless -e $path; require ExtUtils::MM_Unix; return ExtUtils::MM_Unix->parse_version($path); } return undef; } # Load CPAN.pm and it's configuration sub _load_cpan { return if $CPAN::VERSION and $CPAN::Config and not @_; require CPAN; # CPAN-1.82+ adds CPAN::Config::AUTOLOAD to redirect to # CPAN::HandleConfig->load. CPAN reports that the redirection # is deprecated in a warning printed at the user. # CPAN-1.81 expects CPAN::HandleConfig->load, does not have # $CPAN::HandleConfig::VERSION but cannot handle # CPAN::Config->load # Which "versions expect CPAN::Config->load? if ( $CPAN::HandleConfig::VERSION || CPAN::HandleConfig->can('load') ) { # Newer versions of CPAN have a HandleConfig module CPAN::HandleConfig->load; } else { # Older versions had the load method in Config directly CPAN::Config->load; } } # compare two versions, either use Sort::Versions or plain comparison # return values same as <=> sub _version_cmp { my ( $cur, $min ) = @_; return -1 unless defined $cur; # if 0 keep comparing return 1 unless $min; $cur =~ s/\s+$//; # check for version numbers that are not in decimal format if ( ref($cur) or ref($min) or $cur =~ /v|\..*\./ or $min =~ /v|\..*\./ ) { if ( ( $version::VERSION or defined( _load('version') )) and version->can('new') ) { # use version.pm if it is installed. return version->new($cur) <=> version->new($min); } elsif ( $Sort::Versions::VERSION or defined( _load('Sort::Versions') ) ) { # use Sort::Versions as the sorting algorithm for a.b.c versions return Sort::Versions::versioncmp( $cur, $min ); } warn "Cannot reliably compare non-decimal formatted versions.\n" . "Please install version.pm or Sort::Versions.\n"; } # plain comparison local $^W = 0; # shuts off 'not numeric' bugs return $cur <=> $min; } # nothing; this usage is deprecated. sub main::PREREQ_PM { return {}; } sub _make_args { my %args = @_; $args{PREREQ_PM} = { %{ $args{PREREQ_PM} || {} }, @Existing, @Missing } if $UnderCPAN or $TestOnly; if ( $args{EXE_FILES} and -e 'MANIFEST' ) { require ExtUtils::Manifest; my $manifest = ExtUtils::Manifest::maniread('MANIFEST'); $args{EXE_FILES} = [ grep { exists $manifest->{$_} } @{ $args{EXE_FILES} } ]; } $args{test}{TESTS} ||= 't/*.t'; $args{test}{TESTS} = join( ' ', grep { !exists( $DisabledTests{$_} ) } map { glob($_) } split( /\s+/, $args{test}{TESTS} ) ); my $missing = join( ',', @Missing ); my $config = join( ',', UNIVERSAL::isa( $Config, 'HASH' ) ? 
%{$Config} : @{$Config} ) if $Config; $PostambleActions = ( ($missing and not $UnderCPAN) ? "\$(PERL) $0 --config=$config --installdeps=$missing" : "\$(NOECHO) \$(NOOP)" ); my $deps_list = join( ',', @Missing, @Existing ); $PostambleActionsUpgradeDeps = "\$(PERL) $0 --config=$config --upgradedeps=$deps_list"; my $config_notest = join( ',', (UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config}), 'notest', 1 ) if $Config; $PostambleActionsNoTest = ( ($missing and not $UnderCPAN) ? "\$(PERL) $0 --config=$config_notest --installdeps=$missing" : "\$(NOECHO) \$(NOOP)" ); $PostambleActionsUpgradeDepsNoTest = "\$(PERL) $0 --config=$config_notest --upgradedeps=$deps_list"; $PostambleActionsListDeps = '@$(PERL) -le "print for @ARGV" ' . join(' ', map $Missing[$_], grep $_ % 2 == 0, 0..$#Missing); my @all = (@Missing, @Existing); $PostambleActionsListAllDeps = '@$(PERL) -le "print for @ARGV" ' . join(' ', map $all[$_], grep $_ % 2 == 0, 0..$#all); return %args; } # a wrapper to ExtUtils::MakeMaker::WriteMakefile sub Write { require Carp; Carp::croak "WriteMakefile: Need even number of args" if @_ % 2; if ($CheckOnly) { print << "."; *** Makefile not written in check-only mode. . return; } my %args = _make_args(@_); no strict 'refs'; $PostambleUsed = 0; local *MY::postamble = \&postamble unless defined &MY::postamble; ExtUtils::MakeMaker::WriteMakefile(%args); print << "." unless $PostambleUsed; *** WARNING: Makefile written with customized MY::postamble() without including contents from Module::AutoInstall::postamble() -- auto installation features disabled. Please contact the author. . return 1; } sub postamble { $PostambleUsed = 1; my $fragment; $fragment .= <<"AUTO_INSTALL" if !$InstallDepsTarget; config :: installdeps \t\$(NOECHO) \$(NOOP) AUTO_INSTALL $fragment .= <<"END_MAKE"; checkdeps :: \t\$(PERL) $0 --checkdeps installdeps :: \t$PostambleActions installdeps_notest :: \t$PostambleActionsNoTest upgradedeps :: \t$PostambleActionsUpgradeDeps upgradedeps_notest :: \t$PostambleActionsUpgradeDepsNoTest listdeps :: \t$PostambleActionsListDeps listalldeps :: \t$PostambleActionsListAllDeps END_MAKE return $fragment; } 1; __END__ #line 1193 MongoDB-v1.2.2/inc/Module/Install/000755 000765 000024 00000000000 12651754051 017033 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/inc/Module/Install.pm000644 000765 000024 00000030135 12651754051 017373 0ustar00davidstaff000000 000000 #line 1 package Module::Install; # For any maintainers: # The load order for Module::Install is a bit magic. # It goes something like this... # # IF ( host has Module::Install installed, creating author mode ) { # 1. Makefile.PL calls "use inc::Module::Install" # 2. $INC{inc/Module/Install.pm} set to installed version of inc::Module::Install # 3. The installed version of inc::Module::Install loads # 4. inc::Module::Install calls "require Module::Install" # 5. The ./inc/ version of Module::Install loads # } ELSE { # 1. Makefile.PL calls "use inc::Module::Install" # 2. $INC{inc/Module/Install.pm} set to ./inc/ version of Module::Install # 3. The ./inc/ version of Module::Install loads # } use 5.005; use strict 'vars'; use Cwd (); use File::Find (); use File::Path (); use vars qw{$VERSION $MAIN}; BEGIN { # All Module::Install core packages now require synchronised versions. # This will be used to ensure we don't accidentally load old or # different versions of modules. # This is not enforced yet, but will be some time in the next few # releases once we can make sure it won't clash with custom # Module::Install extensions. 
$VERSION = '1.06'; # Storage for the pseudo-singleton $MAIN = undef; *inc::Module::Install::VERSION = *VERSION; @inc::Module::Install::ISA = __PACKAGE__; } sub import { my $class = shift; my $self = $class->new(@_); my $who = $self->_caller; #------------------------------------------------------------- # all of the following checks should be included in import(), # to allow "eval 'require Module::Install; 1' to test # installation of Module::Install. (RT #51267) #------------------------------------------------------------- # Whether or not inc::Module::Install is actually loaded, the # $INC{inc/Module/Install.pm} is what will still get set as long as # the caller loaded module this in the documented manner. # If not set, the caller may NOT have loaded the bundled version, and thus # they may not have a MI version that works with the Makefile.PL. This would # result in false errors or unexpected behaviour. And we don't want that. my $file = join( '/', 'inc', split /::/, __PACKAGE__ ) . '.pm'; unless ( $INC{$file} ) { die <<"END_DIE" } Please invoke ${\__PACKAGE__} with: use inc::${\__PACKAGE__}; not: use ${\__PACKAGE__}; END_DIE # This reportedly fixes a rare Win32 UTC file time issue, but # as this is a non-cross-platform XS module not in the core, # we shouldn't really depend on it. See RT #24194 for detail. # (Also, this module only supports Perl 5.6 and above). eval "use Win32::UTCFileTime" if $^O eq 'MSWin32' && $] >= 5.006; # If the script that is loading Module::Install is from the future, # then make will detect this and cause it to re-run over and over # again. This is bad. Rather than taking action to touch it (which # is unreliable on some platforms and requires write permissions) # for now we should catch this and refuse to run. if ( -f $0 ) { my $s = (stat($0))[9]; # If the modification time is only slightly in the future, # sleep briefly to remove the problem. my $a = $s - time; if ( $a > 0 and $a < 5 ) { sleep 5 } # Too far in the future, throw an error. my $t = time; if ( $s > $t ) { die <<"END_DIE" } Your installer $0 has a modification time in the future ($s > $t). This is known to create infinite loops in make. Please correct this, then run $0 again. END_DIE } # Build.PL was formerly supported, but no longer is due to excessive # difficulty in implementing every single feature twice. if ( $0 =~ /Build.PL$/i ) { die <<"END_DIE" } Module::Install no longer supports Build.PL. It was impossible to maintain duel backends, and has been deprecated. Please remove all Build.PL files and only use the Makefile.PL installer. END_DIE #------------------------------------------------------------- # To save some more typing in Module::Install installers, every... # use inc::Module::Install # ...also acts as an implicit use strict. 
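# For illustration only (hypothetical distribution name and prerequisite, not part of the bundled file):
# a typical Makefile.PL driven by this module is just a short script in the caller's namespace, e.g.
#
#   use inc::Module::Install;
#   name     'Foo-Bar';
#   all_from 'lib/Foo/Bar.pm';
#   requires 'Some::Module' => '1.23';
#   WriteAll;
#
# where name(), all_from(), requires() and WriteAll() are supplied by the bundled
# extensions (Metadata.pm, WriteAll.pm) that this package loads for the caller.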
$^H |= strict::bits(qw(refs subs vars)); #------------------------------------------------------------- unless ( -f $self->{file} ) { foreach my $key (keys %INC) { delete $INC{$key} if $key =~ /Module\/Install/; } local $^W; require "$self->{path}/$self->{dispatch}.pm"; File::Path::mkpath("$self->{prefix}/$self->{author}"); $self->{admin} = "$self->{name}::$self->{dispatch}"->new( _top => $self ); $self->{admin}->init; @_ = ($class, _self => $self); goto &{"$self->{name}::import"}; } local $^W; *{"${who}::AUTOLOAD"} = $self->autoload; $self->preload; # Unregister loader and worker packages so subdirs can use them again delete $INC{'inc/Module/Install.pm'}; delete $INC{'Module/Install.pm'}; # Save to the singleton $MAIN = $self; return 1; } sub autoload { my $self = shift; my $who = $self->_caller; my $cwd = Cwd::cwd(); my $sym = "${who}::AUTOLOAD"; $sym->{$cwd} = sub { my $pwd = Cwd::cwd(); if ( my $code = $sym->{$pwd} ) { # Delegate back to parent dirs goto &$code unless $cwd eq $pwd; } unless ($$sym =~ s/([^:]+)$//) { # XXX: it looks like we can't retrieve the missing function # via $$sym (usually $main::AUTOLOAD) in this case. # I'm still wondering if we should slurp Makefile.PL to # get some context or not ... my ($package, $file, $line) = caller; die <<"EOT"; Unknown function is found at $file line $line. Execution of $file aborted due to runtime errors. If you're a contributor to a project, you may need to install some Module::Install extensions from CPAN (or other repository). If you're a user of a module, please contact the author. EOT } my $method = $1; if ( uc($method) eq $method ) { # Do nothing return; } elsif ( $method =~ /^_/ and $self->can($method) ) { # Dispatch to the root M:I class return $self->$method(@_); } # Dispatch to the appropriate plugin unshift @_, ( $self, $1 ); goto &{$self->can('call')}; }; } sub preload { my $self = shift; unless ( $self->{extensions} ) { $self->load_extensions( "$self->{prefix}/$self->{path}", $self ); } my @exts = @{$self->{extensions}}; unless ( @exts ) { @exts = $self->{admin}->load_all_extensions; } my %seen; foreach my $obj ( @exts ) { while (my ($method, $glob) = each %{ref($obj) . '::'}) { next unless $obj->can($method); next if $method =~ /^_/; next if $method eq uc($method); $seen{$method}++; } } my $who = $self->_caller; foreach my $name ( sort keys %seen ) { local $^W; *{"${who}::$name"} = sub { ${"${who}::AUTOLOAD"} = "${who}::$name"; goto &{"${who}::AUTOLOAD"}; }; } } sub new { my ($class, %args) = @_; delete $INC{'FindBin.pm'}; { # to suppress the redefine warning local $SIG{__WARN__} = sub {}; require FindBin; } # ignore the prefix on extension modules built from top level. my $base_path = Cwd::abs_path($FindBin::Bin); unless ( Cwd::abs_path(Cwd::cwd()) eq $base_path ) { delete $args{prefix}; } return $args{_self} if $args{_self}; $args{dispatch} ||= 'Admin'; $args{prefix} ||= 'inc'; $args{author} ||= ($^O eq 'VMS' ? 
'_author' : '.author'); $args{bundle} ||= 'inc/BUNDLES'; $args{base} ||= $base_path; $class =~ s/^\Q$args{prefix}\E:://; $args{name} ||= $class; $args{version} ||= $class->VERSION; unless ( $args{path} ) { $args{path} = $args{name}; $args{path} =~ s!::!/!g; } $args{file} ||= "$args{base}/$args{prefix}/$args{path}.pm"; $args{wrote} = 0; bless( \%args, $class ); } sub call { my ($self, $method) = @_; my $obj = $self->load($method) or return; splice(@_, 0, 2, $obj); goto &{$obj->can($method)}; } sub load { my ($self, $method) = @_; $self->load_extensions( "$self->{prefix}/$self->{path}", $self ) unless $self->{extensions}; foreach my $obj (@{$self->{extensions}}) { return $obj if $obj->can($method); } my $admin = $self->{admin} or die <<"END_DIE"; The '$method' method does not exist in the '$self->{prefix}' path! Please remove the '$self->{prefix}' directory and run $0 again to load it. END_DIE my $obj = $admin->load($method, 1); push @{$self->{extensions}}, $obj; $obj; } sub load_extensions { my ($self, $path, $top) = @_; my $should_reload = 0; unless ( grep { ! ref $_ and lc $_ eq lc $self->{prefix} } @INC ) { unshift @INC, $self->{prefix}; $should_reload = 1; } foreach my $rv ( $self->find_extensions($path) ) { my ($file, $pkg) = @{$rv}; next if $self->{pathnames}{$pkg}; local $@; my $new = eval { local $^W; require $file; $pkg->can('new') }; unless ( $new ) { warn $@ if $@; next; } $self->{pathnames}{$pkg} = $should_reload ? delete $INC{$file} : $INC{$file}; push @{$self->{extensions}}, &{$new}($pkg, _top => $top ); } $self->{extensions} ||= []; } sub find_extensions { my ($self, $path) = @_; my @found; File::Find::find( sub { my $file = $File::Find::name; return unless $file =~ m!^\Q$path\E/(.+)\.pm\Z!is; my $subpath = $1; return if lc($subpath) eq lc($self->{dispatch}); $file = "$self->{path}/$subpath.pm"; my $pkg = "$self->{name}::$subpath"; $pkg =~ s!/!::!g; # If we have a mixed-case package name, assume case has been preserved # correctly. Otherwise, root through the file to locate the case-preserved # version of the package name. if ( $subpath eq lc($subpath) || $subpath eq uc($subpath) ) { my $content = Module::Install::_read($subpath . '.pm'); my $in_pod = 0; foreach ( split //, $content ) { $in_pod = 1 if /^=\w/; $in_pod = 0 if /^=cut/; next if ($in_pod || /^=cut/); # skip pod text next if /^\s*#/; # and comments if ( m/^\s*package\s+($pkg)\s*;/i ) { $pkg = $1; last; } } } push @found, [ $file, $pkg ]; }, $path ) if -d $path; @found; } ##################################################################### # Common Utility Functions sub _caller { my $depth = 0; my $call = caller($depth); while ( $call eq __PACKAGE__ ) { $depth++; $call = caller($depth); } return $call; } # Done in evals to avoid confusing Perl::MinimumVersion eval( $] >= 5.006 ? 
<<'END_NEW' : <<'END_OLD' ); die $@ if $@; sub _read { local *FH; open( FH, '<', $_[0] ) or die "open($_[0]): $!"; my $string = do { local $/; <FH> }; close FH or die "close($_[0]): $!"; return $string; } END_NEW sub _read { local *FH; open( FH, "< $_[0]" ) or die "open($_[0]): $!"; my $string = do { local $/; <FH> }; close FH or die "close($_[0]): $!"; return $string; } END_OLD sub _readperl { my $string = Module::Install::_read($_[0]); $string =~ s/(?:\015{1,2}\012|\015|\012)/\n/sg; $string =~ s/(\n)\n*__(?:DATA|END)__\b.*\z/$1/s; $string =~ s/\n\n=\w+.+?\n\n=cut\b.+?\n+/\n\n/sg; return $string; } sub _readpod { my $string = Module::Install::_read($_[0]); $string =~ s/(?:\015{1,2}\012|\015|\012)/\n/sg; return $string if $_[0] =~ /\.pod\z/; $string =~ s/(^|\n=cut\b.+?\n+)[^=\s].+?\n(\n=\w+|\z)/$1$2/sg; $string =~ s/\n*=pod\b[^\n]*\n+/\n\n/sg; $string =~ s/\n*=cut\b[^\n]*\n+/\n\n/sg; $string =~ s/^\n+//s; return $string; } # Done in evals to avoid confusing Perl::MinimumVersion eval( $] >= 5.006 ? <<'END_NEW' : <<'END_OLD' ); die $@ if $@; sub _write { local *FH; open( FH, '>', $_[0] ) or die "open($_[0]): $!"; foreach ( 1 .. $#_ ) { print FH $_[$_] or die "print($_[0]): $!"; } close FH or die "close($_[0]): $!"; } END_NEW sub _write { local *FH; open( FH, "> $_[0]" ) or die "open($_[0]): $!"; foreach ( 1 .. $#_ ) { print FH $_[$_] or die "print($_[0]): $!"; } close FH or die "close($_[0]): $!"; } END_OLD # _version is for processing module versions (eg, 1.03_05) not # Perl versions (eg, 5.8.1). sub _version ($) { my $s = shift || 0; my $d =()= $s =~ /(\.)/g; if ( $d >= 2 ) { # Normalise multipart versions $s =~ s/(\.)(\d{1,3})/sprintf("$1%03d",$2)/eg; } $s =~ s/^(\d+)\.?//; my $l = $1 || 0; my @v = map { $_ . '0' x (3 - length $_) } $s =~ /(\d{1,3})\D?/g; $l = $l . '.' . join '', @v if @v; return $l + 0; } sub _cmp ($$) { _version($_[1]) <=> _version($_[2]); } # Cloned from Params::Util::_CLASS sub _CLASS ($) { ( defined $_[0] and ! ref $_[0] and $_[0] =~ m/^[^\W\d]\w*(?:::\w+)*\z/s ) ? $_[0] : undef; } 1; # Copyright 2008 - 2012 Adam Kennedy. MongoDB-v1.2.2/inc/Module/Install/AutoInstall.pm000644 000765 000024 00000004162 12651754051 021633 0ustar00davidstaff000000 000000 #line 1 package Module::Install::AutoInstall; use strict; use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } sub AutoInstall { $_[0] } sub run { my $self = shift; $self->auto_install_now(@_); } sub write { my $self = shift; $self->auto_install(@_); } sub auto_install { my $self = shift; return if $self->{done}++; # Flatten array of arrays into a single array my @core = map @$_, map @$_, grep ref, $self->build_requires, $self->requires; my @config = @_; # We'll need Module::AutoInstall $self->include('Module::AutoInstall'); require Module::AutoInstall; my @features_require = Module::AutoInstall->import( (@config ? (-config => \@config) : ()), (@core ?
(-core => \@core) : ()), $self->features, ); my %seen; my @requires = map @$_, map @$_, grep ref, $self->requires; while (my ($mod, $ver) = splice(@requires, 0, 2)) { $seen{$mod}{$ver}++; } my @build_requires = map @$_, map @$_, grep ref, $self->build_requires; while (my ($mod, $ver) = splice(@build_requires, 0, 2)) { $seen{$mod}{$ver}++; } my @configure_requires = map @$_, map @$_, grep ref, $self->configure_requires; while (my ($mod, $ver) = splice(@configure_requires, 0, 2)) { $seen{$mod}{$ver}++; } my @deduped; while (my ($mod, $ver) = splice(@features_require, 0, 2)) { push @deduped, $mod => $ver unless $seen{$mod}{$ver}++; } $self->requires(@deduped); $self->makemaker_args( Module::AutoInstall::_make_args() ); my $class = ref($self); $self->postamble( "# --- $class section:\n" . Module::AutoInstall::postamble() ); } sub installdeps_target { my ($self, @args) = @_; $self->include('Module::AutoInstall'); require Module::AutoInstall; Module::AutoInstall::_installdeps_target(1); $self->auto_install(@args); } sub auto_install_now { my $self = shift; $self->auto_install(@_); Module::AutoInstall::do_install(); } 1; MongoDB-v1.2.2/inc/Module/Install/Base.pm000644 000765 000024 00000002147 12651754051 020247 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Base; use strict 'vars'; use vars qw{$VERSION}; BEGIN { $VERSION = '1.06'; } # Suspend handler for "redefined" warnings BEGIN { my $w = $SIG{__WARN__}; $SIG{__WARN__} = sub { $w }; } #line 42 sub new { my $class = shift; unless ( defined &{"${class}::call"} ) { *{"${class}::call"} = sub { shift->_top->call(@_) }; } unless ( defined &{"${class}::load"} ) { *{"${class}::load"} = sub { shift->_top->load(@_) }; } bless { @_ }, $class; } #line 61 sub AUTOLOAD { local $@; my $func = eval { shift->_top->autoload } or return; goto &$func; } #line 75 sub _top { $_[0]->{_top}; } #line 90 sub admin { $_[0]->_top->{admin} or Module::Install::Base::FakeAdmin->new; } #line 106 sub is_admin { ! 
$_[0]->admin->isa('Module::Install::Base::FakeAdmin'); } sub DESTROY {} package Module::Install::Base::FakeAdmin; use vars qw{$VERSION}; BEGIN { $VERSION = $Module::Install::Base::VERSION; } my $fake; sub new { $fake ||= bless(\@_, $_[0]); } sub AUTOLOAD {} sub DESTROY {} # Restore warning handler BEGIN { $SIG{__WARN__} = $SIG{__WARN__}->(); } 1; #line 159 MongoDB-v1.2.2/inc/Module/Install/Can.pm000644 000765 000024 00000006157 12651754051 020103 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Can; use strict; use Config (); use ExtUtils::MakeMaker (); use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } # check if we can load some module ### Upgrade this to not have to load the module if possible sub can_use { my ($self, $mod, $ver) = @_; $mod =~ s{::|\\}{/}g; $mod .= '.pm' unless $mod =~ /\.pm$/i; my $pkg = $mod; $pkg =~ s{/}{::}g; $pkg =~ s{\.pm$}{}i; local $@; eval { require $mod; $pkg->VERSION($ver || 0); 1 }; } # Check if we can run some command sub can_run { my ($self, $cmd) = @_; my $_cmd = $cmd; return $_cmd if (-x $_cmd or $_cmd = MM->maybe_command($_cmd)); for my $dir ((split /$Config::Config{path_sep}/, $ENV{PATH}), '.') { next if $dir eq ''; require File::Spec; my $abs = File::Spec->catfile($dir, $cmd); return $abs if (-x $abs or $abs = MM->maybe_command($abs)); } return; } # Can our C compiler environment build XS files sub can_xs { my $self = shift; # Ensure we have the CBuilder module $self->configure_requires( 'ExtUtils::CBuilder' => 0.27 ); # Do we have the configure_requires checker? local $@; eval "require ExtUtils::CBuilder;"; if ( $@ ) { # They don't obey configure_requires, so it is # someone old and delicate. Try to avoid hurting # them by falling back to an older simpler test. return $self->can_cc(); } # Do we have a working C compiler my $builder = ExtUtils::CBuilder->new( quiet => 1, ); unless ( $builder->have_compiler ) { # No working C compiler return 0; } # Write a C file representative of what XS becomes require File::Temp; my ( $FH, $tmpfile ) = File::Temp::tempfile( "compilexs-XXXXX", SUFFIX => '.c', ); binmode $FH; print $FH <<'END_C'; #include "EXTERN.h" #include "perl.h" #include "XSUB.h" int main(int argc, char **argv) { return 0; } int boot_sanexs() { return 1; } END_C close $FH; # Can the C compiler access the same headers XS does my @libs = (); my $object = undef; eval { local $^W = 0; $object = $builder->compile( source => $tmpfile, ); @libs = $builder->link( objects => $object, module_name => 'sanexs', ); }; my $result = $@ ? 0 : 1; # Clean up all the build files foreach ( $tmpfile, $object, @libs ) { next unless defined $_; 1 while unlink; } return $result; } # Can we locate a (the) C compiler sub can_cc { my $self = shift; my @chunks = split(/ /, $Config::Config{cc}) or return; # $Config{cc} may contain args; try to find out the program part while (@chunks) { return $self->can_run("@chunks") || (pop(@chunks), next); } return; } # Fix Cygwin bug on maybe_command(); if ( $^O eq 'cygwin' ) { require ExtUtils::MM_Cygwin; require ExtUtils::MM_Win32; if ( ! 
defined(&ExtUtils::MM_Cygwin::maybe_command) ) { *ExtUtils::MM_Cygwin::maybe_command = sub { my ($self, $file) = @_; if ($file =~ m{^/cygdrive/}i and ExtUtils::MM_Win32->can('maybe_command')) { ExtUtils::MM_Win32->maybe_command($file); } else { ExtUtils::MM_Unix->maybe_command($file); } } } } 1; __END__ #line 236 MongoDB-v1.2.2/inc/Module/Install/Compiler.pm000644 000765 000024 00000005047 12651754051 021151 0ustar00davidstaff000000 000000 package Module::Install::Compiler; use strict; use File::Basename (); use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } sub ppport { my $self = shift; if ( $self->is_admin ) { return $self->admin->ppport(@_); } else { # Fallback to just a check my $file = shift || 'ppport.h'; unless ( -f $file ) { die "Packaging error, $file is missing"; } } } sub cc_files { require Config; my $self = shift; $self->makemaker_args( OBJECT => join ' ', map { substr($_, 0, -2) . $Config::Config{_o} } @_ ); } sub cc_inc_paths { my $self = shift; $self->makemaker_args( INC => join ' ', map { "-I$_" } @_ ); } sub cc_lib_paths { my $self = shift; $self->makemaker_args( LIBS => join ' ', map { "-L$_" } @_ ); } sub cc_lib_links { my $self = shift; $self->makemaker_args( LIBS => join ' ', $self->makemaker_args->{LIBS}, map { "-l$_" } @_ ); } sub cc_optimize_flags { my $self = shift; $self->makemaker_args( OPTIMIZE => join ' ', @_ ); } 1; __END__ =pod =head1 NAME Module::Install::Compiler - Commands for interacting with the C compiler =head1 SYNOPSIS To be completed =head1 DESCRIPTION Many Perl modules that contain C and XS code have fiendishly complex F<Makefile.PL> files, because L<ExtUtils::MakeMaker> doesn't itself provide a huge amount of assistance and automation in this area. B<Module::Install::Compiler> provides a number of commands that take care of common utility tasks, and try to take some of the intricacy out of creating C and XS modules. =head1 COMMANDS To be completed =head1 TO DO The current implementation is relatively fragile and minimalistic. It only handles some very basic wrapping around L<ExtUtils::MakeMaker>. It is currently undergoing extensive refactoring to provide a more generic compiler flag generation capability. This may take some time, and if anyone who maintains a Perl module that makes use of the compiler would like to help out, your assistance would be greatly appreciated. =head1 SEE ALSO L, L =head1 AUTHORS Refactored by Adam Kennedy E<lt>adamk@cpan.orgE<gt> Mostly by Audrey Tang E<lt>autrijus@autrijus.orgE<gt> Based on original works by Brian Ingerson E<lt>ingy@cpan.orgE<gt> =head1 COPYRIGHT Copyright 2002, 2003, 2004, 2006 by Adam Kennedy, Audrey Tang, Brian Ingerson. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See L<http://www.perl.com/perl/misc/Artistic.html> =cut MongoDB-v1.2.2/inc/Module/Install/Fetch.pm000644 000765 000024 00000004627 12651754051 020431 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Fetch; use strict; use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } sub get_file { my ($self, %args) = @_; my ($scheme, $host, $path, $file) = $args{url} =~ m|^(\w+)://([^/]+)(.+)/(.+)| or return; if ( $scheme eq 'http' and ! eval { require LWP::Simple; 1 } ) { $args{url} = $args{ftp_url} or (warn("LWP support unavailable!\n"), return); ($scheme, $host, $path, $file) = $args{url} =~ m|^(\w+)://([^/]+)(.+)/(.+)| or return; } $|++; print "Fetching '$file' from $host... 
"; unless (eval { require Socket; Socket::inet_aton($host) }) { warn "'$host' resolve failed!\n"; return; } return unless $scheme eq 'ftp' or $scheme eq 'http'; require Cwd; my $dir = Cwd::getcwd(); chdir $args{local_dir} or return if exists $args{local_dir}; if (eval { require LWP::Simple; 1 }) { LWP::Simple::mirror($args{url}, $file); } elsif (eval { require Net::FTP; 1 }) { eval { # use Net::FTP to get past firewall my $ftp = Net::FTP->new($host, Passive => 1, Timeout => 600); $ftp->login("anonymous", 'anonymous@example.com'); $ftp->cwd($path); $ftp->binary; $ftp->get($file) or (warn("$!\n"), return); $ftp->quit; } } elsif (my $ftp = $self->can_run('ftp')) { eval { # no Net::FTP, fallback to ftp.exe require FileHandle; my $fh = FileHandle->new; local $SIG{CHLD} = 'IGNORE'; unless ($fh->open("|$ftp -n")) { warn "Couldn't open ftp: $!\n"; chdir $dir; return; } my @dialog = split(/\n/, <<"END_FTP"); open $host user anonymous anonymous\@example.com cd $path binary get $file $file quit END_FTP foreach (@dialog) { $fh->print("$_\n") } $fh->close; } } else { warn "No working 'ftp' program available!\n"; chdir $dir; return; } unless (-f $file) { warn "Fetching failed: $@\n"; chdir $dir; return; } return if exists $args{size} and -s $file != $args{size}; system($args{run}) if exists $args{run}; unlink($file) if $args{remove}; print(((!exists $args{check_for} or -e $args{check_for}) ? "done!" : "failed! ($!)"), "\n"); chdir $dir; return !$?; } 1; MongoDB-v1.2.2/inc/Module/Install/Include.pm000644 000765 000024 00000001015 12651754051 020751 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Include; use strict; use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } sub include { shift()->admin->include(@_); } sub include_deps { shift()->admin->include_deps(@_); } sub auto_include { shift()->admin->auto_include(@_); } sub auto_include_deps { shift()->admin->auto_include_deps(@_); } sub auto_include_dependent_dists { shift()->admin->auto_include_dependent_dists(@_); } 1; MongoDB-v1.2.2/inc/Module/Install/Makefile.pm000644 000765 000024 00000027437 12651754051 021123 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Makefile; use strict 'vars'; use ExtUtils::MakeMaker (); use Module::Install::Base (); use Fcntl qw/:flock :seek/; use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } sub Makefile { $_[0] } my %seen = (); sub prompt { shift; # Infinite loop protection my @c = caller(); if ( ++$seen{"$c[1]|$c[2]|$_[0]"} > 3 ) { die "Caught an potential prompt infinite loop ($c[1]|$c[2]|$_[0])"; } # In automated testing or non-interactive session, always use defaults if ( ($ENV{AUTOMATED_TESTING} or -! -t STDIN) and ! $ENV{PERL_MM_USE_DEFAULT} ) { local $ENV{PERL_MM_USE_DEFAULT} = 1; goto &ExtUtils::MakeMaker::prompt; } else { goto &ExtUtils::MakeMaker::prompt; } } # Store a cleaned up version of the MakeMaker version, # since we need to behave differently in a variety of # ways based on the MM version. my $makemaker = eval $ExtUtils::MakeMaker::VERSION; # If we are passed a param, do a "newer than" comparison. # Otherwise, just return the MakeMaker version. sub makemaker { ( @_ < 2 or $makemaker >= eval($_[1]) ) ? $makemaker : 0 } # Ripped from ExtUtils::MakeMaker 6.56, and slightly modified # as we only need to know here whether the attribute is an array # or a hash or something else (which may or may not be appendable). 
my %makemaker_argtype = ( C => 'ARRAY', CONFIG => 'ARRAY', # CONFIGURE => 'CODE', # ignore DIR => 'ARRAY', DL_FUNCS => 'HASH', DL_VARS => 'ARRAY', EXCLUDE_EXT => 'ARRAY', EXE_FILES => 'ARRAY', FUNCLIST => 'ARRAY', H => 'ARRAY', IMPORTS => 'HASH', INCLUDE_EXT => 'ARRAY', LIBS => 'ARRAY', # ignore '' MAN1PODS => 'HASH', MAN3PODS => 'HASH', META_ADD => 'HASH', META_MERGE => 'HASH', PL_FILES => 'HASH', PM => 'HASH', PMLIBDIRS => 'ARRAY', PMLIBPARENTDIRS => 'ARRAY', PREREQ_PM => 'HASH', CONFIGURE_REQUIRES => 'HASH', SKIP => 'ARRAY', TYPEMAPS => 'ARRAY', XS => 'HASH', # VERSION => ['version',''], # ignore # _KEEP_AFTER_FLUSH => '', clean => 'HASH', depend => 'HASH', dist => 'HASH', dynamic_lib=> 'HASH', linkext => 'HASH', macro => 'HASH', postamble => 'HASH', realclean => 'HASH', test => 'HASH', tool_autosplit => 'HASH', # special cases where you can use makemaker_append CCFLAGS => 'APPENDABLE', DEFINE => 'APPENDABLE', INC => 'APPENDABLE', LDDLFLAGS => 'APPENDABLE', LDFROM => 'APPENDABLE', ); sub makemaker_args { my ($self, %new_args) = @_; my $args = ( $self->{makemaker_args} ||= {} ); foreach my $key (keys %new_args) { if ($makemaker_argtype{$key}) { if ($makemaker_argtype{$key} eq 'ARRAY') { $args->{$key} = [] unless defined $args->{$key}; unless (ref $args->{$key} eq 'ARRAY') { $args->{$key} = [$args->{$key}] } push @{$args->{$key}}, ref $new_args{$key} eq 'ARRAY' ? @{$new_args{$key}} : $new_args{$key}; } elsif ($makemaker_argtype{$key} eq 'HASH') { $args->{$key} = {} unless defined $args->{$key}; foreach my $skey (keys %{ $new_args{$key} }) { $args->{$key}{$skey} = $new_args{$key}{$skey}; } } elsif ($makemaker_argtype{$key} eq 'APPENDABLE') { $self->makemaker_append($key => $new_args{$key}); } } else { if (defined $args->{$key}) { warn qq{MakeMaker attribute "$key" is overriden; use "makemaker_append" to append values\n}; } $args->{$key} = $new_args{$key}; } } return $args; } # For mm args that take multiple space-seperated args, # append an argument to the current list. sub makemaker_append { my $self = shift; my $name = shift; my $args = $self->makemaker_args; $args->{$name} = defined $args->{$name} ? join( ' ', $args->{$name}, @_ ) : join( ' ', @_ ); } sub build_subdirs { my $self = shift; my $subdirs = $self->makemaker_args->{DIR} ||= []; for my $subdir (@_) { push @$subdirs, $subdir; } } sub clean_files { my $self = shift; my $clean = $self->makemaker_args->{clean} ||= {}; %$clean = ( %$clean, FILES => join ' ', grep { length $_ } ($clean->{FILES} || (), @_), ); } sub realclean_files { my $self = shift; my $realclean = $self->makemaker_args->{realclean} ||= {}; %$realclean = ( %$realclean, FILES => join ' ', grep { length $_ } ($realclean->{FILES} || (), @_), ); } sub libs { my $self = shift; my $libs = ref $_[0] ? shift : [ shift ]; $self->makemaker_args( LIBS => $libs ); } sub inc { my $self = shift; $self->makemaker_args( INC => shift ); } sub _wanted_t { } sub tests_recursive { my $self = shift; my $dir = shift || 't'; unless ( -d $dir ) { die "tests_recursive dir '$dir' does not exist"; } my %tests = map { $_ => 1 } split / /, ($self->tests || ''); require File::Find; File::Find::find( sub { /\.t$/ and -f $_ and $tests{"$File::Find::dir/*.t"} = 1 }, $dir ); $self->tests( join ' ', sort keys %tests ); } sub write { my $self = shift; die "&Makefile->write() takes no arguments\n" if @_; # Check the current Perl version my $perl_version = $self->perl_version; if ( $perl_version ) { eval "use $perl_version; 1" or die "ERROR: perl: Version $] is installed, " . 
"but we need version >= $perl_version"; } # Make sure we have a new enough MakeMaker require ExtUtils::MakeMaker; if ( $perl_version and $self->_cmp($perl_version, '5.006') >= 0 ) { # This previous attempted to inherit the version of # ExtUtils::MakeMaker in use by the module author, but this # was found to be untenable as some authors build releases # using future dev versions of EU:MM that nobody else has. # Instead, #toolchain suggests we use 6.59 which is the most # stable version on CPAN at time of writing and is, to quote # ribasushi, "not terminally fucked, > and tested enough". # TODO: We will now need to maintain this over time to push # the version up as new versions are released. $self->build_requires( 'ExtUtils::MakeMaker' => 6.59 ); $self->configure_requires( 'ExtUtils::MakeMaker' => 6.59 ); } else { # Allow legacy-compatibility with 5.005 by depending on the # most recent EU:MM that supported 5.005. $self->build_requires( 'ExtUtils::MakeMaker' => 6.36 ); $self->configure_requires( 'ExtUtils::MakeMaker' => 6.36 ); } # Generate the MakeMaker params my $args = $self->makemaker_args; $args->{DISTNAME} = $self->name; $args->{NAME} = $self->module_name || $self->name; $args->{NAME} =~ s/-/::/g; $args->{VERSION} = $self->version or die <<'EOT'; ERROR: Can't determine distribution version. Please specify it explicitly via 'version' in Makefile.PL, or set a valid $VERSION in a module, and provide its file path via 'version_from' (or 'all_from' if you prefer) in Makefile.PL. EOT if ( $self->tests ) { my @tests = split ' ', $self->tests; my %seen; $args->{test} = { TESTS => (join ' ', grep {!$seen{$_}++} @tests), }; } elsif ( $Module::Install::ExtraTests::use_extratests ) { # Module::Install::ExtraTests doesn't set $self->tests and does its own tests via harness. # So, just ignore our xt tests here. 
} elsif ( -d 'xt' and ($Module::Install::AUTHOR or $ENV{RELEASE_TESTING}) ) { $args->{test} = { TESTS => join( ' ', map { "$_/*.t" } grep { -d $_ } qw{ t xt } ), }; } if ( $] >= 5.005 ) { $args->{ABSTRACT} = $self->abstract; $args->{AUTHOR} = join ', ', @{$self->author || []}; } if ( $self->makemaker(6.10) ) { $args->{NO_META} = 1; #$args->{NO_MYMETA} = 1; } if ( $self->makemaker(6.17) and $self->sign ) { $args->{SIGN} = 1; } unless ( $self->is_admin ) { delete $args->{SIGN}; } if ( $self->makemaker(6.31) and $self->license ) { $args->{LICENSE} = $self->license; } my $prereq = ($args->{PREREQ_PM} ||= {}); %$prereq = ( %$prereq, map { @$_ } # flatten [module => version] map { @$_ } grep $_, ($self->requires) ); # Remove any reference to perl, PREREQ_PM doesn't support it delete $args->{PREREQ_PM}->{perl}; # Merge both kinds of requires into BUILD_REQUIRES my $build_prereq = ($args->{BUILD_REQUIRES} ||= {}); %$build_prereq = ( %$build_prereq, map { @$_ } # flatten [module => version] map { @$_ } grep $_, ($self->configure_requires, $self->build_requires) ); # Remove any reference to perl, BUILD_REQUIRES doesn't support it delete $args->{BUILD_REQUIRES}->{perl}; # Delete bundled dists from prereq_pm, add it to Makefile DIR my $subdirs = ($args->{DIR} || []); if ($self->bundles) { my %processed; foreach my $bundle (@{ $self->bundles }) { my ($mod_name, $dist_dir) = @$bundle; delete $prereq->{$mod_name}; $dist_dir = File::Basename::basename($dist_dir); # dir for building this module if (not exists $processed{$dist_dir}) { if (-d $dist_dir) { # List as sub-directory to be processed by make push @$subdirs, $dist_dir; } # Else do nothing: the module is already present on the system $processed{$dist_dir} = undef; } } } unless ( $self->makemaker('6.55_03') ) { %$prereq = (%$prereq,%$build_prereq); delete $args->{BUILD_REQUIRES}; } if ( my $perl_version = $self->perl_version ) { eval "use $perl_version; 1" or die "ERROR: perl: Version $] is installed, " . "but we need version >= $perl_version"; if ( $self->makemaker(6.48) ) { $args->{MIN_PERL_VERSION} = $perl_version; } } if ($self->installdirs) { warn qq{old INSTALLDIRS (probably set by makemaker_args) is overriden by installdirs\n} if $args->{INSTALLDIRS}; $args->{INSTALLDIRS} = $self->installdirs; } my %args = map { ( $_ => $args->{$_} ) } grep {defined($args->{$_} ) } keys %$args; my $user_preop = delete $args{dist}->{PREOP}; if ( my $preop = $self->admin->preop($user_preop) ) { foreach my $key ( keys %$preop ) { $args{dist}->{$key} = $preop->{$key}; } } my $mm = ExtUtils::MakeMaker::WriteMakefile(%args); $self->fix_up_makefile($mm->{FIRST_MAKEFILE} || 'Makefile'); } sub fix_up_makefile { my $self = shift; my $makefile_name = shift; my $top_class = ref($self->_top) || ''; my $top_version = $self->_top->VERSION || ''; my $preamble = $self->preamble ? "# Preamble by $top_class $top_version\n" . $self->preamble : ''; my $postamble = "# Postamble by $top_class $top_version\n" . 
($self->postamble || ''); local *MAKEFILE; open MAKEFILE, "+< $makefile_name" or die "fix_up_makefile: Couldn't open $makefile_name: $!"; eval { flock MAKEFILE, LOCK_EX }; my $makefile = do { local $/; <MAKEFILE> }; $makefile =~ s/\b(test_harness\(\$\(TEST_VERBOSE\), )/$1'inc', /; $makefile =~ s/( -I\$\(INST_ARCHLIB\))/ -Iinc$1/g; $makefile =~ s/( "-I\$\(INST_LIB\)")/ "-Iinc"$1/g; $makefile =~ s/^(FULLPERL = .*)/$1 "-Iinc"/m; $makefile =~ s/^(PERL = .*)/$1 "-Iinc"/m; # Module::Install will never be used to build the Core Perl # Sometimes PERL_LIB and PERL_ARCHLIB get written anyway, which breaks # PREFIX/PERL5LIB, and thus, install_share. Blank them if they exist $makefile =~ s/^PERL_LIB = .+/PERL_LIB =/m; #$makefile =~ s/^PERL_ARCHLIB = .+/PERL_ARCHLIB =/m; # Perl 5.005 mentions PERL_LIB explicitly, so we have to remove that as well. $makefile =~ s/(\"?)-I\$\(PERL_LIB\)\1//g; # XXX - This is currently unused; not sure if it breaks other MM-users # $makefile =~ s/^pm_to_blib\s+:\s+/pm_to_blib :: /mg; seek MAKEFILE, 0, SEEK_SET; truncate MAKEFILE, 0; print MAKEFILE "$preamble$makefile$postamble" or die $!; close MAKEFILE or die $!; 1; } sub preamble { my ($self, $text) = @_; $self->{preamble} = $text . $self->{preamble} if defined $text; $self->{preamble}; } sub postamble { my ($self, $text) = @_; $self->{postamble} ||= $self->admin->postamble; $self->{postamble} .= $text if defined $text; $self->{postamble} } 1; __END__ #line 544 MongoDB-v1.2.2/inc/Module/Install/Metadata.pm000644 000765 000024 00000043277 12651754051 021126 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Metadata; use strict 'vars'; use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } my @boolean_keys = qw{ sign }; my @scalar_keys = qw{ name module_name abstract version distribution_type tests installdirs }; my @tuple_keys = qw{ configure_requires build_requires requires recommends bundles resources }; my @resource_keys = qw{ homepage bugtracker repository }; my @array_keys = qw{ keywords author }; *authors = \&author; sub Meta { shift } sub Meta_BooleanKeys { @boolean_keys } sub Meta_ScalarKeys { @scalar_keys } sub Meta_TupleKeys { @tuple_keys } sub Meta_ResourceKeys { @resource_keys } sub Meta_ArrayKeys { @array_keys } foreach my $key ( @boolean_keys ) { *$key = sub { my $self = shift; if ( defined wantarray and not @_ ) { return $self->{values}->{$key}; } $self->{values}->{$key} = ( @_ ?
$_[0] : 1 ); return $self; }; } foreach my $key ( @scalar_keys ) { *$key = sub { my $self = shift; return $self->{values}->{$key} if defined wantarray and !@_; $self->{values}->{$key} = shift; return $self; }; } foreach my $key ( @array_keys ) { *$key = sub { my $self = shift; return $self->{values}->{$key} if defined wantarray and !@_; $self->{values}->{$key} ||= []; push @{$self->{values}->{$key}}, @_; return $self; }; } foreach my $key ( @resource_keys ) { *$key = sub { my $self = shift; unless ( @_ ) { return () unless $self->{values}->{resources}; return map { $_->[1] } grep { $_->[0] eq $key } @{ $self->{values}->{resources} }; } return $self->{values}->{resources}->{$key} unless @_; my $uri = shift or die( "Did not provide a value to $key()" ); $self->resources( $key => $uri ); return 1; }; } foreach my $key ( grep { $_ ne "resources" } @tuple_keys) { *$key = sub { my $self = shift; return $self->{values}->{$key} unless @_; my @added; while ( @_ ) { my $module = shift or last; my $version = shift || 0; push @added, [ $module, $version ]; } push @{ $self->{values}->{$key} }, @added; return map {@$_} @added; }; } # Resource handling my %lc_resource = map { $_ => 1 } qw{ homepage license bugtracker repository }; sub resources { my $self = shift; while ( @_ ) { my $name = shift or last; my $value = shift or next; if ( $name eq lc $name and ! $lc_resource{$name} ) { die("Unsupported reserved lowercase resource '$name'"); } $self->{values}->{resources} ||= []; push @{ $self->{values}->{resources} }, [ $name, $value ]; } $self->{values}->{resources}; } # Aliases for build_requires that will have alternative # meanings in some future version of META.yml. sub test_requires { shift->build_requires(@_) } sub install_requires { shift->build_requires(@_) } # Aliases for installdirs options sub install_as_core { $_[0]->installdirs('perl') } sub install_as_cpan { $_[0]->installdirs('site') } sub install_as_site { $_[0]->installdirs('site') } sub install_as_vendor { $_[0]->installdirs('vendor') } sub dynamic_config { my $self = shift; my $value = @_ ? shift : 1; if ( $self->{values}->{dynamic_config} ) { # Once dynamic we never change to static, for safety return 0; } $self->{values}->{dynamic_config} = $value ? 1 : 0; return 1; } # Convenience command sub static_config { shift->dynamic_config(0); } sub perl_version { my $self = shift; return $self->{values}->{perl_version} unless @_; my $version = shift or die( "Did not provide a value to perl_version()" ); # Normalize the version $version = $self->_perl_version($version); # We don't support the really old versions unless ( $version >= 5.005 ) { die "Module::Install only supports 5.005 or newer (use ExtUtils::MakeMaker)\n"; } $self->{values}->{perl_version} = $version; } sub all_from { my ( $self, $file ) = @_; unless ( defined($file) ) { my $name = $self->name or die( "all_from called with no args without setting name() first" ); $file = join('/', 'lib', split(/-/, $name)) . '.pm'; $file =~ s{.*/}{} unless -e $file; unless ( -e $file ) { die("all_from cannot find $file from $name"); } } unless ( -f $file ) { die("The path '$file' does not exist, or is not a file"); } $self->{values}{all_from} = $file; # Some methods pull from POD instead of code. 
# If there is a matching .pod, use that instead my $pod = $file; $pod =~ s/\.pm$/.pod/i; $pod = $file unless -e $pod; # Pull the different values $self->name_from($file) unless $self->name; $self->version_from($file) unless $self->version; $self->perl_version_from($file) unless $self->perl_version; $self->author_from($pod) unless @{$self->author || []}; $self->license_from($pod) unless $self->license; $self->abstract_from($pod) unless $self->abstract; return 1; } sub provides { my $self = shift; my $provides = ( $self->{values}->{provides} ||= {} ); %$provides = (%$provides, @_) if @_; return $provides; } sub auto_provides { my $self = shift; return $self unless $self->is_admin; unless (-e 'MANIFEST') { warn "Cannot deduce auto_provides without a MANIFEST, skipping\n"; return $self; } # Avoid spurious warnings as we are not checking manifest here. local $SIG{__WARN__} = sub {1}; require ExtUtils::Manifest; local *ExtUtils::Manifest::manicheck = sub { return }; require Module::Build; my $build = Module::Build->new( dist_name => $self->name, dist_version => $self->version, license => $self->license, ); $self->provides( %{ $build->find_dist_packages || {} } ); } sub feature { my $self = shift; my $name = shift; my $features = ( $self->{values}->{features} ||= [] ); my $mods; if ( @_ == 1 and ref( $_[0] ) ) { # The user used ->feature like ->features by passing in the second # argument as a reference. Accomodate for that. $mods = $_[0]; } else { $mods = \@_; } my $count = 0; push @$features, ( $name => [ map { ref($_) ? ( ref($_) eq 'HASH' ) ? %$_ : @$_ : $_ } @$mods ] ); return @$features; } sub features { my $self = shift; while ( my ( $name, $mods ) = splice( @_, 0, 2 ) ) { $self->feature( $name, @$mods ); } return $self->{values}->{features} ? @{ $self->{values}->{features} } : (); } sub no_index { my $self = shift; my $type = shift; push @{ $self->{values}->{no_index}->{$type} }, @_ if $type; return $self->{values}->{no_index}; } sub read { my $self = shift; $self->include_deps( 'YAML::Tiny', 0 ); require YAML::Tiny; my $data = YAML::Tiny::LoadFile('META.yml'); # Call methods explicitly in case user has already set some values. while ( my ( $key, $value ) = each %$data ) { next unless $self->can($key); if ( ref $value eq 'HASH' ) { while ( my ( $module, $version ) = each %$value ) { $self->can($key)->($self, $module => $version ); } } else { $self->can($key)->($self, $value); } } return $self; } sub write { my $self = shift; return $self unless $self->is_admin; $self->admin->write_meta; return $self; } sub version_from { require ExtUtils::MM_Unix; my ( $self, $file ) = @_; $self->version( ExtUtils::MM_Unix->parse_version($file) ); # for version integrity check $self->makemaker_args( VERSION_FROM => $file ); } sub abstract_from { require ExtUtils::MM_Unix; my ( $self, $file ) = @_; $self->abstract( bless( { DISTNAME => $self->name }, 'ExtUtils::MM_Unix' )->parse_abstract($file) ); } # Add both distribution and module name sub name_from { my ($self, $file) = @_; if ( Module::Install::_read($file) =~ m/ ^ \s* package \s* ([\w:]+) \s* ; /ixms ) { my ($name, $module_name) = ($1, $1); $name =~ s{::}{-}g; $self->name($name); unless ( $self->module_name ) { $self->module_name($module_name); } } else { die("Cannot determine name from $file\n"); } } sub _extract_perl_version { if ( $_[0] =~ m/ ^\s* (?:use|require) \s* v? 
([\d_\.]+) \s* ; /ixms ) { my $perl_version = $1; $perl_version =~ s{_}{}g; return $perl_version; } else { return; } } sub perl_version_from { my $self = shift; my $perl_version=_extract_perl_version(Module::Install::_read($_[0])); if ($perl_version) { $self->perl_version($perl_version); } else { warn "Cannot determine perl version info from $_[0]\n"; return; } } sub author_from { my $self = shift; my $content = Module::Install::_read($_[0]); if ($content =~ m/ =head \d \s+ (?:authors?)\b \s* ([^\n]*) | =head \d \s+ (?:licen[cs]e|licensing|copyright|legal)\b \s* .*? copyright .*? \d\d\d[\d.]+ \s* (?:\bby\b)? \s* ([^\n]*) /ixms) { my $author = $1 || $2; # XXX: ugly but should work anyway... if (eval "require Pod::Escapes; 1") { # Pod::Escapes has a mapping table. # It's in core of perl >= 5.9.3, and should be installed # as one of the Pod::Simple's prereqs, which is a prereq # of Pod::Text 3.x (see also below). $author =~ s{ E<( (\d+) | ([A-Za-z]+) )> } { defined $2 ? chr($2) : defined $Pod::Escapes::Name2character_number{$1} ? chr($Pod::Escapes::Name2character_number{$1}) : do { warn "Unknown escape: E<$1>"; "E<$1>"; }; }gex; } elsif (eval "require Pod::Text; 1" && $Pod::Text::VERSION < 3) { # Pod::Text < 3.0 has yet another mapping table, # though the table name of 2.x and 1.x are different. # (1.x is in core of Perl < 5.6, 2.x is in core of # Perl < 5.9.3) my $mapping = ($Pod::Text::VERSION < 2) ? \%Pod::Text::HTML_Escapes : \%Pod::Text::ESCAPES; $author =~ s{ E<( (\d+) | ([A-Za-z]+) )> } { defined $2 ? chr($2) : defined $mapping->{$1} ? $mapping->{$1} : do { warn "Unknown escape: E<$1>"; "E<$1>"; }; }gex; } else { $author =~ s{E<lt>}{<}g; $author =~ s{E<gt>}{>}g; } $self->author($author); } else { warn "Cannot determine author info from $_[0]\n"; } } #Stolen from M::B my %license_urls = ( perl => 'http://dev.perl.org/licenses/', apache => 'http://apache.org/licenses/LICENSE-2.0', apache_1_1 => 'http://apache.org/licenses/LICENSE-1.1', artistic => 'http://opensource.org/licenses/artistic-license.php', artistic_2 => 'http://opensource.org/licenses/artistic-license-2.0.php', lgpl => 'http://opensource.org/licenses/lgpl-license.php', lgpl2 => 'http://opensource.org/licenses/lgpl-2.1.php', lgpl3 => 'http://opensource.org/licenses/lgpl-3.0.html', bsd => 'http://opensource.org/licenses/bsd-license.php', gpl => 'http://opensource.org/licenses/gpl-license.php', gpl2 => 'http://opensource.org/licenses/gpl-2.0.php', gpl3 => 'http://opensource.org/licenses/gpl-3.0.html', mit => 'http://opensource.org/licenses/mit-license.php', mozilla => 'http://opensource.org/licenses/mozilla1.1.php', open_source => undef, unrestricted => undef, restrictive => undef, unknown => undef, ); sub license { my $self = shift; return $self->{values}->{license} unless @_; my $license = shift or die( 'Did not provide a value to license()' ); $license = __extract_license($license) || lc $license; $self->{values}->{license} = $license; # Automatically fill in license URLs if ( $license_urls{$license} ) { $self->resources( license => $license_urls{$license} ); } return 1; } sub _extract_license { my $pod = shift; my $matched; return __extract_license( ($matched) = $pod =~ m/ (=head \d \s+ L(?i:ICEN[CS]E|ICENSING)\b.*?) (=head \d.*|=cut.*|)\z /xms ) || __extract_license( ($matched) = $pod =~ m/ (=head \d \s+ (?:C(?i:OPYRIGHTS?)|L(?i:EGAL))\b.*?)
(=head \d.*|=cut.*|)\z /xms ); } sub __extract_license { my $license_text = shift or return; my @phrases = ( '(?:under )?the same (?:terms|license) as (?:perl|the perl (?:\d )?programming language)' => 'perl', 1, '(?:under )?the terms of (?:perl|the perl programming language) itself' => 'perl', 1, 'Artistic and GPL' => 'perl', 1, 'GNU general public license' => 'gpl', 1, 'GNU public license' => 'gpl', 1, 'GNU lesser general public license' => 'lgpl', 1, 'GNU lesser public license' => 'lgpl', 1, 'GNU library general public license' => 'lgpl', 1, 'GNU library public license' => 'lgpl', 1, 'GNU Free Documentation license' => 'unrestricted', 1, 'GNU Affero General Public License' => 'open_source', 1, '(?:Free)?BSD license' => 'bsd', 1, 'Artistic license 2\.0' => 'artistic_2', 1, 'Artistic license' => 'artistic', 1, 'Apache (?:Software )?license' => 'apache', 1, 'GPL' => 'gpl', 1, 'LGPL' => 'lgpl', 1, 'BSD' => 'bsd', 1, 'Artistic' => 'artistic', 1, 'MIT' => 'mit', 1, 'Mozilla Public License' => 'mozilla', 1, 'Q Public License' => 'open_source', 1, 'OpenSSL License' => 'unrestricted', 1, 'SSLeay License' => 'unrestricted', 1, 'zlib License' => 'open_source', 1, 'proprietary' => 'proprietary', 0, ); while ( my ($pattern, $license, $osi) = splice(@phrases, 0, 3) ) { $pattern =~ s#\s+#\\s+#gs; if ( $license_text =~ /\b$pattern\b/i ) { return $license; } } return ''; } sub license_from { my $self = shift; if (my $license=_extract_license(Module::Install::_read($_[0]))) { $self->license($license); } else { warn "Cannot determine license info from $_[0]\n"; return 'unknown'; } } sub _extract_bugtracker { my @links = $_[0] =~ m#L<( https?\Q://rt.cpan.org/\E[^>]+| https?\Q://github.com/\E[\w_]+/[\w_]+/issues| https?\Q://code.google.com/p/\E[\w_\-]+/issues/list )>#gx; my %links; @links{@links}=(); @links=keys %links; return @links; } sub bugtracker_from { my $self = shift; my $content = Module::Install::_read($_[0]); my @links = _extract_bugtracker($content); unless ( @links ) { warn "Cannot determine bugtracker info from $_[0]\n"; return 0; } if ( @links > 1 ) { warn "Found more than one bugtracker link in $_[0]\n"; return 0; } # Set the bugtracker bugtracker( $links[0] ); return 1; } sub requires_from { my $self = shift; my $content = Module::Install::_readperl($_[0]); my @requires = $content =~ m/^use\s+([^\W\d]\w*(?:::\w+)*)\s+(v?[\d\.]+)/mg; while ( @requires ) { my $module = shift @requires; my $version = shift @requires; $self->requires( $module => $version ); } } sub test_requires_from { my $self = shift; my $content = Module::Install::_readperl($_[0]); my @requires = $content =~ m/^use\s+([^\W\d]\w*(?:::\w+)*)\s+([\d\.]+)/mg; while ( @requires ) { my $module = shift @requires; my $version = shift @requires; $self->test_requires( $module => $version ); } } # Convert triple-part versions (eg, 5.6.1 or 5.8.9) to # numbers (eg, 5.006001 or 5.008009). # Also, convert double-part versions (eg, 5.8) sub _perl_version { my $v = $_[-1]; $v =~ s/^([1-9])\.([1-9]\d?\d?)$/sprintf("%d.%03d",$1,$2)/e; $v =~ s/^([1-9])\.([1-9]\d?\d?)\.(0|[1-9]\d?\d?)$/sprintf("%d.%03d%03d",$1,$2,$3 || 0)/e; $v =~ s/(\.\d\d\d)000$/$1/; $v =~ s/_.+$//; if ( ref($v) ) { # Numify $v = $v + 0; } return $v; } sub add_metadata { my $self = shift; my %hash = @_; for my $key (keys %hash) { warn "add_metadata: $key is not prefixed with 'x_'.\n" . 
"Use appopriate function to add non-private metadata.\n" unless $key =~ /^x_/; $self->{values}->{$key} = $hash{$key}; } } ###################################################################### # MYMETA Support sub WriteMyMeta { die "WriteMyMeta has been deprecated"; } sub write_mymeta_yaml { my $self = shift; # We need YAML::Tiny to write the MYMETA.yml file unless ( eval { require YAML::Tiny; 1; } ) { return 1; } # Generate the data my $meta = $self->_write_mymeta_data or return 1; # Save as the MYMETA.yml file print "Writing MYMETA.yml\n"; YAML::Tiny::DumpFile('MYMETA.yml', $meta); } sub write_mymeta_json { my $self = shift; # We need JSON to write the MYMETA.json file unless ( eval { require JSON; 1; } ) { return 1; } # Generate the data my $meta = $self->_write_mymeta_data or return 1; # Save as the MYMETA.yml file print "Writing MYMETA.json\n"; Module::Install::_write( 'MYMETA.json', JSON->new->pretty(1)->canonical->encode($meta), ); } sub _write_mymeta_data { my $self = shift; # If there's no existing META.yml there is nothing we can do return undef unless -f 'META.yml'; # We need Parse::CPAN::Meta to load the file unless ( eval { require Parse::CPAN::Meta; 1; } ) { return undef; } # Merge the perl version into the dependencies my $val = $self->Meta->{values}; my $perl = delete $val->{perl_version}; if ( $perl ) { $val->{requires} ||= []; my $requires = $val->{requires}; # Canonize to three-dot version after Perl 5.6 if ( $perl >= 5.006 ) { $perl =~ s{^(\d+)\.(\d\d\d)(\d*)}{join('.', $1, int($2||0), int($3||0))}e } unshift @$requires, [ perl => $perl ]; } # Load the advisory META.yml file my @yaml = Parse::CPAN::Meta::LoadFile('META.yml'); my $meta = $yaml[0]; # Overwrite the non-configure dependency hashs delete $meta->{requires}; delete $meta->{build_requires}; delete $meta->{recommends}; if ( exists $val->{requires} ) { $meta->{requires} = { map { @$_ } @{ $val->{requires} } }; } if ( exists $val->{build_requires} ) { $meta->{build_requires} = { map { @$_ } @{ $val->{build_requires} } }; } return $meta; } 1; MongoDB-v1.2.2/inc/Module/Install/PRIVATE/000755 000765 000024 00000000000 12651754051 020145 5ustar00davidstaff000000 000000 MongoDB-v1.2.2/inc/Module/Install/Win32.pm000644 000765 000024 00000003403 12651754051 020273 0ustar00davidstaff000000 000000 #line 1 package Module::Install::Win32; use strict; use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = 'Module::Install::Base'; $ISCORE = 1; } # determine if the user needs nmake, and download it if needed sub check_nmake { my $self = shift; $self->load('can_run'); $self->load('get_file'); require Config; return unless ( $^O eq 'MSWin32' and $Config::Config{make} and $Config::Config{make} =~ /^nmake\b/i and ! $self->can_run('nmake') ); print "The required 'nmake' executable not found, fetching it...\n"; require File::Basename; my $rv = $self->get_file( url => 'http://download.microsoft.com/download/vc15/Patch/1.52/W95/EN-US/Nmake15.exe', ftp_url => 'ftp://ftp.microsoft.com/Softlib/MSLFILES/Nmake15.exe', local_dir => File::Basename::dirname($^X), size => 51928, run => 'Nmake15.exe /o > nul', check_for => 'Nmake.exe', remove => 1, ); die <<'END_MESSAGE' unless $rv; ------------------------------------------------------------------------------- Since you are using Microsoft Windows, you will need the 'nmake' utility before installation. 
It's available at: http://download.microsoft.com/download/vc15/Patch/1.52/W95/EN-US/Nmake15.exe or ftp://ftp.microsoft.com/Softlib/MSLFILES/Nmake15.exe Please download the file manually, save it to a directory in %PATH% (e.g. C:\WINDOWS\COMMAND\), then launch the MS-DOS command line shell, "cd" to that directory, and run "Nmake15.exe" from there; that will create the 'nmake.exe' file needed by this module. You may then resume the installation process described in README. ------------------------------------------------------------------------------- END_MESSAGE } 1; MongoDB-v1.2.2/inc/Module/Install/WriteAll.pm000644 000765 000024 00000002376 12651754051 021124 0ustar00davidstaff000000 000000 #line 1 package Module::Install::WriteAll; use strict; use Module::Install::Base (); use vars qw{$VERSION @ISA $ISCORE}; BEGIN { $VERSION = '1.06'; @ISA = qw{Module::Install::Base}; $ISCORE = 1; } sub WriteAll { my $self = shift; my %args = ( meta => 1, sign => 0, inline => 0, check_nmake => 1, @_, ); $self->sign(1) if $args{sign}; $self->admin->WriteAll(%args) if $self->is_admin; $self->check_nmake if $args{check_nmake}; unless ( $self->makemaker_args->{PL_FILES} ) { # XXX: This still may be a bit over-defensive... unless ($self->makemaker(6.25)) { $self->makemaker_args( PL_FILES => {} ) if -f 'Build.PL'; } } # Until ExtUtils::MakeMaker support MYMETA.yml, make sure # we clean it up properly ourself. $self->realclean_files('MYMETA.yml'); if ( $args{inline} ) { $self->Inline->write; } else { $self->Makefile->write; } # The Makefile write process adds a couple of dependencies, # so write the META.yml files after the Makefile. if ( $args{meta} ) { $self->Meta->write; } # Experimental support for MYMETA if ( $ENV{X_MYMETA} ) { if ( $ENV{X_MYMETA} eq 'JSON' ) { $self->Meta->write_mymeta_json; } else { $self->Meta->write_mymeta_yaml; } } return 1; } 1; MongoDB-v1.2.2/inc/Module/Install/PRIVATE/Mongo.pm000644 000765 000024 00000016517 12651754051 021574 0ustar00davidstaff000000 000000 use strict; use warnings; package Module::Install::PRIVATE::Mongo; use Module::Install::Base; use Config; use Config::AutoConf 0.22; use Path::Tiny 0.052; use File::Spec::Functions qw/catdir/; use Cwd; our @ISA = qw{Module::Install::Base}; use constant { HAS_GCC => $Config{ccname} =~ /gcc/ ? 1 : 0, }; sub check_for_outdated_win_gcc { return if $ENV{MONGODB_NO_WIN32_GCC_CHECK}; return if ! HAS_GCC; local $@; my $gcc_ver = eval { my ( $v ) = split / /, $Config{gccversion}; "$v"; }; die "Could not identify gcc version in '$Config{gccversion}' due to:\n$@" if !$gcc_ver or $@; my $gcc_vstring = eval "v$gcc_ver"; die "Could not parse gcc version '$gcc_ver':\n$@" if !$gcc_vstring or $@; my $min_work_ver = "4.6.3"; my $min_work_vstring = eval "v$min_work_ver"; return if $gcc_vstring ge $min_work_vstring; die <<"END"; !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Your gcc is version '$gcc_ver'. The highest known incompatible version of gcc is '4.4.7'. The lowest known compatible version of gcc is '$min_work_ver'. Your gcc version is highly unlikely to be able to compile BSON, since the libraries/headers that come with it is incompatible with our version of libbson. We're aborting here forcibly so you will see this message. You have the following options at this point: 1. set MONGODB_NO_WIN32_GCC_CHECK to any value to ignore this message and retry 2. if you know C, try and help us by upgrading our libbson, patches welcome! 3. install a newer gcc, '$min_work_ver' or higher 4. 
install Strawberry 5.16.3 or higher, their gcc versions are compatible !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! END } sub mongo { my ($self, @mongo_vars) = @_; my $ccflags = $self->makemaker_args->{CCFLAGS} || $Config{ccflags}; $ccflags = "" unless defined $ccflags; $ccflags .= " -Wall -Wextra -Wuninitialized -Wdeclaration-after-statement" if HAS_GCC && ( $ENV{AUTHOR_TESTING} || $ENV{AUTOMATED_TESTING} ); # Perl on older Centos doesn't come with this by default $ccflags .= " -D_GNU_SOURCE" if HAS_GCC && $ccflags !~ /-D_GNU_SOURCE/; # MSWin32 requires newer gcc (if using gcc) if ( $^O eq 'MSWin32' ) { check_for_outdated_win_gcc; } # openbsd needs threaded perl *or* single-threaded but with libpthread, so # we check specifically for that if ($^O eq 'openbsd') { my $has_libpthread = qx{/usr/bin/ldd $Config{perlpath}} =~ /libpthread/; die "OS unsupported: OpenBSD support requires a perl linked with libpthread" unless $has_libpthread; } # check for 64-bit if ($Config{use64bitint}) { $ccflags .= " -DMONGO_USE_64_BIT_INT"; } # check for big-endian my $endianess = $Config{byteorder}; if ($endianess == 4321 || $endianess == 87654321) { $ccflags .= " -DMONGO_BIG_ENDIAN=1 "; if ( $] lt '5.010' ) { die "OS unsupported: Perl 5.10 or greater is required for big-endian platforms"; } } # needed to compile bson library $ccflags .= " -DBSON_COMPILATION "; my $conf = $self->configure_bson; if ($conf->{BSON_WITH_OID32_PT} || $conf->{BSON_WITH_OID64_PT}) { my $pthread = $^O eq 'solaris' ? " -pthreads " : " -pthread "; $ccflags .= $pthread; my $ldflags = $self->makemaker_args->{LDFLAGS}; $ldflags = "" unless defined $ldflags; $self->makemaker_args( LDFLAGS => "$ldflags $pthread" ); } if ( $conf->{BSON_HAVE_CLOCK_GETTIME} ) { my $libs = $self->makemaker_args->{LIBS}; $libs = "" unless defined $libs; $self->makemaker_args( LIBS => "$libs -lrt" ); } $self->makemaker_args( CCFLAGS => $ccflags ); $self->xs_files; $self->makemaker_args( INC => '-I. -Ibson' ); return; } sub xs_files { my ($self) = @_; my (@clean, @OBJECT, %XS); for my $xs () { (my $c = $xs) =~ s/\.xs$/.c/i; (my $o = $xs) =~ s/\.xs$/\$(OBJ_EXT)/i; $XS{$xs} = $c; push @OBJECT, $o; push @clean, $o; } for my $c (<*.c>, ) { (my $o = $c) =~ s/\.c$/\$(OBJ_EXT)/i; push @OBJECT, $o; push @clean, $o; } $self->makemaker_args( clean => { FILES => join(q{ }, @clean) }, OBJECT => join(q{ }, @OBJECT), XS => \%XS, ); $self->postamble(<<'HERE'); $(OBJECT) : perl_mongo.h cover : pure_all HARNESS_PERL_SWITCHES=-MDevel::Cover make test ptest : pure_all HARNESS_OPTIONS=j9 make test HERE return; } # Quick and dirty autoconf substitute sub configure_bson { my ($self) = @_; my $conf = $self->probe_bson_config; my $config_guts = path("bson/bson-config.h.in")->slurp; for my $key ( %$conf ) { $config_guts =~ s/\@$key\@/$conf->{$key}/; } path("bson/bson-config.h")->spew($config_guts); return $conf; } sub probe_bson_config { my ($self) = @_; my $ca = Config::AutoConf->new; $ca->push_lang("C"); my %conf; ##/* ## * Define to 1234 for Little Endian, 4321 for Big Endian. ## */ $conf{BSON_BYTE_ORDER} = $Config{byteorder} =~ /^1234/ ? '1234' : '4321'; ##/* ## * Define to 1 if you have stdbool.h ## */ $conf{BSON_HAVE_STDBOOL_H} = $Config{i_stdbool} ? 1 : 0; ##/* ## * Define to 1 for POSIX-like systems, 2 for Windows. ## */ $conf{BSON_OS} = $^O eq 'MSWin32' ? 2 : 1; ##/* ## * Define to 1 if you have clock_gettime() available. 
## */ ## XXX also needs to link -lrt for this to work { my $ca = Config::AutoConf->new; $ca->push_libraries('rt'); $conf{BSON_HAVE_CLOCK_GETTIME} = $ca->link_if_else( $ca->lang_call("", "clock_gettime") ) ? 1 : 0; } ##/* ## * Define to 1 if you have strnlen available on your platform. ## */ $conf{BSON_HAVE_STRNLEN} = $ca->link_if_else( $ca->lang_call("", "strnlen") ) ? 1 : 0; ##/* ## * Define to 1 if you have snprintf available on your platform. ## */ $conf{BSON_HAVE_SNPRINTF} = $Config{d_snprintf} ? 1 : 0; ##/* ## * Define to 1 if your system requires {} around PTHREAD_ONCE_INIT. ## * This is typically just Solaris 8-10. ## */ ## pthread-related configuration if ( $^O eq 'MSWin32' ) { $conf{BSON_PTHREAD_ONCE_INIT_NEEDS_BRACES} = 0; } else { $conf{BSON_PTHREAD_ONCE_INIT_NEEDS_BRACES} = $ca->link_if_else(<<'HERE') ? 0 : 1; #include pthread_once_t foo = PTHREAD_ONCE_INIT; int main () { ; return 0; } HERE } ##/* ## * Define to 1 if we have access to GCC 32-bit atomic builtins. ## * While this requires GCC 4.1+ in most cases, it is also architecture ## * dependent. For example, some PPC or ARM systems may not have it even ## * if it is a recent GCC version. ## */ $conf{BSON_HAVE_ATOMIC_32_ADD_AND_FETCH} = $ca->link_if_else(<<'HERE') ? 1 : 0; #include int main () { int32_t seq = 0; __sync_fetch_and_add_4(&seq, (int32_t)1); return seq; } HERE ##/* ## * Similarly, define to 1 if we have access to GCC 64-bit atomic builtins. ## */ $conf{BSON_HAVE_ATOMIC_64_ADD_AND_FETCH} = $ca->link_if_else(<<'HERE') ? 1 : 0; #include int main () { int64_t seq = 0; __sync_fetch_and_add_8(&seq, (int64_t)1); return seq; } HERE return \%conf; } 1; MongoDB-v1.2.2/bson/b64_ntop.h000644 000765 000024 00000016556 12651754051 016211 0ustar00davidstaff000000 000000 /* * Copyright (c) 1996, 1998 by Internet Software Consortium. * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS * SOFTWARE. */ /* * Portions Copyright (c) 1995 by International Business Machines, Inc. * * International Business Machines, Inc. (hereinafter called IBM) grants * permission under its copyrights to use, copy, modify, and distribute this * Software with or without fee, provided that the above copyright notice and * all paragraphs of this notice appear in all copies, and that the name of IBM * not be used in connection with the marketing of any product incorporating * the Software or modifications thereof, without specific, written prior * permission. * * To the extent it has a right to do so, IBM grants an immunity from suit * under its patents, if any, for the use, sale or manufacture of products to * the extent that such products are used for performing Domain Name System * dynamic updates in TCP/IP networks by means of the Software. No immunity is * granted for any product per se or for any other function of any product. 
* * THE SOFTWARE IS PROVIDED "AS IS", AND IBM DISCLAIMS ALL WARRANTIES, * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A * PARTICULAR PURPOSE. IN NO EVENT SHALL IBM BE LIABLE FOR ANY SPECIAL, * DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING * OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE, EVEN * IF IBM IS APPRISED OF THE POSSIBILITY OF SUCH DAMAGES. */ #include "bson-compat.h" #include "bson-macros.h" #include "bson-types.h" #define Assert(Cond) if (!(Cond)) abort () static const char Base64[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; static const char Pad64 = '='; /* (From RFC1521 and draft-ietf-dnssec-secext-03.txt) * The following encoding technique is taken from RFC 1521 by Borenstein * and Freed. It is reproduced here in a slightly edited form for * convenience. * * A 65-character subset of US-ASCII is used, enabling 6 bits to be * represented per printable character. (The extra 65th character, "=", * is used to signify a special processing function.) * * The encoding process represents 24-bit groups of input bits as output * strings of 4 encoded characters. Proceeding from left to right, a * 24-bit input group is formed by concatenating 3 8-bit input groups. * These 24 bits are then treated as 4 concatenated 6-bit groups, each * of which is translated into a single digit in the base64 alphabet. * * Each 6-bit group is used as an index into an array of 64 printable * characters. The character referenced by the index is placed in the * output string. * * Table 1: The Base64 Alphabet * * Value Encoding Value Encoding Value Encoding Value Encoding * 0 A 17 R 34 i 51 z * 1 B 18 S 35 j 52 0 * 2 C 19 T 36 k 53 1 * 3 D 20 U 37 l 54 2 * 4 E 21 V 38 m 55 3 * 5 F 22 W 39 n 56 4 * 6 G 23 X 40 o 57 5 * 7 H 24 Y 41 p 58 6 * 8 I 25 Z 42 q 59 7 * 9 J 26 a 43 r 60 8 * 10 K 27 b 44 s 61 9 * 11 L 28 c 45 t 62 + * 12 M 29 d 46 u 63 / * 13 N 30 e 47 v * 14 O 31 f 48 w (pad) = * 15 P 32 g 49 x * 16 Q 33 h 50 y * * Special processing is performed if fewer than 24 bits are available * at the end of the data being encoded. A full encoding quantum is * always completed at the end of a quantity. When fewer than 24 input * bits are available in an input group, zero bits are added (on the * right) to form an integral number of 6-bit groups. Padding at the * end of the data is performed using the '=' character. * * Since all base64 input is an integral number of octets, only the * following cases can arise: * * (1) the final quantum of encoding input is an integral * multiple of 24 bits; here, the final unit of encoded * output will be an integral multiple of 4 characters * with no "=" padding, * (2) the final quantum of encoding input is exactly 8 bits; * here, the final unit of encoded output will be two * characters followed by two "=" padding characters, or * (3) the final quantum of encoding input is exactly 16 bits; * here, the final unit of encoded output will be three * characters followed by one "=" padding character. 
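 *
 * (Editorial example, not part of the original ISC/IBM text: for the
 *  four input octets DE AD BE EF, the first group of three encodes to
 *  "3q2+" and the single trailing octet EF becomes "7w" plus two "="
 *  pad characters, so the full encoding is "3q2+7w==".  A minimal,
 *  hedged usage sketch of the b64_ntop() routine defined below; the
 *  buffer size follows from the bounds checks inside the function,
 *  4 * ((srclength + 2) / 3) bytes of output plus one for the NUL.)
 *
 *     uint8_t raw[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
 *     char    enc[4 * ((sizeof raw + 2) / 3) + 1];
 *     ssize_t n = b64_ntop (raw, sizeof raw, enc, sizeof enc);
 *     // on success n == 8 and enc holds "3q2+7w=="; -1 means the
 *     // target buffer was too small
 *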
*/ static ssize_t b64_ntop (uint8_t const *src, size_t srclength, char *target, size_t targsize) { size_t datalength = 0; uint8_t input[3]; uint8_t output[4]; size_t i; while (2 < srclength) { input[0] = *src++; input[1] = *src++; input[2] = *src++; srclength -= 3; output[0] = input[0] >> 2; output[1] = ((input[0] & 0x03) << 4) + (input[1] >> 4); output[2] = ((input[1] & 0x0f) << 2) + (input[2] >> 6); output[3] = input[2] & 0x3f; Assert (output[0] < 64); Assert (output[1] < 64); Assert (output[2] < 64); Assert (output[3] < 64); if (datalength + 4 > targsize) { return -1; } target[datalength++] = Base64[output[0]]; target[datalength++] = Base64[output[1]]; target[datalength++] = Base64[output[2]]; target[datalength++] = Base64[output[3]]; } /* Now we worry about padding. */ if (0 != srclength) { /* Get what's left. */ input[0] = input[1] = input[2] = '\0'; for (i = 0; i < srclength; i++) { input[i] = *src++; } output[0] = input[0] >> 2; output[1] = ((input[0] & 0x03) << 4) + (input[1] >> 4); output[2] = ((input[1] & 0x0f) << 2) + (input[2] >> 6); Assert (output[0] < 64); Assert (output[1] < 64); Assert (output[2] < 64); if (datalength + 4 > targsize) { return -1; } target[datalength++] = Base64[output[0]]; target[datalength++] = Base64[output[1]]; if (srclength == 1) { target[datalength++] = Pad64; } else{ target[datalength++] = Base64[output[2]]; } target[datalength++] = Pad64; } if (datalength >= targsize) { return -1; } target[datalength] = '\0'; /* Returned value doesn't count \0. */ return datalength; } MongoDB-v1.2.2/bson/b64_pton.h000644 000765 000024 00000026023 12651754051 016177 0ustar00davidstaff000000 000000 /* * Copyright (c) 1996, 1998 by Internet Software Consortium. * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS * SOFTWARE. */ /* * Portions Copyright (c) 1995 by International Business Machines, Inc. * * International Business Machines, Inc. (hereinafter called IBM) grants * permission under its copyrights to use, copy, modify, and distribute this * Software with or without fee, provided that the above copyright notice and * all paragraphs of this notice appear in all copies, and that the name of IBM * not be used in connection with the marketing of any product incorporating * the Software or modifications thereof, without specific, written prior * permission. * * To the extent it has a right to do so, IBM grants an immunity from suit * under its patents, if any, for the use, sale or manufacture of products to * the extent that such products are used for performing Domain Name System * dynamic updates in TCP/IP networks by means of the Software. No immunity is * granted for any product per se or for any other function of any product. 
* * THE SOFTWARE IS PROVIDED "AS IS", AND IBM DISCLAIMS ALL WARRANTIES, * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A * PARTICULAR PURPOSE. IN NO EVENT SHALL IBM BE LIABLE FOR ANY SPECIAL, * DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER ARISING * OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE, EVEN * IF IBM IS APPRISED OF THE POSSIBILITY OF SUCH DAMAGES. */ #include "bson-compat.h" #define Assert(Cond) if (!(Cond)) abort() static const char Base64[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; static const char Pad64 = '='; /* (From RFC1521 and draft-ietf-dnssec-secext-03.txt) The following encoding technique is taken from RFC 1521 by Borenstein and Freed. It is reproduced here in a slightly edited form for convenience. A 65-character subset of US-ASCII is used, enabling 6 bits to be represented per printable character. (The extra 65th character, "=", is used to signify a special processing function.) The encoding process represents 24-bit groups of input bits as output strings of 4 encoded characters. Proceeding from left to right, a 24-bit input group is formed by concatenating 3 8-bit input groups. These 24 bits are then treated as 4 concatenated 6-bit groups, each of which is translated into a single digit in the base64 alphabet. Each 6-bit group is used as an index into an array of 64 printable characters. The character referenced by the index is placed in the output string. Table 1: The Base64 Alphabet Value Encoding Value Encoding Value Encoding Value Encoding 0 A 17 R 34 i 51 z 1 B 18 S 35 j 52 0 2 C 19 T 36 k 53 1 3 D 20 U 37 l 54 2 4 E 21 V 38 m 55 3 5 F 22 W 39 n 56 4 6 G 23 X 40 o 57 5 7 H 24 Y 41 p 58 6 8 I 25 Z 42 q 59 7 9 J 26 a 43 r 60 8 10 K 27 b 44 s 61 9 11 L 28 c 45 t 62 + 12 M 29 d 46 u 63 / 13 N 30 e 47 v 14 O 31 f 48 w (pad) = 15 P 32 g 49 x 16 Q 33 h 50 y Special processing is performed if fewer than 24 bits are available at the end of the data being encoded. A full encoding quantum is always completed at the end of a quantity. When fewer than 24 input bits are available in an input group, zero bits are added (on the right) to form an integral number of 6-bit groups. Padding at the end of the data is performed using the '=' character. Since all base64 input is an integral number of octets, only the following cases can arise: (1) the final quantum of encoding input is an integral multiple of 24 bits; here, the final unit of encoded output will be an integral multiple of 4 characters with no "=" padding, (2) the final quantum of encoding input is exactly 8 bits; here, the final unit of encoded output will be two characters followed by two "=" padding characters, or (3) the final quantum of encoding input is exactly 16 bits; here, the final unit of encoded output will be three characters followed by one "=" padding character. */ /* skips all whitespace anywhere. converts characters, four at a time, starting at (or after) src from base - 64 numbers into three 8 bit bytes in the target area. it returns the number of data bytes stored at the target, or -1 on error. */ static int b64rmap_initialized = 0; static uint8_t b64rmap[256]; static const uint8_t b64rmap_special = 0xf0; static const uint8_t b64rmap_end = 0xfd; static const uint8_t b64rmap_space = 0xfe; static const uint8_t b64rmap_invalid = 0xff; /** * Initializing the reverse map is not thread safe. * Which is fine for NSD. For now... 
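 *
 * (Editorial note, a hedged decoding sketch assuming b64_pton() further
 *  below is the intended entry point: a NULL target makes it return the
 *  decoded length only, via b64_pton_len().  Because b64_pton_do() may
 *  write one scratch byte past the last decoded byte while handling a
 *  padded final group, the target buffer should be sized slightly
 *  larger than the exact decoded length.)
 *
 *     uint8_t buf[16];
 *     int     need = b64_pton ("3q2+7w==", NULL, 0);      // need == 4
 *     int     got  = b64_pton ("3q2+7w==", buf, sizeof buf);
 *     // got == 4 and buf starts with 0xDE 0xAD 0xBE 0xEF; -1 signals
 *     // a malformed string or an undersized buffer
 *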
**/ static void b64_initialize_rmap () { int i; unsigned char ch; /* Null: end of string, stop parsing */ b64rmap[0] = b64rmap_end; for (i = 1; i < 256; ++i) { ch = (unsigned char)i; /* Whitespaces */ if (isspace(ch)) b64rmap[i] = b64rmap_space; /* Padding: stop parsing */ else if (ch == Pad64) b64rmap[i] = b64rmap_end; /* Non-base64 char */ else b64rmap[i] = b64rmap_invalid; } /* Fill reverse mapping for base64 chars */ for (i = 0; Base64[i] != '\0'; ++i) b64rmap[(uint8_t)Base64[i]] = i; b64rmap_initialized = 1; } static int b64_pton_do(char const *src, uint8_t *target, size_t targsize) { int tarindex, state, ch; uint8_t ofs; state = 0; tarindex = 0; while (1) { ch = *src++; ofs = b64rmap[ch]; if (ofs >= b64rmap_special) { /* Ignore whitespaces */ if (ofs == b64rmap_space) continue; /* End of base64 characters */ if (ofs == b64rmap_end) break; /* A non-base64 character. */ return (-1); } switch (state) { case 0: if ((size_t)tarindex >= targsize) return (-1); target[tarindex] = ofs << 2; state = 1; break; case 1: if ((size_t)tarindex + 1 >= targsize) return (-1); target[tarindex] |= ofs >> 4; target[tarindex+1] = (ofs & 0x0f) << 4 ; tarindex++; state = 2; break; case 2: if ((size_t)tarindex + 1 >= targsize) return (-1); target[tarindex] |= ofs >> 2; target[tarindex+1] = (ofs & 0x03) << 6; tarindex++; state = 3; break; case 3: if ((size_t)tarindex >= targsize) return (-1); target[tarindex] |= ofs; tarindex++; state = 0; break; default: abort(); } } /* * We are done decoding Base-64 chars. Let's see if we ended * on a byte boundary, and/or with erroneous trailing characters. */ if (ch == Pad64) { /* We got a pad char. */ ch = *src++; /* Skip it, get next. */ switch (state) { case 0: /* Invalid = in first position */ case 1: /* Invalid = in second position */ return (-1); case 2: /* Valid, means one byte of info */ /* Skip any number of spaces. */ for ((void)NULL; ch != '\0'; ch = *src++) if (b64rmap[ch] != b64rmap_space) break; /* Make sure there is another trailing = sign. */ if (ch != Pad64) return (-1); ch = *src++; /* Skip the = */ /* Fall through to "single trailing =" case. */ /* FALLTHROUGH */ case 3: /* Valid, means two bytes of info */ /* * We know this char is an =. Is there anything but * whitespace after it? */ for ((void)NULL; ch != '\0'; ch = *src++) if (b64rmap[ch] != b64rmap_space) return (-1); /* * Now make sure for cases 2 and 3 that the "extra" * bits that slopped past the last full byte were * zeros. If we don't check them, they become a * subliminal channel. */ if (target[tarindex] != 0) return (-1); default: break; } } else { /* * We ended by seeing the end of the string. Make sure we * have no partial bytes lying around. */ if (state != 0) return (-1); } return (tarindex); } static int b64_pton_len(char const *src) { int tarindex, state, ch; uint8_t ofs; state = 0; tarindex = 0; while (1) { ch = *src++; ofs = b64rmap[ch]; if (ofs >= b64rmap_special) { /* Ignore whitespaces */ if (ofs == b64rmap_space) continue; /* End of base64 characters */ if (ofs == b64rmap_end) break; /* A non-base64 character. */ return (-1); } switch (state) { case 0: state = 1; break; case 1: tarindex++; state = 2; break; case 2: tarindex++; state = 3; break; case 3: tarindex++; state = 0; break; default: abort(); } } /* * We are done decoding Base-64 chars. Let's see if we ended * on a byte boundary, and/or with erroneous trailing characters. */ if (ch == Pad64) { /* We got a pad char. */ ch = *src++; /* Skip it, get next. 
*/ switch (state) { case 0: /* Invalid = in first position */ case 1: /* Invalid = in second position */ return (-1); case 2: /* Valid, means one byte of info */ /* Skip any number of spaces. */ for ((void)NULL; ch != '\0'; ch = *src++) if (b64rmap[ch] != b64rmap_space) break; /* Make sure there is another trailing = sign. */ if (ch != Pad64) return (-1); ch = *src++; /* Skip the = */ /* Fall through to "single trailing =" case. */ /* FALLTHROUGH */ case 3: /* Valid, means two bytes of info */ /* * We know this char is an =. Is there anything but * whitespace after it? */ for ((void)NULL; ch != '\0'; ch = *src++) if (b64rmap[ch] != b64rmap_space) return (-1); default: break; } } else { /* * We ended by seeing the end of the string. Make sure we * have no partial bytes lying around. */ if (state != 0) return (-1); } return (tarindex); } static int b64_pton(char const *src, uint8_t *target, size_t targsize) { if (!b64rmap_initialized) b64_initialize_rmap (); if (target) return b64_pton_do (src, target, targsize); else return b64_pton_len (src); } MongoDB-v1.2.2/bson/bson-atomic.c000644 000765 000024 00000003407 12651754051 016753 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-atomic.h" /* * We should only ever hit these on non-Windows systems, for which we require * pthread support. Therefore, we will avoid making a threading portability * for threads here and just use pthreads directly. */ #ifdef __BSON_NEED_BARRIER #include static pthread_mutex_t gBarrier = PTHREAD_MUTEX_INITIALIZER; void bson_memory_barrier (void) { pthread_mutex_lock (&gBarrier); pthread_mutex_unlock (&gBarrier); } #endif #ifdef __BSON_NEED_ATOMIC_32 #warning "Using mutex to emulate 32-bit atomics." #include static pthread_mutex_t gSync32 = PTHREAD_MUTEX_INITIALIZER; int32_t bson_atomic_int_add (volatile int32_t *p, int32_t n) { int ret; pthread_mutex_lock (&gSync32); *p += n; ret = *p; pthread_mutex_unlock (&gSync32); return ret; } #endif #ifdef __BSON_NEED_ATOMIC_64 #include static pthread_mutex_t gSync64 = PTHREAD_MUTEX_INITIALIZER; int64_t bson_atomic_int64_add (volatile int64_t *p, int64_t n) { int64_t ret; pthread_mutex_lock (&gSync64); *p += n; ret = *p; pthread_mutex_unlock (&gSync64); return ret; } #endif MongoDB-v1.2.2/bson/bson-atomic.h000644 000765 000024 00000005370 12651754051 016761 0ustar00davidstaff000000 000000 /* * Copyright 2013-2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
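 *
 * (Editorial note: the pthread-mutex fallbacks in bson-atomic.c above
 *  and the platform dispatch below all use add-and-fetch semantics,
 *  that is, the value returned is the value after the addition.  A
 *  small, hedged usage sketch with a hypothetical counter:
 *
 *      static volatile int32_t open_handles;
 *      int32_t count = bson_atomic_int_add (&open_handles, 1);
 *      // count is the post-increment value whether the GCC builtins,
 *      // Solaris atomics, Interlocked* or the mutex emulation is used
 *  )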
*/ #ifndef BSON_ATOMIC_H #define BSON_ATOMIC_H #include "bson-config.h" #include "bson-compat.h" #include "bson-macros.h" BSON_BEGIN_DECLS #if defined(__sun) && defined(__SVR4) /* Solaris */ # include # define bson_atomic_int_add(p,v) atomic_add_32_nv((volatile uint32_t *)p, (v)) # define bson_atomic_int64_add(p,v) atomic_add_64_nv((volatile uint64_t *)p, (v)) #elif defined(_WIN32) /* MSVC/MinGW */ # define bson_atomic_int_add(p, v) (InterlockedExchangeAdd((volatile LONG *)(p), (LONG)(v)) + (LONG)(v)) # define bson_atomic_int64_add(p, v) (InterlockedExchangeAdd64((volatile LONGLONG *)(p), (LONGLONG)(v)) + (LONGLONG)(v)) #else # ifdef BSON_HAVE_ATOMIC_32_ADD_AND_FETCH # define bson_atomic_int_add(p,v) __sync_add_and_fetch((p), (v)) # else # define __BSON_NEED_ATOMIC_32 # endif # ifdef BSON_HAVE_ATOMIC_64_ADD_AND_FETCH # if BSON_GNUC_IS_VERSION(4, 1) /* * GCC 4.1 on i386 can generate buggy 64-bit atomic increment. * So we will work around with a fallback. * * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40693 */ # define __BSON_NEED_ATOMIC_64 # else # define bson_atomic_int64_add(p, v) __sync_add_and_fetch((volatile int64_t*)(p), (int64_t)(v)) # endif # else # define __BSON_NEED_ATOMIC_64 # endif #endif #ifdef __BSON_NEED_ATOMIC_32 int32_t bson_atomic_int_add (volatile int32_t *p, int32_t n); #endif #ifdef __BSON_NEED_ATOMIC_64 int64_t bson_atomic_int64_add (volatile int64_t *p, int64_t n); #endif #if defined(_WIN32) # define bson_memory_barrier() MemoryBarrier() #elif defined(__GNUC__) # if BSON_GNUC_CHECK_VERSION(4, 1) # define bson_memory_barrier() __sync_synchronize() # else # warning "GCC Pre-4.1 discovered, using inline assembly for memory barrier." # define bson_memory_barrier() __asm__ volatile ("":::"memory") # endif #elif defined(__SUNPRO_C) # include # define bson_memory_barrier() __machine_rw_barrier() #elif defined(__xlC__) # define __sync() #else # define __BSON_NEED_BARRIER 1 # warning "Unknown compiler, using lock for compiler barrier." void bson_memory_barrier (void); #endif BSON_END_DECLS #endif /* BSON_ATOMIC_H */ MongoDB-v1.2.2/bson/bson-clock.c000644 000765 000024 00000007306 12651754051 016574 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifdef __APPLE__ # include # include # include # include #endif #include "bson-config.h" #include "bson-compat.h" #if defined(BSON_HAVE_CLOCK_GETTIME) # include # include #endif #include "bson-clock.h" /* *-------------------------------------------------------------------------- * * bson_gettimeofday -- * * A wrapper around gettimeofday() with fallback support for Windows. * * Returns: * 0 if successful. * * Side effects: * @tv is set. 
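 *
 * (Editorial note: the Windows branch below converts a FILETIME, which
 *  counts 100-nanosecond ticks since 1601-01-01, by dividing by 10 and
 *  subtracting DELTA_EPOCH_IN_MICROSEC, i.e. the 11,644,473,600 seconds
 *  between the two epochs expressed in microseconds.  A minimal usage
 *  sketch:
 *
 *      struct timeval tv;
 *      if (bson_gettimeofday (&tv) == 0) {
 *         int64_t usec = (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
 *         // usec is the wall-clock time since the Unix epoch
 *      }
 *  )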
* *-------------------------------------------------------------------------- */ int bson_gettimeofday (struct timeval *tv) /* OUT */ { #if defined(_WIN32) # if defined(_MSC_VER) # define DELTA_EPOCH_IN_MICROSEC 11644473600000000Ui64 # else # define DELTA_EPOCH_IN_MICROSEC 11644473600000000ULL # endif FILETIME ft; uint64_t tmp = 0; /* * The const value is shamelessy stolen from * http://www.boost.org/doc/libs/1_55_0/boost/chrono/detail/inlined/win/chrono.hpp * * File times are the number of 100 nanosecond intervals elapsed since * 12:00 am Jan 1, 1601 UTC. I haven't check the math particularly hard * * ... good luck */ if (tv) { GetSystemTimeAsFileTime (&ft); /* pull out of the filetime into a 64 bit uint */ tmp |= ft.dwHighDateTime; tmp <<= 32; tmp |= ft.dwLowDateTime; /* convert from 100's of nanosecs to microsecs */ tmp /= 10; /* adjust to unix epoch */ tmp -= DELTA_EPOCH_IN_MICROSEC; tv->tv_sec = (long)(tmp / 1000000UL); tv->tv_usec = (long)(tmp % 1000000UL); } return 0; #else return gettimeofday (tv, NULL); #endif } /* *-------------------------------------------------------------------------- * * bson_get_monotonic_time -- * * Returns the monotonic system time, if available. A best effort is * made to use the monotonic clock. However, some systems may not * support such a feature. * * Returns: * The monotonic clock in microseconds. * * Side effects: * None. * *-------------------------------------------------------------------------- */ int64_t bson_get_monotonic_time (void) { #if defined(BSON_HAVE_CLOCK_GETTIME) && defined(CLOCK_MONOTONIC) struct timespec ts; clock_gettime (CLOCK_MONOTONIC, &ts); return ((ts.tv_sec * 1000000UL) + (ts.tv_nsec / 1000UL)); #elif defined(__APPLE__) static mach_timebase_info_data_t info = { 0 }; static double ratio = 0.0; if (!info.denom) { // the value from mach_absolute_time () * info.numer / info.denom // is in nano seconds. So we have to divid by 1000.0 to get micro seconds mach_timebase_info (&info); ratio = (double)info.numer / (double)info.denom / 1000.0; } return mach_absolute_time () * ratio; #elif defined(_WIN32) /* Despite it's name, this is in milliseconds! */ int64_t ticks = GetTickCount64 (); return (ticks * 1000L); #else # warning "Monotonic clock is not yet supported on your platform." struct timeval tv; bson_gettimeofday (&tv); return (tv.tv_sec * 1000000UL) + tv.tv_usec; #endif } MongoDB-v1.2.2/bson/bson-clock.h000644 000765 000024 00000001740 12651754051 016575 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_CLOCK_H #define BSON_CLOCK_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." 
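/* Editorial note: a hedged sketch of timing a code section with the
 * monotonic clock implemented in bson-clock.c; the result is expressed
 * in microseconds on every branch (clock_gettime, mach_absolute_time,
 * GetTickCount64 or the gettimeofday fallback).  do_work() is purely
 * hypothetical.
 *
 *     int64_t start = bson_get_monotonic_time ();
 *     do_work ();
 *     int64_t elapsed_usec = bson_get_monotonic_time () - start;
 */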
#endif #include "bson-compat.h" #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS int64_t bson_get_monotonic_time (void); int bson_gettimeofday (struct timeval *tv); BSON_END_DECLS #endif /* BSON_CLOCK_H */ MongoDB-v1.2.2/bson/bson-compat.h000644 000765 000024 00000006530 12651754051 016767 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_COMPAT_H #define BSON_COMPAT_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include "bson-config.h" #include "bson-macros.h" #ifdef BSON_OS_WIN32 # if defined(_WIN32_WINNT) && (_WIN32_WINNT < 0x0600) # undef _WIN32_WINNT # endif # ifndef _WIN32_WINNT # define _WIN32_WINNT 0x0600 # endif # include # ifndef WIN32_LEAN_AND_MEAN # define WIN32_LEAN_AND_MEAN # include # undef WIN32_LEAN_AND_MEAN # else # include # endif #include #include #endif #ifdef BSON_OS_UNIX # include # include #endif #include "bson-macros.h" #include #include #include #include #include #include #include #include #include BSON_BEGIN_DECLS #ifdef _MSC_VER # include "bson-stdint-win32.h" # ifndef __cplusplus /* benign redefinition of type */ # pragma warning (disable :4142) # ifndef _SSIZE_T_DEFINED # define _SSIZE_T_DEFINED typedef SSIZE_T ssize_t; # endif typedef SIZE_T size_t; # pragma warning (default :4142) # else /* * MSVC++ does not include ssize_t, just size_t. * So we need to synthesize that as well. */ # pragma warning (disable :4142) # ifndef _SSIZE_T_DEFINED # define _SSIZE_T_DEFINED typedef SSIZE_T ssize_t; # endif # pragma warning (default :4142) # endif # define PRIi32 "d" # define PRId32 "d" # define PRIu32 "u" # define PRIi64 "I64i" # define PRId64 "I64i" # define PRIu64 "I64u" #else # include "bson-stdint.h" # include #endif #if defined(__MINGW32__) && ! 
defined(INIT_ONCE_STATIC_INIT) # define INIT_ONCE_STATIC_INIT RTL_RUN_ONCE_INIT typedef RTL_RUN_ONCE INIT_ONCE; #endif #ifdef BSON_HAVE_STDBOOL_H # include #elif !defined(__bool_true_false_are_defined) # ifndef __cplusplus typedef signed char bool; # define false 0 # define true 1 # endif # define __bool_true_false_are_defined 1 #endif #if defined(__GNUC__) # if (__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1) # define bson_sync_synchronize() __sync_synchronize() # elif defined(__i386__ ) || defined( __i486__ ) || defined( __i586__ ) || \ defined( __i686__ ) || defined( __x86_64__ ) # define bson_sync_synchronize() asm volatile("mfence":::"memory") # else # define bson_sync_synchronize() asm volatile("sync":::"memory") # endif #elif defined(_MSC_VER) # define bson_sync_synchronize() MemoryBarrier() #endif #if !defined(va_copy) && defined(_MSC_VER) # define va_copy(dst,src) ((dst) = (src)) #endif #if !defined(va_copy) && defined(__GNUC__) && __GNUC__ < 3 # define va_copy(dst,src) __va_copy(dst, src) #endif BSON_END_DECLS #endif /* BSON_COMPAT_H */ MongoDB-v1.2.2/bson/bson-config.h.in000644 000765 000024 00000003640 12651754051 017355 0ustar00davidstaff000000 000000 #ifndef BSON_CONFIG_H #define BSON_CONFIG_H /* * Define to 1234 for Little Endian, 4321 for Big Endian. */ #define BSON_BYTE_ORDER @BSON_BYTE_ORDER@ /* * Define to 1 if you have stdbool.h */ #define BSON_HAVE_STDBOOL_H @BSON_HAVE_STDBOOL_H@ #if BSON_HAVE_STDBOOL_H != 1 # undef BSON_HAVE_STDBOOL_H #endif /* * Define to 1 for POSIX-like systems, 2 for Windows. */ #define BSON_OS @BSON_OS@ /* * Define to 1 if we have access to GCC 32-bit atomic builtins. * While this requires GCC 4.1+ in most cases, it is also architecture * dependent. For example, some PPC or ARM systems may not have it even * if it is a recent GCC version. */ #define BSON_HAVE_ATOMIC_32_ADD_AND_FETCH @BSON_HAVE_ATOMIC_32_ADD_AND_FETCH@ #if BSON_HAVE_ATOMIC_32_ADD_AND_FETCH != 1 # undef BSON_HAVE_ATOMIC_32_ADD_AND_FETCH #endif /* * Similarly, define to 1 if we have access to GCC 64-bit atomic builtins. */ #define BSON_HAVE_ATOMIC_64_ADD_AND_FETCH @BSON_HAVE_ATOMIC_64_ADD_AND_FETCH@ #if BSON_HAVE_ATOMIC_64_ADD_AND_FETCH != 1 # undef BSON_HAVE_ATOMIC_64_ADD_AND_FETCH #endif /* * Define to 1 if your system requires {} around PTHREAD_ONCE_INIT. * This is typically just Solaris 8-10. */ #define BSON_PTHREAD_ONCE_INIT_NEEDS_BRACES @BSON_PTHREAD_ONCE_INIT_NEEDS_BRACES@ #if BSON_PTHREAD_ONCE_INIT_NEEDS_BRACES != 1 # undef BSON_PTHREAD_ONCE_INIT_NEEDS_BRACES #endif /* * Define to 1 if you have clock_gettime() available. */ #define BSON_HAVE_CLOCK_GETTIME @BSON_HAVE_CLOCK_GETTIME@ #if BSON_HAVE_CLOCK_GETTIME != 1 # undef BSON_HAVE_CLOCK_GETTIME #endif /* * Define to 1 if you have strnlen available on your platform. */ #define BSON_HAVE_STRNLEN @BSON_HAVE_STRNLEN@ #if BSON_HAVE_STRNLEN != 1 # undef BSON_HAVE_STRNLEN #endif /* * Define to 1 if you have snprintf available on your platform. */ #define BSON_HAVE_SNPRINTF @BSON_HAVE_SNPRINTF@ #if BSON_HAVE_SNPRINTF != 1 # undef BSON_HAVE_SNPRINTF #endif #endif /* BSON_CONFIG_H */ MongoDB-v1.2.2/bson/bson-context-private.h000644 000765 000024 00000002620 12651754051 020634 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_CONTEXT_PRIVATE_H #define BSON_CONTEXT_PRIVATE_H #include "bson-context.h" #include "bson-thread-private.h" BSON_BEGIN_DECLS struct _bson_context_t { bson_context_flags_t flags : 7; bool pidbe_once : 1; uint8_t pidbe[2]; uint8_t md5[3]; int32_t seq32; int64_t seq64; void (*oid_get_host) (bson_context_t *context, bson_oid_t *oid); void (*oid_get_pid) (bson_context_t *context, bson_oid_t *oid); void (*oid_get_seq32) (bson_context_t *context, bson_oid_t *oid); void (*oid_get_seq64) (bson_context_t *context, bson_oid_t *oid); }; BSON_END_DECLS #endif /* BSON_CONTEXT_PRIVATE_H */ MongoDB-v1.2.2/bson/bson-context.c000644 000765 000024 00000030011 12651754051 017152 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-compat.h" #include #include #include #include #include #if defined(__linux__) #include #endif #include "bson-atomic.h" #include "bson-clock.h" #include "bson-context.h" #include "bson-context-private.h" #include "bson-md5.h" #include "bson-memory.h" #include "bson-thread-private.h" #ifndef HOST_NAME_MAX #define HOST_NAME_MAX 256 #endif /* * Globals. */ static bson_context_t gContextDefault; #if defined(__linux__) static uint16_t gettid (void) { return syscall (SYS_gettid); } #endif /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_host -- * * Retrieves the first three bytes of MD5(hostname) and assigns them * to the host portion of oid. * * Returns: * None. * * Side effects: * @oid is modified. * *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_host (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { uint8_t *bytes = (uint8_t *)oid; uint8_t digest[16]; bson_md5_t md5; char hostname[HOST_NAME_MAX]; BSON_ASSERT (context); BSON_ASSERT (oid); gethostname (hostname, sizeof hostname); hostname[HOST_NAME_MAX - 1] = '\0'; bson_md5_init (&md5); bson_md5_append (&md5, (const uint8_t *)hostname, (uint32_t)strlen (hostname)); bson_md5_finish (&md5, &digest[0]); bytes[4] = digest[0]; bytes[5] = digest[1]; bytes[6] = digest[2]; } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_host_cached -- * * Fetch the cached copy of the MD5(hostname). * * Returns: * None. * * Side effects: * @oid is modified. 
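 *
 * (Editorial note on the 12-byte OID layout assumed throughout this
 *  file: bytes 0-3 carry the creation timestamp and are filled in by
 *  the OID init code elsewhere; bytes 4-6 hold the first three bytes
 *  of MD5(hostname), cached in context->md5[] by _bson_context_init();
 *  bytes 7-8 hold the big-endian pid; bytes 9-11 hold the 3-byte
 *  counter produced by the seq32 generators below.  Illustrative
 *  breakdown, all values hypothetical:
 *
 *      56 96 e3 a1 | 9f 2b 4c | 1a 2b | 7f 03 e4
 *      timestamp     host       pid     counter
 *  )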
* *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_host_cached (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { BSON_ASSERT (context); BSON_ASSERT (oid); oid->bytes[4] = context->md5[0]; oid->bytes[5] = context->md5[1]; oid->bytes[6] = context->md5[2]; } static BSON_INLINE uint16_t _bson_getpid (void) { uint16_t pid; #ifdef BSON_OS_WIN32 DWORD real_pid; real_pid = GetCurrentProcessId (); pid = (real_pid & 0xFFFF) ^ ((real_pid >> 16) & 0xFFFF); #else pid = getpid (); #endif return pid; } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_pid -- * * Initialize the pid field of @oid. * * The pid field is 2 bytes, big-endian for memcmp(). * * Returns: * None. * * Side effects: * @oid is modified. * *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_pid (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { uint16_t pid = _bson_getpid (); uint8_t *bytes = (uint8_t *)&pid; BSON_ASSERT (context); BSON_ASSERT (oid); pid = BSON_UINT16_TO_BE (pid); oid->bytes[7] = bytes[0]; oid->bytes[8] = bytes[1]; } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_pid_cached -- * * Fetch the cached copy of the current pid. * This helps avoid multiple calls to getpid() which is slower * on some systems. * * Returns: * None. * * Side effects: * @oid is modified. * *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_pid_cached (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { oid->bytes[7] = context->pidbe[0]; oid->bytes[8] = context->pidbe[1]; } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_seq32 -- * * 32-bit sequence generator, non-thread-safe version. * * Returns: * None. * * Side effects: * @oid is modified. * *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_seq32 (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { uint32_t seq = context->seq32++; seq = BSON_UINT32_TO_BE (seq); memcpy (&oid->bytes[9], ((uint8_t *)&seq) + 1, 3); } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_seq32_threadsafe -- * * Thread-safe version of 32-bit sequence generator. * * Returns: * None. * * Side effects: * @oid is modified. * *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_seq32_threadsafe (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { int32_t seq = bson_atomic_int_add (&context->seq32, 1); seq = BSON_UINT32_TO_BE (seq); memcpy (&oid->bytes[9], ((uint8_t *)&seq) + 1, 3); } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_seq64 -- * * 64-bit oid sequence generator, non-thread-safe version. * * Returns: * None. * * Side effects: * @oid is modified. 
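 *
 * (Editorial note: the 32-bit generators above keep only the three
 *  low-order bytes of the big-endian counter, e.g. seq32 0x00ABCDEF is
 *  stored in oid->bytes[9..11] as AB CD EF, while the 64-bit variants
 *  below copy the full 8-byte big-endian counter into oid->bytes[4..11],
 *  which spans the host, pid and counter fields.)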
* *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_seq64 (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { uint64_t seq; BSON_ASSERT (context); BSON_ASSERT (oid); seq = BSON_UINT64_TO_BE (context->seq64++); memcpy (&oid->bytes[4], &seq, sizeof (seq)); } /* *-------------------------------------------------------------------------- * * _bson_context_get_oid_seq64_threadsafe -- * * Thread-safe 64-bit sequence generator. * * Returns: * None. * * Side effects: * @oid is modified. * *-------------------------------------------------------------------------- */ static void _bson_context_get_oid_seq64_threadsafe (bson_context_t *context, /* IN */ bson_oid_t *oid) /* OUT */ { int64_t seq = bson_atomic_int64_add (&context->seq64, 1); seq = BSON_UINT64_TO_BE (seq); memcpy (&oid->bytes[4], &seq, sizeof (seq)); } static void _bson_context_init (bson_context_t *context, /* IN */ bson_context_flags_t flags) /* IN */ { struct timeval tv; uint16_t pid; unsigned int seed[3]; unsigned int real_seed; bson_oid_t oid; context->flags = flags; context->oid_get_host = _bson_context_get_oid_host_cached; context->oid_get_pid = _bson_context_get_oid_pid_cached; context->oid_get_seq32 = _bson_context_get_oid_seq32; context->oid_get_seq64 = _bson_context_get_oid_seq64; /* * Generate a seed for our the random starting position of our increment * bytes. We mask off the last nibble so that the last digit of the OID will * start at zero. Just to be nice. * * The seed itself is made up of the current time in seconds, milliseconds, * and pid xored together. I welcome better solutions if at all necessary. */ bson_gettimeofday (&tv); seed[0] = (unsigned int)tv.tv_sec; seed[1] = (unsigned int)tv.tv_usec; seed[2] = _bson_getpid (); real_seed = seed[0] ^ seed[1] ^ seed[2]; #ifdef BSON_OS_WIN32 /* ms's runtime is multithreaded by default, so no rand_r */ srand(real_seed); context->seq32 = rand() & 0x007FFFF0; #else context->seq32 = rand_r (&real_seed) & 0x007FFFF0; #endif if ((flags & BSON_CONTEXT_DISABLE_HOST_CACHE)) { context->oid_get_host = _bson_context_get_oid_host; } else { _bson_context_get_oid_host (context, &oid); context->md5[0] = oid.bytes[4]; context->md5[1] = oid.bytes[5]; context->md5[2] = oid.bytes[6]; } if ((flags & BSON_CONTEXT_THREAD_SAFE)) { context->oid_get_seq32 = _bson_context_get_oid_seq32_threadsafe; context->oid_get_seq64 = _bson_context_get_oid_seq64_threadsafe; } if ((flags & BSON_CONTEXT_DISABLE_PID_CACHE)) { context->oid_get_pid = _bson_context_get_oid_pid; } else { pid = BSON_UINT16_TO_BE (_bson_getpid()); #if defined(__linux__) if ((flags & BSON_CONTEXT_USE_TASK_ID)) { int32_t tid; if ((tid = gettid ())) { pid = BSON_UINT16_TO_BE (tid); } } #endif memcpy (&context->pidbe[0], &pid, 2); } } /* *-------------------------------------------------------------------------- * * bson_context_new -- * * Initializes a new context with the flags specified. * * In most cases, you want to call this with @flags set to * BSON_CONTEXT_NONE. * * If you are running on Linux, %BSON_CONTEXT_USE_TASK_ID can result * in a healthy speedup for multi-threaded scenarios. * * If you absolutely must have a single context for your application * and use more than one thread, then %BSON_CONTEXT_THREAD_SAFE should * be bitwise-or'd with your flags. This requires synchronization * between threads. 
* * If you expect your hostname to change often, you may consider * specifying %BSON_CONTEXT_DISABLE_HOST_CACHE so that gethostname() * is called for every OID generated. This is much slower. * * If you expect your pid to change without notice, such as from an * unexpected call to fork(), then specify * %BSON_CONTEXT_DISABLE_PID_CACHE. * * Returns: * A newly allocated bson_context_t that should be freed with * bson_context_destroy(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_context_t * bson_context_new (bson_context_flags_t flags) { bson_context_t *context; context = bson_malloc0 (sizeof *context); _bson_context_init (context, flags); return context; } /* *-------------------------------------------------------------------------- * * bson_context_destroy -- * * Cleans up a bson_context_t and releases any associated resources. * This should be called when you are done using @context. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_context_destroy (bson_context_t *context) /* IN */ { if (context != &gContextDefault) { memset (context, 0, sizeof *context); bson_free (context); } } static BSON_ONCE_FUN(_bson_context_init_default) { _bson_context_init (&gContextDefault, (BSON_CONTEXT_THREAD_SAFE | BSON_CONTEXT_DISABLE_PID_CACHE)); BSON_ONCE_RETURN; } /* *-------------------------------------------------------------------------- * * bson_context_get_default -- * * Fetches the default, thread-safe implementation of #bson_context_t. * If you need faster generation, it is recommended you create your * own #bson_context_t with bson_context_new(). * * Returns: * A shared instance to the default #bson_context_t. This should not * be modified or freed. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_context_t * bson_context_get_default (void) { static bson_once_t once = BSON_ONCE_INIT; bson_once (&once, _bson_context_init_default); return &gContextDefault; } MongoDB-v1.2.2/bson/bson-context.h000644 000765 000024 00000002073 12651754051 017166 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_CONTEXT_H #define BSON_CONTEXT_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS bson_context_t *bson_context_new (bson_context_flags_t flags); void bson_context_destroy (bson_context_t *context); bson_context_t *bson_context_get_default (void) BSON_GNUC_CONST; BSON_END_DECLS #endif /* BSON_CONTEXT_H */ MongoDB-v1.2.2/bson/bson-endian.h000644 000765 000024 00000014676 12651754051 016754 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_ENDIAN_H #define BSON_ENDIAN_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #if defined(__sun) # include #endif #include "bson-config.h" #include "bson-macros.h" #include "bson-compat.h" #include "bson-types.h" BSON_BEGIN_DECLS #define BSON_BIG_ENDIAN 4321 #define BSON_LITTLE_ENDIAN 1234 #if defined(__sun) # define BSON_UINT16_SWAP_LE_BE(v) BSWAP_16((uint16_t)v) # define BSON_UINT32_SWAP_LE_BE(v) BSWAP_32((uint32_t)v) # define BSON_UINT64_SWAP_LE_BE(v) BSWAP_64((uint64_t)v) #elif defined(__clang__) && defined(__clang_major__) && defined(__clang_minor__) && \ (__clang_major__ >= 3) && (__clang_minor__ >= 1) # if __has_builtin(__builtin_bswap16) # define BSON_UINT16_SWAP_LE_BE(v) __builtin_bswap16(v) # endif # if __has_builtin(__builtin_bswap32) # define BSON_UINT32_SWAP_LE_BE(v) __builtin_bswap32(v) # endif # if __has_builtin(__builtin_bswap64) # define BSON_UINT64_SWAP_LE_BE(v) __builtin_bswap64(v) # endif #elif defined(__GNUC__) && (__GNUC__ >= 4) # if __GNUC__ >= 4 && defined (__GNUC_MINOR__) && __GNUC_MINOR__ >= 3 # define BSON_UINT32_SWAP_LE_BE(v) __builtin_bswap32 ((uint32_t)v) # define BSON_UINT64_SWAP_LE_BE(v) __builtin_bswap64 ((uint64_t)v) # endif # if __GNUC__ >= 4 && defined (__GNUC_MINOR__) && __GNUC_MINOR__ >= 8 # define BSON_UINT16_SWAP_LE_BE(v) __builtin_bswap16 ((uint32_t)v) # endif #endif #ifndef BSON_UINT16_SWAP_LE_BE # define BSON_UINT16_SWAP_LE_BE(v) __bson_uint16_swap_slow ((uint16_t)v) #endif #ifndef BSON_UINT32_SWAP_LE_BE # define BSON_UINT32_SWAP_LE_BE(v) __bson_uint32_swap_slow ((uint32_t)v) #endif #ifndef BSON_UINT64_SWAP_LE_BE # define BSON_UINT64_SWAP_LE_BE(v) __bson_uint64_swap_slow ((uint64_t)v) #endif #if BSON_BYTE_ORDER == BSON_LITTLE_ENDIAN # define BSON_UINT16_FROM_LE(v) ((uint16_t)v) # define BSON_UINT16_TO_LE(v) ((uint16_t)v) # define BSON_UINT16_FROM_BE(v) BSON_UINT16_SWAP_LE_BE (v) # define BSON_UINT16_TO_BE(v) BSON_UINT16_SWAP_LE_BE (v) # define BSON_UINT32_FROM_LE(v) ((uint32_t)v) # define BSON_UINT32_TO_LE(v) ((uint32_t)v) # define BSON_UINT32_FROM_BE(v) BSON_UINT32_SWAP_LE_BE (v) # define BSON_UINT32_TO_BE(v) BSON_UINT32_SWAP_LE_BE (v) # define BSON_UINT64_FROM_LE(v) ((uint64_t)v) # define BSON_UINT64_TO_LE(v) ((uint64_t)v) # define BSON_UINT64_FROM_BE(v) BSON_UINT64_SWAP_LE_BE (v) # define BSON_UINT64_TO_BE(v) BSON_UINT64_SWAP_LE_BE (v) # define BSON_DOUBLE_FROM_LE(v) ((double)v) # define BSON_DOUBLE_TO_LE(v) ((double)v) #elif BSON_BYTE_ORDER == BSON_BIG_ENDIAN # define BSON_UINT16_FROM_LE(v) BSON_UINT16_SWAP_LE_BE (v) # define BSON_UINT16_TO_LE(v) BSON_UINT16_SWAP_LE_BE (v) # define BSON_UINT16_FROM_BE(v) ((uint16_t)v) # define BSON_UINT16_TO_BE(v) ((uint16_t)v) # define BSON_UINT32_FROM_LE(v) BSON_UINT32_SWAP_LE_BE (v) # define BSON_UINT32_TO_LE(v) BSON_UINT32_SWAP_LE_BE (v) # define BSON_UINT32_FROM_BE(v) ((uint32_t)v) # define BSON_UINT32_TO_BE(v) ((uint32_t)v) # define BSON_UINT64_FROM_LE(v) BSON_UINT64_SWAP_LE_BE (v) # define BSON_UINT64_TO_LE(v) BSON_UINT64_SWAP_LE_BE (v) # define BSON_UINT64_FROM_BE(v) ((uint64_t)v) # define BSON_UINT64_TO_BE(v) ((uint64_t)v) # 
define BSON_DOUBLE_FROM_LE(v) (__bson_double_swap_slow (v)) # define BSON_DOUBLE_TO_LE(v) (__bson_double_swap_slow (v)) #else # error "The endianness of target architecture is unknown." #endif /* *-------------------------------------------------------------------------- * * __bson_uint16_swap_slow -- * * Fallback endianness conversion for 16-bit integers. * * Returns: * The endian swapped version. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static BSON_INLINE uint16_t __bson_uint16_swap_slow (uint16_t v) /* IN */ { return ((v & 0x00FF) << 8) | ((v & 0xFF00) >> 8); } /* *-------------------------------------------------------------------------- * * __bson_uint32_swap_slow -- * * Fallback endianness conversion for 32-bit integers. * * Returns: * The endian swapped version. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static BSON_INLINE uint32_t __bson_uint32_swap_slow (uint32_t v) /* IN */ { return ((v & 0x000000FFU) << 24) | ((v & 0x0000FF00U) << 8) | ((v & 0x00FF0000U) >> 8) | ((v & 0xFF000000U) >> 24); } /* *-------------------------------------------------------------------------- * * __bson_uint64_swap_slow -- * * Fallback endianness conversion for 64-bit integers. * * Returns: * The endian swapped version. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static BSON_INLINE uint64_t __bson_uint64_swap_slow (uint64_t v) /* IN */ { return ((v & 0x00000000000000FFULL) << 56) | ((v & 0x000000000000FF00ULL) << 40) | ((v & 0x0000000000FF0000ULL) << 24) | ((v & 0x00000000FF000000ULL) << 8) | ((v & 0x000000FF00000000ULL) >> 8) | ((v & 0x0000FF0000000000ULL) >> 24) | ((v & 0x00FF000000000000ULL) >> 40) | ((v & 0xFF00000000000000ULL) >> 56); } /* *-------------------------------------------------------------------------- * * __bson_double_swap_slow -- * * Fallback endianness conversion for double floating point. * * Returns: * The endian swapped version. * * Side effects: * None. * *-------------------------------------------------------------------------- */ BSON_STATIC_ASSERT(sizeof(double) == sizeof(uint64_t)); static BSON_INLINE double __bson_double_swap_slow (double v) /* IN */ { uint64_t uv; memcpy(&uv, &v, sizeof(v)); uv = BSON_UINT64_SWAP_LE_BE(uv); memcpy(&v, &uv, sizeof(v)); return v; } BSON_END_DECLS #endif /* BSON_ENDIAN_H */ MongoDB-v1.2.2/bson/bson-error.c000644 000765 000024 00000006203 12651754051 016625 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include #include "bson-compat.h" #include "bson-config.h" #include "bson-error.h" #include "bson-memory.h" #include "bson-string.h" #include "bson-types.h" /* *-------------------------------------------------------------------------- * * bson_set_error -- * * Initializes @error using the parameters specified. 
* * @domain is an application specific error domain which should * describe which module initiated the error. Think of this as the * exception type. * * @code is the @domain specific error code. * * @format is used to generate the format string. It uses vsnprintf() * internally so the format should match what you would use there. * * Parameters: * @error: A #bson_error_t. * @domain: The error domain. * @code: The error code. * @format: A printf style format string. * * Returns: * None. * * Side effects: * @error is initialized. * *-------------------------------------------------------------------------- */ void bson_set_error (bson_error_t *error, /* OUT */ uint32_t domain, /* IN */ uint32_t code, /* IN */ const char *format, /* IN */ ...) /* IN */ { va_list args; if (error) { error->domain = domain; error->code = code; va_start (args, format); bson_vsnprintf (error->message, sizeof error->message, format, args); va_end (args); error->message[sizeof error->message - 1] = '\0'; } } /* *-------------------------------------------------------------------------- * * bson_strerror_r -- * * This is a reentrant safe macro for strerror. * * The resulting string may be stored in @buf. * * Returns: * A pointer to a static string or @buf. * * Side effects: * None. * *-------------------------------------------------------------------------- */ char * bson_strerror_r (int err_code, /* IN */ char *buf, /* IN */ size_t buflen) /* IN */ { static const char *unknown_msg = "Unknown error"; char *ret = NULL; #if defined(_WIN32) bson_strncpy (buf, strerror( err_code ), buflen); ret = buf; #elif defined(__GNUC__) && defined(_GNU_SOURCE) ret = strerror_r (err_code, buf, buflen); #else /* XSI strerror_r */ if (strerror_r (err_code, buf, buflen) == 0) { ret = buf; } #endif if (!ret) { bson_strncpy (buf, unknown_msg, buflen); ret = buf; } return ret; } MongoDB-v1.2.2/bson/bson-error.h000644 000765 000024 00000002334 12651754051 016633 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_ERROR_H #define BSON_ERROR_H #include "bson-compat.h" #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS #define BSON_ERROR_JSON 1 #define BSON_ERROR_READER 2 #define BSON_ERROR_BUFFER_SIZE 64 void bson_set_error (bson_error_t *error, uint32_t domain, uint32_t code, const char *format, ...) BSON_GNUC_PRINTF (4, 5); char *bson_strerror_r (int err_code, char *buf, size_t buflen); BSON_END_DECLS #endif /* BSON_ERROR_H */ MongoDB-v1.2.2/bson/bson-iso8601-private.h000644 000765 000024 00000001636 12651754051 020267 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
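/*
 * Usage sketch for bson_set_error() and bson_strerror_r(), assuming the
 * declarations in bson-error.h above and the bson_error_t layout from
 * bson-types.h (not shown here).  The error code value 1 is arbitrary,
 * chosen only for illustration.
 */
#include <errno.h>
#include <stdio.h>
#include "bson-error.h"

static bool
load_config (const char *path, bson_error_t *error)
{
   if (!path) {
      bson_set_error (error, BSON_ERROR_READER, 1,
                      "no path given (errno would be %d)", EINVAL);
      return false;
   }
   return true;
}

int
main (void)
{
   bson_error_t error;
   char buf[128];

   if (!load_config (NULL, &error)) {
      fprintf (stderr, "domain=%u code=%u: %s\n",
               error.domain, error.code, error.message);
   }
   /* reentrant strerror wrapper */
   fprintf (stderr, "EINVAL: %s\n", bson_strerror_r (EINVAL, buf, sizeof buf));
   return 0;
}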
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_ISO8601_PRIVATE_H #define BSON_ISO8601_PRIVATE_H #include "bson-compat.h" #include "bson-macros.h" BSON_BEGIN_DECLS bool _bson_iso8601_date_parse (const char *str, int32_t len, int64_t *out); BSON_END_DECLS #endif /* BSON_ISO8601_PRIVATE_H */ MongoDB-v1.2.2/bson/bson-iso8601.c000644 000765 000024 00000017647 12651754051 016623 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-compat.h" #include "bson-macros.h" #include "bson-error.h" #include "bson-iso8601-private.h" #ifndef _WIN32 # include "bson-timegm-private.h" #endif static bool get_tok (const char *terminals, const char **ptr, int32_t *remaining, const char **out, int32_t *out_len) { const char *terminal; bool found_terminal = false; if (!*remaining) { *out = ""; *out_len = 0; } *out = *ptr; *out_len = -1; for (; *remaining && !found_terminal; (*ptr)++, (*remaining)--, (*out_len)++) { for (terminal = terminals; *terminal; terminal++) { if (**ptr == *terminal) { found_terminal = true; break; } } } if (!found_terminal) { (*out_len)++; } return found_terminal; } static bool digits_only (const char *str, int32_t len) { int i; for (i = 0; i < len; i++) { if (!isdigit(str[i])) { return false; } } return true; } static bool parse_num (const char *str, int32_t len, int32_t digits, int32_t min, int32_t max, int32_t *out) { int i; int magnitude = 1; int32_t value = 0; if ((digits >= 0 && len != digits) || !digits_only (str, len)) { return false; } for (i = 1; i <= len; i++, magnitude *= 10) { value += (str[len - i] - '0') * magnitude; } if (value < min || value > max) { return false; } *out = value; return true; } bool _bson_iso8601_date_parse (const char *str, int32_t len, int64_t *out) { const char *ptr; int32_t remaining = len; const char *year_ptr; const char *month_ptr; const char *day_ptr; const char *hour_ptr; const char *min_ptr; const char *sec_ptr; const char *millis_ptr; const char *tz_ptr; int32_t year_len = 0; int32_t month_len = 0; int32_t day_len = 0; int32_t hour_len = 0; int32_t min_len = 0; int32_t sec_len = 0; int32_t millis_len = 0; int32_t tz_len = 0; int32_t year; int32_t month; int32_t day; int32_t hour; int32_t min; int32_t sec = 0; int64_t millis = 0; int32_t tz_adjustment = 0; #ifdef BSON_OS_WIN32 SYSTEMTIME win_sys_time; FILETIME win_file_time; int64_t win_time_offset; int64_t win_epoch_difference; #else struct tm posix_date = { 0 }; #endif ptr = str; /* we have to match at least yyyy-mm-ddThh:mm[:+-Z] */ if (!(get_tok ("-", &ptr, &remaining, &year_ptr, &year_len) && get_tok ("-", &ptr, &remaining, &month_ptr, &month_len) && get_tok 
("T", &ptr, &remaining, &day_ptr, &day_len) && get_tok (":", &ptr, &remaining, &hour_ptr, &hour_len) && get_tok (":+-Z", &ptr, &remaining, &min_ptr, &min_len))) { return false; } /* if the minute has a ':' at the end look for seconds */ if (min_ptr[min_len] == ':') { if (remaining < 2) { return false; } get_tok (".+-Z", &ptr, &remaining, &sec_ptr, &sec_len); if (!sec_len) { return false; } } /* if we had a second and it is followed by a '.' look for milliseconds */ if (sec_len && sec_ptr[sec_len] == '.') { if (remaining < 2) { return false; } get_tok ("+-Z", &ptr, &remaining, &millis_ptr, &millis_len); if (!millis_len) { return false; } } /* backtrack by 1 to put ptr on the timezone */ ptr--; remaining++; get_tok ("", &ptr, &remaining, &tz_ptr, &tz_len); /* we want to include the last few hours in 1969 for timezones translate * across 1970 GMT. We'll check in timegm later on to make sure we're post * 1970 */ if (!parse_num (year_ptr, year_len, 4, 1969, 9999, &year)) { return false; } /* values are as in struct tm */ year -= 1900; if (!parse_num (month_ptr, month_len, 2, 1, 12, &month)) { return false; } /* values are as in struct tm */ month -= 1; if (!parse_num (day_ptr, day_len, 2, 1, 31, &day)) { return false; } if (!parse_num (hour_ptr, hour_len, 2, 0, 23, &hour)) { return false; } if (!parse_num (min_ptr, min_len, 2, 0, 59, &min)) { return false; } if (sec_len && !parse_num (sec_ptr, sec_len, 2, 0, 60, &sec)) { return false; } if (tz_len > 0) { if (tz_ptr[0] == 'Z' && tz_len == 1) { /* valid */ } else if (tz_ptr[0] == '+' || tz_ptr[0] == '-') { int32_t tz_hour; int32_t tz_min; if (tz_len != 5 || !digits_only (tz_ptr + 1, 4)) { return false; } if (!parse_num (tz_ptr + 1, 2, -1, -23, 23, &tz_hour)) { return false; } if (!parse_num (tz_ptr + 3, 2, -1, 0, 59, &tz_min)) { return false; } /* we inflect the meaning of a 'positive' timezone. Those are hours * we have to substract, and vice versa */ tz_adjustment = (tz_ptr[0] == '-' ? 
1 : -1) * ((tz_min * 60) + (tz_hour * 60 * 60)); if (!(tz_adjustment > -86400 && tz_adjustment < 86400)) { return false; } } else { return false; } } if (millis_len > 0) { int i; int magnitude; millis = 0; if (millis_len > 3 || !digits_only (millis_ptr, millis_len)) { return false; } for (i = 1, magnitude = 1; i <= millis_len; i++, magnitude *= 10) { millis += (millis_ptr[millis_len - i] - '0') * magnitude; } if (millis_len == 1) { millis *= 100; } else if (millis_len == 2) { millis *= 10; } if (millis < 0 || millis > 1000) { return false; } } #ifdef BSON_OS_WIN32 win_sys_time.wMilliseconds = millis; win_sys_time.wSecond = sec; win_sys_time.wMinute = min; win_sys_time.wHour = hour; win_sys_time.wDay = day; win_sys_time.wDayOfWeek = -1; /* ignored */ win_sys_time.wMonth = month + 1; win_sys_time.wYear = year + 1900; /* the wDayOfWeek member of SYSTEMTIME is ignored by this function */ if (SystemTimeToFileTime (&win_sys_time, &win_file_time) == 0) { return 0; } /* The Windows FILETIME structure contains two parts of a 64-bit value representing the * number of 100-nanosecond intervals since January 1, 1601 */ win_time_offset = (((uint64_t)win_file_time.dwHighDateTime) << 32) | win_file_time.dwLowDateTime; /* There are 11644473600 seconds between the unix epoch and the windows epoch * 100-nanoseconds = milliseconds * 10000 */ win_epoch_difference = 11644473600000 * 10000; /* removes the diff between 1970 and 1601 */ win_time_offset -= win_epoch_difference; /* 1 milliseconds = 1000000 nanoseconds = 10000 100-nanosecond intervals */ millis = win_time_offset / 10000; #else posix_date.tm_sec = sec; posix_date.tm_min = min; posix_date.tm_hour = hour; posix_date.tm_mday = day; posix_date.tm_mon = month; posix_date.tm_year = year; posix_date.tm_wday = 0; posix_date.tm_yday = 0; millis = (1000 * ((uint64_t)_bson_timegm (&posix_date))) + millis; #endif millis += tz_adjustment * 1000; if (millis < 0) { return false; } *out = millis; return true; } MongoDB-v1.2.2/bson/bson-iter.c000644 000765 000024 00000150614 12651754051 016445 0ustar00davidstaff000000 000000 /* * Copyright 2013-2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-iter.h" #define ITER_TYPE(i) ((bson_type_t) *((i)->raw + (i)->type)) /* *-------------------------------------------------------------------------- * * bson_iter_init -- * * Initializes @iter to be used to iterate @bson. * * Returns: * true if bson_iter_t was initialized. otherwise false. * * Side effects: * @iter is initialized. 
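/*
 * Sketch of how the private helper above is driven.  It accepts strings of
 * the form yyyy-mm-ddThh:mm[:ss[.fff]][Z|+hhmm|-hhmm], writes milliseconds
 * since the UNIX epoch, and rejects pre-1970 instants.  The sample string
 * is arbitrary.
 */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include "bson-iso8601-private.h"

int
main (void)
{
   const char *str = "2016-01-26T15:33:30.123Z";
   int64_t millis;

   if (_bson_iso8601_date_parse (str, (int32_t) strlen (str), &millis)) {
      printf ("%s -> %" PRId64 " ms since epoch\n", str, millis);
   } else {
      printf ("%s did not parse\n", str);
   }
   return 0;
}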
* *-------------------------------------------------------------------------- */ bool bson_iter_init (bson_iter_t *iter, /* OUT */ const bson_t *bson) /* IN */ { bson_return_val_if_fail (iter, false); bson_return_val_if_fail (bson, false); if (BSON_UNLIKELY (bson->len < 5)) { memset (iter, 0, sizeof *iter); return false; } iter->raw = bson_get_data (bson); iter->len = bson->len; iter->off = 0; iter->type = 0; iter->key = 0; iter->d1 = 0; iter->d2 = 0; iter->d3 = 0; iter->d4 = 0; iter->next_off = 4; iter->err_off = 0; return true; } /* *-------------------------------------------------------------------------- * * bson_iter_recurse -- * * Creates a new sub-iter looking at the document or array that @iter * is currently pointing at. * * Returns: * true if successful and @child was initialized. * * Side effects: * @child is initialized. * *-------------------------------------------------------------------------- */ bool bson_iter_recurse (const bson_iter_t *iter, /* IN */ bson_iter_t *child) /* OUT */ { const uint8_t *data = NULL; uint32_t len = 0; bson_return_val_if_fail (iter, false); bson_return_val_if_fail (child, false); if (ITER_TYPE (iter) == BSON_TYPE_DOCUMENT) { bson_iter_document (iter, &len, &data); } else if (ITER_TYPE (iter) == BSON_TYPE_ARRAY) { bson_iter_array (iter, &len, &data); } else { return false; } child->raw = data; child->len = len; child->off = 0; child->type = 0; child->key = 0; child->d1 = 0; child->d2 = 0; child->d3 = 0; child->d4 = 0; child->next_off = 4; child->err_off = 0; return true; } /* *-------------------------------------------------------------------------- * * bson_iter_init_find -- * * Initializes a #bson_iter_t and moves the iter to the first field * matching @key. * * Returns: * true if the field named @key was found; otherwise false. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_init_find (bson_iter_t *iter, /* INOUT */ const bson_t *bson, /* IN */ const char *key) /* IN */ { bson_return_val_if_fail (iter, false); bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); return bson_iter_init (iter, bson) && bson_iter_find (iter, key); } /* *-------------------------------------------------------------------------- * * bson_iter_init_find_case -- * * A case-insensitive version of bson_iter_init_find(). * * Returns: * true if the field was found and @iter is observing that field. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_init_find_case (bson_iter_t *iter, /* INOUT */ const bson_t *bson, /* IN */ const char *key) /* IN */ { bson_return_val_if_fail (iter, false); bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); return bson_iter_init (iter, bson) && bson_iter_find_case (iter, key); } /* *-------------------------------------------------------------------------- * * _bson_iter_find_with_len -- * * Internal helper for finding an exact key. * * Returns: * true if the field @key was found. * * Side effects: * None. 
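/*
 * Iteration sketch: walk every element of a document with bson_iter_init()
 * and bson_iter_next().  The construction helpers (bson_new, bson_append_*,
 * bson_destroy) live elsewhere in the bundled library and are assumed here.
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   bson_append_utf8 (doc, "name", -1, "mongodb", -1);
   bson_append_int32 (doc, "port", -1, 27017);

   if (bson_iter_init (&iter, doc)) {
      while (bson_iter_next (&iter)) {
         printf ("key=%s type=0x%02x\n",
                 bson_iter_key (&iter), (unsigned) bson_iter_type (&iter));
      }
   }
   bson_destroy (doc);
   return 0;
}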
* *-------------------------------------------------------------------------- */ static bool _bson_iter_find_with_len (bson_iter_t *iter, /* INOUT */ const char *key, /* IN */ int keylen) /* IN */ { const char *ikey; bson_return_val_if_fail (iter, false); bson_return_val_if_fail (key, false); if (keylen == 0) { return false; } if (keylen < 0) { keylen = (int)strlen (key); } while (bson_iter_next (iter)) { ikey = bson_iter_key (iter); if ((0 == strncmp (key, ikey, keylen)) && (ikey [keylen] == '\0')) { return true; } } return false; } /* *-------------------------------------------------------------------------- * * bson_iter_find -- * * Searches through @iter starting from the current position for a key * matching @key. This is a case-sensitive search meaning "KEY" and * "key" would NOT match. * * Returns: * true if @key is found. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_find (bson_iter_t *iter, /* INOUT */ const char *key) /* IN */ { bson_return_val_if_fail (iter, false); bson_return_val_if_fail (key, false); return _bson_iter_find_with_len (iter, key, -1); } /* *-------------------------------------------------------------------------- * * bson_iter_find_case -- * * Searches through @iter starting from the current position for a key * matching @key. This is a case-insensitive search meaning "KEY" and * "key" would match. * * Returns: * true if @key is found. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_find_case (bson_iter_t *iter, /* INOUT */ const char *key) /* IN */ { bson_return_val_if_fail (iter, false); bson_return_val_if_fail (key, false); while (bson_iter_next (iter)) { #ifdef BSON_OS_WIN32 if (!_stricmp(key, bson_iter_key (iter))) { #else if (!strcasecmp (key, bson_iter_key (iter))) { #endif return true; } } return false; } /* *-------------------------------------------------------------------------- * * bson_iter_find_descendant -- * * Locates a descendant using the "parent.child.key" notation. This * operates similar to bson_iter_find() except that it can recurse * into children documents using the dot notation. * * Returns: * true if the descendant was found and @descendant was initialized. * * Side effects: * @descendant may be initialized. * *-------------------------------------------------------------------------- */ bool bson_iter_find_descendant (bson_iter_t *iter, /* INOUT */ const char *dotkey, /* IN */ bson_iter_t *descendant) /* OUT */ { bson_iter_t tmp; const char *dot; size_t sublen; bson_return_val_if_fail (iter, false); bson_return_val_if_fail (dotkey, false); bson_return_val_if_fail (descendant, false); if ((dot = strchr (dotkey, '.'))) { sublen = dot - dotkey; } else { sublen = strlen (dotkey); } if (_bson_iter_find_with_len (iter, dotkey, (int)sublen)) { if (!dot) { *descendant = *iter; return true; } if (BSON_ITER_HOLDS_DOCUMENT (iter) || BSON_ITER_HOLDS_ARRAY (iter)) { if (bson_iter_recurse (iter, &tmp)) { return bson_iter_find_descendant (&tmp, dot + 1, descendant); } } } return false; } /* *-------------------------------------------------------------------------- * * bson_iter_key -- * * Retrieves the key of the current field. The resulting key is valid * while @iter is valid. * * Returns: * A string that should not be modified or freed. * * Side effects: * None. 
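/*
 * Dot-notation lookup sketch: bson_iter_find_descendant() recurses through
 * embedded documents/arrays.  The document {"a": {"b": {"c": 1}}} is built
 * with append helpers from bson.h that are assumed, not shown above.
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *c = bson_new ();
   bson_t *b = bson_new ();
   bson_t *a = bson_new ();
   bson_iter_t iter, found;

   bson_append_int32 (c, "c", -1, 1);
   bson_append_document (b, "b", -1, c);
   bson_append_document (a, "a", -1, b);

   if (bson_iter_init (&iter, a) &&
       bson_iter_find_descendant (&iter, "a.b.c", &found)) {
      printf ("a.b.c = %d\n", bson_iter_int32 (&found));
   }

   bson_destroy (a);
   bson_destroy (b);
   bson_destroy (c);
   return 0;
}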
* *-------------------------------------------------------------------------- */ const char * bson_iter_key (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, NULL); return bson_iter_key_unsafe (iter); } /* *-------------------------------------------------------------------------- * * bson_iter_type -- * * Retrieves the type of the current field. It may be useful to check * the type using the BSON_ITER_HOLDS_*() macros. * * Returns: * A bson_type_t. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_type_t bson_iter_type (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, BSON_TYPE_EOD); bson_return_val_if_fail (iter->raw, BSON_TYPE_EOD); bson_return_val_if_fail (iter->len, BSON_TYPE_EOD); return bson_iter_type_unsafe (iter); } /* *-------------------------------------------------------------------------- * * bson_iter_next -- * * Advances @iter to the next field of the underlying BSON document. * If all fields have been exhausted, then %false is returned. * * It is a programming error to use @iter after this function has * returned false. * * Returns: * true if the iter was advanced to the next record. * otherwise false and @iter should be considered invalid. * * Side effects: * @iter may be invalidated. * *-------------------------------------------------------------------------- */ bool bson_iter_next (bson_iter_t *iter) /* INOUT */ { const uint8_t *data; uint32_t o; unsigned int len; bson_return_val_if_fail (iter, false); if (!iter->raw) { return false; } data = iter->raw; len = iter->len; iter->off = iter->next_off; iter->type = iter->off; iter->key = iter->off + 1; iter->d1 = 0; iter->d2 = 0; iter->d3 = 0; iter->d4 = 0; for (o = iter->off + 1; o < len; o++) { if (!data [o]) { iter->d1 = ++o; goto fill_data_fields; } } goto mark_invalid; fill_data_fields: switch (ITER_TYPE (iter)) { case BSON_TYPE_DATE_TIME: case BSON_TYPE_DOUBLE: case BSON_TYPE_INT64: case BSON_TYPE_TIMESTAMP: iter->next_off = o + 8; break; case BSON_TYPE_CODE: case BSON_TYPE_SYMBOL: case BSON_TYPE_UTF8: { uint32_t l; if ((o + 4) >= len) { iter->err_off = o; goto mark_invalid; } iter->d2 = o + 4; memcpy (&l, iter->raw + iter->d1, sizeof (l)); l = BSON_UINT32_FROM_LE (l); if (l > (len - (o + 4))) { iter->err_off = o; goto mark_invalid; } iter->next_off = o + 4 + l; /* * Make sure the string length includes the NUL byte. */ if (BSON_UNLIKELY ((l == 0) || (iter->next_off >= len))) { iter->err_off = o; goto mark_invalid; } /* * Make sure the last byte is a NUL byte. 
*/ if (BSON_UNLIKELY ((iter->raw + iter->d2)[l - 1] != '\0')) { iter->err_off = o + 4 + l - 1; goto mark_invalid; } } break; case BSON_TYPE_BINARY: { bson_subtype_t subtype; uint32_t l; if (o >= (len - 4)) { iter->err_off = o; goto mark_invalid; } iter->d2 = o + 4; iter->d3 = o + 5; memcpy (&l, iter->raw + iter->d1, sizeof (l)); l = BSON_UINT32_FROM_LE (l); if (l >= (len - o)) { iter->err_off = o; goto mark_invalid; } subtype = *(iter->raw + iter->d2); if (subtype == BSON_SUBTYPE_BINARY_DEPRECATED) { if (l < 4) { iter->err_off = o; goto mark_invalid; } } iter->next_off = o + 5 + l; } break; case BSON_TYPE_ARRAY: case BSON_TYPE_DOCUMENT: { uint32_t l; if (o >= (len - 4)) { iter->err_off = o; goto mark_invalid; } memcpy (&l, iter->raw + iter->d1, sizeof (l)); l = BSON_UINT32_FROM_LE (l); if ((l > len) || (l > (len - o))) { iter->err_off = o; goto mark_invalid; } iter->next_off = o + l; } break; case BSON_TYPE_OID: iter->next_off = o + 12; break; case BSON_TYPE_BOOL: iter->next_off = o + 1; break; case BSON_TYPE_REGEX: { bool eor = false; bool eoo = false; for (; o < len; o++) { if (!data [o]) { iter->d2 = ++o; eor = true; break; } } if (!eor) { iter->err_off = iter->next_off; goto mark_invalid; } for (; o < len; o++) { if (!data [o]) { eoo = true; break; } } if (!eoo) { iter->err_off = iter->next_off; goto mark_invalid; } iter->next_off = o + 1; } break; case BSON_TYPE_DBPOINTER: { uint32_t l; if (o >= (len - 4)) { iter->err_off = o; goto mark_invalid; } iter->d2 = o + 4; memcpy (&l, iter->raw + iter->d1, sizeof (l)); l = BSON_UINT32_FROM_LE (l); if ((l > len) || (l > (len - o))) { iter->err_off = o; goto mark_invalid; } iter->d3 = o + 4 + l; iter->next_off = o + 4 + l + 12; } break; case BSON_TYPE_CODEWSCOPE: { uint32_t l; uint32_t doclen; if ((len < 19) || (o >= (len - 14))) { iter->err_off = o; goto mark_invalid; } iter->d2 = o + 4; iter->d3 = o + 8; memcpy (&l, iter->raw + iter->d1, sizeof (l)); l = BSON_UINT32_FROM_LE (l); if ((l < 14) || (l >= (len - o))) { iter->err_off = o; goto mark_invalid; } iter->next_off = o + l; if (iter->next_off >= len) { iter->err_off = o; goto mark_invalid; } memcpy (&l, iter->raw + iter->d2, sizeof (l)); l = BSON_UINT32_FROM_LE (l); if (l >= (len - o - 4 - 4)) { iter->err_off = o; goto mark_invalid; } if ((o + 4 + 4 + l + 4) >= iter->next_off) { iter->err_off = o + 4; goto mark_invalid; } iter->d4 = o + 4 + 4 + l; memcpy (&doclen, iter->raw + iter->d4, sizeof (doclen)); doclen = BSON_UINT32_FROM_LE (doclen); if ((o + 4 + 4 + l + doclen) != iter->next_off) { iter->err_off = o + 4 + 4 + l; goto mark_invalid; } } break; case BSON_TYPE_INT32: iter->next_off = o + 4; break; case BSON_TYPE_MAXKEY: case BSON_TYPE_MINKEY: case BSON_TYPE_NULL: case BSON_TYPE_UNDEFINED: iter->d1 = -1; iter->next_off = o; break; case BSON_TYPE_EOD: default: iter->err_off = o; goto mark_invalid; } /* * Check to see if any of the field locations would overflow the * current BSON buffer. If so, set the error location to the offset * of where the field starts. */ if (iter->next_off >= len) { iter->err_off = o; goto mark_invalid; } iter->err_off = 0; return true; mark_invalid: iter->raw = NULL; iter->len = 0; iter->next_off = 0; return false; } /* *-------------------------------------------------------------------------- * * bson_iter_binary -- * * Retrieves the BSON_TYPE_BINARY field. The subtype is stored in * @subtype. The length of @binary in bytes is stored in @binary_len. * * @binary should not be modified or freed and is only valid while * @iter is on the current field. 
* * Parameters: * @iter: A bson_iter_t * @subtype: A location for the binary subtype. * @binary_len: A location for the length of @binary. * @binary: A location for a pointer to the binary data. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_iter_binary (const bson_iter_t *iter, /* IN */ bson_subtype_t *subtype, /* OUT */ uint32_t *binary_len, /* OUT */ const uint8_t **binary) /* OUT */ { bson_subtype_t backup; bson_return_if_fail (iter); bson_return_if_fail (!binary || binary_len); if (ITER_TYPE (iter) == BSON_TYPE_BINARY) { if (!subtype) { subtype = &backup; } *subtype = (bson_subtype_t) *(iter->raw + iter->d2); if (binary) { memcpy (binary_len, (iter->raw + iter->d1), sizeof (*binary_len)); *binary_len = BSON_UINT32_FROM_LE (*binary_len); *binary = iter->raw + iter->d3; if (*subtype == BSON_SUBTYPE_BINARY_DEPRECATED) { *binary_len -= sizeof (int32_t); *binary += sizeof (int32_t); } } return; } if (binary) { *binary = NULL; } if (binary_len) { *binary_len = 0; } if (subtype) { *subtype = BSON_SUBTYPE_BINARY; } } /* *-------------------------------------------------------------------------- * * bson_iter_bool -- * * Retrieves the current field of type BSON_TYPE_BOOL. * * Returns: * true or false, dependent on bson document. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_bool (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); if (ITER_TYPE (iter) == BSON_TYPE_BOOL) { return bson_iter_bool_unsafe (iter); } return false; } /* *-------------------------------------------------------------------------- * * bson_iter_as_bool -- * * If @iter is on a boolean field, returns the boolean. If it is on a * non-boolean field such as int32, int64, or double, it will convert * the value to a boolean. * * Zero is false, and non-zero is true. * * Returns: * true or false, dependent on field type. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_as_bool (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); switch ((int)ITER_TYPE (iter)) { case BSON_TYPE_BOOL: return bson_iter_bool (iter); case BSON_TYPE_DOUBLE: return !(bson_iter_double (iter) == 0.0); case BSON_TYPE_INT64: return !(bson_iter_int64 (iter) == 0); case BSON_TYPE_INT32: return !(bson_iter_int32 (iter) == 0); case BSON_TYPE_UTF8: return true; case BSON_TYPE_NULL: case BSON_TYPE_UNDEFINED: return false; default: return true; } } /* *-------------------------------------------------------------------------- * * bson_iter_double -- * * Retrieves the current field of type BSON_TYPE_DOUBLE. * * Returns: * A double. * * Side effects: * None. * *-------------------------------------------------------------------------- */ double bson_iter_double (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); if (ITER_TYPE (iter) == BSON_TYPE_DOUBLE) { return bson_iter_double_unsafe (iter); } return 0; } /* *-------------------------------------------------------------------------- * * bson_iter_int32 -- * * Retrieves the value of the field of type BSON_TYPE_INT32. * * Returns: * A 32-bit signed integer. * * Side effects: * None. 
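/*
 * Binary field sketch: once the iterator is positioned on a BSON_TYPE_BINARY
 * element, bson_iter_binary() exposes its subtype, length and a borrowed
 * pointer into the document.  bson_append_binary() from bson.h is assumed.
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   const uint8_t payload[] = { 0xde, 0xad, 0xbe, 0xef };
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   bson_append_binary (doc, "blob", -1, BSON_SUBTYPE_BINARY,
                       payload, sizeof payload);

   if (bson_iter_init_find (&iter, doc, "blob") &&
       BSON_ITER_HOLDS_BINARY (&iter)) {
      bson_subtype_t subtype;
      uint32_t len;
      const uint8_t *data;

      bson_iter_binary (&iter, &subtype, &len, &data);
      printf ("subtype=%d len=%u first=0x%02x\n", (int) subtype, len, data[0]);
   }
   bson_destroy (doc);
   return 0;
}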
* *-------------------------------------------------------------------------- */ int32_t bson_iter_int32 (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); if (ITER_TYPE (iter) == BSON_TYPE_INT32) { return bson_iter_int32_unsafe (iter); } return 0; } /* *-------------------------------------------------------------------------- * * bson_iter_int64 -- * * Retrieves a 64-bit signed integer for the current BSON_TYPE_INT64 * field. * * Returns: * A 64-bit signed integer. * * Side effects: * None. * *-------------------------------------------------------------------------- */ int64_t bson_iter_int64 (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); if (ITER_TYPE (iter) == BSON_TYPE_INT64) { return bson_iter_int64_unsafe (iter); } return 0; } /* *-------------------------------------------------------------------------- * * bson_iter_as_int64 -- * * If @iter is not an int64 field, it will try to convert the value to * an int64. Such field types include: * * - bool * - double * - int32 * * Returns: * An int64_t. * * Side effects: * None. * *-------------------------------------------------------------------------- */ int64_t bson_iter_as_int64 (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); switch ((int)ITER_TYPE (iter)) { case BSON_TYPE_BOOL: return (int64_t)bson_iter_bool (iter); case BSON_TYPE_DOUBLE: return (int64_t)bson_iter_double (iter); case BSON_TYPE_INT64: return bson_iter_int64 (iter); case BSON_TYPE_INT32: return (int64_t)bson_iter_int32 (iter); default: return 0; } } /* *-------------------------------------------------------------------------- * * bson_iter_oid -- * * Retrieves the current field of type %BSON_TYPE_OID. The result is * valid while @iter is valid. * * Returns: * A bson_oid_t that should not be modified or freed. * * Side effects: * None. * *-------------------------------------------------------------------------- */ const bson_oid_t * bson_iter_oid (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, NULL); if (ITER_TYPE (iter) == BSON_TYPE_OID) { return bson_iter_oid_unsafe (iter); } return NULL; } /* *-------------------------------------------------------------------------- * * bson_iter_regex -- * * Fetches the current field from the iter which should be of type * BSON_TYPE_REGEX. * * Returns: * Regex from @iter. This should not be modified or freed. * * Side effects: * None. * *-------------------------------------------------------------------------- */ const char * bson_iter_regex (const bson_iter_t *iter, /* IN */ const char **options) /* IN */ { const char *ret = NULL; const char *ret_options = NULL; bson_return_val_if_fail (iter, NULL); if (ITER_TYPE (iter) == BSON_TYPE_REGEX) { ret = (const char *)(iter->raw + iter->d1); ret_options = (const char *)(iter->raw + iter->d2); } if (options) { *options = ret_options; } return ret; } /* *-------------------------------------------------------------------------- * * bson_iter_utf8 -- * * Retrieves the current field of type %BSON_TYPE_UTF8 as a UTF-8 * encoded string. * * Parameters: * @iter: A bson_iter_t. * @length: A location for the length of the string. * * Returns: * A string that should not be modified or freed. * * Side effects: * @length will be set to the result strings length if non-NULL. 
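/*
 * Coercion sketch: bson_iter_as_int64()/bson_iter_as_bool() convert bool,
 * int32, int64 and double fields instead of returning only the exact type,
 * while the plain accessors return 0 on a type mismatch.  Append helpers
 * from bson.h are assumed.
 */
#include <inttypes.h>
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   bson_append_double (doc, "ratio", -1, 2.5);

   if (bson_iter_init_find (&iter, doc, "ratio")) {
      printf ("as int64: %" PRId64 "\n", bson_iter_as_int64 (&iter)); /* 2 */
      printf ("as bool : %d\n", bson_iter_as_bool (&iter));           /* 1 */
      printf ("as int32: %d\n", bson_iter_int32 (&iter));             /* 0: wrong type */
   }
   bson_destroy (doc);
   return 0;
}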
* *-------------------------------------------------------------------------- */ const char * bson_iter_utf8 (const bson_iter_t *iter, /* IN */ uint32_t *length) /* OUT */ { bson_return_val_if_fail (iter, NULL); if (ITER_TYPE (iter) == BSON_TYPE_UTF8) { if (length) { *length = bson_iter_utf8_len_unsafe (iter); } return (const char *)(iter->raw + iter->d2); } if (length) { *length = 0; } return NULL; } /* *-------------------------------------------------------------------------- * * bson_iter_dup_utf8 -- * * Copies the current UTF-8 element into a newly allocated string. The * string should be freed using bson_free() when the caller is * finished with it. * * Returns: * A newly allocated char* that should be freed with bson_free(). * * Side effects: * @length will be set to the result strings length if non-NULL. * *-------------------------------------------------------------------------- */ char * bson_iter_dup_utf8 (const bson_iter_t *iter, /* IN */ uint32_t *length) /* OUT */ { uint32_t local_length = 0; const char *str; char *ret = NULL; bson_return_val_if_fail (iter, NULL); if ((str = bson_iter_utf8 (iter, &local_length))) { ret = bson_malloc0 (local_length + 1); memcpy (ret, str, local_length); ret[local_length] = '\0'; } if (length) { *length = local_length; } return ret; } /* *-------------------------------------------------------------------------- * * bson_iter_code -- * * Retrieves the current field of type %BSON_TYPE_CODE. The length of * the resulting string is stored in @length. * * Parameters: * @iter: A bson_iter_t. * @length: A location for the code length. * * Returns: * A NUL-terminated string containing the code which should not be * modified or freed. * * Side effects: * None. * *-------------------------------------------------------------------------- */ const char * bson_iter_code (const bson_iter_t *iter, /* IN */ uint32_t *length) /* OUT */ { bson_return_val_if_fail (iter, NULL); if (ITER_TYPE (iter) == BSON_TYPE_CODE) { if (length) { *length = bson_iter_utf8_len_unsafe (iter); } return (const char *)(iter->raw + iter->d2); } if (length) { *length = 0; } return NULL; } /* *-------------------------------------------------------------------------- * * bson_iter_codewscope -- * * Similar to bson_iter_code() but with a scope associated encoded as * a BSON document. @scope should not be modified or freed. It is * valid while @iter is valid. * * Parameters: * @iter: A #bson_iter_t. * @length: A location for the length of resulting string. * @scope_len: A location for the length of @scope. * @scope: A location for the scope encoded as BSON. * * Returns: * A NUL-terminated string that should not be modified or freed. * * Side effects: * @length is set to the resulting string length in bytes. * @scope_len is set to the length of @scope in bytes. * @scope is set to the scope documents buffer which can be * turned into a bson document with bson_init_static(). 
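/*
 * String sketch: bson_iter_utf8() returns a pointer into the document
 * (valid only while the iterator is), whereas bson_iter_dup_utf8() copies
 * the string into memory that must be released with bson_free().
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   bson_append_utf8 (doc, "greeting", -1, "hello", -1);

   if (bson_iter_init_find (&iter, doc, "greeting") &&
       BSON_ITER_HOLDS_UTF8 (&iter)) {
      uint32_t len;
      const char *borrowed = bson_iter_utf8 (&iter, &len);
      char *owned = bson_iter_dup_utf8 (&iter, &len);

      printf ("borrowed=%s owned=%s len=%u\n", borrowed, owned, len);
      bson_free (owned);
   }
   bson_destroy (doc);
   return 0;
}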
* *-------------------------------------------------------------------------- */ const char * bson_iter_codewscope (const bson_iter_t *iter, /* IN */ uint32_t *length, /* OUT */ uint32_t *scope_len, /* OUT */ const uint8_t **scope) /* OUT */ { uint32_t len; bson_return_val_if_fail (iter, NULL); if (ITER_TYPE (iter) == BSON_TYPE_CODEWSCOPE) { if (length) { memcpy (&len, iter->raw + iter->d2, sizeof (len)); *length = BSON_UINT32_FROM_LE (len) - 1; } memcpy (&len, iter->raw + iter->d4, sizeof (len)); *scope_len = BSON_UINT32_FROM_LE (len); *scope = iter->raw + iter->d4; return (const char *)(iter->raw + iter->d3); } if (length) { *length = 0; } if (scope_len) { *scope_len = 0; } if (scope) { *scope = NULL; } return NULL; } /* *-------------------------------------------------------------------------- * * bson_iter_dbpointer -- * * Retrieves a BSON_TYPE_DBPOINTER field. @collection_len will be set * to the length of the collection name. The collection name will be * placed into @collection. The oid will be placed into @oid. * * @collection and @oid should not be modified. * * Parameters: * @iter: A #bson_iter_t. * @collection_len: A location for the length of @collection. * @collection: A location for the collection name. * @oid: A location for the oid. * * Returns: * None. * * Side effects: * @collection_len is set to the length of @collection in bytes * excluding the null byte. * @collection is set to the collection name, including a terminating * null byte. * @oid is initialized with the oid. * *-------------------------------------------------------------------------- */ void bson_iter_dbpointer (const bson_iter_t *iter, /* IN */ uint32_t *collection_len, /* OUT */ const char **collection, /* OUT */ const bson_oid_t **oid) /* OUT */ { bson_return_if_fail (iter); if (collection) { *collection = NULL; } if (oid) { *oid = NULL; } if (ITER_TYPE (iter) == BSON_TYPE_DBPOINTER) { if (collection_len) { memcpy (collection_len, (iter->raw + iter->d1), sizeof (*collection_len)); *collection_len = BSON_UINT32_FROM_LE (*collection_len); if ((*collection_len) > 0) { (*collection_len)--; } } if (collection) { *collection = (const char *)(iter->raw + iter->d2); } if (oid) { *oid = (const bson_oid_t *)(iter->raw + iter->d3); } } } /* *-------------------------------------------------------------------------- * * bson_iter_symbol -- * * Retrieves the symbol of the current field of type BSON_TYPE_SYMBOL. * * Parameters: * @iter: A bson_iter_t. * @length: A location for the length of the symbol. * * Returns: * A string containing the symbol as UTF-8. The value should not be * modified or freed. * * Side effects: * @length is set to the resulting strings length in bytes, * excluding the null byte. * *-------------------------------------------------------------------------- */ const char * bson_iter_symbol (const bson_iter_t *iter, /* IN */ uint32_t *length) /* OUT */ { const char *ret = NULL; uint32_t ret_length = 0; bson_return_val_if_fail (iter, NULL); if (ITER_TYPE (iter) == BSON_TYPE_SYMBOL) { ret = (const char *)(iter->raw + iter->d2); ret_length = bson_iter_utf8_len_unsafe (iter); } if (length) { *length = ret_length; } return ret; } /* *-------------------------------------------------------------------------- * * bson_iter_date_time -- * * Fetches the number of milliseconds elapsed since the UNIX epoch. * This value can be negative as times before 1970 are valid. * * Returns: * A signed 64-bit integer containing the number of milliseconds. * * Side effects: * None. 
* *-------------------------------------------------------------------------- */ int64_t bson_iter_date_time (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); if (ITER_TYPE (iter) == BSON_TYPE_DATE_TIME) { return bson_iter_int64_unsafe (iter); } return 0; } /* *-------------------------------------------------------------------------- * * bson_iter_time_t -- * * Retrieves the current field of type BSON_TYPE_DATE_TIME as a * time_t. * * Returns: * A #time_t of the number of seconds since UNIX epoch in UTC. * * Side effects: * None. * *-------------------------------------------------------------------------- */ time_t bson_iter_time_t (const bson_iter_t *iter) /* IN */ { bson_return_val_if_fail (iter, 0); if (ITER_TYPE (iter) == BSON_TYPE_DATE_TIME) { return bson_iter_time_t_unsafe (iter); } return 0; } /* *-------------------------------------------------------------------------- * * bson_iter_timestamp -- * * Fetches the current field if it is a BSON_TYPE_TIMESTAMP. * * Parameters: * @iter: A #bson_iter_t. * @timestamp: a location for the timestamp. * @increment: A location for the increment. * * Returns: * None. * * Side effects: * @timestamp is initialized. * @increment is initialized. * *-------------------------------------------------------------------------- */ void bson_iter_timestamp (const bson_iter_t *iter, /* IN */ uint32_t *timestamp, /* OUT */ uint32_t *increment) /* OUT */ { uint64_t encoded; uint32_t ret_timestamp = 0; uint32_t ret_increment = 0; bson_return_if_fail (iter); if (ITER_TYPE (iter) == BSON_TYPE_TIMESTAMP) { memcpy (&encoded, iter->raw + iter->d1, sizeof (encoded)); encoded = BSON_UINT64_FROM_LE (encoded); ret_timestamp = (encoded >> 32) & 0xFFFFFFFF; ret_increment = encoded & 0xFFFFFFFF; } if (timestamp) { *timestamp = ret_timestamp; } if (increment) { *increment = ret_increment; } } /* *-------------------------------------------------------------------------- * * bson_iter_timeval -- * * Retrieves the current field of type BSON_TYPE_DATE_TIME and stores * it into the struct timeval provided. tv->tv_sec is set to the * number of seconds since the UNIX epoch in UTC. * * Since BSON_TYPE_DATE_TIME does not support fractions of a second, * tv->tv_usec will always be set to zero. * * Returns: * None. * * Side effects: * @tv is initialized. * *-------------------------------------------------------------------------- */ void bson_iter_timeval (const bson_iter_t *iter, /* IN */ struct timeval *tv) /* OUT */ { bson_return_if_fail (iter); if (ITER_TYPE (iter) == BSON_TYPE_DATE_TIME) { bson_iter_timeval_unsafe (iter, tv); return; } memset (tv, 0, sizeof *tv); } /** * bson_iter_document: * @iter: a bson_iter_t. * @document_len: A location for the document length. * @document: A location for a pointer to the document buffer. * */ /* *-------------------------------------------------------------------------- * * bson_iter_document -- * * Retrieves the data to the document BSON structure and stores the * length of the document buffer in @document_len and the document * buffer in @document. * * If you would like to iterate over the child contents, you might * consider creating a bson_t on the stack such as the following. It * allows you to call functions taking a const bson_t* only. * * bson_t b; * uint32_t len; * const uint8_t *data; * * bson_iter_document(iter, &len, &data); * * if (bson_init_static (&b, data, len)) { * ... 
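/*
 * Timestamp sketch: BSON_TYPE_TIMESTAMP packs seconds and an increment into
 * a single 64-bit value; bson_iter_timestamp() splits them back out.  The
 * bson_append_timestamp() helper comes from bson.h and is assumed here.
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *doc = bson_new ();
   bson_iter_t iter;
   uint32_t ts, inc;

   bson_append_timestamp (doc, "ts", -1, 1453822410, 7);

   if (bson_iter_init_find (&iter, doc, "ts") &&
       BSON_ITER_HOLDS_TIMESTAMP (&iter)) {
      bson_iter_timestamp (&iter, &ts, &inc);
      printf ("seconds=%u increment=%u\n", ts, inc);  /* 1453822410, 7 */
   }
   bson_destroy (doc);
   return 0;
}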
* } * * There is no need to cleanup the bson_t structure as no data can be * modified in the process of its use (as it is static/const). * * Returns: * None. * * Side effects: * @document_len is initialized. * @document is initialized. * *-------------------------------------------------------------------------- */ void bson_iter_document (const bson_iter_t *iter, /* IN */ uint32_t *document_len, /* OUT */ const uint8_t **document) /* OUT */ { bson_return_if_fail (iter); bson_return_if_fail (document_len); bson_return_if_fail (document); *document = NULL; *document_len = 0; if (ITER_TYPE (iter) == BSON_TYPE_DOCUMENT) { memcpy (document_len, (iter->raw + iter->d1), sizeof (*document_len)); *document_len = BSON_UINT32_FROM_LE (*document_len); *document = (iter->raw + iter->d1); } } /** * bson_iter_array: * @iter: a #bson_iter_t. * @array_len: A location for the array length. * @array: A location for a pointer to the array buffer. */ /* *-------------------------------------------------------------------------- * * bson_iter_array -- * * Retrieves the data to the array BSON structure and stores the * length of the array buffer in @array_len and the array buffer in * @array. * * If you would like to iterate over the child contents, you might * consider creating a bson_t on the stack such as the following. It * allows you to call functions taking a const bson_t* only. * * bson_t b; * uint32_t len; * const uint8_t *data; * * bson_iter_array (iter, &len, &data); * * if (bson_init_static (&b, data, len)) { * ... * } * * There is no need to cleanup the #bson_t structure as no data can be * modified in the process of its use. * * Returns: * None. * * Side effects: * @array_len is initialized. * @array is initialized. * *-------------------------------------------------------------------------- */ void bson_iter_array (const bson_iter_t *iter, /* IN */ uint32_t *array_len, /* OUT */ const uint8_t **array) /* OUT */ { bson_return_if_fail (iter); bson_return_if_fail (array_len); bson_return_if_fail (array); *array = NULL; *array_len = 0; if (ITER_TYPE (iter) == BSON_TYPE_ARRAY) { memcpy (array_len, (iter->raw + iter->d1), sizeof (*array_len)); *array_len = BSON_UINT32_FROM_LE (*array_len); *array = (iter->raw + iter->d1); } } #define VISIT_FIELD(name) visitor->visit_##name && visitor->visit_##name #define VISIT_AFTER VISIT_FIELD (after) #define VISIT_BEFORE VISIT_FIELD (before) #define VISIT_CORRUPT if (visitor->visit_corrupt) visitor->visit_corrupt #define VISIT_DOUBLE VISIT_FIELD (double) #define VISIT_UTF8 VISIT_FIELD (utf8) #define VISIT_DOCUMENT VISIT_FIELD (document) #define VISIT_ARRAY VISIT_FIELD (array) #define VISIT_BINARY VISIT_FIELD (binary) #define VISIT_UNDEFINED VISIT_FIELD (undefined) #define VISIT_OID VISIT_FIELD (oid) #define VISIT_BOOL VISIT_FIELD (bool) #define VISIT_DATE_TIME VISIT_FIELD (date_time) #define VISIT_NULL VISIT_FIELD (null) #define VISIT_REGEX VISIT_FIELD (regex) #define VISIT_DBPOINTER VISIT_FIELD (dbpointer) #define VISIT_CODE VISIT_FIELD (code) #define VISIT_SYMBOL VISIT_FIELD (symbol) #define VISIT_CODEWSCOPE VISIT_FIELD (codewscope) #define VISIT_INT32 VISIT_FIELD (int32) #define VISIT_TIMESTAMP VISIT_FIELD (timestamp) #define VISIT_INT64 VISIT_FIELD (int64) #define VISIT_MAXKEY VISIT_FIELD (maxkey) #define VISIT_MINKEY VISIT_FIELD (minkey) /** * bson_iter_visit_all: * @iter: A #bson_iter_t. * @visitor: A #bson_visitor_t containing the visitors. * @data: User data for @visitor data parameters. 
* * * Returns: true if the visitor was pre-maturely ended; otherwise false. */ /* *-------------------------------------------------------------------------- * * bson_iter_visit_all -- * * Visits all fields forward from the current position of @iter. For * each field found a function in @visitor will be called. Typically * you will use this immediately after initializing a bson_iter_t. * * bson_iter_init (&iter, b); * bson_iter_visit_all (&iter, my_visitor, NULL); * * @iter will no longer be valid after this function has executed and * will need to be reinitialized if intending to reuse. * * Returns: * true if successfully visited all fields or callback requested * early termination, otherwise false. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_iter_visit_all (bson_iter_t *iter, /* INOUT */ const bson_visitor_t *visitor, /* IN */ void *data) /* IN */ { const char *key; bson_return_val_if_fail (iter, false); bson_return_val_if_fail (visitor, false); while (bson_iter_next (iter)) { key = bson_iter_key_unsafe (iter); if (*key && !bson_utf8_validate (key, strlen (key), false)) { iter->err_off = iter->off; return true; } if (VISIT_BEFORE (iter, key, data)) { return true; } switch (bson_iter_type (iter)) { case BSON_TYPE_DOUBLE: if (VISIT_DOUBLE (iter, key, bson_iter_double (iter), data)) { return true; } break; case BSON_TYPE_UTF8: { uint32_t utf8_len; const char *utf8; utf8 = bson_iter_utf8 (iter, &utf8_len); if (!bson_utf8_validate (utf8, utf8_len, true)) { iter->err_off = iter->off; return true; } if (VISIT_UTF8 (iter, key, utf8_len, utf8, data)) { return true; } } break; case BSON_TYPE_DOCUMENT: { const uint8_t *docbuf = NULL; uint32_t doclen = 0; bson_t b; bson_iter_document (iter, &doclen, &docbuf); if (bson_init_static (&b, docbuf, doclen) && VISIT_DOCUMENT (iter, key, &b, data)) { return true; } } break; case BSON_TYPE_ARRAY: { const uint8_t *docbuf = NULL; uint32_t doclen = 0; bson_t b; bson_iter_array (iter, &doclen, &docbuf); if (bson_init_static (&b, docbuf, doclen) && VISIT_ARRAY (iter, key, &b, data)) { return true; } } break; case BSON_TYPE_BINARY: { const uint8_t *binary = NULL; bson_subtype_t subtype = BSON_SUBTYPE_BINARY; uint32_t binary_len = 0; bson_iter_binary (iter, &subtype, &binary_len, &binary); if (VISIT_BINARY (iter, key, subtype, binary_len, binary, data)) { return true; } } break; case BSON_TYPE_UNDEFINED: if (VISIT_UNDEFINED (iter, key, data)) { return true; } break; case BSON_TYPE_OID: if (VISIT_OID (iter, key, bson_iter_oid (iter), data)) { return true; } break; case BSON_TYPE_BOOL: if (VISIT_BOOL (iter, key, bson_iter_bool (iter), data)) { return true; } break; case BSON_TYPE_DATE_TIME: if (VISIT_DATE_TIME (iter, key, bson_iter_date_time (iter), data)) { return true; } break; case BSON_TYPE_NULL: if (VISIT_NULL (iter, key, data)) { return true; } break; case BSON_TYPE_REGEX: { const char *regex = NULL; const char *options = NULL; regex = bson_iter_regex (iter, &options); if (VISIT_REGEX (iter, key, regex, options, data)) { return true; } } break; case BSON_TYPE_DBPOINTER: { uint32_t collection_len = 0; const char *collection = NULL; const bson_oid_t *oid = NULL; bson_iter_dbpointer (iter, &collection_len, &collection, &oid); if (VISIT_DBPOINTER (iter, key, collection_len, collection, oid, data)) { return true; } } break; case BSON_TYPE_CODE: { uint32_t code_len; const char *code; code = bson_iter_code (iter, &code_len); if (VISIT_CODE (iter, key, code_len, code, data)) { return true; } } 
break; case BSON_TYPE_SYMBOL: { uint32_t symbol_len; const char *symbol; symbol = bson_iter_symbol (iter, &symbol_len); if (VISIT_SYMBOL (iter, key, symbol_len, symbol, data)) { return true; } } break; case BSON_TYPE_CODEWSCOPE: { uint32_t length = 0; const char *code; const uint8_t *docbuf = NULL; uint32_t doclen = 0; bson_t b; code = bson_iter_codewscope (iter, &length, &doclen, &docbuf); if (bson_init_static (&b, docbuf, doclen) && VISIT_CODEWSCOPE (iter, key, length, code, &b, data)) { return true; } } break; case BSON_TYPE_INT32: if (VISIT_INT32 (iter, key, bson_iter_int32 (iter), data)) { return true; } break; case BSON_TYPE_TIMESTAMP: { uint32_t timestamp; uint32_t increment; bson_iter_timestamp (iter, ×tamp, &increment); if (VISIT_TIMESTAMP (iter, key, timestamp, increment, data)) { return true; } } break; case BSON_TYPE_INT64: if (VISIT_INT64 (iter, key, bson_iter_int64 (iter), data)) { return true; } break; case BSON_TYPE_MAXKEY: if (VISIT_MAXKEY (iter, bson_iter_key_unsafe (iter), data)) { return true; } break; case BSON_TYPE_MINKEY: if (VISIT_MINKEY (iter, bson_iter_key_unsafe (iter), data)) { return true; } break; case BSON_TYPE_EOD: default: break; } if (VISIT_AFTER (iter, bson_iter_key_unsafe (iter), data)) { return true; } } if (iter->err_off) { VISIT_CORRUPT (iter, data); } #undef VISIT_FIELD return false; } /* *-------------------------------------------------------------------------- * * bson_iter_overwrite_bool -- * * Overwrites the current BSON_TYPE_BOOLEAN field with a new value. * This is performed in-place and therefore no keys are moved. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_iter_overwrite_bool (bson_iter_t *iter, /* IN */ bool value) /* IN */ { bson_return_if_fail (iter); value = !!value; if (ITER_TYPE (iter) == BSON_TYPE_BOOL) { memcpy ((void *)(iter->raw + iter->d1), &value, 1); } } /* *-------------------------------------------------------------------------- * * bson_iter_overwrite_int32 -- * * Overwrites the current BSON_TYPE_INT32 field with a new value. * This is performed in-place and therefore no keys are moved. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_iter_overwrite_int32 (bson_iter_t *iter, /* IN */ int32_t value) /* IN */ { bson_return_if_fail (iter); if (ITER_TYPE (iter) == BSON_TYPE_INT32) { #if BSON_BYTE_ORDER != BSON_LITTLE_ENDIAN value = BSON_UINT32_TO_LE (value); #endif memcpy ((void *)(iter->raw + iter->d1), &value, sizeof (value)); } } /* *-------------------------------------------------------------------------- * * bson_iter_overwrite_int64 -- * * Overwrites the current BSON_TYPE_INT64 field with a new value. * This is performed in-place and therefore no keys are moved. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_iter_overwrite_int64 (bson_iter_t *iter, /* IN */ int64_t value) /* IN */ { bson_return_if_fail (iter); if (ITER_TYPE (iter) == BSON_TYPE_INT64) { #if BSON_BYTE_ORDER != BSON_LITTLE_ENDIAN value = BSON_UINT64_TO_LE (value); #endif memcpy ((void *)(iter->raw + iter->d1), &value, sizeof (value)); } } /* *-------------------------------------------------------------------------- * * bson_iter_overwrite_double -- * * Overwrites the current BSON_TYPE_DOUBLE field with a new value. * This is performed in-place and therefore no keys are moved. 
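/*
 * Visitor sketch: bson_iter_visit_all() drives the callbacks stored in a
 * bson_visitor_t.  The callback prototypes are declared in bson-types.h,
 * which is not part of this excerpt, so the signature below is a
 * best-effort assumption; returning true from a callback stops the walk.
 */
#include <stdio.h>
#include "bson.h"

static bool
on_int32 (const bson_iter_t *iter, const char *key, int32_t value, void *data)
{
   (void) iter;
   (void) data;
   printf ("%s = %d\n", key, value);
   return false; /* keep going */
}

int
main (void)
{
   bson_visitor_t visitor = { 0 };
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   visitor.visit_int32 = on_int32;

   bson_append_int32 (doc, "x", -1, 1);
   bson_append_int32 (doc, "y", -1, 2);

   if (bson_iter_init (&iter, doc)) {
      bson_iter_visit_all (&iter, &visitor, NULL);
   }
   bson_destroy (doc);
   return 0;
}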
* * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_iter_overwrite_double (bson_iter_t *iter, /* IN */ double value) /* IN */ { bson_return_if_fail (iter); if (ITER_TYPE (iter) == BSON_TYPE_DOUBLE) { value = BSON_DOUBLE_TO_LE (value); memcpy ((void *)(iter->raw + iter->d1), &value, sizeof (value)); } } /* *-------------------------------------------------------------------------- * * bson_iter_value -- * * Retrieves a bson_value_t containing the boxed value of the current * element. The result of this function valid until the state of * iter has been changed (through the use of bson_iter_next()). * * Returns: * A bson_value_t that should not be modified or freed. If you need * to hold on to the value, use bson_value_copy(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ const bson_value_t * bson_iter_value (bson_iter_t *iter) /* IN */ { bson_value_t *value; bson_return_val_if_fail (iter, NULL); value = &iter->value; value->value_type = ITER_TYPE (iter); switch (value->value_type) { case BSON_TYPE_DOUBLE: value->value.v_double = bson_iter_double (iter); break; case BSON_TYPE_UTF8: value->value.v_utf8.str = (char *)bson_iter_utf8 (iter, &value->value.v_utf8.len); break; case BSON_TYPE_DOCUMENT: bson_iter_document (iter, &value->value.v_doc.data_len, (const uint8_t **)&value->value.v_doc.data); break; case BSON_TYPE_ARRAY: bson_iter_array (iter, &value->value.v_doc.data_len, (const uint8_t **)&value->value.v_doc.data); break; case BSON_TYPE_BINARY: bson_iter_binary (iter, &value->value.v_binary.subtype, &value->value.v_binary.data_len, (const uint8_t **)&value->value.v_binary.data); break; case BSON_TYPE_OID: bson_oid_copy (bson_iter_oid (iter), &value->value.v_oid); break; case BSON_TYPE_BOOL: value->value.v_bool = bson_iter_bool (iter); break; case BSON_TYPE_DATE_TIME: value->value.v_datetime = bson_iter_date_time (iter); break; case BSON_TYPE_REGEX: value->value.v_regex.regex = (char *)bson_iter_regex ( iter, (const char **)&value->value.v_regex.options); break; case BSON_TYPE_DBPOINTER: { const bson_oid_t *oid; bson_iter_dbpointer (iter, &value->value.v_dbpointer.collection_len, (const char **)&value->value.v_dbpointer.collection, &oid); bson_oid_copy (oid, &value->value.v_dbpointer.oid); break; } case BSON_TYPE_CODE: value->value.v_code.code = (char *)bson_iter_code ( iter, &value->value.v_code.code_len); break; case BSON_TYPE_SYMBOL: value->value.v_symbol.symbol = (char *)bson_iter_symbol ( iter, &value->value.v_symbol.len); break; case BSON_TYPE_CODEWSCOPE: value->value.v_codewscope.code = (char *)bson_iter_codewscope ( iter, &value->value.v_codewscope.code_len, &value->value.v_codewscope.scope_len, (const uint8_t **)&value->value.v_codewscope.scope_data); break; case BSON_TYPE_INT32: value->value.v_int32 = bson_iter_int32 (iter); break; case BSON_TYPE_TIMESTAMP: bson_iter_timestamp (iter, &value->value.v_timestamp.timestamp, &value->value.v_timestamp.increment); break; case BSON_TYPE_INT64: value->value.v_int64 = bson_iter_int64 (iter); break; case BSON_TYPE_NULL: case BSON_TYPE_UNDEFINED: case BSON_TYPE_MAXKEY: case BSON_TYPE_MINKEY: break; case BSON_TYPE_EOD: default: return NULL; } return value; } MongoDB-v1.2.2/bson/bson-iter.h000644 000765 000024 00000025501 12651754051 016446 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. 
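/*
 * Boxed-value sketch: bson_iter_value() returns a bson_value_t view of the
 * current element, which is convenient when one code path must handle
 * several BSON types.  Append helpers from bson.h are assumed.
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   bson_append_utf8 (doc, "k", -1, "v", -1);

   if (bson_iter_init_find (&iter, doc, "k")) {
      const bson_value_t *value = bson_iter_value (&iter);

      if (value && value->value_type == BSON_TYPE_UTF8) {
         printf ("utf8: %.*s\n",
                 (int) value->value.v_utf8.len, value->value.v_utf8.str);
      }
   }
   bson_destroy (doc);
   return 0;
}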
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_ITER_H #define BSON_ITER_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include "bson.h" #include "bson-endian.h" #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS #define BSON_ITER_HOLDS_DOUBLE(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_DOUBLE) #define BSON_ITER_HOLDS_UTF8(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_UTF8) #define BSON_ITER_HOLDS_DOCUMENT(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_DOCUMENT) #define BSON_ITER_HOLDS_ARRAY(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_ARRAY) #define BSON_ITER_HOLDS_BINARY(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_BINARY) #define BSON_ITER_HOLDS_UNDEFINED(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_UNDEFINED) #define BSON_ITER_HOLDS_OID(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_OID) #define BSON_ITER_HOLDS_BOOL(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_BOOL) #define BSON_ITER_HOLDS_DATE_TIME(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_DATE_TIME) #define BSON_ITER_HOLDS_NULL(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_NULL) #define BSON_ITER_HOLDS_REGEX(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_REGEX) #define BSON_ITER_HOLDS_DBPOINTER(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_DBPOINTER) #define BSON_ITER_HOLDS_CODE(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_CODE) #define BSON_ITER_HOLDS_SYMBOL(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_SYMBOL) #define BSON_ITER_HOLDS_CODEWSCOPE(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_CODEWSCOPE) #define BSON_ITER_HOLDS_INT32(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_INT32) #define BSON_ITER_HOLDS_TIMESTAMP(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_TIMESTAMP) #define BSON_ITER_HOLDS_INT64(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_INT64) #define BSON_ITER_HOLDS_MAXKEY(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_MAXKEY) #define BSON_ITER_HOLDS_MINKEY(iter) \ (bson_iter_type ((iter)) == BSON_TYPE_MINKEY) #define BSON_ITER_IS_KEY(iter, key) \ (0 == strcmp ((key), bson_iter_key ((iter)))) const bson_value_t * bson_iter_value (bson_iter_t *iter); /** * bson_iter_utf8_len_unsafe: * @iter: a bson_iter_t. * * Returns the length of a string currently pointed to by @iter. This performs * no validation so the is responsible for knowing the BSON is valid. Calling * bson_validate() is one way to do this ahead of time. */ static BSON_INLINE uint32_t bson_iter_utf8_len_unsafe (const bson_iter_t *iter) { int32_t val; memcpy (&val, iter->raw + iter->d1, sizeof (val)); val = BSON_UINT32_FROM_LE (val); return BSON_MAX (0, val - 1); } void bson_iter_array (const bson_iter_t *iter, uint32_t *array_len, const uint8_t **array); void bson_iter_binary (const bson_iter_t *iter, bson_subtype_t *subtype, uint32_t *binary_len, const uint8_t **binary); const char * bson_iter_code (const bson_iter_t *iter, uint32_t *length); /** * bson_iter_code_unsafe: * @iter: A bson_iter_t. * @length: A location for the length of the resulting string. 
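/*
 * Type-check sketch: the BSON_ITER_HOLDS_*() and BSON_ITER_IS_KEY() macros
 * above guard accessors that would otherwise return 0/NULL on a type
 * mismatch.  Append helpers from bson.h are assumed.
 */
#include <stdio.h>
#include "bson.h"

int
main (void)
{
   bson_t *doc = bson_new ();
   bson_iter_t iter;

   bson_append_int32 (doc, "count", -1, 42);

   if (bson_iter_init (&iter, doc) && bson_iter_next (&iter) &&
       BSON_ITER_IS_KEY (&iter, "count") && BSON_ITER_HOLDS_INT32 (&iter)) {
      printf ("count = %d\n", bson_iter_int32 (&iter));
   }
   bson_destroy (doc);
   return 0;
}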
* * Like bson_iter_code() but performs no integrity checks. * * Returns: A string that should not be modified or freed. */ static BSON_INLINE const char * bson_iter_code_unsafe (const bson_iter_t *iter, uint32_t *length) { *length = bson_iter_utf8_len_unsafe (iter); return (const char *)(iter->raw + iter->d2); } const char * bson_iter_codewscope (const bson_iter_t *iter, uint32_t *length, uint32_t *scope_len, const uint8_t **scope); void bson_iter_dbpointer (const bson_iter_t *iter, uint32_t *collection_len, const char **collection, const bson_oid_t **oid); void bson_iter_document (const bson_iter_t *iter, uint32_t *document_len, const uint8_t **document); double bson_iter_double (const bson_iter_t *iter); /** * bson_iter_double_unsafe: * @iter: A bson_iter_t. * * Similar to bson_iter_double() but does not perform an integrity checking. * * Returns: A double. */ static BSON_INLINE double bson_iter_double_unsafe (const bson_iter_t *iter) { double val; memcpy (&val, iter->raw + iter->d1, sizeof (val)); return BSON_DOUBLE_FROM_LE (val); } bool bson_iter_init (bson_iter_t *iter, const bson_t *bson); bool bson_iter_init_find (bson_iter_t *iter, const bson_t *bson, const char *key); bool bson_iter_init_find_case (bson_iter_t *iter, const bson_t *bson, const char *key); int32_t bson_iter_int32 (const bson_iter_t *iter); /** * bson_iter_int32_unsafe: * @iter: A bson_iter_t. * * Similar to bson_iter_int32() but with no integrity checking. * * Returns: A 32-bit signed integer. */ static BSON_INLINE int32_t bson_iter_int32_unsafe (const bson_iter_t *iter) { int32_t val; memcpy (&val, iter->raw + iter->d1, sizeof (val)); return BSON_UINT32_FROM_LE (val); } int64_t bson_iter_int64 (const bson_iter_t *iter); int64_t bson_iter_as_int64 (const bson_iter_t *iter); /** * bson_iter_int64_unsafe: * @iter: a bson_iter_t. * * Similar to bson_iter_int64() but without integrity checking. * * Returns: A 64-bit signed integer. */ static BSON_INLINE int64_t bson_iter_int64_unsafe (const bson_iter_t *iter) { int64_t val; memcpy (&val, iter->raw + iter->d1, sizeof (val)); return BSON_UINT64_FROM_LE (val); } bool bson_iter_find (bson_iter_t *iter, const char *key); bool bson_iter_find_case (bson_iter_t *iter, const char *key); bool bson_iter_find_descendant (bson_iter_t *iter, const char *dotkey, bson_iter_t *descendant); bool bson_iter_next (bson_iter_t *iter); const bson_oid_t * bson_iter_oid (const bson_iter_t *iter); /** * bson_iter_oid_unsafe: * @iter: A #bson_iter_t. * * Similar to bson_iter_oid() but performs no integrity checks. * * Returns: A #bson_oid_t that should not be modified or freed. */ static BSON_INLINE const bson_oid_t * bson_iter_oid_unsafe (const bson_iter_t *iter) { return (const bson_oid_t *)(iter->raw + iter->d1); } const char * bson_iter_key (const bson_iter_t *iter); /** * bson_iter_key_unsafe: * @iter: A bson_iter_t. * * Similar to bson_iter_key() but performs no integrity checking. * * Returns: A string that should not be modified or freed. */ static BSON_INLINE const char * bson_iter_key_unsafe (const bson_iter_t *iter) { return (const char *)(iter->raw + iter->key); } const char * bson_iter_utf8 (const bson_iter_t *iter, uint32_t *length); /** * bson_iter_utf8_unsafe: * * Similar to bson_iter_utf8() but performs no integrity checking. * * Returns: A string that should not be modified or freed. 
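 *
 * Example (illustrative sketch): typical reads go through the checked
 * accessors declared in this header; the _unsafe variants are only
 * appropriate when the document is already known to be valid (e.g. after
 * bson_validate(), as noted above). The helper below is hypothetical.
 *
 *       #include <stdio.h>
 *       #include "bson.h"
 *
 *       static void
 *       print_name (const bson_t *doc)   // hypothetical helper
 *       {
 *          bson_iter_t iter;
 *          uint32_t len;
 *
 *          // find the "name" field and print it only if it is a UTF-8 string
 *          if (bson_iter_init_find (&iter, doc, "name") &&
 *              BSON_ITER_HOLDS_UTF8 (&iter)) {
 *             printf ("name = %s\n", bson_iter_utf8 (&iter, &len));
 *          }
 *       }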
*/ static BSON_INLINE const char * bson_iter_utf8_unsafe (const bson_iter_t *iter, size_t *length) { *length = bson_iter_utf8_len_unsafe (iter); return (const char *)(iter->raw + iter->d2); } char * bson_iter_dup_utf8 (const bson_iter_t *iter, uint32_t *length); int64_t bson_iter_date_time (const bson_iter_t *iter); time_t bson_iter_time_t (const bson_iter_t *iter); /** * bson_iter_time_t_unsafe: * @iter: A bson_iter_t. * * Similar to bson_iter_time_t() but performs no integrity checking. * * Returns: A time_t containing the number of seconds since UNIX epoch * in UTC. */ static BSON_INLINE time_t bson_iter_time_t_unsafe (const bson_iter_t *iter) { return (time_t)(bson_iter_int64_unsafe (iter) / 1000UL); } void bson_iter_timeval (const bson_iter_t *iter, struct timeval *tv); /** * bson_iter_timeval_unsafe: * @iter: A bson_iter_t. * @tv: A struct timeval. * * Similar to bson_iter_timeval() but performs no integrity checking. */ static BSON_INLINE void bson_iter_timeval_unsafe (const bson_iter_t *iter, struct timeval *tv) { int64_t value = bson_iter_int64_unsafe (iter); #ifdef BSON_OS_WIN32 tv->tv_sec = (long) (value / 1000); #else tv->tv_sec = (suseconds_t) (value / 1000); #endif tv->tv_usec = (value % 1000) * 1000; } void bson_iter_timestamp (const bson_iter_t *iter, uint32_t *timestamp, uint32_t *increment); bool bson_iter_bool (const bson_iter_t *iter); /** * bson_iter_bool_unsafe: * @iter: A bson_iter_t. * * Similar to bson_iter_bool() but performs no integrity checking. * * Returns: true or false. */ static BSON_INLINE bool bson_iter_bool_unsafe (const bson_iter_t *iter) { char val; memcpy (&val, iter->raw + iter->d1, 1); return !!val; } bool bson_iter_as_bool (const bson_iter_t *iter); const char * bson_iter_regex (const bson_iter_t *iter, const char **options); const char * bson_iter_symbol (const bson_iter_t *iter, uint32_t *length); bson_type_t bson_iter_type (const bson_iter_t *iter); /** * bson_iter_type_unsafe: * @iter: A bson_iter_t. * * Similar to bson_iter_type() but performs no integrity checking. * * Returns: A bson_type_t. */ static BSON_INLINE bson_type_t bson_iter_type_unsafe (const bson_iter_t *iter) { return (bson_type_t) (iter->raw + iter->type) [0]; } bool bson_iter_recurse (const bson_iter_t *iter, bson_iter_t *child); void bson_iter_overwrite_int32 (bson_iter_t *iter, int32_t value); void bson_iter_overwrite_int64 (bson_iter_t *iter, int64_t value); void bson_iter_overwrite_double (bson_iter_t *iter, double value); void bson_iter_overwrite_bool (bson_iter_t *iter, bool value); bool bson_iter_visit_all (bson_iter_t *iter, const bson_visitor_t *visitor, void *data); BSON_END_DECLS #endif /* BSON_ITER_H */ MongoDB-v1.2.2/bson/bson-keys.c000644 000765 000024 00000022225 12651754051 016451 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #include #include "bson-keys.h" #include "bson-string.h" static const char * gUint32Strs[] = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120", "121", "122", "123", "124", "125", "126", "127", "128", "129", "130", "131", "132", "133", "134", "135", "136", "137", "138", "139", "140", "141", "142", "143", "144", "145", "146", "147", "148", "149", "150", "151", "152", "153", "154", "155", "156", "157", "158", "159", "160", "161", "162", "163", "164", "165", "166", "167", "168", "169", "170", "171", "172", "173", "174", "175", "176", "177", "178", "179", "180", "181", "182", "183", "184", "185", "186", "187", "188", "189", "190", "191", "192", "193", "194", "195", "196", "197", "198", "199", "200", "201", "202", "203", "204", "205", "206", "207", "208", "209", "210", "211", "212", "213", "214", "215", "216", "217", "218", "219", "220", "221", "222", "223", "224", "225", "226", "227", "228", "229", "230", "231", "232", "233", "234", "235", "236", "237", "238", "239", "240", "241", "242", "243", "244", "245", "246", "247", "248", "249", "250", "251", "252", "253", "254", "255", "256", "257", "258", "259", "260", "261", "262", "263", "264", "265", "266", "267", "268", "269", "270", "271", "272", "273", "274", "275", "276", "277", "278", "279", "280", "281", "282", "283", "284", "285", "286", "287", "288", "289", "290", "291", "292", "293", "294", "295", "296", "297", "298", "299", "300", "301", "302", "303", "304", "305", "306", "307", "308", "309", "310", "311", "312", "313", "314", "315", "316", "317", "318", "319", "320", "321", "322", "323", "324", "325", "326", "327", "328", "329", "330", "331", "332", "333", "334", "335", "336", "337", "338", "339", "340", "341", "342", "343", "344", "345", "346", "347", "348", "349", "350", "351", "352", "353", "354", "355", "356", "357", "358", "359", "360", "361", "362", "363", "364", "365", "366", "367", "368", "369", "370", "371", "372", "373", "374", "375", "376", "377", "378", "379", "380", "381", "382", "383", "384", "385", "386", "387", "388", "389", "390", "391", "392", "393", "394", "395", "396", "397", "398", "399", "400", "401", "402", "403", "404", "405", "406", "407", "408", "409", "410", "411", "412", "413", "414", "415", "416", "417", "418", "419", "420", "421", "422", "423", "424", "425", "426", "427", "428", "429", "430", "431", "432", "433", "434", "435", "436", "437", "438", "439", "440", "441", "442", "443", "444", "445", "446", "447", "448", "449", "450", "451", "452", "453", "454", "455", "456", "457", "458", "459", "460", "461", "462", "463", "464", "465", "466", "467", "468", "469", "470", "471", "472", "473", "474", "475", "476", "477", "478", "479", "480", "481", "482", "483", "484", "485", "486", "487", "488", "489", "490", "491", "492", "493", "494", "495", "496", "497", "498", "499", "500", "501", "502", "503", "504", "505", "506", "507", "508", 
"509", "510", "511", "512", "513", "514", "515", "516", "517", "518", "519", "520", "521", "522", "523", "524", "525", "526", "527", "528", "529", "530", "531", "532", "533", "534", "535", "536", "537", "538", "539", "540", "541", "542", "543", "544", "545", "546", "547", "548", "549", "550", "551", "552", "553", "554", "555", "556", "557", "558", "559", "560", "561", "562", "563", "564", "565", "566", "567", "568", "569", "570", "571", "572", "573", "574", "575", "576", "577", "578", "579", "580", "581", "582", "583", "584", "585", "586", "587", "588", "589", "590", "591", "592", "593", "594", "595", "596", "597", "598", "599", "600", "601", "602", "603", "604", "605", "606", "607", "608", "609", "610", "611", "612", "613", "614", "615", "616", "617", "618", "619", "620", "621", "622", "623", "624", "625", "626", "627", "628", "629", "630", "631", "632", "633", "634", "635", "636", "637", "638", "639", "640", "641", "642", "643", "644", "645", "646", "647", "648", "649", "650", "651", "652", "653", "654", "655", "656", "657", "658", "659", "660", "661", "662", "663", "664", "665", "666", "667", "668", "669", "670", "671", "672", "673", "674", "675", "676", "677", "678", "679", "680", "681", "682", "683", "684", "685", "686", "687", "688", "689", "690", "691", "692", "693", "694", "695", "696", "697", "698", "699", "700", "701", "702", "703", "704", "705", "706", "707", "708", "709", "710", "711", "712", "713", "714", "715", "716", "717", "718", "719", "720", "721", "722", "723", "724", "725", "726", "727", "728", "729", "730", "731", "732", "733", "734", "735", "736", "737", "738", "739", "740", "741", "742", "743", "744", "745", "746", "747", "748", "749", "750", "751", "752", "753", "754", "755", "756", "757", "758", "759", "760", "761", "762", "763", "764", "765", "766", "767", "768", "769", "770", "771", "772", "773", "774", "775", "776", "777", "778", "779", "780", "781", "782", "783", "784", "785", "786", "787", "788", "789", "790", "791", "792", "793", "794", "795", "796", "797", "798", "799", "800", "801", "802", "803", "804", "805", "806", "807", "808", "809", "810", "811", "812", "813", "814", "815", "816", "817", "818", "819", "820", "821", "822", "823", "824", "825", "826", "827", "828", "829", "830", "831", "832", "833", "834", "835", "836", "837", "838", "839", "840", "841", "842", "843", "844", "845", "846", "847", "848", "849", "850", "851", "852", "853", "854", "855", "856", "857", "858", "859", "860", "861", "862", "863", "864", "865", "866", "867", "868", "869", "870", "871", "872", "873", "874", "875", "876", "877", "878", "879", "880", "881", "882", "883", "884", "885", "886", "887", "888", "889", "890", "891", "892", "893", "894", "895", "896", "897", "898", "899", "900", "901", "902", "903", "904", "905", "906", "907", "908", "909", "910", "911", "912", "913", "914", "915", "916", "917", "918", "919", "920", "921", "922", "923", "924", "925", "926", "927", "928", "929", "930", "931", "932", "933", "934", "935", "936", "937", "938", "939", "940", "941", "942", "943", "944", "945", "946", "947", "948", "949", "950", "951", "952", "953", "954", "955", "956", "957", "958", "959", "960", "961", "962", "963", "964", "965", "966", "967", "968", "969", "970", "971", "972", "973", "974", "975", "976", "977", "978", "979", "980", "981", "982", "983", "984", "985", "986", "987", "988", "989", "990", "991", "992", "993", "994", "995", "996", "997", "998", "999" }; /* *-------------------------------------------------------------------------- * * bson_uint32_to_string -- * * 
Converts @value to a string.
 *
 *       If @value is from 0 to 999, it will use a constant string in the
 *       data section of the library.
 *
 *       If not, a string will be formatted using @str and snprintf(). This
 *       is much slower, of course, so we try to avoid it whenever possible.
 *
 *       @strptr will always be set. It will either point to @str or a
 *       constant string. You will want to use this as your key.
 *
 * Parameters:
 *       @value: A #uint32_t to convert to string.
 *       @strptr: (out): A pointer to the resulting string.
 *       @str: (out): Storage for a string made with snprintf.
 *       @size: Size of @str.
 *
 * Returns:
 *       The number of bytes in the resulting string.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

size_t
bson_uint32_to_string (uint32_t      value,  /* IN */
                       const char  **strptr, /* OUT */
                       char         *str,    /* OUT */
                       size_t        size)   /* IN */
{
   if (value < 1000) {
      *strptr = gUint32Strs[value];

      if (value < 10) {
         return 1;
      } else if (value < 100) {
         return 2;
      } else {
         return 3;
      }
   }

   *strptr = str;

   return bson_snprintf (str, size, "%u", value);
}
MongoDB-v1.2.2/bson/bson-keys.h000644 000765 000024 00000001670 12651754051 016457 0ustar00davidstaff000000 000000 /*
 * Copyright 2013 MongoDB, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef BSON_KEYS_H
#define BSON_KEYS_H

#include "bson-macros.h"
#include "bson-types.h"

BSON_BEGIN_DECLS

size_t bson_uint32_to_string (uint32_t value, const char **strptr, char *str, size_t size);

BSON_END_DECLS

#endif /* BSON_KEYS_H */
MongoDB-v1.2.2/bson/bson-macros.h000644 000765 000024 00000012531 12651754051 016766 0ustar00davidstaff000000 000000 /*
 * Copyright 2013 MongoDB, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef BSON_MACROS_H
#define BSON_MACROS_H

#if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION)
# error "Only can be included directly."
#endif

#include
#include

#ifdef __cplusplus
# include
#endif

#include "bson-config.h"

#if BSON_OS == 1
# define BSON_OS_UNIX
#elif BSON_OS == 2
# define BSON_OS_WIN32
#else
# error "Unknown operating system."
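
/*
 * Usage sketch for bson_uint32_to_string(), defined above in bson-keys.c.
 * It is typically used to build BSON array element keys ("0", "1", "2", ...)
 * while avoiding snprintf() for indexes below 1000. The buffer and loop are
 * illustrative only and are not part of the library.
 *
 *   char buf[16];
 *   const char *key;
 *   uint32_t i;
 *
 *   for (i = 0; i < 5; i++) {
 *      size_t len = bson_uint32_to_string (i, &key, buf, sizeof buf);
 *      printf ("%s (%u bytes)\n", key, (unsigned) len);   // prints "0", "1", ...
 *   }
 */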
#endif #ifdef __cplusplus # define BSON_BEGIN_DECLS extern "C" { # define BSON_END_DECLS } #else # define BSON_BEGIN_DECLS # define BSON_END_DECLS #endif #define BSON_GNUC_CHECK_VERSION(major, minor) \ (defined(__GNUC__) && \ ((__GNUC__ > (major)) || \ ((__GNUC__ == (major)) && \ (__GNUC_MINOR__ >= (minor))))) #define BSON_GNUC_IS_VERSION(major, minor) \ (defined(__GNUC__) && \ (__GNUC__ == (major)) && \ (__GNUC_MINOR__ == (minor))) #ifdef _MSC_VER # ifdef BSON_COMPILATION # define BSON_API __declspec(dllexport) # else # define BSON_API __declspec(dllimport) # endif #else # define BSON_API #endif #ifdef MIN # define BSON_MIN MIN #elif defined(__cplusplus) # define BSON_MIN(a, b) ( (std::min)(a, b) ) #elif defined(_MSC_VER) # define BSON_MIN(a, b) ((a) < (b) ? (a) : (b)) #else # define BSON_MIN(a,b) (((a) < (b)) ? (a) : (b)) #endif #ifdef MAX # define BSON_MAX MAX #elif defined(__cplusplus) # define BSON_MAX(a, b) ( (std::max)(a, b) ) #elif defined(_MSC_VER) # define BSON_MAX(a, b) ((a) > (b) ? (a) : (b)) #else # define BSON_MAX(a, b) (((a) > (b)) ? (a) : (b)) #endif #ifdef ABS # define BSON_ABS ABS #else # define BSON_ABS(a) (((a) < 0) ? ((a) * -1) : (a)) #endif #if defined(_MSC_VER) # define BSON_ALIGNED_BEGIN(_N) __declspec (align (_N)) # define BSON_ALIGNED_END(_N) #elif defined(__SUNPRO_C) # define BSON_ALIGNED_BEGIN(_N) # define BSON_ALIGNED_END(_N) __attribute__((aligned (_N))) #else # define BSON_ALIGNED_BEGIN(_N) # define BSON_ALIGNED_END(_N) __attribute__((aligned (_N))) #endif #define bson_str_empty(s) (!s[0]) #define bson_str_empty0(s) (!s || !s[0]) #ifndef BSON_DISABLE_ASSERT # define BSON_ASSERT(s) assert ((s)) #else # define BSON_ASSERT(s) #endif #define BSON_STATIC_ASSERT(s) BSON_STATIC_ASSERT_ (s, __LINE__) #define BSON_STATIC_ASSERT_JOIN(a, b) BSON_STATIC_ASSERT_JOIN2 (a, b) #define BSON_STATIC_ASSERT_JOIN2(a, b) a##b #define BSON_STATIC_ASSERT_(s, l) \ typedef char BSON_STATIC_ASSERT_JOIN (static_assert_test_, \ __LINE__)[(s) ? 
1 : -1] #if defined(__GNUC__) # define BSON_GNUC_CONST __attribute__((const)) # define BSON_GNUC_WARN_UNUSED_RESULT __attribute__((warn_unused_result)) #else # define BSON_GNUC_CONST # define BSON_GNUC_WARN_UNUSED_RESULT #endif #if BSON_GNUC_CHECK_VERSION(4, 0) && !defined(_WIN32) # define BSON_GNUC_NULL_TERMINATED __attribute__((sentinel)) # define BSON_GNUC_INTERNAL __attribute__((visibility ("hidden"))) #else # define BSON_GNUC_NULL_TERMINATED # define BSON_GNUC_INTERNAL #endif #if defined(__GNUC__) # define BSON_LIKELY(x) __builtin_expect (!!(x), 1) # define BSON_UNLIKELY(x) __builtin_expect (!!(x), 0) #else # define BSON_LIKELY(v) v # define BSON_UNLIKELY(v) v #endif #if defined(__clang__) # define BSON_GNUC_PRINTF(f, v) __attribute__((format (printf, f, v))) #elif BSON_GNUC_CHECK_VERSION(4, 4) # define BSON_GNUC_PRINTF(f, v) __attribute__((format (gnu_printf, f, v))) #else # define BSON_GNUC_PRINTF(f, v) #endif #if defined(__LP64__) || defined(_LP64) # define BSON_WORD_SIZE 64 #else # define BSON_WORD_SIZE 32 #endif #if defined(_MSC_VER) # define BSON_INLINE __inline #else # define BSON_INLINE __inline__ #endif #ifndef BSON_DISABLE_CHECKS # define bson_return_if_fail(test) \ do { \ if (!(test)) { \ fprintf (stderr, "%s(): precondition failed: %s\n", \ __FUNCTION__, #test); \ return; \ } \ } while (0) #else # define bson_return_if_fail(test) #endif #ifndef BSON_DISABLE_CHECKS # define bson_return_val_if_fail(test, val) \ do { \ if (!(test)) { \ fprintf (stderr, "%s(): precondition failed: %s\n", \ __FUNCTION__, #test); \ return (val); \ } \ } while (0) #else # define bson_return_val_if_fail(test, val) #endif #ifdef _MSC_VER # define BSON_ENSURE_ARRAY_PARAM_SIZE(_n) # define BSON_TYPEOF decltype #else # define BSON_ENSURE_ARRAY_PARAM_SIZE(_n) static (_n) # define BSON_TYPEOF typeof #endif #if BSON_GNUC_CHECK_VERSION(3, 1) # define BSON_GNUC_DEPRECATED __attribute__((__deprecated__)) #else # define BSON_GNUC_DEPRECATED #endif #if BSON_GNUC_CHECK_VERSION(4, 5) # define BSON_GNUC_DEPRECATED_FOR(f) __attribute__((deprecated("Use " #f " instead"))) #else # define BSON_GNUC_DEPRECATED_FOR(f) BSON_GNUC_DEPRECATED #endif #endif /* BSON_MACROS_H */ MongoDB-v1.2.2/bson/bson-md5.c000644 000765 000024 00000031753 12651754051 016171 0ustar00davidstaff000000 000000 /* Copyright (C) 1999, 2000, 2002 Aladdin Enterprises. All rights reserved. This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. L. Peter Deutsch ghost@aladdin.com */ /* $Id: md5.c,v 1.6 2002/04/13 19:20:28 lpd Exp $ */ /* Independent implementation of MD5 (RFC 1321). 
This code implements the MD5 Algorithm defined in RFC 1321, whose text is available at http://www.ietf.org/rfc/rfc1321.txt The code is derived from the text of the RFC, including the test suite (section A.5) but excluding the rest of Appendix A. It does not include any code or documentation that is identified in the RFC as being copyrighted. The original and principal author of md5.c is L. Peter Deutsch . Other authors are noted in the change history that follows (in reverse chronological order): 2002-04-13 lpd Clarified derivation from RFC 1321; now handles byte order either statically or dynamically; added missing #include in library. 2002-03-11 lpd Corrected argument list for main(), and added int return type, in test program and T value program. 2002-02-21 lpd Added missing #include in test program. 2000-07-03 lpd Patched to eliminate warnings about "constant is unsigned in ANSI C, signed in traditional"; made test program self-checking. 1999-11-04 lpd Edited comments slightly for automatic TOC extraction. 1999-10-18 lpd Fixed typo in header comment (ansi2knr rather than md5). 1999-05-03 lpd Original version. */ /* * The following MD5 implementation has been modified to use types as * specified in libbson. */ #include "bson-compat.h" #include #include "bson-md5.h" #undef BYTE_ORDER /* 1 = big-endian, -1 = little-endian, 0 = unknown */ #if BSON_BYTE_ORDER == BSON_BIG_ENDIAN # define BYTE_ORDER 1 #else # define BYTE_ORDER -1 #endif #define T_MASK ((uint32_t)~0) #define T1 /* 0xd76aa478 */ (T_MASK ^ 0x28955b87) #define T2 /* 0xe8c7b756 */ (T_MASK ^ 0x173848a9) #define T3 0x242070db #define T4 /* 0xc1bdceee */ (T_MASK ^ 0x3e423111) #define T5 /* 0xf57c0faf */ (T_MASK ^ 0x0a83f050) #define T6 0x4787c62a #define T7 /* 0xa8304613 */ (T_MASK ^ 0x57cfb9ec) #define T8 /* 0xfd469501 */ (T_MASK ^ 0x02b96afe) #define T9 0x698098d8 #define T10 /* 0x8b44f7af */ (T_MASK ^ 0x74bb0850) #define T11 /* 0xffff5bb1 */ (T_MASK ^ 0x0000a44e) #define T12 /* 0x895cd7be */ (T_MASK ^ 0x76a32841) #define T13 0x6b901122 #define T14 /* 0xfd987193 */ (T_MASK ^ 0x02678e6c) #define T15 /* 0xa679438e */ (T_MASK ^ 0x5986bc71) #define T16 0x49b40821 #define T17 /* 0xf61e2562 */ (T_MASK ^ 0x09e1da9d) #define T18 /* 0xc040b340 */ (T_MASK ^ 0x3fbf4cbf) #define T19 0x265e5a51 #define T20 /* 0xe9b6c7aa */ (T_MASK ^ 0x16493855) #define T21 /* 0xd62f105d */ (T_MASK ^ 0x29d0efa2) #define T22 0x02441453 #define T23 /* 0xd8a1e681 */ (T_MASK ^ 0x275e197e) #define T24 /* 0xe7d3fbc8 */ (T_MASK ^ 0x182c0437) #define T25 0x21e1cde6 #define T26 /* 0xc33707d6 */ (T_MASK ^ 0x3cc8f829) #define T27 /* 0xf4d50d87 */ (T_MASK ^ 0x0b2af278) #define T28 0x455a14ed #define T29 /* 0xa9e3e905 */ (T_MASK ^ 0x561c16fa) #define T30 /* 0xfcefa3f8 */ (T_MASK ^ 0x03105c07) #define T31 0x676f02d9 #define T32 /* 0x8d2a4c8a */ (T_MASK ^ 0x72d5b375) #define T33 /* 0xfffa3942 */ (T_MASK ^ 0x0005c6bd) #define T34 /* 0x8771f681 */ (T_MASK ^ 0x788e097e) #define T35 0x6d9d6122 #define T36 /* 0xfde5380c */ (T_MASK ^ 0x021ac7f3) #define T37 /* 0xa4beea44 */ (T_MASK ^ 0x5b4115bb) #define T38 0x4bdecfa9 #define T39 /* 0xf6bb4b60 */ (T_MASK ^ 0x0944b49f) #define T40 /* 0xbebfbc70 */ (T_MASK ^ 0x4140438f) #define T41 0x289b7ec6 #define T42 /* 0xeaa127fa */ (T_MASK ^ 0x155ed805) #define T43 /* 0xd4ef3085 */ (T_MASK ^ 0x2b10cf7a) #define T44 0x04881d05 #define T45 /* 0xd9d4d039 */ (T_MASK ^ 0x262b2fc6) #define T46 /* 0xe6db99e5 */ (T_MASK ^ 0x1924661a) #define T47 0x1fa27cf8 #define T48 /* 0xc4ac5665 */ (T_MASK ^ 0x3b53a99a) #define T49 /* 0xf4292244 */ (T_MASK ^ 
0x0bd6ddbb) #define T50 0x432aff97 #define T51 /* 0xab9423a7 */ (T_MASK ^ 0x546bdc58) #define T52 /* 0xfc93a039 */ (T_MASK ^ 0x036c5fc6) #define T53 0x655b59c3 #define T54 /* 0x8f0ccc92 */ (T_MASK ^ 0x70f3336d) #define T55 /* 0xffeff47d */ (T_MASK ^ 0x00100b82) #define T56 /* 0x85845dd1 */ (T_MASK ^ 0x7a7ba22e) #define T57 0x6fa87e4f #define T58 /* 0xfe2ce6e0 */ (T_MASK ^ 0x01d3191f) #define T59 /* 0xa3014314 */ (T_MASK ^ 0x5cfebceb) #define T60 0x4e0811a1 #define T61 /* 0xf7537e82 */ (T_MASK ^ 0x08ac817d) #define T62 /* 0xbd3af235 */ (T_MASK ^ 0x42c50dca) #define T63 0x2ad7d2bb #define T64 /* 0xeb86d391 */ (T_MASK ^ 0x14792c6e) static void bson_md5_process (bson_md5_t *md5, const uint8_t *data) { uint32_t a = md5->abcd[0]; uint32_t b = md5->abcd[1]; uint32_t c = md5->abcd[2]; uint32_t d = md5->abcd[3]; uint32_t t; #if BYTE_ORDER > 0 /* Define storage only for big-endian CPUs. */ uint32_t X[16]; #else /* Define storage for little-endian or both types of CPUs. */ uint32_t xbuf[16]; const uint32_t *X; #endif { #if BYTE_ORDER == 0 /* * Determine dynamically whether this is a big-endian or * little-endian machine, since we can use a more efficient * algorithm on the latter. */ static const int w = 1; if (*((const uint8_t *)&w)) /* dynamic little-endian */ #endif #if BYTE_ORDER <= 0 /* little-endian */ { /* * On little-endian machines, we can process properly aligned * data without copying it. */ if (!((data - (const uint8_t *)0) & 3)) { /* data are properly aligned */ #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wcast-align" #endif X = (const uint32_t *)data; #ifdef __clang__ #pragma clang diagnostic pop #endif } else { /* not aligned */ memcpy(xbuf, data, sizeof (xbuf)); X = xbuf; } } #endif #if BYTE_ORDER == 0 else /* dynamic big-endian */ #endif #if BYTE_ORDER >= 0 /* big-endian */ { /* * On big-endian machines, we must arrange the bytes in the * right order. */ const uint8_t *xp = data; int i; # if BYTE_ORDER == 0 X = xbuf; /* (dynamic only) */ # else # define xbuf X /* (static only) */ # endif for (i = 0; i < 16; ++i, xp += 4) xbuf[i] = xp[0] + (xp[1] << 8) + (xp[2] << 16) + (xp[3] << 24); } #endif } #define ROTATE_LEFT(x, n) (((x) << (n)) | ((x) >> (32 - (n)))) /* Round 1. */ /* Let [abcd k s i] denote the operation a = b + ((a + F(b,c,d) + X[k] + T[i]) <<< s). */ #define F(x, y, z) (((x) & (y)) | (~(x) & (z))) #define SET(a, b, c, d, k, s, Ti)\ t = a + F(b,c,d) + X[k] + Ti;\ a = ROTATE_LEFT(t, s) + b /* Do the following 16 operations. */ SET(a, b, c, d, 0, 7, T1); SET(d, a, b, c, 1, 12, T2); SET(c, d, a, b, 2, 17, T3); SET(b, c, d, a, 3, 22, T4); SET(a, b, c, d, 4, 7, T5); SET(d, a, b, c, 5, 12, T6); SET(c, d, a, b, 6, 17, T7); SET(b, c, d, a, 7, 22, T8); SET(a, b, c, d, 8, 7, T9); SET(d, a, b, c, 9, 12, T10); SET(c, d, a, b, 10, 17, T11); SET(b, c, d, a, 11, 22, T12); SET(a, b, c, d, 12, 7, T13); SET(d, a, b, c, 13, 12, T14); SET(c, d, a, b, 14, 17, T15); SET(b, c, d, a, 15, 22, T16); #undef SET /* Round 2. */ /* Let [abcd k s i] denote the operation a = b + ((a + G(b,c,d) + X[k] + T[i]) <<< s). */ #define G(x, y, z) (((x) & (z)) | ((y) & ~(z))) #define SET(a, b, c, d, k, s, Ti)\ t = a + G(b,c,d) + X[k] + Ti;\ a = ROTATE_LEFT(t, s) + b /* Do the following 16 operations. 
*/ SET(a, b, c, d, 1, 5, T17); SET(d, a, b, c, 6, 9, T18); SET(c, d, a, b, 11, 14, T19); SET(b, c, d, a, 0, 20, T20); SET(a, b, c, d, 5, 5, T21); SET(d, a, b, c, 10, 9, T22); SET(c, d, a, b, 15, 14, T23); SET(b, c, d, a, 4, 20, T24); SET(a, b, c, d, 9, 5, T25); SET(d, a, b, c, 14, 9, T26); SET(c, d, a, b, 3, 14, T27); SET(b, c, d, a, 8, 20, T28); SET(a, b, c, d, 13, 5, T29); SET(d, a, b, c, 2, 9, T30); SET(c, d, a, b, 7, 14, T31); SET(b, c, d, a, 12, 20, T32); #undef SET /* Round 3. */ /* Let [abcd k s t] denote the operation a = b + ((a + H(b,c,d) + X[k] + T[i]) <<< s). */ #define H(x, y, z) ((x) ^ (y) ^ (z)) #define SET(a, b, c, d, k, s, Ti)\ t = a + H(b,c,d) + X[k] + Ti;\ a = ROTATE_LEFT(t, s) + b /* Do the following 16 operations. */ SET(a, b, c, d, 5, 4, T33); SET(d, a, b, c, 8, 11, T34); SET(c, d, a, b, 11, 16, T35); SET(b, c, d, a, 14, 23, T36); SET(a, b, c, d, 1, 4, T37); SET(d, a, b, c, 4, 11, T38); SET(c, d, a, b, 7, 16, T39); SET(b, c, d, a, 10, 23, T40); SET(a, b, c, d, 13, 4, T41); SET(d, a, b, c, 0, 11, T42); SET(c, d, a, b, 3, 16, T43); SET(b, c, d, a, 6, 23, T44); SET(a, b, c, d, 9, 4, T45); SET(d, a, b, c, 12, 11, T46); SET(c, d, a, b, 15, 16, T47); SET(b, c, d, a, 2, 23, T48); #undef SET /* Round 4. */ /* Let [abcd k s t] denote the operation a = b + ((a + I(b,c,d) + X[k] + T[i]) <<< s). */ #define I(x, y, z) ((y) ^ ((x) | ~(z))) #define SET(a, b, c, d, k, s, Ti)\ t = a + I(b,c,d) + X[k] + Ti;\ a = ROTATE_LEFT(t, s) + b /* Do the following 16 operations. */ SET(a, b, c, d, 0, 6, T49); SET(d, a, b, c, 7, 10, T50); SET(c, d, a, b, 14, 15, T51); SET(b, c, d, a, 5, 21, T52); SET(a, b, c, d, 12, 6, T53); SET(d, a, b, c, 3, 10, T54); SET(c, d, a, b, 10, 15, T55); SET(b, c, d, a, 1, 21, T56); SET(a, b, c, d, 8, 6, T57); SET(d, a, b, c, 15, 10, T58); SET(c, d, a, b, 6, 15, T59); SET(b, c, d, a, 13, 21, T60); SET(a, b, c, d, 4, 6, T61); SET(d, a, b, c, 11, 10, T62); SET(c, d, a, b, 2, 15, T63); SET(b, c, d, a, 9, 21, T64); #undef SET /* Then perform the following additions. (That is increment each of the four registers by the value it had before this block was started.) */ md5->abcd[0] += a; md5->abcd[1] += b; md5->abcd[2] += c; md5->abcd[3] += d; } void bson_md5_init (bson_md5_t *pms) { pms->count[0] = pms->count[1] = 0; pms->abcd[0] = 0x67452301; pms->abcd[1] = /*0xefcdab89*/ T_MASK ^ 0x10325476; pms->abcd[2] = /*0x98badcfe*/ T_MASK ^ 0x67452301; pms->abcd[3] = 0x10325476; } void bson_md5_append (bson_md5_t *pms, const uint8_t *data, uint32_t nbytes) { const uint8_t *p = data; int left = nbytes; int offset = (pms->count[0] >> 3) & 63; uint32_t nbits = (uint32_t)(nbytes << 3); if (nbytes <= 0) return; /* Update the message length. */ pms->count[1] += nbytes >> 29; pms->count[0] += nbits; if (pms->count[0] < nbits) pms->count[1]++; /* Process an initial partial block. */ if (offset) { int copy = (offset + nbytes > 64 ? 64 - offset : nbytes); memcpy(pms->buf + offset, p, copy); if (offset + copy < 64) return; p += copy; left -= copy; bson_md5_process(pms, pms->buf); } /* Process full blocks. */ for (; left >= 64; p += 64, left -= 64) bson_md5_process(pms, p); /* Process a final partial block. */ if (left) memcpy(pms->buf, p, left); } void bson_md5_finish (bson_md5_t *pms, uint8_t digest[16]) { static const uint8_t pad[64] = { 0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; uint8_t data[8]; int i; /* Save the length before padding. 
*/ for (i = 0; i < 8; ++i) data[i] = (uint8_t)(pms->count[i >> 2] >> ((i & 3) << 3)); /* Pad to 56 bytes mod 64. */ bson_md5_append(pms, pad, ((55 - (pms->count[0] >> 3)) & 63) + 1); /* Append the length. */ bson_md5_append(pms, data, sizeof (data)); for (i = 0; i < 16; ++i) digest[i] = (uint8_t)(pms->abcd[i >> 2] >> ((i & 3) << 3)); } MongoDB-v1.2.2/bson/bson-md5.h000644 000765 000024 00000005530 12651754051 016170 0ustar00davidstaff000000 000000 /* Copyright (C) 1999, 2002 Aladdin Enterprises. All rights reserved. This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. L. Peter Deutsch ghost@aladdin.com */ /* $Id: md5.h,v 1.4 2002/04/13 19:20:28 lpd Exp $ */ /* Independent implementation of MD5 (RFC 1321). This code implements the MD5 Algorithm defined in RFC 1321, whose text is available at http://www.ietf.org/rfc/rfc1321.txt The code is derived from the text of the RFC, including the test suite (section A.5) but excluding the rest of Appendix A. It does not include any code or documentation that is identified in the RFC as being copyrighted. The original and principal author of md5.h is L. Peter Deutsch . Other authors are noted in the change history that follows (in reverse chronological order): 2002-04-13 lpd Removed support for non-ANSI compilers; removed references to Ghostscript; clarified derivation from RFC 1321; now handles byte order either statically or dynamically. 1999-11-04 lpd Edited comments slightly for automatic TOC extraction. 1999-10-18 lpd Fixed typo in header comment (ansi2knr rather than md5); added conditionalization for C++ compilation from Martin Purschke . 1999-05-03 lpd Original version. */ /* * The following MD5 implementation has been modified to use types as * specified in libbson. */ #ifndef BSON_MD5_H #define BSON_MD5_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include "bson-endian.h" BSON_BEGIN_DECLS typedef struct { uint32_t count[2]; /* message length in bits, lsw first */ uint32_t abcd[4]; /* digest buffer */ uint8_t buf[64]; /* accumulate block */ } bson_md5_t; void bson_md5_init (bson_md5_t *pms); void bson_md5_append (bson_md5_t *pms, const uint8_t *data, uint32_t nbytes); void bson_md5_finish (bson_md5_t *pms, uint8_t digest[16]); BSON_END_DECLS #endif /* BSON_MD5_H */ MongoDB-v1.2.2/bson/bson-memory.c000644 000765 000024 00000014631 12651754051 017010 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include #include "bson-atomic.h" #include "bson-config.h" #include "bson-memory.h" static bson_mem_vtable_t gMemVtable = { malloc, calloc, #ifdef __APPLE__ reallocf, #else realloc, #endif free, }; /* *-------------------------------------------------------------------------- * * bson_malloc -- * * Allocates @num_bytes of memory and returns a pointer to it. If * malloc failed to allocate the memory, abort() is called. * * Libbson does not try to handle OOM conditions as it is beyond the * scope of this library to handle so appropriately. * * Parameters: * @num_bytes: The number of bytes to allocate. * * Returns: * A pointer if successful; otherwise abort() is called and this * function will never return. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void * bson_malloc (size_t num_bytes) /* IN */ { void *mem; if (!(mem = gMemVtable.malloc (num_bytes))) { abort (); } return mem; } /* *-------------------------------------------------------------------------- * * bson_malloc0 -- * * Like bson_malloc() except the memory is zeroed first. This is * similar to calloc() except that abort() is called in case of * failure to allocate memory. * * Parameters: * @num_bytes: The number of bytes to allocate. * * Returns: * A pointer if successful; otherwise abort() is called and this * function will never return. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void * bson_malloc0 (size_t num_bytes) /* IN */ { void *mem = NULL; if (BSON_LIKELY (num_bytes)) { if (BSON_UNLIKELY (!(mem = gMemVtable.calloc (1, num_bytes)))) { abort (); } } return mem; } /* *-------------------------------------------------------------------------- * * bson_realloc -- * * This function behaves similar to realloc() except that if there is * a failure abort() is called. * * Parameters: * @mem: The memory to realloc, or NULL. * @num_bytes: The size of the new allocation or 0 to free. * * Returns: * The new allocation if successful; otherwise abort() is called and * this function never returns. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void * bson_realloc (void *mem, /* IN */ size_t num_bytes) /* IN */ { /* * Not all platforms are guaranteed to free() the memory if a call to * realloc() with a size of zero occurs. Windows, Linux, and FreeBSD do, * however, OS X does not. */ if (BSON_UNLIKELY (num_bytes == 0)) { gMemVtable.free (mem); return NULL; } mem = gMemVtable.realloc (mem, num_bytes); if (BSON_UNLIKELY (!mem)) { abort (); } return mem; } /* *-------------------------------------------------------------------------- * * bson_realloc_ctx -- * * This wraps bson_realloc and provides a compatible api for similar * functions with a context * * Parameters: * @mem: The memory to realloc, or NULL. * @num_bytes: The size of the new allocation or 0 to free. * @ctx: Ignored * * Returns: * The new allocation if successful; otherwise abort() is called and * this function never returns. * * Side effects: * None. 
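 *
 * Example (sketch): because this signature matches the bson_realloc_func
 * typedef in bson-memory.h, bson_realloc_ctx() can be passed wherever a
 * context-aware reallocator is expected, even when no context is needed.
 * The variable names are illustrative only.
 *
 *       bson_realloc_func fn = bson_realloc_ctx;
 *       uint8_t *buf = bson_malloc0 (64);
 *
 *       buf = fn (buf, 128, NULL);   // grow the buffer; @ctx is ignored
 *       fn (buf, 0, NULL);           // a size of 0 frees the buffer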
 *
 *--------------------------------------------------------------------------
 */

void *
bson_realloc_ctx (void   *mem,       /* IN */
                  size_t  num_bytes, /* IN */
                  void   *ctx)       /* IN */
{
   return bson_realloc (mem, num_bytes);
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_free --
 *
 *       Frees @mem using the underlying allocator.
 *
 *       Currently, this only calls free() directly, but that is subject to
 *       change.
 *
 * Parameters:
 *       @mem: An allocation to free.
 *
 * Returns:
 *       None.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

void
bson_free (void *mem) /* IN */
{
   gMemVtable.free (mem);
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_zero_free --
 *
 *       Frees @mem using the underlying allocator. @size bytes of @mem will
 *       be zeroed before freeing the memory. This is useful in scenarios
 *       where @mem contains passwords or other sensitive information.
 *
 * Parameters:
 *       @mem: An allocation to free.
 *       @size: The number of bytes in @mem.
 *
 * Returns:
 *       None.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

void
bson_zero_free (void   *mem,  /* IN */
                size_t  size) /* IN */
{
   if (BSON_LIKELY (mem)) {
      memset (mem, 0, size);
      gMemVtable.free (mem);
   }
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_mem_set_vtable --
 *
 *       This function will change our allocation vtable.
 *
 *       It is imperative that this is called at the beginning of the
 *       process before any memory has been allocated by the default
 *       allocator.
 *
 * Returns:
 *       None.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

void
bson_mem_set_vtable (const bson_mem_vtable_t *vtable)
{
   bson_return_if_fail (vtable);

   if (!vtable->malloc || !vtable->calloc || !vtable->realloc ||
       !vtable->free) {
      fprintf (stderr, "Failure to install BSON vtable, "
                       "missing functions.\n");
      return;
   }

   gMemVtable = *vtable;
}
MongoDB-v1.2.2/bson/bson-memory.h000644 000765 000024 00000003466 12651754051 017015 0ustar00davidstaff000000 000000 /*
 * Copyright 2013 MongoDB, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef BSON_MEMORY_H
#define BSON_MEMORY_H

#if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION)
# error "Only can be included directly."
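
/*
 * Sketch of replacing the global allocator through bson_mem_set_vtable()
 * (declared below, implemented in bson-memory.c above). All four function
 * slots must be non-NULL or the vtable is rejected, and it must be installed
 * before any allocation is made. The counting wrappers are illustrative only
 * and assume <stdlib.h>.
 *
 *   static size_t live_allocs;
 *
 *   static void *counting_malloc (size_t n)            { live_allocs++; return malloc (n); }
 *   static void *counting_calloc (size_t n, size_t sz) { live_allocs++; return calloc (n, sz); }
 *   static void *counting_realloc (void *p, size_t n)  { return realloc (p, n); }
 *   static void  counting_free (void *p)               { if (p) live_allocs--; free (p); }
 *
 *   bson_mem_vtable_t vtable = {
 *      counting_malloc, counting_calloc, counting_realloc, counting_free, { NULL }
 *   };
 *
 *   bson_mem_set_vtable (&vtable);
 */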
#endif #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS typedef void *(*bson_realloc_func) (void *mem, size_t num_bytes, void *ctx); typedef struct _bson_mem_vtable_t { void *(*malloc) (size_t num_bytes); void *(*calloc) (size_t n_members, size_t num_bytes); void *(*realloc) (void *mem, size_t num_bytes); void (*free) (void *mem); void *padding [4]; } bson_mem_vtable_t; void bson_mem_set_vtable (const bson_mem_vtable_t *vtable); void *bson_malloc (size_t num_bytes); void *bson_malloc0 (size_t num_bytes); void *bson_realloc (void *mem, size_t num_bytes); void *bson_realloc_ctx (void *mem, size_t num_bytes, void *ctx); void bson_free (void *mem); void bson_zero_free (void *mem, size_t size); BSON_END_DECLS #endif /* BSON_MEMORY_H */ MongoDB-v1.2.2/bson/bson-oid.c000644 000765 000024 00000034633 12651754051 016257 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-compat.h" #include #include #include #include #include "bson-context-private.h" #include "bson-md5.h" #include "bson-oid.h" #include "bson-string.h" /* * This table contains an array of two character pairs for every possible * uint8_t. It is used as a lookup table when encoding a bson_oid_t * to hex formatted ASCII. Performing two characters at a time roughly * reduces the number of operations by one-half. 
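 *
 * For illustration, the table lets
 *
 *   dst[i] = gHexCharPairs[id[i]];
 *
 * replace the equivalent per-nibble form
 *
 *   str[2 * i]     = "0123456789abcdef"[id[i] >> 4];
 *   str[2 * i + 1] = "0123456789abcdef"[id[i] & 0x0f];
 *
 * where dst is a uint16_t view of the output string, as done in
 * bson_oid_to_string() below.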
*/ static const uint16_t gHexCharPairs[] = { #if BSON_BYTE_ORDER == BSON_BIG_ENDIAN 12336, 12337, 12338, 12339, 12340, 12341, 12342, 12343, 12344, 12345, 12385, 12386, 12387, 12388, 12389, 12390, 12592, 12593, 12594, 12595, 12596, 12597, 12598, 12599, 12600, 12601, 12641, 12642, 12643, 12644, 12645, 12646, 12848, 12849, 12850, 12851, 12852, 12853, 12854, 12855, 12856, 12857, 12897, 12898, 12899, 12900, 12901, 12902, 13104, 13105, 13106, 13107, 13108, 13109, 13110, 13111, 13112, 13113, 13153, 13154, 13155, 13156, 13157, 13158, 13360, 13361, 13362, 13363, 13364, 13365, 13366, 13367, 13368, 13369, 13409, 13410, 13411, 13412, 13413, 13414, 13616, 13617, 13618, 13619, 13620, 13621, 13622, 13623, 13624, 13625, 13665, 13666, 13667, 13668, 13669, 13670, 13872, 13873, 13874, 13875, 13876, 13877, 13878, 13879, 13880, 13881, 13921, 13922, 13923, 13924, 13925, 13926, 14128, 14129, 14130, 14131, 14132, 14133, 14134, 14135, 14136, 14137, 14177, 14178, 14179, 14180, 14181, 14182, 14384, 14385, 14386, 14387, 14388, 14389, 14390, 14391, 14392, 14393, 14433, 14434, 14435, 14436, 14437, 14438, 14640, 14641, 14642, 14643, 14644, 14645, 14646, 14647, 14648, 14649, 14689, 14690, 14691, 14692, 14693, 14694, 24880, 24881, 24882, 24883, 24884, 24885, 24886, 24887, 24888, 24889, 24929, 24930, 24931, 24932, 24933, 24934, 25136, 25137, 25138, 25139, 25140, 25141, 25142, 25143, 25144, 25145, 25185, 25186, 25187, 25188, 25189, 25190, 25392, 25393, 25394, 25395, 25396, 25397, 25398, 25399, 25400, 25401, 25441, 25442, 25443, 25444, 25445, 25446, 25648, 25649, 25650, 25651, 25652, 25653, 25654, 25655, 25656, 25657, 25697, 25698, 25699, 25700, 25701, 25702, 25904, 25905, 25906, 25907, 25908, 25909, 25910, 25911, 25912, 25913, 25953, 25954, 25955, 25956, 25957, 25958, 26160, 26161, 26162, 26163, 26164, 26165, 26166, 26167, 26168, 26169, 26209, 26210, 26211, 26212, 26213, 26214 #else 12336, 12592, 12848, 13104, 13360, 13616, 13872, 14128, 14384, 14640, 24880, 25136, 25392, 25648, 25904, 26160, 12337, 12593, 12849, 13105, 13361, 13617, 13873, 14129, 14385, 14641, 24881, 25137, 25393, 25649, 25905, 26161, 12338, 12594, 12850, 13106, 13362, 13618, 13874, 14130, 14386, 14642, 24882, 25138, 25394, 25650, 25906, 26162, 12339, 12595, 12851, 13107, 13363, 13619, 13875, 14131, 14387, 14643, 24883, 25139, 25395, 25651, 25907, 26163, 12340, 12596, 12852, 13108, 13364, 13620, 13876, 14132, 14388, 14644, 24884, 25140, 25396, 25652, 25908, 26164, 12341, 12597, 12853, 13109, 13365, 13621, 13877, 14133, 14389, 14645, 24885, 25141, 25397, 25653, 25909, 26165, 12342, 12598, 12854, 13110, 13366, 13622, 13878, 14134, 14390, 14646, 24886, 25142, 25398, 25654, 25910, 26166, 12343, 12599, 12855, 13111, 13367, 13623, 13879, 14135, 14391, 14647, 24887, 25143, 25399, 25655, 25911, 26167, 12344, 12600, 12856, 13112, 13368, 13624, 13880, 14136, 14392, 14648, 24888, 25144, 25400, 25656, 25912, 26168, 12345, 12601, 12857, 13113, 13369, 13625, 13881, 14137, 14393, 14649, 24889, 25145, 25401, 25657, 25913, 26169, 12385, 12641, 12897, 13153, 13409, 13665, 13921, 14177, 14433, 14689, 24929, 25185, 25441, 25697, 25953, 26209, 12386, 12642, 12898, 13154, 13410, 13666, 13922, 14178, 14434, 14690, 24930, 25186, 25442, 25698, 25954, 26210, 12387, 12643, 12899, 13155, 13411, 13667, 13923, 14179, 14435, 14691, 24931, 25187, 25443, 25699, 25955, 26211, 12388, 12644, 12900, 13156, 13412, 13668, 13924, 14180, 14436, 14692, 24932, 25188, 25444, 25700, 25956, 26212, 12389, 12645, 12901, 13157, 13413, 13669, 13925, 14181, 14437, 14693, 24933, 25189, 25445, 25701, 25957, 
26213, 12390, 12646, 12902, 13158, 13414, 13670, 13926, 14182, 14438, 14694, 24934, 25190, 25446, 25702, 25958, 26214 #endif }; /* *-------------------------------------------------------------------------- * * bson_oid_init_sequence -- * * Initializes @oid with the next oid in the sequence. The first 4 * bytes contain the current time and the following 8 contain a 64-bit * integer in big-endian format. * * The bson_oid_t generated by this function is not guaranteed to be * globally unique. Only unique within this context. It is however, * guaranteed to be sequential. * * Returns: * None. * * Side effects: * @oid is initialized. * *-------------------------------------------------------------------------- */ void bson_oid_init_sequence (bson_oid_t *oid, /* OUT */ bson_context_t *context) /* IN */ { uint32_t now = (uint32_t)(time (NULL)); if (!context) { context = bson_context_get_default (); } now = BSON_UINT32_TO_BE (now); memcpy (&oid->bytes[0], &now, sizeof (now)); context->oid_get_seq64 (context, oid); } /* *-------------------------------------------------------------------------- * * bson_oid_init -- * * Generates bytes for a new bson_oid_t and stores them in @oid. The * bytes will be generated according to the specification and includes * the current time, first 3 bytes of MD5(hostname), pid (or tid), and * monotonic counter. * * The bson_oid_t generated by this function is not guaranteed to be * globally unique. Only unique within this context. It is however, * guaranteed to be sequential. * * Returns: * None. * * Side effects: * @oid is initialized. * *-------------------------------------------------------------------------- */ void bson_oid_init (bson_oid_t *oid, /* OUT */ bson_context_t *context) /* IN */ { uint32_t now = (uint32_t)(time (NULL)); bson_return_if_fail (oid); if (!context) { context = bson_context_get_default (); } now = BSON_UINT32_TO_BE (now); memcpy (&oid->bytes[0], &now, sizeof (now)); context->oid_get_host (context, oid); context->oid_get_pid (context, oid); context->oid_get_seq32 (context, oid); } /** * bson_oid_init_from_data: * @oid: A bson_oid_t to initialize. * @bytes: A 12-byte buffer to copy into @oid. * */ /* *-------------------------------------------------------------------------- * * bson_oid_init_from_data -- * * Initializes an @oid from @data. @data MUST be a buffer of at least * 12 bytes. This method is analagous to memcpy()'ing data into @oid. * * Returns: * None. * * Side effects: * @oid is initialized. * *-------------------------------------------------------------------------- */ void bson_oid_init_from_data (bson_oid_t *oid, /* OUT */ const uint8_t *data) /* IN */ { bson_return_if_fail (oid); bson_return_if_fail (data); memcpy (oid, data, 12); } /* *-------------------------------------------------------------------------- * * bson_oid_init_from_string -- * * Parses @str containing hex formatted bytes of an object id and * places the bytes in @oid. * * Parameters: * @oid: A bson_oid_t * @str: A string containing at least 24 characters. * * Returns: * None. * * Side effects: * @oid is initialized. * *-------------------------------------------------------------------------- */ void bson_oid_init_from_string (bson_oid_t *oid, /* OUT */ const char *str) /* IN */ { bson_return_if_fail (oid); bson_return_if_fail (str); bson_oid_init_from_string_unsafe (oid, str); } /* *-------------------------------------------------------------------------- * * bson_oid_get_time_t -- * * Fetches the time for which @oid was created. * * Returns: * A time_t. 
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

time_t
bson_oid_get_time_t (const bson_oid_t *oid) /* IN */
{
   bson_return_val_if_fail (oid, 0);

   return bson_oid_get_time_t_unsafe (oid);
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_oid_to_string --
 *
 *       Formats a bson_oid_t into a string. @str must contain enough bytes
 *       for the resulting string which is 25 bytes with a terminating
 *       NUL-byte.
 *
 * Parameters:
 *       @oid: A bson_oid_t.
 *       @str: A location to store the resulting string.
 *
 * Returns:
 *       None.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

void
bson_oid_to_string (const bson_oid_t *oid,                                   /* IN */
                    char              str[BSON_ENSURE_ARRAY_PARAM_SIZE(25)]) /* OUT */
{
#if !defined(__i386__) && !defined(__x86_64__)
   bson_return_if_fail (oid);
   bson_return_if_fail (str);

   bson_snprintf (str, 25,
                  "%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x",
                  oid->bytes[0], oid->bytes[1], oid->bytes[2], oid->bytes[3],
                  oid->bytes[4], oid->bytes[5], oid->bytes[6], oid->bytes[7],
                  oid->bytes[8], oid->bytes[9], oid->bytes[10], oid->bytes[11]);
#else
   uint16_t *dst;
   uint8_t *id = (uint8_t *)oid;

   bson_return_if_fail (oid);
   bson_return_if_fail (str);

   dst = (uint16_t *)(void *)str;
   dst[0] = gHexCharPairs[id[0]];
   dst[1] = gHexCharPairs[id[1]];
   dst[2] = gHexCharPairs[id[2]];
   dst[3] = gHexCharPairs[id[3]];
   dst[4] = gHexCharPairs[id[4]];
   dst[5] = gHexCharPairs[id[5]];
   dst[6] = gHexCharPairs[id[6]];
   dst[7] = gHexCharPairs[id[7]];
   dst[8] = gHexCharPairs[id[8]];
   dst[9] = gHexCharPairs[id[9]];
   dst[10] = gHexCharPairs[id[10]];
   dst[11] = gHexCharPairs[id[11]];
   str[24] = '\0';
#endif
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_oid_hash --
 *
 *       Hashes the bytes of the provided bson_oid_t using DJB hash. This
 *       allows bson_oid_t to be used as keys in a hash table.
 *
 * Returns:
 *       A hash value corresponding to @oid.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

uint32_t
bson_oid_hash (const bson_oid_t *oid) /* IN */
{
   bson_return_val_if_fail (oid, 5381);

   return bson_oid_hash_unsafe (oid);
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_oid_compare --
 *
 *       A qsort() style compare function that will return less than zero if
 *       @oid1 is less than @oid2, zero if they are the same, and greater
 *       than zero if @oid1 is greater than @oid2.
 *
 * Returns:
 *       A qsort() style compare integer.
 *
 * Side effects:
 *       None.
 *
 *--------------------------------------------------------------------------
 */

int
bson_oid_compare (const bson_oid_t *oid1, /* IN */
                  const bson_oid_t *oid2) /* IN */
{
   bson_return_val_if_fail (oid1, 0);
   bson_return_val_if_fail (oid2, 0);

   return bson_oid_compare_unsafe (oid1, oid2);
}


/*
 *--------------------------------------------------------------------------
 *
 * bson_oid_equal --
 *
 *       Compares for equality of @oid1 and @oid2. If they are equal, then
 *       true is returned, otherwise false.
 *
 * Returns:
 *       A boolean indicating the equality of @oid1 and @oid2.
 *
 * Side effects:
 *       None.
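 *
 * Example (sketch): generating two OIDs from the default context and
 * comparing them; bson_oid_init(), bson_oid_to_string() and bson_oid_compare()
 * are used as defined in this file, and the variable names are illustrative
 * only.
 *
 *       bson_oid_t a, b;
 *       char str[25];
 *
 *       bson_oid_init (&a, NULL);      // NULL selects the default context
 *       bson_oid_init (&b, NULL);
 *
 *       bson_oid_to_string (&a, str);
 *       printf ("generated %s\n", str);
 *
 *       if (!bson_oid_equal (&a, &b)) {
 *          printf ("ordering: %d\n", bson_oid_compare (&a, &b));
 *       }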
* *-------------------------------------------------------------------------- */ bool bson_oid_equal (const bson_oid_t *oid1, /* IN */ const bson_oid_t *oid2) /* IN */ { bson_return_val_if_fail (oid1, false); bson_return_val_if_fail (oid2, false); return bson_oid_equal_unsafe (oid1, oid2); } /* *-------------------------------------------------------------------------- * * bson_oid_copy -- * * Copies the contents of @src to @dst. * * Parameters: * @src: A bson_oid_t to copy from. * @dst: A bson_oid_t to copy to. * * Returns: * None. * * Side effects: * @dst will contain a copy of the data in @src. * *-------------------------------------------------------------------------- */ void bson_oid_copy (const bson_oid_t *src, /* IN */ bson_oid_t *dst) /* OUT */ { bson_return_if_fail (src); bson_return_if_fail (dst); bson_oid_copy_unsafe (src, dst); } /* *-------------------------------------------------------------------------- * * bson_oid_is_valid -- * * Validates that @str is a valid OID string. @length MUST be 24, but * is provided as a parameter to simplify calling code. * * Parameters: * @str: A string to validate. * @length: The length of @str. * * Returns: * true if @str can be passed to bson_oid_init_from_string(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_oid_is_valid (const char *str, /* IN */ size_t length) /* IN */ { size_t i; bson_return_val_if_fail (str, false); if ((length == 25) && (str [24] == '\0')) { length = 24; } if (length == 24) { for (i = 0; i < length; i++) { switch (str[i]) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case 'a': case 'b': case 'c': case 'd': case 'e': case 'f': break; default: return false; } } return true; } return false; } MongoDB-v1.2.2/bson/bson-oid.h000644 000765 000024 00000014670 12651754051 016263 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_OID_H #define BSON_OID_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include #include "bson-context.h" #include "bson-macros.h" #include "bson-types.h" #include "bson-endian.h" BSON_BEGIN_DECLS int bson_oid_compare (const bson_oid_t *oid1, const bson_oid_t *oid2); void bson_oid_copy (const bson_oid_t *src, bson_oid_t *dst); bool bson_oid_equal (const bson_oid_t *oid1, const bson_oid_t *oid2); bool bson_oid_is_valid (const char *str, size_t length); time_t bson_oid_get_time_t (const bson_oid_t *oid); uint32_t bson_oid_hash (const bson_oid_t *oid); void bson_oid_init (bson_oid_t *oid, bson_context_t *context); void bson_oid_init_from_data (bson_oid_t *oid, const uint8_t *data); void bson_oid_init_from_string (bson_oid_t *oid, const char *str); void bson_oid_init_sequence (bson_oid_t *oid, bson_context_t *context); void bson_oid_to_string (const bson_oid_t *oid, char str[25]); /** * bson_oid_compare_unsafe: * @oid1: A bson_oid_t. * @oid2: A bson_oid_t. 
* * Performs a qsort() style comparison between @oid1 and @oid2. * * This function is meant to be as fast as possible and therefore performs * no argument validation. That is the callers responsibility. * * Returns: An integer < 0 if @oid1 is less than @oid2. Zero if they are equal. * An integer > 0 if @oid1 is greater than @oid2. */ static BSON_INLINE int bson_oid_compare_unsafe (const bson_oid_t *oid1, const bson_oid_t *oid2) { return memcmp (oid1, oid2, sizeof *oid1); } /** * bson_oid_equal_unsafe: * @oid1: A bson_oid_t. * @oid2: A bson_oid_t. * * Checks the equality of @oid1 and @oid2. * * This function is meant to be as fast as possible and therefore performs * no checks for argument validity. That is the callers responsibility. * * Returns: true if @oid1 and @oid2 are equal; otherwise false. */ static BSON_INLINE bool bson_oid_equal_unsafe (const bson_oid_t *oid1, const bson_oid_t *oid2) { return !memcmp (oid1, oid2, sizeof *oid1); } /** * bson_oid_hash_unsafe: * @oid: A bson_oid_t. * * This function performs a DJB style hash upon the bytes contained in @oid. * The result is a hash key suitable for use in a hashtable. * * This function is meant to be as fast as possible and therefore performs no * validation of arguments. The caller is responsible to ensure they are * passing valid arguments. * * Returns: A uint32_t containing a hash code. */ static BSON_INLINE uint32_t bson_oid_hash_unsafe (const bson_oid_t *oid) { uint32_t hash = 5381; uint32_t i; for (i = 0; i < sizeof oid->bytes; i++) { hash = ((hash << 5) + hash) + oid->bytes[i]; } return hash; } /** * bson_oid_copy_unsafe: * @src: A bson_oid_t to copy from. * @dst: A bson_oid_t to copy into. * * Copies the contents of @src into @dst. This function is meant to be as * fast as possible and therefore performs no argument checking. It is the * callers responsibility to ensure they are passing valid data into the * function. */ static BSON_INLINE void bson_oid_copy_unsafe (const bson_oid_t *src, bson_oid_t *dst) { memcpy (dst, src, sizeof *src); } /** * bson_oid_parse_hex_char: * @hex: A character to parse to its integer value. * * This function contains a jump table to return the integer value for a * character containing a hexidecimal value (0-9, a-f, A-F). If the character * is not a hexidecimal character then zero is returned. * * Returns: An integer between 0 and 15. */ static BSON_INLINE uint8_t bson_oid_parse_hex_char (char hex) { switch (hex) { case '0': return 0; case '1': return 1; case '2': return 2; case '3': return 3; case '4': return 4; case '5': return 5; case '6': return 6; case '7': return 7; case '8': return 8; case '9': return 9; case 'a': case 'A': return 0xa; case 'b': case 'B': return 0xb; case 'c': case 'C': return 0xc; case 'd': case 'D': return 0xd; case 'e': case 'E': return 0xe; case 'f': case 'F': return 0xf; default: return 0; } } /** * bson_oid_init_from_string_unsafe: * @oid: A bson_oid_t to store the result. * @str: A 24-character hexidecimal encoded string. * * Parses a string containing 24 hexidecimal encoded bytes into a bson_oid_t. * This function is meant to be as fast as possible and inlined into your * code. For that purpose, the function does not perform any sort of bounds * checking and it is the callers responsibility to ensure they are passing * valid input to the function. 
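 *
 * Example (an illustrative sketch added here, not part of the original
 * header): validate once with bson_oid_is_valid(), then use the unsafe
 * variants on the hot path, with @str as documented above.
 *
 *    bson_oid_t oid;
 *    uint32_t hash;
 *
 *    if (bson_oid_is_valid (str, 24)) {
 *       bson_oid_init_from_string_unsafe (&oid, str);
 *       hash = bson_oid_hash_unsafe (&oid);
 *    }
 *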
*/ static BSON_INLINE void bson_oid_init_from_string_unsafe (bson_oid_t *oid, const char *str) { int i; for (i = 0; i < 12; i++) { oid->bytes[i] = ((bson_oid_parse_hex_char (str[2 * i]) << 4) | (bson_oid_parse_hex_char (str[2 * i + 1]))); } } /** * bson_oid_get_time_t_unsafe: * @oid: A bson_oid_t. * * Fetches the time @oid was generated. * * Returns: A time_t containing the UNIX timestamp of generation. */ static BSON_INLINE time_t bson_oid_get_time_t_unsafe (const bson_oid_t *oid) { uint32_t t; memcpy (&t, oid, sizeof (t)); return BSON_UINT32_FROM_BE (t); } BSON_END_DECLS #endif /* BSON_OID_H */ MongoDB-v1.2.2/bson/bson-private.h000644 000765 000024 00000004331 12651754051 017153 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_PRIVATE_H #define BSON_PRIVATE_H #include "bson-macros.h" #include "bson-memory.h" #include "bson-types.h" BSON_BEGIN_DECLS typedef enum { BSON_FLAG_NONE = 0, BSON_FLAG_INLINE = (1 << 0), BSON_FLAG_STATIC = (1 << 1), BSON_FLAG_RDONLY = (1 << 2), BSON_FLAG_CHILD = (1 << 3), BSON_FLAG_IN_CHILD = (1 << 4), BSON_FLAG_NO_FREE = (1 << 5), } bson_flags_t; BSON_ALIGNED_BEGIN (128) typedef struct { bson_flags_t flags; uint32_t len; uint8_t data [120]; } bson_impl_inline_t BSON_ALIGNED_END (128); BSON_STATIC_ASSERT (sizeof (bson_impl_inline_t) == 128); BSON_ALIGNED_BEGIN (128) typedef struct { bson_flags_t flags; /* flags describing the bson_t */ uint32_t len; /* length of bson document in bytes */ bson_t *parent; /* parent bson if a child */ uint32_t depth; /* Subdocument depth. */ uint8_t **buf; /* pointer to buffer pointer */ size_t *buflen; /* pointer to buffer length */ size_t offset; /* our offset inside *buf */ uint8_t *alloc; /* buffer that we own. */ size_t alloclen; /* length of buffer that we own. */ bson_realloc_func realloc; /* our realloc implementation */ void *realloc_func_ctx; /* context for our realloc func */ } bson_impl_alloc_t BSON_ALIGNED_END (128); BSON_STATIC_ASSERT (sizeof (bson_impl_alloc_t) <= 128); BSON_END_DECLS #endif /* BSON_PRIVATE_H */ MongoDB-v1.2.2/bson/bson-reader.c000644 000765 000024 00000046350 12651754051 016745 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #include "bson.h" #include #include #ifdef BSON_OS_WIN32 # include # include #endif #include #include #include #include #include "bson-reader.h" #include "bson-memory.h" typedef enum { BSON_READER_HANDLE = 1, BSON_READER_DATA = 2, } bson_reader_type_t; typedef struct { bson_reader_type_t type; void *handle; bool done : 1; bool failed : 1; size_t end; size_t len; size_t offset; size_t bytes_read; bson_t inline_bson; uint8_t *data; bson_reader_read_func_t read_func; bson_reader_destroy_func_t destroy_func; } bson_reader_handle_t; typedef struct { int fd; bool do_close; } bson_reader_handle_fd_t; typedef struct { bson_reader_type_t type; const uint8_t *data; size_t length; size_t offset; bson_t inline_bson; } bson_reader_data_t; /* *-------------------------------------------------------------------------- * * _bson_reader_handle_fill_buffer -- * * Attempt to read as much as possible until the underlying buffer * in @reader is filled or we have reached end-of-stream or * read failure. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static void _bson_reader_handle_fill_buffer (bson_reader_handle_t *reader) /* IN */ { ssize_t ret; BSON_ASSERT (reader); /* * Handle first read specially. */ if ((!reader->done) && (!reader->offset) && (!reader->end)) { ret = reader->read_func (reader->handle, &reader->data[0], reader->len); if (ret <= 0) { reader->done = true; return; } reader->bytes_read += ret; reader->end = ret; return; } /* * Move valid data to head. */ memmove (&reader->data[0], &reader->data[reader->offset], reader->end - reader->offset); reader->end = reader->end - reader->offset; reader->offset = 0; /* * Read in data to fill the buffer. */ ret = reader->read_func (reader->handle, &reader->data[reader->end], reader->len - reader->end); if (ret <= 0) { reader->done = true; reader->failed = (ret < 0); } else { reader->bytes_read += ret; reader->end += ret; } bson_return_if_fail (reader->offset == 0); bson_return_if_fail (reader->end <= reader->len); } /* *-------------------------------------------------------------------------- * * bson_reader_new_from_handle -- * * Allocates and initializes a new bson_reader_t using the opaque * handle provided. * * Parameters: * @handle: an opaque handle to use to read data. * @rf: a function to perform reads on @handle. * @df: a function to release @handle, or NULL. * * Returns: * A newly allocated bson_reader_t if successful, otherwise NULL. * Free the successful result with bson_reader_destroy(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_reader_t * bson_reader_new_from_handle (void *handle, bson_reader_read_func_t rf, bson_reader_destroy_func_t df) { bson_reader_handle_t *real; bson_return_val_if_fail (handle, NULL); bson_return_val_if_fail (rf, NULL); real = bson_malloc0 (sizeof *real); real->type = BSON_READER_HANDLE; real->data = bson_malloc0 (1024); real->handle = handle; real->len = 1024; real->offset = 0; bson_reader_set_read_func ((bson_reader_t *)real, rf); if (df) { bson_reader_set_destroy_func ((bson_reader_t *)real, df); } _bson_reader_handle_fill_buffer (real); return (bson_reader_t *)real; } /* *-------------------------------------------------------------------------- * * _bson_reader_handle_fd_destroy -- * * Cleanup allocations associated with state created in * bson_reader_new_from_fd(). * * Returns: * None. * * Side effects: * None. 
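 *
 *       Illustrative aside, not part of the original source: this destroy
 *       callback and _bson_reader_handle_fd_read() below are the kind of
 *       pair a caller can supply to bson_reader_new_from_handle(). A
 *       minimal sketch with hypothetical my_read()/my_destroy() helpers,
 *       assuming POSIX read() and close():
 *
 *          static ssize_t my_read (void *h, void *buf, size_t count)
 *          {
 *             return read (*(int *)h, buf, count);
 *          }
 *
 *          static void my_destroy (void *h)
 *          {
 *             close (*(int *)h);
 *             bson_free (h);
 *          }
 *
 *          int *fdp = bson_malloc0 (sizeof *fdp);
 *          *fdp = fd;
 *          bson_reader_t *reader =
 *             bson_reader_new_from_handle (fdp, my_read, my_destroy);
 *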
* *-------------------------------------------------------------------------- */ static void _bson_reader_handle_fd_destroy (void *handle) /* IN */ { bson_reader_handle_fd_t *fd = handle; if (fd) { if ((fd->fd != -1) && fd->do_close) { #ifdef _WIN32 _close (fd->fd); #else close (fd->fd); #endif } bson_free (fd); } } /* *-------------------------------------------------------------------------- * * _bson_reader_handle_fd_read -- * * Perform read on opaque handle created in * bson_reader_new_from_fd(). * * The underlying file descriptor is read from the current position * using the bson_reader_handle_fd_t allocated. * * Returns: * -1 on failure. * 0 on end of stream. * Greater than zero on success. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static ssize_t _bson_reader_handle_fd_read (void *handle, /* IN */ void *buf, /* IN */ size_t len) /* IN */ { bson_reader_handle_fd_t *fd = handle; ssize_t ret = -1; if (fd && (fd->fd != -1)) { again: #ifdef BSON_OS_WIN32 ret = _read (fd->fd, buf, (unsigned int)len); #else ret = read (fd->fd, buf, len); #endif if ((ret == -1) && (errno == EAGAIN)) { goto again; } } return ret; } /* *-------------------------------------------------------------------------- * * bson_reader_new_from_fd -- * * Create a new bson_reader_t using the file-descriptor provided. * * Parameters: * @fd: a libc style file-descriptor. * @close_on_destroy: if close() should be called on @fd when * bson_reader_destroy() is called. * * Returns: * A newly allocated bson_reader_t on success; otherwise NULL. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_reader_t * bson_reader_new_from_fd (int fd, /* IN */ bool close_on_destroy) /* IN */ { bson_reader_handle_fd_t *handle; bson_return_val_if_fail (fd != -1, NULL); handle = bson_malloc0 (sizeof *handle); handle->fd = fd; handle->do_close = close_on_destroy; return bson_reader_new_from_handle (handle, _bson_reader_handle_fd_read, _bson_reader_handle_fd_destroy); } /** * bson_reader_set_read_func: * @reader: A bson_reader_t. * * Note that @reader must be initialized by bson_reader_init_from_handle(), or data * will be destroyed. */ /* *-------------------------------------------------------------------------- * * bson_reader_set_read_func -- * * Set the read func to be provided for @reader. * * You probably want to use bson_reader_new_from_handle() or * bson_reader_new_from_fd() instead. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_reader_set_read_func (bson_reader_t *reader, /* IN */ bson_reader_read_func_t func) /* IN */ { bson_reader_handle_t *real = (bson_reader_handle_t *)reader; bson_return_if_fail (reader->type == BSON_READER_HANDLE); real->read_func = func; } /* *-------------------------------------------------------------------------- * * bson_reader_set_destroy_func -- * * Set the function to cleanup state when @reader is destroyed. * * You probably want bson_reader_new_from_fd() or * bson_reader_new_from_handle() instead. * * Returns: * None. * * Side effects: * None. 
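 *
 *       Example (an illustrative sketch added here, not part of the
 *       original source; error handling is omitted and "dump.bson" is a
 *       hypothetical file of concatenated BSON documents):
 *
 *          int fd = open ("dump.bson", O_RDONLY);
 *          bson_reader_t *reader = bson_reader_new_from_fd (fd, true);
 *          const bson_t *doc;
 *          bool eof = false;
 *
 *          while ((doc = bson_reader_read (reader, &eof))) {
 *             ...
 *          }
 *
 *          bson_reader_destroy (reader);
 *
 *       Each returned doc is valid only until the next call to
 *       bson_reader_read(); passing true for close_on_destroy makes
 *       bson_reader_destroy() close the descriptor as well.
 *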
* *-------------------------------------------------------------------------- */ void bson_reader_set_destroy_func (bson_reader_t *reader, /* IN */ bson_reader_destroy_func_t func) /* IN */ { bson_reader_handle_t *real = (bson_reader_handle_t *)reader; bson_return_if_fail (reader->type == BSON_READER_HANDLE); real->destroy_func = func; } /* *-------------------------------------------------------------------------- * * _bson_reader_handle_grow_buffer -- * * Grow the buffer to the next power of two. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static void _bson_reader_handle_grow_buffer (bson_reader_handle_t *reader) /* IN */ { size_t size; bson_return_if_fail (reader); size = reader->len * 2; reader->data = bson_realloc (reader->data, size); reader->len = size; } /* *-------------------------------------------------------------------------- * * _bson_reader_handle_tell -- * * Tell the current position within the underlying file-descriptor. * * Returns: * An off_t containing the current offset. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static off_t _bson_reader_handle_tell (bson_reader_handle_t *reader) /* IN */ { off_t off; bson_return_val_if_fail (reader, -1); off = (off_t)reader->bytes_read; off -= (off_t)reader->end; off += (off_t)reader->offset; return off; } /* *-------------------------------------------------------------------------- * * _bson_reader_handle_read -- * * Read the next chunk of data from the underlying file descriptor * and return a bson_t which should not be modified. * * There was a failure if NULL is returned and @reached_eof is * not set to true. * * Returns: * NULL on failure or end of stream. * * Side effects: * @reached_eof is set if non-NULL. * *-------------------------------------------------------------------------- */ static const bson_t * _bson_reader_handle_read (bson_reader_handle_t *reader, /* IN */ bool *reached_eof) /* IN */ { int32_t blen; bson_return_val_if_fail (reader, NULL); if (reached_eof) { *reached_eof = false; } while (!reader->done) { if ((reader->end - reader->offset) < 4) { _bson_reader_handle_fill_buffer (reader); continue; } memcpy (&blen, &reader->data[reader->offset], sizeof blen); blen = BSON_UINT32_FROM_LE (blen); if (blen < 5) { return NULL; } if (blen > (int32_t)(reader->end - reader->offset)) { if (blen > (int32_t)reader->len) { _bson_reader_handle_grow_buffer (reader); } _bson_reader_handle_fill_buffer (reader); continue; } if (!bson_init_static (&reader->inline_bson, &reader->data[reader->offset], (uint32_t)blen)) { return NULL; } reader->offset += blen; return &reader->inline_bson; } if (reached_eof) { *reached_eof = reader->done && !reader->failed; } return NULL; } /* *-------------------------------------------------------------------------- * * bson_reader_new_from_data -- * * Allocates and initializes a new bson_reader_t that will the memory * provided as a stream of BSON documents. * * Parameters: * @data: A buffer to read BSON documents from. * @length: The length of @data. * * Returns: * A newly allocated bson_reader_t that should be freed with * bson_reader_destroy(). * * Side effects: * None. 
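 *
 *       Example (an illustrative sketch added here, not part of the
 *       original source; @data and @length stand for any caller-owned
 *       buffer of concatenated BSON documents, which must remain valid
 *       for the reader's lifetime because the bytes are not copied):
 *
 *          bson_reader_t *reader = bson_reader_new_from_data (data, length);
 *          const bson_t *doc;
 *          bool eof = false;
 *
 *          while ((doc = bson_reader_read (reader, &eof))) {
 *             ...
 *          }
 *
 *          bson_reader_destroy (reader);
 *
 *       If the loop ends with eof still false, the buffer ended on a
 *       truncated or invalid document.
 *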
* *-------------------------------------------------------------------------- */ bson_reader_t * bson_reader_new_from_data (const uint8_t *data, /* IN */ size_t length) /* IN */ { bson_reader_data_t *real; bson_return_val_if_fail (data, NULL); real = (bson_reader_data_t*)bson_malloc0 (sizeof *real); real->type = BSON_READER_DATA; real->data = data; real->length = length; real->offset = 0; return (bson_reader_t *)real; } /* *-------------------------------------------------------------------------- * * _bson_reader_data_read -- * * Read the next document from the underlying buffer. * * Returns: * NULL on failure or end of stream. * a bson_t which should not be modified. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static const bson_t * _bson_reader_data_read (bson_reader_data_t *reader, /* IN */ bool *reached_eof) /* IN */ { int32_t blen; bson_return_val_if_fail (reader, NULL); if (reached_eof) { *reached_eof = false; } if ((reader->offset + 4) < reader->length) { memcpy (&blen, &reader->data[reader->offset], sizeof blen); blen = BSON_UINT32_FROM_LE (blen); if (blen < 5) { return NULL; } if (blen > (int32_t)(reader->length - reader->offset)) { return NULL; } if (!bson_init_static (&reader->inline_bson, &reader->data[reader->offset], (uint32_t)blen)) { return NULL; } reader->offset += blen; return &reader->inline_bson; } if (reached_eof) { *reached_eof = (reader->offset == reader->length); } return NULL; } /* *-------------------------------------------------------------------------- * * _bson_reader_data_tell -- * * Tell the current position in the underlying buffer. * * Returns: * An off_t of the current offset. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static off_t _bson_reader_data_tell (bson_reader_data_t *reader) /* IN */ { bson_return_val_if_fail (reader, -1); return (off_t)reader->offset; } /* *-------------------------------------------------------------------------- * * bson_reader_destroy -- * * Release a bson_reader_t created with bson_reader_new_from_data(), * bson_reader_new_from_fd(), or bson_reader_new_from_handle(). * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_reader_destroy (bson_reader_t *reader) /* IN */ { bson_return_if_fail (reader); switch (reader->type) { case 0: break; case BSON_READER_HANDLE: { bson_reader_handle_t *handle = (bson_reader_handle_t *)reader; if (handle->destroy_func) { handle->destroy_func(handle->handle); } bson_free (handle->data); } break; case BSON_READER_DATA: break; default: fprintf (stderr, "No such reader type: %02x\n", reader->type); break; } reader->type = 0; bson_free (reader); } /* *-------------------------------------------------------------------------- * * bson_reader_read -- * * Reads the next bson_t in the underlying memory or storage. The * resulting bson_t should not be modified or freed. You may copy it * and iterate over it. Functions that take a const bson_t* are safe * to use. * * This structure does not survive calls to bson_reader_read() or * bson_reader_destroy() as it uses memory allocated by the reader or * underlying storage/memory. * * If NULL is returned then @reached_eof will be set to true if the * end of the file or buffer was reached. This indicates if there was * an error parsing the document stream. * * Returns: * A const bson_t that should not be modified or freed. 
* NULL on failure or end of stream. * * Side effects: * @reached_eof is set if non-NULL. * *-------------------------------------------------------------------------- */ const bson_t * bson_reader_read (bson_reader_t *reader, /* IN */ bool *reached_eof) /* OUT */ { bson_return_val_if_fail (reader, NULL); switch (reader->type) { case BSON_READER_HANDLE: return _bson_reader_handle_read ((bson_reader_handle_t *)reader, reached_eof); case BSON_READER_DATA: return _bson_reader_data_read ((bson_reader_data_t *)reader, reached_eof); default: fprintf (stderr, "No such reader type: %02x\n", reader->type); break; } return NULL; } /* *-------------------------------------------------------------------------- * * bson_reader_tell -- * * Return the current position in the underlying reader. This will * always be at the beginning of a bson document or end of file. * * Returns: * An off_t containing the current offset. * * Side effects: * None. * *-------------------------------------------------------------------------- */ off_t bson_reader_tell (bson_reader_t *reader) /* IN */ { bson_return_val_if_fail (reader, -1); switch (reader->type) { case BSON_READER_HANDLE: return _bson_reader_handle_tell ((bson_reader_handle_t *)reader); case BSON_READER_DATA: return _bson_reader_data_tell ((bson_reader_data_t *)reader); default: fprintf (stderr, "No such reader type: %02x\n", reader->type); return -1; } } /* *-------------------------------------------------------------------------- * * bson_reader_new_from_file -- * * A convenience function to open a file containing sequential * bson documents and read them using bson_reader_t. * * Returns: * A new bson_reader_t if successful, otherwise NULL and * @error is set. Free the non-NULL result with * bson_reader_destroy(). * * Side effects: * @error may be set. * *-------------------------------------------------------------------------- */ bson_reader_t * bson_reader_new_from_file (const char *path, /* IN */ bson_error_t *error) /* OUT */ { char errmsg_buf[BSON_ERROR_BUFFER_SIZE]; char *errmsg; int fd; bson_return_val_if_fail (path, NULL); #ifdef BSON_OS_WIN32 if (_sopen_s (&fd, path, (_O_RDONLY | _O_BINARY), _SH_DENYNO, 0) != 0) { fd = -1; } #else fd = open (path, O_RDONLY); #endif if (fd == -1) { errmsg = bson_strerror_r (errno, errmsg_buf, sizeof errmsg_buf); bson_set_error (error, BSON_ERROR_READER, BSON_ERROR_READER_BADFD, "%s", errmsg); return NULL; } return bson_reader_new_from_fd (fd, true); } MongoDB-v1.2.2/bson/bson-reader.h000644 000765 000024 00000007370 12651754051 016751 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_READER_H #define BSON_READER_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." 
#endif #include "bson-compat.h" #include "bson-oid.h" #include "bson-types.h" BSON_BEGIN_DECLS #define BSON_ERROR_READER_BADFD 1 /* *-------------------------------------------------------------------------- * * bson_reader_read_func_t -- * * This function is a callback used by bson_reader_t to read the * next chunk of data from the underlying opaque file descriptor. * * This function is meant to operate similar to the read() function * as part of libc on UNIX-like systems. * * Parameters: * @handle: The handle to read from. * @buf: The buffer to read into. * @count: The number of bytes to read. * * Returns: * 0 for end of stream. * -1 for read failure. * Greater than zero for number of bytes read into @buf. * * Side effects: * None. * *-------------------------------------------------------------------------- */ typedef ssize_t (*bson_reader_read_func_t) (void *handle, /* IN */ void *buf, /* IN */ size_t count); /* IN */ /* *-------------------------------------------------------------------------- * * bson_reader_destroy_func_t -- * * Destroy callback to release any resources associated with the * opaque handle. * * Parameters: * @handle: the handle provided to bson_reader_new_from_handle(). * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ typedef void (*bson_reader_destroy_func_t) (void *handle); /* IN */ bson_reader_t *bson_reader_new_from_handle (void *handle, bson_reader_read_func_t rf, bson_reader_destroy_func_t df); bson_reader_t *bson_reader_new_from_fd (int fd, bool close_on_destroy); bson_reader_t *bson_reader_new_from_file (const char *path, bson_error_t *error); bson_reader_t *bson_reader_new_from_data (const uint8_t *data, size_t length); void bson_reader_destroy (bson_reader_t *reader); void bson_reader_set_read_func (bson_reader_t *reader, bson_reader_read_func_t func); void bson_reader_set_destroy_func (bson_reader_t *reader, bson_reader_destroy_func_t func); const bson_t *bson_reader_read (bson_reader_t *reader, bool *reached_eof); off_t bson_reader_tell (bson_reader_t *reader); BSON_END_DECLS #endif /* BSON_READER_H */ MongoDB-v1.2.2/bson/bson-stdint-win32.h000644 000765 000024 00000017634 12651754051 017760 0ustar00davidstaff000000 000000 // ISO C9x compliant stdint.h for Microsoft Visual Studio // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 // // Copyright (c) 2006-2013 Alexander Chemeris // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. Neither the name of the product nor the names of its contributors may // be used to endorse or promote products derived from this software // without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO // EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // /////////////////////////////////////////////////////////////////////////////// #ifndef _MSC_VER // [ #error "Use this header only with Microsoft Visual C++ compilers!" #endif // _MSC_VER ] #ifndef _MSC_STDINT_H_ // [ #define _MSC_STDINT_H_ #if _MSC_VER > 1000 #pragma once #endif #if _MSC_VER >= 1600 // [ #include #else // ] _MSC_VER >= 1600 [ #include // For Visual Studio 6 in C++ mode and for many Visual Studio versions when // compiling for ARM we should wrap include with 'extern "C++" {}' // or compiler give many errors like this: // error C2733: second C linkage of overloaded function 'wmemchr' not allowed #ifdef __cplusplus extern "C" { #endif # include #ifdef __cplusplus } #endif // Define _W64 macros to mark types changing their size, like intptr_t. #ifndef _W64 # if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300 # define _W64 __w64 # else # define _W64 # endif #endif // 7.18.1 Integer types // 7.18.1.1 Exact-width integer types // Visual Studio 6 and Embedded Visual C++ 4 doesn't // realize that, e.g. char has the same size as __int8 // so we give up on __intX for them. #if (_MSC_VER < 1300) typedef signed char int8_t; typedef signed short int16_t; typedef signed int int32_t; typedef unsigned char uint8_t; typedef unsigned short uint16_t; typedef unsigned int uint32_t; #else typedef signed __int8 int8_t; typedef signed __int16 int16_t; typedef signed __int32 int32_t; typedef unsigned __int8 uint8_t; typedef unsigned __int16 uint16_t; typedef unsigned __int32 uint32_t; #endif typedef signed __int64 int64_t; typedef unsigned __int64 uint64_t; // 7.18.1.2 Minimum-width integer types typedef int8_t int_least8_t; typedef int16_t int_least16_t; typedef int32_t int_least32_t; typedef int64_t int_least64_t; typedef uint8_t uint_least8_t; typedef uint16_t uint_least16_t; typedef uint32_t uint_least32_t; typedef uint64_t uint_least64_t; // 7.18.1.3 Fastest minimum-width integer types typedef int8_t int_fast8_t; typedef int16_t int_fast16_t; typedef int32_t int_fast32_t; typedef int64_t int_fast64_t; typedef uint8_t uint_fast8_t; typedef uint16_t uint_fast16_t; typedef uint32_t uint_fast32_t; typedef uint64_t uint_fast64_t; // 7.18.1.4 Integer types capable of holding object pointers #ifdef _WIN64 // [ typedef signed __int64 intptr_t; typedef unsigned __int64 uintptr_t; #else // _WIN64 ][ typedef _W64 signed int intptr_t; typedef _W64 unsigned int uintptr_t; #endif // _WIN64 ] // 7.18.1.5 Greatest-width integer types typedef int64_t intmax_t; typedef uint64_t uintmax_t; // 7.18.2 Limits of specified-width integer types #if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [ See footnote 220 at page 257 and footnote 221 at page 259 // 7.18.2.1 Limits of exact-width integer types #define INT8_MIN ((int8_t)_I8_MIN) #define INT8_MAX _I8_MAX #define INT16_MIN ((int16_t)_I16_MIN) #define INT16_MAX _I16_MAX #define INT32_MIN ((int32_t)_I32_MIN) #define INT32_MAX _I32_MAX #define INT64_MIN ((int64_t)_I64_MIN) #define INT64_MAX _I64_MAX #define UINT8_MAX 
_UI8_MAX #define UINT16_MAX _UI16_MAX #define UINT32_MAX _UI32_MAX #define UINT64_MAX _UI64_MAX // 7.18.2.2 Limits of minimum-width integer types #define INT_LEAST8_MIN INT8_MIN #define INT_LEAST8_MAX INT8_MAX #define INT_LEAST16_MIN INT16_MIN #define INT_LEAST16_MAX INT16_MAX #define INT_LEAST32_MIN INT32_MIN #define INT_LEAST32_MAX INT32_MAX #define INT_LEAST64_MIN INT64_MIN #define INT_LEAST64_MAX INT64_MAX #define UINT_LEAST8_MAX UINT8_MAX #define UINT_LEAST16_MAX UINT16_MAX #define UINT_LEAST32_MAX UINT32_MAX #define UINT_LEAST64_MAX UINT64_MAX // 7.18.2.3 Limits of fastest minimum-width integer types #define INT_FAST8_MIN INT8_MIN #define INT_FAST8_MAX INT8_MAX #define INT_FAST16_MIN INT16_MIN #define INT_FAST16_MAX INT16_MAX #define INT_FAST32_MIN INT32_MIN #define INT_FAST32_MAX INT32_MAX #define INT_FAST64_MIN INT64_MIN #define INT_FAST64_MAX INT64_MAX #define UINT_FAST8_MAX UINT8_MAX #define UINT_FAST16_MAX UINT16_MAX #define UINT_FAST32_MAX UINT32_MAX #define UINT_FAST64_MAX UINT64_MAX // 7.18.2.4 Limits of integer types capable of holding object pointers #ifdef _WIN64 // [ # define INTPTR_MIN INT64_MIN # define INTPTR_MAX INT64_MAX # define UINTPTR_MAX UINT64_MAX #else // _WIN64 ][ # define INTPTR_MIN INT32_MIN # define INTPTR_MAX INT32_MAX # define UINTPTR_MAX UINT32_MAX #endif // _WIN64 ] // 7.18.2.5 Limits of greatest-width integer types #define INTMAX_MIN INT64_MIN #define INTMAX_MAX INT64_MAX #define UINTMAX_MAX UINT64_MAX // 7.18.3 Limits of other integer types #ifdef _WIN64 // [ # define PTRDIFF_MIN _I64_MIN # define PTRDIFF_MAX _I64_MAX #else // _WIN64 ][ # define PTRDIFF_MIN _I32_MIN # define PTRDIFF_MAX _I32_MAX #endif // _WIN64 ] #define SIG_ATOMIC_MIN INT_MIN #define SIG_ATOMIC_MAX INT_MAX #ifndef SIZE_MAX // [ # ifdef _WIN64 // [ # define SIZE_MAX _UI64_MAX # else // _WIN64 ][ # define SIZE_MAX _UI32_MAX # endif // _WIN64 ] #endif // SIZE_MAX ] // WCHAR_MIN and WCHAR_MAX are also defined in #ifndef WCHAR_MIN // [ # define WCHAR_MIN 0 #endif // WCHAR_MIN ] #ifndef WCHAR_MAX // [ # define WCHAR_MAX _UI16_MAX #endif // WCHAR_MAX ] #define WINT_MIN 0 #define WINT_MAX _UI16_MAX #endif // __STDC_LIMIT_MACROS ] // 7.18.4 Limits of other integer types #if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [ See footnote 224 at page 260 // 7.18.4.1 Macros for minimum-width integer constants #define INT8_C(val) val##i8 #define INT16_C(val) val##i16 #define INT32_C(val) val##i32 #define INT64_C(val) val##i64 #define UINT8_C(val) val##ui8 #define UINT16_C(val) val##ui16 #define UINT32_C(val) val##ui32 #define UINT64_C(val) val##ui64 // 7.18.4.2 Macros for greatest-width integer constants // These #ifndef's are needed to prevent collisions with . // Check out Issue 9 for the details. #ifndef INTMAX_C // [ # define INTMAX_C INT64_C #endif // INTMAX_C ] #ifndef UINTMAX_C // [ # define UINTMAX_C UINT64_C #endif // UINTMAX_C ] #endif // __STDC_CONSTANT_MACROS ] #endif // _MSC_VER >= 1600 ] #endif // _MSC_STDINT_H_ ] MongoDB-v1.2.2/bson/bson-stdint.h000644 000765 000024 00000001046 12651754051 017006 0ustar00davidstaff000000 000000 #ifndef _SRC_BSON_BSON_STDINT_H #define _SRC_BSON_BSON_STDINT_H 1 #ifndef _GENERATED_STDINT_H #define _GENERATED_STDINT_H " " /* generated using a gnu compiler version gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
*/ #include /* system headers have good uint64_t */ #ifndef _HAVE_UINT64_T #define _HAVE_UINT64_T #endif /* once */ #endif #endif MongoDB-v1.2.2/bson/bson-string.c000644 000765 000024 00000037170 12651754051 017011 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include #include #include "bson-compat.h" #include "bson-config.h" #include "bson-string.h" #include "bson-memory.h" #include "bson-utf8.h" /* *-------------------------------------------------------------------------- * * bson_string_new -- * * Create a new bson_string_t. * * bson_string_t is a power-of-2 allocation growing string. Every * time data is appended the next power of two size is chosen for * the allocation. Pretty standard stuff. * * It is UTF-8 aware through the use of bson_string_append_unichar(). * The proper UTF-8 character sequence will be used. * * Parameters: * @str: a string to copy or NULL. * * Returns: * A newly allocated bson_string_t that should be freed with * bson_string_free(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_string_t * bson_string_new (const char *str) /* IN */ { bson_string_t *ret; ret = bson_malloc0 (sizeof *ret); ret->len = str ? (int)strlen (str) : 0; ret->alloc = ret->len + 1; if (!bson_is_power_of_two (ret->alloc)) { ret->alloc = (uint32_t)bson_next_power_of_two ((size_t)ret->alloc); } BSON_ASSERT (ret->alloc >= 1); ret->str = bson_malloc (ret->alloc); if (str) { memcpy (ret->str, str, ret->len); } ret->str [ret->len] = '\0'; ret->str [ret->len] = '\0'; return ret; } /* *-------------------------------------------------------------------------- * * bson_string_free -- * * Free the bson_string_t @string and related allocations. * * If @free_segment is false, then the strings buffer will be * returned and is not freed. Otherwise, NULL is returned. * * Returns: * The string->str if free_segment is false. * Otherwise NULL. * * Side effects: * None. * *-------------------------------------------------------------------------- */ char * bson_string_free (bson_string_t *string, /* IN */ bool free_segment) /* IN */ { char *ret = NULL; bson_return_val_if_fail (string, NULL); if (!free_segment) { ret = string->str; } else { bson_free (string->str); } bson_free (string); return ret; } /* *-------------------------------------------------------------------------- * * bson_string_append -- * * Append the UTF-8 string @str to @string. * * Returns: * None. * * Side effects: * None. 
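 *
 *       Example (an illustrative sketch added here, not part of the
 *       original source):
 *
 *          bson_string_t *s = bson_string_new ("db");
 *
 *          bson_string_append_c (s, '.');
 *          bson_string_append (s, "collection");
 *          bson_string_append_printf (s, " (%d documents)", 42);
 *
 *          bson_string_free (s, true);
 *
 *       Passing true to bson_string_free() releases s->str along with the
 *       struct; passing false instead returns the buffer to the caller.
 *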
* *-------------------------------------------------------------------------- */ void bson_string_append (bson_string_t *string, /* IN */ const char *str) /* IN */ { uint32_t len; bson_return_if_fail (string); bson_return_if_fail (str); len = (uint32_t)strlen (str); if ((string->alloc - string->len - 1) < len) { string->alloc += len; if (!bson_is_power_of_two (string->alloc)) { string->alloc = (uint32_t)bson_next_power_of_two ((size_t)string->alloc); } string->str = bson_realloc (string->str, string->alloc); } memcpy (string->str + string->len, str, len); string->len += len; string->str [string->len] = '\0'; } /* *-------------------------------------------------------------------------- * * bson_string_append_c -- * * Append the ASCII character @c to @string. * * Do not use this if you are working with UTF-8 sequences, * use bson_string_append_unichar(). * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_string_append_c (bson_string_t *string, /* IN */ char c) /* IN */ { char cc[2]; BSON_ASSERT (string); if (BSON_UNLIKELY (string->alloc == (string->len + 1))) { cc [0] = c; cc [1] = '\0'; bson_string_append (string, cc); return; } string->str [string->len++] = c; string->str [string->len] = '\0'; } /* *-------------------------------------------------------------------------- * * bson_string_append_unichar -- * * Append the bson_unichar_t @unichar to the string @string. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_string_append_unichar (bson_string_t *string, /* IN */ bson_unichar_t unichar) /* IN */ { uint32_t len; char str [8]; BSON_ASSERT (string); BSON_ASSERT (unichar); bson_utf8_from_unichar (unichar, str, &len); if (len <= 6) { str [len] = '\0'; bson_string_append (string, str); } } /* *-------------------------------------------------------------------------- * * bson_string_append_printf -- * * Format a string according to @format and append it to @string. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_string_append_printf (bson_string_t *string, const char *format, ...) { va_list args; char *ret; BSON_ASSERT (string); BSON_ASSERT (format); va_start (args, format); ret = bson_strdupv_printf (format, args); va_end (args); bson_string_append (string, ret); bson_free (ret); } /* *-------------------------------------------------------------------------- * * bson_string_truncate -- * * Truncate the string @string to @len bytes. * * The underlying memory will be released via realloc() down to * the minimum required size specified by @len. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_string_truncate (bson_string_t *string, /* IN */ uint32_t len) /* IN */ { uint32_t alloc; bson_return_if_fail (string); bson_return_if_fail (len < INT_MAX); alloc = len + 1; if (alloc < 16) { alloc = 16; } if (!bson_is_power_of_two (alloc)) { alloc = (uint32_t)bson_next_power_of_two ((size_t)alloc); } string->str = bson_realloc (string->str, alloc); string->alloc = alloc; string->len = len; string->str [string->len] = '\0'; } /* *-------------------------------------------------------------------------- * * bson_strdup -- * * Portable strdup(). * * Returns: * A newly allocated string that should be freed with bson_free(). * * Side effects: * None. 
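 *
 *       Example (illustrative only, added here; not part of the original
 *       source):
 *
 *          char *copy = bson_strdup ("mydb.mycoll");
 *          char *msg = bson_strdup_printf ("ns: %s (%d)", copy, 42);
 *
 *          bson_free (msg);
 *          bson_free (copy);
 *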
* *-------------------------------------------------------------------------- */ char * bson_strdup (const char *str) /* IN */ { long len; char *out; if (!str) { return NULL; } len = (long)strlen (str); out = bson_malloc (len + 1); if (!out) { return NULL; } memcpy (out, str, len + 1); return out; } /* *-------------------------------------------------------------------------- * * bson_strdupv_printf -- * * Like bson_strdup_printf() but takes a va_list. * * Returns: * A newly allocated string that should be freed with bson_free(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ char * bson_strdupv_printf (const char *format, /* IN */ va_list args) /* IN */ { va_list my_args; char *buf; int len = 32; int n; bson_return_val_if_fail (format, NULL); buf = bson_malloc0 (len); while (true) { va_copy (my_args, args); n = bson_vsnprintf (buf, len, format, my_args); va_end (my_args); if (n > -1 && n < len) { return buf; } if (n > -1) { len = n + 1; } else { len *= 2; } buf = bson_realloc (buf, len); } } /* *-------------------------------------------------------------------------- * * bson_strdup_printf -- * * Convenience function that formats a string according to @format * and returns a copy of it. * * Returns: * A newly created string that should be freed with bson_free(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ char * bson_strdup_printf (const char *format, /* IN */ ...) /* IN */ { va_list args; char *ret; bson_return_val_if_fail (format, NULL); va_start (args, format); ret = bson_strdupv_printf (format, args); va_end (args); return ret; } /* *-------------------------------------------------------------------------- * * bson_strndup -- * * A portable strndup(). * * Returns: * A newly allocated string that should be freed with bson_free(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ char * bson_strndup (const char *str, /* IN */ size_t n_bytes) /* IN */ { char *ret; bson_return_val_if_fail (str, NULL); ret = bson_malloc (n_bytes + 1); memcpy (ret, str, n_bytes); ret[n_bytes] = '\0'; return ret; } /* *-------------------------------------------------------------------------- * * bson_strfreev -- * * Frees each string in a NULL terminated array of strings. * This also frees the underlying array. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_strfreev (char **str) /* IN */ { int i; if (str) { for (i = 0; str [i]; i++) bson_free (str [i]); bson_free (str); } } /* *-------------------------------------------------------------------------- * * bson_strnlen -- * * A portable strnlen(). * * Returns: * The length of @s up to @maxlen. * * Side effects: * None. * *-------------------------------------------------------------------------- */ size_t bson_strnlen (const char *s, /* IN */ size_t maxlen) /* IN */ { #ifdef HAVE_STRNLEN return strnlen (s, maxlen); #else size_t i; for (i = 0; i < maxlen; i++) { if (s [i] == '\0') { return i + 1; } } return maxlen; #endif } /* *-------------------------------------------------------------------------- * * bson_strncpy -- * * A portable strncpy. * * Copies @src into @dst, which must be @size bytes or larger. * The result is guaranteed to be \0 terminated. * * Returns: * None. * * Side effects: * None. 
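 *
 *       Example (an illustrative sketch added here, not part of the
 *       original source):
 *
 *          char ns[128];
 *
 *          bson_strncpy (ns, "mydb.mycoll", sizeof ns);
 *          bson_snprintf (ns, sizeof ns, "%s.%s", "mydb", "mycoll");
 *
 *       Both calls leave ns NUL-terminated even when the input would
 *       otherwise overflow the destination buffer.
 *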
* *-------------------------------------------------------------------------- */ void bson_strncpy (char *dst, /* IN */ const char *src, /* IN */ size_t size) /* IN */ { #ifdef _MSC_VER strcpy_s (dst, size, src); #else strncpy (dst, src, size); dst[size - 1] = '\0'; #endif } /* *-------------------------------------------------------------------------- * * bson_vsnprintf -- * * A portable vsnprintf. * * If more than @size bytes are required (exluding the null byte), * then @size bytes will be written to @string and the return value * is the number of bytes required. * * This function will always return a NULL terminated string. * * Returns: * The number of bytes required for @format excluding the null byte. * * Side effects: * @str is initialized with the formatted string. * *-------------------------------------------------------------------------- */ int bson_vsnprintf (char *str, /* IN */ size_t size, /* IN */ const char *format, /* IN */ va_list ap) /* IN */ { #ifdef BSON_OS_WIN32 int r = -1; BSON_ASSERT (str); if (size != 0) { r = _vsnprintf_s (str, size, _TRUNCATE, format, ap); } if (r == -1) { r = _vscprintf (format, ap); } str [size - 1] = '\0'; return r; #else int r; r = vsnprintf (str, size, format, ap); str [size - 1] = '\0'; return r; #endif } /* *-------------------------------------------------------------------------- * * bson_snprintf -- * * A portable snprintf. * * If @format requires more than @size bytes, then @size bytes are * written and the result is the number of bytes required (excluding * the null byte). * * This function will always return a NULL terminated string. * * Returns: * The number of bytes required for @format. * * Side effects: * @str is initialized. * *-------------------------------------------------------------------------- */ int bson_snprintf (char *str, /* IN */ size_t size, /* IN */ const char *format, /* IN */ ...) { int r; va_list ap; BSON_ASSERT (str); va_start (ap, format); r = bson_vsnprintf (str, size, format, ap); va_end (ap); return r; } int64_t bson_ascii_strtoll (const char *s, char **e, int base) { char *tok = (char *)s; char c; int64_t number = 0; int64_t sign = 1; if (!s) { errno = EINVAL; return 0; } c = *tok; while (isspace (c)) { c = *++tok; } if (!isdigit (c) && (c != '+') && (c != '-')) { *e = tok - 1; errno = EINVAL; return 0; } if (c == '-') { sign = -1; c = *++tok; } if (c == '+') { c = *++tok; } if (c == '0' && tok[1] != '\0') { /* Hex, octal or binary -- maybe. */ c = *++tok; if (c == 'x' || c == 'X') { /* Hex */ if (base != 16) { *e = (char *)(s); errno = EINVAL; return 0; } c = *++tok; if (!isxdigit (c)) { *e = tok; errno = EINVAL; return 0; } do { number = (number << 4) + (c - '0'); c = *(++tok); } while (isxdigit (c)); } else { /* Octal */ if (base != 8) { *e = (char *)(s); errno = EINVAL; return 0; } if (c < '0' || c >= '8') { *e = tok; errno = EINVAL; return 0; } do { number = (number << 3) + (c - '0'); c = *(++tok); } while (('0' <= c) && (c < '8')); } while (c == 'l' || c == 'L' || c == 'u' || c == 'U') { c = *++tok; } } else { /* Decimal */ if (base != 10) { *e = (char *)(s); errno = EINVAL; return 0; } do { number = (number * 10) + (c - '0'); c = *(++tok); } while (isdigit (c)); while (c == 'l' || c == 'L' || c == 'u' || c == 'U') { c = *(++tok); } } *e = tok; errno = 0; return (sign * number); } MongoDB-v1.2.2/bson/bson-string.h000644 000765 000024 00000006774 12651754051 017024 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_STRING_H #define BSON_STRING_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS typedef struct { char *str; uint32_t len; uint32_t alloc; } bson_string_t; bson_string_t *bson_string_new (const char *str); char *bson_string_free (bson_string_t *string, bool free_segment); void bson_string_append (bson_string_t *string, const char *str); void bson_string_append_c (bson_string_t *string, char str); void bson_string_append_unichar (bson_string_t *string, bson_unichar_t unichar); void bson_string_append_printf (bson_string_t *string, const char *format, ...) BSON_GNUC_PRINTF (2, 3); void bson_string_truncate (bson_string_t *string, uint32_t len); char *bson_strdup (const char *str); char *bson_strdup_printf (const char *format, ...) BSON_GNUC_PRINTF (1, 2); char *bson_strdupv_printf (const char *format, va_list args) BSON_GNUC_PRINTF (1, 0); char *bson_strndup (const char *str, size_t n_bytes); void bson_strncpy (char *dst, const char *src, size_t size); int bson_vsnprintf (char *str, size_t size, const char *format, va_list ap) BSON_GNUC_PRINTF (3, 0); int bson_snprintf (char *str, size_t size, const char *format, ...) BSON_GNUC_PRINTF (3, 4); void bson_strfreev (char **strv); size_t bson_strnlen (const char *s, size_t maxlen); int64_t bson_ascii_strtoll (const char *str, char **endptr, int base); BSON_END_DECLS #endif /* BSON_STRING_H */ MongoDB-v1.2.2/bson/bson-thread-private.h000644 000765 000024 00000005373 12651754051 020427 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_THREAD_PRIVATE_H #define BSON_THREAD_PRIVATE_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." 
#endif #include "bson-compat.h" #include "bson-config.h" #include "bson-macros.h" BSON_BEGIN_DECLS #if defined(BSON_OS_UNIX) # include # define bson_mutex_t pthread_mutex_t # define bson_mutex_init(_n) pthread_mutex_init((_n), NULL) # define bson_mutex_lock pthread_mutex_lock # define bson_mutex_unlock pthread_mutex_unlock # define bson_mutex_destroy pthread_mutex_destroy # define bson_thread_t pthread_t # define bson_thread_create(_t,_f,_d) pthread_create((_t), NULL, (_f), (_d)) # define bson_thread_join(_n) pthread_join((_n), NULL) # define bson_once_t pthread_once_t # define bson_once pthread_once # define BSON_ONCE_FUN(n) void n(void) # define BSON_ONCE_RETURN return # ifdef _PTHREAD_ONCE_INIT_NEEDS_BRACES # define BSON_ONCE_INIT {PTHREAD_ONCE_INIT} # else # define BSON_ONCE_INIT PTHREAD_ONCE_INIT # endif #else # define bson_mutex_t CRITICAL_SECTION # define bson_mutex_init InitializeCriticalSection # define bson_mutex_lock EnterCriticalSection # define bson_mutex_unlock LeaveCriticalSection # define bson_mutex_destroy DeleteCriticalSection # define bson_thread_t HANDLE # define bson_thread_create(_t,_f,_d) (!(*(_t) = CreateThread(NULL,0,(void*)_f,_d,0,NULL))) # define bson_thread_join(_n) WaitForSingleObject((_n), INFINITE) # define bson_once_t INIT_ONCE # define BSON_ONCE_INIT INIT_ONCE_STATIC_INIT # define bson_once(o, c) InitOnceExecuteOnce(o, c, NULL, NULL) # define BSON_ONCE_FUN(n) BOOL CALLBACK n(PINIT_ONCE _ignored_a, PVOID _ignored_b, PVOID *_ignored_c) # define BSON_ONCE_RETURN return true #endif BSON_END_DECLS #endif /* BSON_THREAD_PRIVATE_H */ MongoDB-v1.2.2/bson/bson-timegm-private.h000644 000765 000024 00000001472 12651754051 020436 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_TIMEGM_PRIVATE_H #define BSON_TIMEGM_PRIVATE_H #include "bson-compat.h" #include "bson-macros.h" BSON_BEGIN_DECLS time_t _bson_timegm (struct tm *const tmp); BSON_END_DECLS #endif /* BSON_TIMEGM_PRIVATE_H */ MongoDB-v1.2.2/bson/bson-timegm.c000644 000765 000024 00000051611 12651754051 016761 0ustar00davidstaff000000 000000 /* ** This file is in the public domain, so clarified as of ** 1996-06-05 by Arthur David Olson. */ /* ** Leap second handling from Bradley White. ** POSIX-style TZ environment variable handling from Guy Harris. */ #include "bson-compat.h" #include "bson-macros.h" #include "bson-timegm-private.h" #ifndef BSON_OS_WIN32 #include "errno.h" #include "string.h" #include "limits.h" /* for CHAR_BIT et al. */ #include "time.h" /* Unlike 's isdigit, this also works if c < 0 | c > UCHAR_MAX. 
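   For example, is_digit ('7') evaluates to 1, while is_digit (-1) and
   is_digit (300) evaluate to 0 with no out-of-range table lookup.
   (Illustrative note added here; not in the original file.)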
*/ #define is_digit(c) ((unsigned)(c) - '0' <= 9) #ifndef CHAR_BIT #define CHAR_BIT 8 #endif #if 2 < __GNUC__ + (96 <= __GNUC_MINOR__) # define ATTRIBUTE_CONST __attribute__ ((const)) # define ATTRIBUTE_PURE __attribute__ ((__pure__)) # define ATTRIBUTE_FORMAT(spec) __attribute__ ((__format__ spec)) #else # define ATTRIBUTE_CONST /* empty */ # define ATTRIBUTE_PURE /* empty */ # define ATTRIBUTE_FORMAT(spec) /* empty */ #endif #if !defined _Noreturn && (!defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112) # if 2 < __GNUC__ + (8 <= __GNUC_MINOR__) # define _Noreturn __attribute__ ((__noreturn__)) # else # define _Noreturn # endif #endif #if (!defined(__STDC_VERSION__) || __STDC_VERSION__ < 199901) && !defined restrict # define restrict /* empty */ #endif #ifndef TYPE_BIT #define TYPE_BIT(type) (sizeof (type) * CHAR_BIT) #endif /* !defined TYPE_BIT */ #ifndef TYPE_SIGNED #define TYPE_SIGNED(type) (((type) -1) < 0) #endif /* !defined TYPE_SIGNED */ /* The minimum and maximum finite time values. */ static time_t const time_t_min = (TYPE_SIGNED(time_t) ? (time_t) -1 << (CHAR_BIT * sizeof (time_t) - 1) : 0); static time_t const time_t_max = (TYPE_SIGNED(time_t) ? - (~ 0 < 0) - ((time_t) -1 << (CHAR_BIT * sizeof (time_t) - 1)) : -1); #ifndef TZ_MAX_TIMES #define TZ_MAX_TIMES 2000 #endif /* !defined TZ_MAX_TIMES */ #ifndef TZ_MAX_TYPES /* This must be at least 17 for Europe/Samara and Europe/Vilnius. */ #define TZ_MAX_TYPES 256 /* Limited by what (unsigned char)'s can hold */ #endif /* !defined TZ_MAX_TYPES */ #ifndef TZ_MAX_CHARS #define TZ_MAX_CHARS 50 /* Maximum number of abbreviation characters */ /* (limited by what unsigned chars can hold) */ #endif /* !defined TZ_MAX_CHARS */ #ifndef TZ_MAX_LEAPS #define TZ_MAX_LEAPS 50 /* Maximum number of leap second corrections */ #endif /* !defined TZ_MAX_LEAPS */ #define SECSPERMIN 60 #define MINSPERHOUR 60 #define HOURSPERDAY 24 #define DAYSPERWEEK 7 #define DAYSPERNYEAR 365 #define DAYSPERLYEAR 366 #define SECSPERHOUR (SECSPERMIN * MINSPERHOUR) #define SECSPERDAY ((int_fast32_t) SECSPERHOUR * HOURSPERDAY) #define MONSPERYEAR 12 #define TM_SUNDAY 0 #define TM_MONDAY 1 #define TM_TUESDAY 2 #define TM_WEDNESDAY 3 #define TM_THURSDAY 4 #define TM_FRIDAY 5 #define TM_SATURDAY 6 #define TM_JANUARY 0 #define TM_FEBRUARY 1 #define TM_MARCH 2 #define TM_APRIL 3 #define TM_MAY 4 #define TM_JUNE 5 #define TM_JULY 6 #define TM_AUGUST 7 #define TM_SEPTEMBER 8 #define TM_OCTOBER 9 #define TM_NOVEMBER 10 #define TM_DECEMBER 11 #define TM_YEAR_BASE 1900 #define EPOCH_YEAR 1970 #define EPOCH_WDAY TM_THURSDAY #define isleap(y) (((y) % 4) == 0 && (((y) % 100) != 0 || ((y) % 400) == 0)) /* ** Since everything in isleap is modulo 400 (or a factor of 400), we know that ** isleap(y) == isleap(y % 400) ** and so ** isleap(a + b) == isleap((a + b) % 400) ** or ** isleap(a + b) == isleap(a % 400 + b % 400) ** This is true even if % means modulo rather than Fortran remainder ** (which is allowed by C89 but not C99). ** We use this to avoid addition overflow problems. */ #define isleap_sum(a, b) isleap((a) % 400 + (b) % 400) #ifndef TZ_ABBR_MAX_LEN #define TZ_ABBR_MAX_LEN 16 #endif /* !defined TZ_ABBR_MAX_LEN */ #ifndef TZ_ABBR_CHAR_SET #define TZ_ABBR_CHAR_SET \ "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 :+-._" #endif /* !defined TZ_ABBR_CHAR_SET */ #ifndef TZ_ABBR_ERR_CHAR #define TZ_ABBR_ERR_CHAR '_' #endif /* !defined TZ_ABBR_ERR_CHAR */ #ifndef WILDABBR /* ** Someone might make incorrect use of a time zone abbreviation: ** 1. 
They might reference tzname[0] before calling tzset (explicitly ** or implicitly). ** 2. They might reference tzname[1] before calling tzset (explicitly ** or implicitly). ** 3. They might reference tzname[1] after setting to a time zone ** in which Daylight Saving Time is never observed. ** 4. They might reference tzname[0] after setting to a time zone ** in which Standard Time is never observed. ** 5. They might reference tm.TM_ZONE after calling offtime. ** What's best to do in the above cases is open to debate; ** for now, we just set things up so that in any of the five cases ** WILDABBR is used. Another possibility: initialize tzname[0] to the ** string "tzname[0] used before set", and similarly for the other cases. ** And another: initialize tzname[0] to "ERA", with an explanation in the ** manual page of what this "time zone abbreviation" means (doing this so ** that tzname[0] has the "normal" length of three characters). */ #define WILDABBR " " #endif /* !defined WILDABBR */ #ifdef TM_ZONE static const char wildabbr[] = WILDABBR; #endif static const char gmt[] = "GMT"; struct ttinfo { /* time type information */ int_fast32_t tt_gmtoff; /* UT offset in seconds */ int tt_isdst; /* used to set tm_isdst */ int tt_abbrind; /* abbreviation list index */ int tt_ttisstd; /* true if transition is std time */ int tt_ttisgmt; /* true if transition is UT */ }; struct lsinfo { /* leap second information */ time_t ls_trans; /* transition time */ int_fast64_t ls_corr; /* correction to apply */ }; #define BIGGEST(a, b) (((a) > (b)) ? (a) : (b)) #ifdef TZNAME_MAX #define MY_TZNAME_MAX TZNAME_MAX #endif /* defined TZNAME_MAX */ #ifndef TZNAME_MAX #define MY_TZNAME_MAX 255 #endif /* !defined TZNAME_MAX */ struct state { int leapcnt; int timecnt; int typecnt; int charcnt; int goback; int goahead; time_t ats[TZ_MAX_TIMES]; unsigned char types[TZ_MAX_TIMES]; struct ttinfo ttis[TZ_MAX_TYPES]; char chars[BIGGEST(BIGGEST(TZ_MAX_CHARS + 1, sizeof gmt), (2 * (MY_TZNAME_MAX + 1)))]; struct lsinfo lsis[TZ_MAX_LEAPS]; int defaulttype; /* for early times or if no transitions */ }; struct rule { int r_type; /* type of rule--see below */ int r_day; /* day number of rule */ int r_week; /* week number of rule */ int r_mon; /* month number of rule */ int_fast32_t r_time; /* transition time of rule */ }; #define JULIAN_DAY 0 /* Jn - Julian day */ #define DAY_OF_YEAR 1 /* n - day of year */ #define MONTH_NTH_DAY_OF_WEEK 2 /* Mm.n.d - month, week, day of week */ /* ** Prototypes for static functions. 
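**
** (Call-graph sketch, as wired up in this bundled copy: _bson_timegm()
** calls time1() with gmtsub() as its conversion callback; time1() delegates
** to time2() and time2sub(), which binary-search the time_t range, decoding
** each probe back into a struct tm via gmtsub()/timesub() and comparing it
** against the caller's fields with tmcomp().)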
*/ static void gmtload(struct state * sp); static struct tm * gmtsub(const time_t * timep, int_fast32_t offset, struct tm * tmp); static int increment_overflow(int * number, int delta); static int leaps_thru_end_of(int y) ATTRIBUTE_PURE; static int increment_overflow32(int_fast32_t * number, int delta); static int normalize_overflow32(int_fast32_t * tensptr, int * unitsptr, int base); static int normalize_overflow(int * tensptr, int * unitsptr, int base); static time_t time1(struct tm * tmp, struct tm * (*funcp)(const time_t *, int_fast32_t, struct tm *), int_fast32_t offset); static time_t time2(struct tm *tmp, struct tm * (*funcp)(const time_t *, int_fast32_t, struct tm*), int_fast32_t offset, int * okayp); static time_t time2sub(struct tm *tmp, struct tm * (*funcp)(const time_t *, int_fast32_t, struct tm*), int_fast32_t offset, int * okayp, int do_norm_secs); static struct tm * timesub(const time_t * timep, int_fast32_t offset, const struct state * sp, struct tm * tmp); static int tmcomp(const struct tm * atmp, const struct tm * btmp); static struct state gmtmem; #define gmtptr (&gmtmem) static int gmt_is_set; static const int mon_lengths[2][MONSPERYEAR] = { { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 }, { 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 } }; static const int year_lengths[2] = { DAYSPERNYEAR, DAYSPERLYEAR }; static void gmtload(struct state *const sp) { memset(sp, 0, sizeof(struct state)); sp->typecnt = 1; sp->charcnt = 4; sp->chars[0] = 'G'; sp->chars[1] = 'M'; sp->chars[2] = 'T'; } /* ** gmtsub is to gmtime as localsub is to localtime. */ static struct tm * gmtsub(const time_t *const timep, const int_fast32_t offset, struct tm *const tmp) { register struct tm * result; if (!gmt_is_set) { gmt_is_set = true; // if (gmtptr != NULL) gmtload(gmtptr); } result = timesub(timep, offset, gmtptr, tmp); #ifdef TM_ZONE /* ** Could get fancy here and deliver something such as ** "UT+xxxx" or "UT-xxxx" if offset is non-zero, ** but this is no time for a treasure hunt. */ tmp->TM_ZONE = offset ? wildabbr : gmtptr ? gmtptr->chars : gmt; #endif /* defined TM_ZONE */ return result; } /* ** Return the number of leap years through the end of the given year ** where, to make the math easy, the answer for year zero is defined as zero. */ static int leaps_thru_end_of(register const int y) { return (y >= 0) ? (y / 4 - y / 100 + y / 400) : -(leaps_thru_end_of(-(y + 1)) + 1); } static struct tm * timesub(const time_t *const timep, const int_fast32_t offset, register const struct state *const sp, register struct tm *const tmp) { register const struct lsinfo * lp; register time_t tdays; register int idays; /* unsigned would be so 2003 */ register int_fast64_t rem; int y; register const int * ip; register int_fast64_t corr; register int hit; register int i; corr = 0; hit = 0; i = (sp == NULL) ? 0 : sp->leapcnt; while (--i >= 0) { lp = &sp->lsis[i]; if (*timep >= lp->ls_trans) { if (*timep == lp->ls_trans) { hit = ((i == 0 && lp->ls_corr > 0) || lp->ls_corr > sp->lsis[i - 1].ls_corr); if (hit) while (i > 0 && sp->lsis[i].ls_trans == sp->lsis[i - 1].ls_trans + 1 && sp->lsis[i].ls_corr == sp->lsis[i - 1].ls_corr + 1) { ++hit; --i; } } corr = lp->ls_corr; break; } } y = EPOCH_YEAR; tdays = *timep / SECSPERDAY; rem = *timep - tdays * SECSPERDAY; while (tdays < 0 || tdays >= year_lengths[isleap(y)]) { int newy; register time_t tdelta; register int idelta; register int leapdays; tdelta = tdays / DAYSPERLYEAR; if (! ((! 
TYPE_SIGNED(time_t) || INT_MIN <= tdelta) && tdelta <= INT_MAX)) return NULL; idelta = (int) tdelta; if (idelta == 0) idelta = (tdays < 0) ? -1 : 1; newy = y; if (increment_overflow(&newy, idelta)) return NULL; leapdays = leaps_thru_end_of(newy - 1) - leaps_thru_end_of(y - 1); tdays -= ((time_t) newy - y) * DAYSPERNYEAR; tdays -= leapdays; y = newy; } { register int_fast32_t seconds; seconds = (int_fast32_t) (tdays * SECSPERDAY); tdays = seconds / SECSPERDAY; rem += seconds - tdays * SECSPERDAY; } /* ** Given the range, we can now fearlessly cast... */ idays = (int) tdays; rem += offset - corr; while (rem < 0) { rem += SECSPERDAY; --idays; } while (rem >= SECSPERDAY) { rem -= SECSPERDAY; ++idays; } while (idays < 0) { if (increment_overflow(&y, -1)) return NULL; idays += year_lengths[isleap(y)]; } while (idays >= year_lengths[isleap(y)]) { idays -= year_lengths[isleap(y)]; if (increment_overflow(&y, 1)) return NULL; } tmp->tm_year = y; if (increment_overflow(&tmp->tm_year, -TM_YEAR_BASE)) return NULL; tmp->tm_yday = idays; /* ** The "extra" mods below avoid overflow problems. */ tmp->tm_wday = EPOCH_WDAY + ((y - EPOCH_YEAR) % DAYSPERWEEK) * (DAYSPERNYEAR % DAYSPERWEEK) + leaps_thru_end_of(y - 1) - leaps_thru_end_of(EPOCH_YEAR - 1) + idays; tmp->tm_wday %= DAYSPERWEEK; if (tmp->tm_wday < 0) tmp->tm_wday += DAYSPERWEEK; tmp->tm_hour = (int) (rem / SECSPERHOUR); rem %= SECSPERHOUR; tmp->tm_min = (int) (rem / SECSPERMIN); /* ** A positive leap second requires a special ** representation. This uses "... ??:59:60" et seq. */ tmp->tm_sec = (int) (rem % SECSPERMIN) + hit; ip = mon_lengths[isleap(y)]; for (tmp->tm_mon = 0; idays >= ip[tmp->tm_mon]; ++(tmp->tm_mon)) idays -= ip[tmp->tm_mon]; tmp->tm_mday = (int) (idays + 1); tmp->tm_isdst = 0; #ifdef TM_GMTOFF tmp->TM_GMTOFF = offset; #endif /* defined TM_GMTOFF */ return tmp; } /* ** Adapted from code provided by Robert Elz, who writes: ** The "best" way to do mktime I think is based on an idea of Bob ** Kridle's (so its said...) from a long time ago. ** It does a binary search of the time_t space. Since time_t's are ** just 32 bits, its a max of 32 iterations (even at 64 bits it ** would still be very reasonable). */ #ifndef WRONG #define WRONG (-1) #endif /* !defined WRONG */ /* ** Normalize logic courtesy Paul Eggert. */ static int increment_overflow(int *const ip, int j) { register int const i = *ip; /* ** If i >= 0 there can only be overflow if i + j > INT_MAX ** or if j > INT_MAX - i; given i >= 0, INT_MAX - i cannot overflow. ** If i < 0 there can only be overflow if i + j < INT_MIN ** or if j < INT_MIN - i; given i < 0, INT_MIN - i cannot overflow. */ if ((i >= 0) ? (j > INT_MAX - i) : (j < INT_MIN - i)) return true; *ip += j; return false; } static int increment_overflow32(int_fast32_t *const lp, int const m) { register int_fast32_t const l = *lp; if ((l >= 0) ? (m > INT_FAST32_MAX - l) : (m < INT_FAST32_MIN - l)) return true; *lp += m; return false; } static int normalize_overflow(int *const tensptr, int *const unitsptr, const int base) { register int tensdelta; tensdelta = (*unitsptr >= 0) ? (*unitsptr / base) : (-1 - (-1 - *unitsptr) / base); *unitsptr -= tensdelta * base; return increment_overflow(tensptr, tensdelta); } static int normalize_overflow32(int_fast32_t *const tensptr, int *const unitsptr, const int base) { register int tensdelta; tensdelta = (*unitsptr >= 0) ? 
(*unitsptr / base) : (-1 - (-1 - *unitsptr) / base); *unitsptr -= tensdelta * base; return increment_overflow32(tensptr, tensdelta); } static int tmcomp(register const struct tm *const atmp, register const struct tm *const btmp) { register int result; if (atmp->tm_year != btmp->tm_year) return atmp->tm_year < btmp->tm_year ? -1 : 1; if ((result = (atmp->tm_mon - btmp->tm_mon)) == 0 && (result = (atmp->tm_mday - btmp->tm_mday)) == 0 && (result = (atmp->tm_hour - btmp->tm_hour)) == 0 && (result = (atmp->tm_min - btmp->tm_min)) == 0) result = atmp->tm_sec - btmp->tm_sec; return result; } static time_t time2sub(struct tm *const tmp, struct tm *(*const funcp)(const time_t *, int_fast32_t, struct tm *), const int_fast32_t offset, int *const okayp, const int do_norm_secs) { register const struct state * sp; register int dir; register int i, j; register int saved_seconds; register int_fast32_t li; register time_t lo; register time_t hi; int_fast32_t y; time_t newt; time_t t; struct tm yourtm, mytm; *okayp = false; yourtm = *tmp; if (do_norm_secs) { if (normalize_overflow(&yourtm.tm_min, &yourtm.tm_sec, SECSPERMIN)) return WRONG; } if (normalize_overflow(&yourtm.tm_hour, &yourtm.tm_min, MINSPERHOUR)) return WRONG; if (normalize_overflow(&yourtm.tm_mday, &yourtm.tm_hour, HOURSPERDAY)) return WRONG; y = yourtm.tm_year; if (normalize_overflow32(&y, &yourtm.tm_mon, MONSPERYEAR)) return WRONG; /* ** Turn y into an actual year number for now. ** It is converted back to an offset from TM_YEAR_BASE later. */ if (increment_overflow32(&y, TM_YEAR_BASE)) return WRONG; while (yourtm.tm_mday <= 0) { if (increment_overflow32(&y, -1)) return WRONG; li = y + (1 < yourtm.tm_mon); yourtm.tm_mday += year_lengths[isleap(li)]; } while (yourtm.tm_mday > DAYSPERLYEAR) { li = y + (1 < yourtm.tm_mon); yourtm.tm_mday -= year_lengths[isleap(li)]; if (increment_overflow32(&y, 1)) return WRONG; } for ( ; ; ) { i = mon_lengths[isleap(y)][yourtm.tm_mon]; if (yourtm.tm_mday <= i) break; yourtm.tm_mday -= i; if (++yourtm.tm_mon >= MONSPERYEAR) { yourtm.tm_mon = 0; if (increment_overflow32(&y, 1)) return WRONG; } } if (increment_overflow32(&y, -TM_YEAR_BASE)) return WRONG; yourtm.tm_year = y; if (yourtm.tm_year != y) return WRONG; if (yourtm.tm_sec >= 0 && yourtm.tm_sec < SECSPERMIN) saved_seconds = 0; else if (y + TM_YEAR_BASE < EPOCH_YEAR) { /* ** We can't set tm_sec to 0, because that might push the ** time below the minimum representable time. ** Set tm_sec to 59 instead. ** This assumes that the minimum representable time is ** not in the same minute that a leap second was deleted from, ** which is a safer assumption than using 58 would be. */ if (increment_overflow(&yourtm.tm_sec, 1 - SECSPERMIN)) return WRONG; saved_seconds = yourtm.tm_sec; yourtm.tm_sec = SECSPERMIN - 1; } else { saved_seconds = yourtm.tm_sec; yourtm.tm_sec = 0; } /* ** Do a binary search (this works whatever time_t's type is). */ if (!TYPE_SIGNED(time_t)) { lo = 0; hi = lo - 1; } else { lo = 1; for (i = 0; i < (int) TYPE_BIT(time_t) - 1; ++i) lo *= 2; hi = -(lo + 1); } for ( ; ; ) { t = lo / 2 + hi / 2; if (t < lo) t = lo; else if (t > hi) t = hi; if ((*funcp)(&t, offset, &mytm) == NULL) { /* ** Assume that t is too extreme to be represented in ** a struct tm; arrange things so that it is less ** extreme on the next pass. */ dir = (t > 0) ? 
1 : -1; } else dir = tmcomp(&mytm, &yourtm); if (dir != 0) { if (t == lo) { if (t == time_t_max) return WRONG; ++t; ++lo; } else if (t == hi) { if (t == time_t_min) return WRONG; --t; --hi; } if (lo > hi) return WRONG; if (dir > 0) hi = t; else lo = t; continue; } if (yourtm.tm_isdst < 0 || mytm.tm_isdst == yourtm.tm_isdst) break; /* ** Right time, wrong type. ** Hunt for right time, right type. ** It's okay to guess wrong since the guess ** gets checked. */ sp = (const struct state *) gmtptr; if (sp == NULL) return WRONG; for (i = sp->typecnt - 1; i >= 0; --i) { if (sp->ttis[i].tt_isdst != yourtm.tm_isdst) continue; for (j = sp->typecnt - 1; j >= 0; --j) { if (sp->ttis[j].tt_isdst == yourtm.tm_isdst) continue; newt = t + sp->ttis[j].tt_gmtoff - sp->ttis[i].tt_gmtoff; if ((*funcp)(&newt, offset, &mytm) == NULL) continue; if (tmcomp(&mytm, &yourtm) != 0) continue; if (mytm.tm_isdst != yourtm.tm_isdst) continue; /* ** We have a match. */ t = newt; goto label; } } return WRONG; } label: newt = t + saved_seconds; if ((newt < t) != (saved_seconds < 0)) return WRONG; t = newt; if ((*funcp)(&t, offset, tmp)) *okayp = true; return t; } static time_t time2(struct tm * const tmp, struct tm * (*const funcp)(const time_t *, int_fast32_t, struct tm *), const int_fast32_t offset, int *const okayp) { time_t t; /* ** First try without normalization of seconds ** (in case tm_sec contains a value associated with a leap second). ** If that fails, try with normalization of seconds. */ t = time2sub(tmp, funcp, offset, okayp, false); return *okayp ? t : time2sub(tmp, funcp, offset, okayp, true); } static time_t time1(struct tm *const tmp, struct tm *(*const funcp) (const time_t *, int_fast32_t, struct tm *), const int_fast32_t offset) { register time_t t; register const struct state * sp; register int samei, otheri; register int sameind, otherind; register int i; register int nseen; int seen[TZ_MAX_TYPES]; int types[TZ_MAX_TYPES]; int okay; if (tmp == NULL) { errno = EINVAL; return WRONG; } if (tmp->tm_isdst > 1) tmp->tm_isdst = 1; t = time2(tmp, funcp, offset, &okay); if (okay) return t; if (tmp->tm_isdst < 0) #ifdef PCTS /* ** POSIX Conformance Test Suite code courtesy Grant Sullivan. */ tmp->tm_isdst = 0; /* reset to std and try again */ #else return t; #endif /* !defined PCTS */ /* ** We're supposed to assume that somebody took a time of one type ** and did some math on it that yielded a "struct tm" that's bad. ** We try to divine the type they started from and adjust to the ** type they need. 
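**
** (In this bundled copy the only state ever consulted is the one built by
** gmtload(), which records no transitions (timecnt == 0) and a single
** non-DST "GMT" type, and _bson_timegm() forces tm_isdst to 0 before
** calling in; so when time2() fails, the recovery scan below has nothing
** to walk and the function simply returns WRONG.)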
*/ sp = (const struct state *) gmtptr; if (sp == NULL) return WRONG; for (i = 0; i < sp->typecnt; ++i) seen[i] = false; nseen = 0; for (i = sp->timecnt - 1; i >= 0; --i) if (!seen[sp->types[i]]) { seen[sp->types[i]] = true; types[nseen++] = sp->types[i]; } for (sameind = 0; sameind < nseen; ++sameind) { samei = types[sameind]; if (sp->ttis[samei].tt_isdst != tmp->tm_isdst) continue; for (otherind = 0; otherind < nseen; ++otherind) { otheri = types[otherind]; if (sp->ttis[otheri].tt_isdst == tmp->tm_isdst) continue; tmp->tm_sec += sp->ttis[otheri].tt_gmtoff - sp->ttis[samei].tt_gmtoff; tmp->tm_isdst = !tmp->tm_isdst; t = time2(tmp, funcp, offset, &okay); if (okay) return t; tmp->tm_sec -= sp->ttis[otheri].tt_gmtoff - sp->ttis[samei].tt_gmtoff; tmp->tm_isdst = !tmp->tm_isdst; } } return WRONG; } time_t _bson_timegm(struct tm *const tmp) { if (tmp != NULL) tmp->tm_isdst = 0; return time1(tmp, gmtsub, 0L); } #endif MongoDB-v1.2.2/bson/bson-types.h000644 000765 000024 00000040541 12651754051 016650 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_TYPES_H #define BSON_TYPES_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) #error "Only can be included directly." #endif #include #include #include "bson-macros.h" #include "bson-compat.h" BSON_BEGIN_DECLS /* *-------------------------------------------------------------------------- * * bson_unichar_t -- * * bson_unichar_t provides an unsigned 32-bit type for containing * unicode characters. When iterating UTF-8 sequences, this should * be used to avoid losing the high-bits of non-ascii characters. * * See also: * bson_string_append_unichar() * *-------------------------------------------------------------------------- */ typedef uint32_t bson_unichar_t; /** * bson_context_flags_t: * * This enumeration is used to configure a bson_context_t. * * %BSON_CONTEXT_NONE: Use default options. * %BSON_CONTEXT_THREAD_SAFE: Context will be called from multiple threads. * %BSON_CONTEXT_DISABLE_PID_CACHE: Call getpid() instead of caching the * result of getpid() when initializing the context. * %BSON_CONTEXT_DISABLE_HOST_CACHE: Call gethostname() instead of caching the * result of gethostname() when initializing the context. */ typedef enum { BSON_CONTEXT_NONE = 0, BSON_CONTEXT_THREAD_SAFE = (1 << 0), BSON_CONTEXT_DISABLE_HOST_CACHE = (1 << 1), BSON_CONTEXT_DISABLE_PID_CACHE = (1 << 2), #if defined(__linux__) BSON_CONTEXT_USE_TASK_ID = (1 << 3), #endif } bson_context_flags_t; /** * bson_context_t: * * This structure manages context for the bson library. It handles * configuration for thread-safety and other performance related requirements. * Consumers will create a context and may use multiple under a variety of * situations. * * If your program calls fork(), you should initialize a new bson_context_t * using bson_context_init(). * * If you are using threading, it is suggested that you use a bson_context_t * per thread for best performance. 
Alternatively, you can initialize the * bson_context_t with BSON_CONTEXT_THREAD_SAFE, although a performance penalty * will be incurred. * * Many functions will require that you provide a bson_context_t such as OID * generation. * * This structure is oqaque in that you cannot see the contents of the * structure. However, it is stack allocatable in that enough padding is * provided in _bson_context_t to hold the structure. */ typedef struct _bson_context_t bson_context_t; /** * bson_t: * * This structure manages a buffer whose contents are a properly formatted * BSON document. You may perform various transforms on the BSON documents. * Additionally, it can be iterated over using bson_iter_t. * * See bson_iter_init() for iterating the contents of a bson_t. * * When building a bson_t structure using the various append functions, * memory allocations may occur. That is performed using power of two * allocations and realloc(). * * See http://bsonspec.org for the BSON document spec. * * This structure is meant to fit in two sequential 64-byte cachelines. */ BSON_ALIGNED_BEGIN (128) typedef struct _bson_t { uint32_t flags; /* Internal flags for the bson_t. */ uint32_t len; /* Length of BSON data. */ uint8_t padding[120]; /* Padding for stack allocation. */ } bson_t BSON_ALIGNED_END (128); /** * BSON_INITIALIZER: * * This macro can be used to initialize a #bson_t structure on the stack * without calling bson_init(). * * |[ * bson_t b = BSON_INITIALIZER; * ]| */ #define BSON_INITIALIZER { 3, 5, { 5 } } BSON_STATIC_ASSERT (sizeof (bson_t) == 128); /** * bson_oid_t: * * This structure contains the binary form of a BSON Object Id as specified * on http://bsonspec.org. If you would like the bson_oid_t in string form * see bson_oid_to_string() or bson_oid_to_string_r(). */ typedef struct { uint8_t bytes[12]; } bson_oid_t; BSON_STATIC_ASSERT (sizeof (bson_oid_t) == 12); /** * bson_validate_flags_t: * * This enumeration is used for validation of BSON documents. It allows * selective control on what you wish to validate. * * %BSON_VALIDATE_NONE: No additional validation occurs. * %BSON_VALIDATE_UTF8: Check that strings are valid UTF-8. * %BSON_VALIDATE_DOLLAR_KEYS: Check that keys do not start with $. * %BSON_VALIDATE_DOT_KEYS: Check that keys do not contain a period. * %BSON_VALIDATE_UTF8_ALLOW_NULL: Allow NUL bytes in UTF-8 text. */ typedef enum { BSON_VALIDATE_NONE = 0, BSON_VALIDATE_UTF8 = (1 << 0), BSON_VALIDATE_DOLLAR_KEYS = (1 << 1), BSON_VALIDATE_DOT_KEYS = (1 << 2), BSON_VALIDATE_UTF8_ALLOW_NULL = (1 << 3), } bson_validate_flags_t; /** * bson_type_t: * * This enumeration contains all of the possible types within a BSON document. * Use bson_iter_type() to fetch the type of a field while iterating over it. */ typedef enum { BSON_TYPE_EOD = 0x00, BSON_TYPE_DOUBLE = 0x01, BSON_TYPE_UTF8 = 0x02, BSON_TYPE_DOCUMENT = 0x03, BSON_TYPE_ARRAY = 0x04, BSON_TYPE_BINARY = 0x05, BSON_TYPE_UNDEFINED = 0x06, BSON_TYPE_OID = 0x07, BSON_TYPE_BOOL = 0x08, BSON_TYPE_DATE_TIME = 0x09, BSON_TYPE_NULL = 0x0A, BSON_TYPE_REGEX = 0x0B, BSON_TYPE_DBPOINTER = 0x0C, BSON_TYPE_CODE = 0x0D, BSON_TYPE_SYMBOL = 0x0E, BSON_TYPE_CODEWSCOPE = 0x0F, BSON_TYPE_INT32 = 0x10, BSON_TYPE_TIMESTAMP = 0x11, BSON_TYPE_INT64 = 0x12, BSON_TYPE_MAXKEY = 0x7F, BSON_TYPE_MINKEY = 0xFF, } bson_type_t; /** * bson_subtype_t: * * This enumeration contains the various subtypes that may be used in a binary * field. See http://bsonspec.org for more information. 
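 *
 * Illustrative use (a sketch; bson_new(), bson_append_binary() and
 * bson_destroy() are declared in bson.h, not in this header):
 *
 *   const uint8_t bytes[] = { 0xDE, 0xAD, 0xBE, 0xEF };
 *   bson_t *doc = bson_new ();
 *
 *   bson_append_binary (doc, "payload", -1,
 *                       BSON_SUBTYPE_BINARY, bytes, sizeof bytes);
 *   bson_destroy (doc);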
*/ typedef enum { BSON_SUBTYPE_BINARY = 0x00, BSON_SUBTYPE_FUNCTION = 0x01, BSON_SUBTYPE_BINARY_DEPRECATED = 0x02, BSON_SUBTYPE_UUID_DEPRECATED = 0x03, BSON_SUBTYPE_UUID = 0x04, BSON_SUBTYPE_MD5 = 0x05, BSON_SUBTYPE_USER = 0x80, } bson_subtype_t; /* *-------------------------------------------------------------------------- * * bson_value_t -- * * A boxed type to contain various bson_type_t types. * * See also: * bson_value_copy() * bson_value_destroy() * *-------------------------------------------------------------------------- */ BSON_ALIGNED_BEGIN (8) typedef struct _bson_value_t { bson_type_t value_type; int32_t padding; union { bson_oid_t v_oid; int64_t v_int64; int32_t v_int32; int8_t v_int8; double v_double; bool v_bool; int64_t v_datetime; struct { uint32_t timestamp; uint32_t increment; } v_timestamp; struct { char *str; uint32_t len; } v_utf8; struct { uint8_t *data; uint32_t data_len; } v_doc; struct { uint8_t *data; uint32_t data_len; bson_subtype_t subtype; } v_binary; struct { char *regex; char *options; } v_regex; struct { char *collection; uint32_t collection_len; bson_oid_t oid; } v_dbpointer; struct { char *code; uint32_t code_len; } v_code; struct { char *code; uint8_t *scope_data; uint32_t code_len; uint32_t scope_len; } v_codewscope; struct { char *symbol; uint32_t len; } v_symbol; } value; } bson_value_t BSON_ALIGNED_END (8); /** * bson_iter_t: * * This structure manages iteration over a bson_t structure. It keeps track * of the location of the current key and value within the buffer. Using the * various functions to get the value of the iter will read from these * locations. * * This structure is safe to discard on the stack. No cleanup is necessary * after using it. */ BSON_ALIGNED_BEGIN (128) typedef struct { const uint8_t *raw; /* The raw buffer being iterated. */ uint32_t len; /* The length of raw. */ uint32_t off; /* The offset within the buffer. */ uint32_t type; /* The offset of the type byte. */ uint32_t key; /* The offset of the key byte. */ uint32_t d1; /* The offset of the first data byte. */ uint32_t d2; /* The offset of the second data byte. */ uint32_t d3; /* The offset of the third data byte. */ uint32_t d4; /* The offset of the fourth data byte. */ uint32_t next_off; /* The offset of the next field. */ uint32_t err_off; /* The offset of the error. */ bson_value_t value; /* Internal value for various state. */ } bson_iter_t BSON_ALIGNED_END (128); /** * bson_reader_t: * * This structure is used to iterate over a sequence of BSON documents. It * allows for them to be iterated with the possibility of no additional * memory allocations under certain circumstances such as reading from an * incoming mongo packet. */ typedef struct { uint32_t type; /*< private >*/ } bson_reader_t; /** * bson_visitor_t: * * This structure contains a series of pointers that can be executed for * each field of a BSON document based on the field type. * * For example, if an int32 field is found, visit_int32 will be called. * * When visiting each field using bson_iter_visit_all(), you may provide a * data pointer that will be provided with each callback. This might be useful * if you are marshaling to another language. * * You may pre-maturely stop the visitation of fields by returning true in your * visitor. Returning false will continue visitation to further fields. 
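 *
 * A minimal sketch (bson_iter_init() and bson_iter_visit_all() are declared
 * in bson-iter.h; error handling omitted):
 *
 *   static bool
 *   print_int32 (const bson_iter_t *iter, const char *key,
 *                int32_t v_int32, void *data)
 *   {
 *      printf ("%s => %d\n", key, (int) v_int32);
 *      return false;                  // false == keep visiting
 *   }
 *
 *   ...
 *
 *   bson_visitor_t visitor = { 0 };
 *   bson_iter_t iter;
 *
 *   visitor.visit_int32 = print_int32;
 *   if (bson_iter_init (&iter, doc)) {
 *      bson_iter_visit_all (&iter, &visitor, NULL);
 *   }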
*/ BSON_ALIGNED_BEGIN (8) typedef struct { bool (*visit_before) (const bson_iter_t *iter, const char *key, void *data); bool (*visit_after) (const bson_iter_t *iter, const char *key, void *data); void (*visit_corrupt) (const bson_iter_t *iter, void *data); bool (*visit_double) (const bson_iter_t *iter, const char *key, double v_double, void *data); bool (*visit_utf8) (const bson_iter_t *iter, const char *key, size_t v_utf8_len, const char *v_utf8, void *data); bool (*visit_document) (const bson_iter_t *iter, const char *key, const bson_t *v_document, void *data); bool (*visit_array) (const bson_iter_t *iter, const char *key, const bson_t *v_array, void *data); bool (*visit_binary) (const bson_iter_t *iter, const char *key, bson_subtype_t v_subtype, size_t v_binary_len, const uint8_t *v_binary, void *data); bool (*visit_undefined) (const bson_iter_t *iter, const char *key, void *data); bool (*visit_oid) (const bson_iter_t *iter, const char *key, const bson_oid_t *v_oid, void *data); bool (*visit_bool) (const bson_iter_t *iter, const char *key, bool v_bool, void *data); bool (*visit_date_time) (const bson_iter_t *iter, const char *key, int64_t msec_since_epoch, void *data); bool (*visit_null) (const bson_iter_t *iter, const char *key, void *data); bool (*visit_regex) (const bson_iter_t *iter, const char *key, const char *v_regex, const char *v_options, void *data); bool (*visit_dbpointer) (const bson_iter_t *iter, const char *key, size_t v_collection_len, const char *v_collection, const bson_oid_t *v_oid, void *data); bool (*visit_code) (const bson_iter_t *iter, const char *key, size_t v_code_len, const char *v_code, void *data); bool (*visit_symbol) (const bson_iter_t *iter, const char *key, size_t v_symbol_len, const char *v_symbol, void *data); bool (*visit_codewscope) (const bson_iter_t *iter, const char *key, size_t v_code_len, const char *v_code, const bson_t *v_scope, void *data); bool (*visit_int32) (const bson_iter_t *iter, const char *key, int32_t v_int32, void *data); bool (*visit_timestamp) (const bson_iter_t *iter, const char *key, uint32_t v_timestamp, uint32_t v_increment, void *data); bool (*visit_int64) (const bson_iter_t *iter, const char *key, int64_t v_int64, void *data); bool (*visit_maxkey) (const bson_iter_t *iter, const char *key, void *data); bool (*visit_minkey) (const bson_iter_t *iter, const char *key, void *data); void *padding[9]; } bson_visitor_t BSON_ALIGNED_END (8); BSON_ALIGNED_BEGIN (8) typedef struct _bson_error_t { uint32_t domain; uint32_t code; char message[504]; } bson_error_t BSON_ALIGNED_END (8); BSON_STATIC_ASSERT (sizeof (bson_error_t) == 512); /** * bson_next_power_of_two: * @v: A 32-bit unsigned integer of required bytes. * * Determines the next larger power of two for the value of @v * in a constant number of operations. * * It is up to the caller to guarantee this will not overflow. * * Returns: The next power of 2 from @v. */ static BSON_INLINE size_t bson_next_power_of_two (size_t v) { v--; v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v |= v >> 16; #if BSON_WORD_SIZE == 64 v |= v >> 32; #endif v++; return v; } static BSON_INLINE bool bson_is_power_of_two (uint32_t v) { return ((v != 0) && ((v & (v - 1)) == 0)); } BSON_END_DECLS #endif /* BSON_TYPES_H */ MongoDB-v1.2.2/bson/bson-utf8.c000644 000765 000024 00000022774 12651754051 016375 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include "bson-memory.h" #include "bson-string.h" #include "bson-utf8.h" /* *-------------------------------------------------------------------------- * * _bson_utf8_get_sequence -- * * Determine the sequence length of the first UTF-8 character in * @utf8. The sequence length is stored in @seq_length and the mask * for the first character is stored in @first_mask. * * Returns: * None. * * Side effects: * @seq_length is set. * @first_mask is set. * *-------------------------------------------------------------------------- */ static BSON_INLINE void _bson_utf8_get_sequence (const char *utf8, /* IN */ uint8_t *seq_length, /* OUT */ uint8_t *first_mask) /* OUT */ { unsigned char c = *(const unsigned char *)utf8; uint8_t m; uint8_t n; /* * See the following[1] for a description of what the given multi-byte * sequences will be based on the bits set of the first byte. We also need * to mask the first byte based on that. All subsequent bytes are masked * against 0x3F. * * [1] http://www.joelonsoftware.com/articles/Unicode.html */ if ((c & 0x80) == 0) { n = 1; m = 0x7F; } else if ((c & 0xE0) == 0xC0) { n = 2; m = 0x1F; } else if ((c & 0xF0) == 0xE0) { n = 3; m = 0x0F; } else if ((c & 0xF8) == 0xF0) { n = 4; m = 0x07; } else if ((c & 0xFC) == 0xF8) { n = 5; m = 0x03; } else if ((c & 0xFE) == 0xFC) { n = 6; m = 0x01; } else { n = 0; m = 0; } *seq_length = n; *first_mask = m; } /* *-------------------------------------------------------------------------- * * bson_utf8_validate -- * * Validates that @utf8 is a valid UTF-8 string. * * If @allow_null is true, then \0 is allowed within @utf8_len bytes * of @utf8. Generally, this is bad practice since the main point of * UTF-8 strings is that they can be used with strlen() and friends. * However, some languages such as Python can send UTF-8 encoded * strings with NUL's in them. * * Parameters: * @utf8: A UTF-8 encoded string. * @utf8_len: The length of @utf8 in bytes. * @allow_null: If \0 is allowed within @utf8, exclusing trailing \0. * * Returns: * true if @utf8 is valid UTF-8. otherwise false. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_utf8_validate (const char *utf8, /* IN */ size_t utf8_len, /* IN */ bool allow_null) /* IN */ { bson_unichar_t c; uint8_t first_mask; uint8_t seq_length; unsigned i; unsigned j; bson_return_val_if_fail (utf8, false); for (i = 0; i < utf8_len; i += seq_length) { _bson_utf8_get_sequence (&utf8[i], &seq_length, &first_mask); /* * Ensure we have a valid multi-byte sequence length. */ if (!seq_length) { return false; } /* * Ensure we have enough bytes left. */ if ((utf8_len - i) < seq_length) { return false; } /* * Also calculate the next char as a unichar so we can * check code ranges for non-shortest form. */ c = utf8 [i] & first_mask; /* * Check the high-bits for each additional sequence byte. */ for (j = i + 1; j < (i + seq_length); j++) { c = (c << 6) | (utf8 [j] & 0x3F); if ((utf8[j] & 0xC0) != 0x80) { return false; } } /* * Check for NULL bytes afterwards. 
* * Hint: if you want to optimize this function, starting here to do * this in the same pass as the data above would probably be a good * idea. You would add a branch into the inner loop, but save possibly * on cache-line bouncing on larger strings. Just a thought. */ if (!allow_null) { for (j = 0; j < seq_length; j++) { if (((i + j) > utf8_len) || !utf8[i + j]) { return false; } } } /* * Code point wont fit in utf-16, not allowed. */ if (c > 0x0010FFFF) { return false; } /* * Byte is in reserved range for UTF-16 high-marks * for surrogate pairs. */ if ((c & 0xFFFFF800) == 0xD800) { return false; } /* * Check non-shortest form unicode. */ switch (seq_length) { case 1: if (c <= 0x007F) { continue; } return false; case 2: if ((c >= 0x0080) && (c <= 0x07FF)) { continue; } else if (c == 0) { /* Two-byte representation for NULL. */ continue; } return false; case 3: if (((c >= 0x0800) && (c <= 0x0FFF)) || ((c >= 0x1000) && (c <= 0xFFFF))) { continue; } return false; case 4: if (((c >= 0x10000) && (c <= 0x3FFFF)) || ((c >= 0x40000) && (c <= 0xFFFFF)) || ((c >= 0x100000) && (c <= 0x10FFFF))) { continue; } return false; default: return false; } } return true; } /* *-------------------------------------------------------------------------- * * bson_utf8_get_char -- * * Fetches the next UTF-8 character from the UTF-8 sequence. * * Parameters: * @utf8: A string containing validated UTF-8. * * Returns: * A 32-bit bson_unichar_t reprsenting the multi-byte sequence. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_unichar_t bson_utf8_get_char (const char *utf8) /* IN */ { bson_unichar_t c; uint8_t mask; uint8_t num; int i; bson_return_val_if_fail (utf8, -1); _bson_utf8_get_sequence (utf8, &num, &mask); c = (*utf8) & mask; for (i = 1; i < num; i++) { c = (c << 6) | (utf8[i] & 0x3F); } return c; } /* *-------------------------------------------------------------------------- * * bson_utf8_next_char -- * * Returns an incremented pointer to the beginning of the next * multi-byte sequence in @utf8. * * Parameters: * @utf8: A string containing validated UTF-8. * * Returns: * An incremented pointer in @utf8. * * Side effects: * None. * *-------------------------------------------------------------------------- */ const char * bson_utf8_next_char (const char *utf8) /* IN */ { uint8_t mask; uint8_t num; bson_return_val_if_fail (utf8, NULL); _bson_utf8_get_sequence (utf8, &num, &mask); return utf8 + num; } /* *-------------------------------------------------------------------------- * * bson_utf8_from_unichar -- * * Converts the unichar to a sequence of utf8 bytes and stores those * in @utf8. The number of bytes in the sequence are stored in @len. * * Parameters: * @unichar: A bson_unichar_t. * @utf8: A location for the multi-byte sequence. * @len: A location for number of bytes stored in @utf8. * * Returns: * None. * * Side effects: * @utf8 is set. * @len is set. 
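 *
 * Example (for illustration): encoding U+20AC, the Euro sign, produces the
 * three-byte sequence 0xE2 0x82 0xAC:
 *
 *   char     buf[6];
 *   uint32_t len;
 *
 *   bson_utf8_from_unichar (0x20AC, buf, &len);   // buf holds E2 82 AC, len == 3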
* *-------------------------------------------------------------------------- */ void bson_utf8_from_unichar ( bson_unichar_t unichar, /* IN */ char utf8[BSON_ENSURE_ARRAY_PARAM_SIZE(6)], /* OUT */ uint32_t *len) /* OUT */ { bson_return_if_fail (utf8); bson_return_if_fail (len); if (unichar <= 0x7F) { utf8[0] = unichar; *len = 1; } else if (unichar <= 0x7FF) { *len = 2; utf8[0] = 0xC0 | ((unichar >> 6) & 0x3F); utf8[1] = 0x80 | ((unichar) & 0x3F); } else if (unichar <= 0xFFFF) { *len = 3; utf8[0] = 0xE0 | ((unichar >> 12) & 0xF); utf8[1] = 0x80 | ((unichar >> 6) & 0x3F); utf8[2] = 0x80 | ((unichar) & 0x3F); } else if (unichar <= 0x1FFFFF) { *len = 4; utf8[0] = 0xF0 | ((unichar >> 18) & 0x7); utf8[1] = 0x80 | ((unichar >> 12) & 0x3F); utf8[2] = 0x80 | ((unichar >> 6) & 0x3F); utf8[3] = 0x80 | ((unichar) & 0x3F); } else if (unichar <= 0x3FFFFFF) { *len = 5; utf8[0] = 0xF8 | ((unichar >> 24) & 0x3); utf8[1] = 0x80 | ((unichar >> 18) & 0x3F); utf8[2] = 0x80 | ((unichar >> 12) & 0x3F); utf8[3] = 0x80 | ((unichar >> 6) & 0x3F); utf8[4] = 0x80 | ((unichar) & 0x3F); } else if (unichar <= 0x7FFFFFFF) { *len = 6; utf8[0] = 0xFC | ((unichar >> 31) & 0x1); utf8[1] = 0x80 | ((unichar >> 25) & 0x3F); utf8[2] = 0x80 | ((unichar >> 19) & 0x3F); utf8[3] = 0x80 | ((unichar >> 13) & 0x3F); utf8[4] = 0x80 | ((unichar >> 7) & 0x3F); utf8[5] = 0x80 | ((unichar) & 0x1); } else { *len = 0; } } MongoDB-v1.2.2/bson/bson-utf8.h000644 000765 000024 00000002601 12651754051 016365 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_UTF8_H #define BSON_UTF8_H #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) # error "Only can be included directly." #endif #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS bool bson_utf8_validate (const char *utf8, size_t utf8_len, bool allow_null); bson_unichar_t bson_utf8_get_char (const char *utf8); const char *bson_utf8_next_char (const char *utf8); void bson_utf8_from_unichar (bson_unichar_t unichar, char utf8[6], uint32_t *len); BSON_END_DECLS #endif /* BSON_UTF8_H */ MongoDB-v1.2.2/bson/bson-value.c000644 000765 000024 00000014275 12651754051 016620 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ #include "bson-memory.h" #include "bson-string.h" #include "bson-value.h" #include "bson-oid.h" void bson_value_copy (const bson_value_t *src, /* IN */ bson_value_t *dst) /* OUT */ { bson_return_if_fail (src); bson_return_if_fail (dst); dst->value_type = src->value_type; switch (src->value_type) { case BSON_TYPE_DOUBLE: dst->value.v_double = src->value.v_double; break; case BSON_TYPE_UTF8: dst->value.v_utf8.len = src->value.v_utf8.len; dst->value.v_utf8.str = bson_malloc (src->value.v_utf8.len + 1); memcpy (dst->value.v_utf8.str, src->value.v_utf8.str, dst->value.v_utf8.len); dst->value.v_utf8.str [dst->value.v_utf8.len] = '\0'; break; case BSON_TYPE_DOCUMENT: case BSON_TYPE_ARRAY: dst->value.v_doc.data_len = src->value.v_doc.data_len; dst->value.v_doc.data = bson_malloc (src->value.v_doc.data_len); memcpy (dst->value.v_doc.data, src->value.v_doc.data, dst->value.v_doc.data_len); break; case BSON_TYPE_BINARY: dst->value.v_binary.subtype = src->value.v_binary.subtype; dst->value.v_binary.data_len = src->value.v_binary.data_len; dst->value.v_binary.data = bson_malloc (src->value.v_binary.data_len); memcpy (dst->value.v_binary.data, src->value.v_binary.data, dst->value.v_binary.data_len); break; case BSON_TYPE_OID: bson_oid_copy (&src->value.v_oid, &dst->value.v_oid); break; case BSON_TYPE_BOOL: dst->value.v_bool = src->value.v_bool; break; case BSON_TYPE_DATE_TIME: dst->value.v_datetime = src->value.v_datetime; break; case BSON_TYPE_REGEX: dst->value.v_regex.regex = bson_strdup (src->value.v_regex.regex); dst->value.v_regex.options = bson_strdup (src->value.v_regex.options); break; case BSON_TYPE_DBPOINTER: dst->value.v_dbpointer.collection_len = src->value.v_dbpointer.collection_len; dst->value.v_dbpointer.collection = bson_malloc (src->value.v_dbpointer.collection_len + 1); memcpy (dst->value.v_dbpointer.collection, src->value.v_dbpointer.collection, dst->value.v_dbpointer.collection_len); dst->value.v_dbpointer.collection [dst->value.v_dbpointer.collection_len] = '\0'; bson_oid_copy (&src->value.v_dbpointer.oid, &dst->value.v_dbpointer.oid); break; case BSON_TYPE_CODE: dst->value.v_code.code_len = src->value.v_code.code_len; dst->value.v_code.code = bson_malloc (src->value.v_code.code_len + 1); memcpy (dst->value.v_code.code, src->value.v_code.code, dst->value.v_code.code_len); dst->value.v_code.code [dst->value.v_code.code_len] = '\0'; break; case BSON_TYPE_SYMBOL: dst->value.v_symbol.len = src->value.v_symbol.len; dst->value.v_symbol.symbol = bson_malloc (src->value.v_symbol.len + 1); memcpy (dst->value.v_symbol.symbol, src->value.v_symbol.symbol, dst->value.v_symbol.len); dst->value.v_symbol.symbol [dst->value.v_symbol.len] = '\0'; break; case BSON_TYPE_CODEWSCOPE: dst->value.v_codewscope.code_len = src->value.v_codewscope.code_len; dst->value.v_codewscope.code = bson_malloc (src->value.v_codewscope.code_len + 1); memcpy (dst->value.v_codewscope.code, src->value.v_codewscope.code, dst->value.v_codewscope.code_len); dst->value.v_codewscope.code [dst->value.v_codewscope.code_len] = '\0'; dst->value.v_codewscope.scope_len = src->value.v_codewscope.scope_len; dst->value.v_codewscope.scope_data = bson_malloc (src->value.v_codewscope.scope_len); memcpy (dst->value.v_codewscope.scope_data, src->value.v_codewscope.scope_data, dst->value.v_codewscope.scope_len); break; case BSON_TYPE_INT32: dst->value.v_int32 = src->value.v_int32; break; case BSON_TYPE_TIMESTAMP: dst->value.v_timestamp.timestamp = src->value.v_timestamp.timestamp; dst->value.v_timestamp.increment = 
src->value.v_timestamp.increment; break; case BSON_TYPE_INT64: dst->value.v_int64 = src->value.v_int64; break; case BSON_TYPE_UNDEFINED: case BSON_TYPE_NULL: case BSON_TYPE_MAXKEY: case BSON_TYPE_MINKEY: break; case BSON_TYPE_EOD: default: BSON_ASSERT (false); return; } } void bson_value_destroy (bson_value_t *value) /* IN */ { switch (value->value_type) { case BSON_TYPE_UTF8: bson_free (value->value.v_utf8.str); break; case BSON_TYPE_DOCUMENT: case BSON_TYPE_ARRAY: bson_free (value->value.v_doc.data); break; case BSON_TYPE_BINARY: bson_free (value->value.v_binary.data); break; case BSON_TYPE_REGEX: bson_free (value->value.v_regex.regex); bson_free (value->value.v_regex.options); break; case BSON_TYPE_DBPOINTER: bson_free (value->value.v_dbpointer.collection); break; case BSON_TYPE_CODE: bson_free (value->value.v_code.code); break; case BSON_TYPE_SYMBOL: bson_free (value->value.v_symbol.symbol); break; case BSON_TYPE_CODEWSCOPE: bson_free (value->value.v_codewscope.code); bson_free (value->value.v_codewscope.scope_data); break; case BSON_TYPE_DOUBLE: case BSON_TYPE_UNDEFINED: case BSON_TYPE_OID: case BSON_TYPE_BOOL: case BSON_TYPE_DATE_TIME: case BSON_TYPE_NULL: case BSON_TYPE_INT32: case BSON_TYPE_TIMESTAMP: case BSON_TYPE_INT64: case BSON_TYPE_MAXKEY: case BSON_TYPE_MINKEY: case BSON_TYPE_EOD: default: break; } } MongoDB-v1.2.2/bson/bson-value.h000644 000765 000024 00000001616 12651754051 016620 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_VALUE_H #define BSON_VALUE_H #include "bson-macros.h" #include "bson-types.h" BSON_BEGIN_DECLS void bson_value_copy (const bson_value_t *src, bson_value_t *dst); void bson_value_destroy (bson_value_t *value); BSON_END_DECLS #endif /* BSON_VALUE_H */ MongoDB-v1.2.2/bson/bson-version.c000644 000765 000024 00000001501 12651754051 017155 0ustar00davidstaff000000 000000 /* * Copyright 2014 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-version.h" int bson_get_major_version (void) { return BSON_MAJOR_VERSION; } int bson_get_minor_version (void) { return BSON_MINOR_VERSION; } int bson_get_micro_version (void) { return BSON_MICRO_VERSION; } MongoDB-v1.2.2/bson/bson-version.h000644 000765 000024 00000005157 12651754051 017175 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #if !defined (BSON_INSIDE) && !defined (BSON_COMPILATION) #error "Only can be included directly." #endif #ifndef BSON_VERSION_H #define BSON_VERSION_H /** * BSON_MAJOR_VERSION: * * BSON major version component (e.g. 1 if %BSON_VERSION is 1.2.3) */ #define BSON_MAJOR_VERSION (1) /** * BSON_MINOR_VERSION: * * BSON minor version component (e.g. 2 if %BSON_VERSION is 1.2.3) */ #define BSON_MINOR_VERSION (1) /** * BSON_MICRO_VERSION: * * BSON micro version component (e.g. 3 if %BSON_VERSION is 1.2.3) */ #define BSON_MICRO_VERSION (7) /** * BSON_VERSION: * * BSON version. */ #define BSON_VERSION (1.1.7) /** * BSON_VERSION_S: * * BSON version, encoded as a string, useful for printing and * concatenation. */ #define BSON_VERSION_S "1.1.7" /** * BSON_VERSION_HEX: * * BSON version, encoded as an hexadecimal number, useful for * integer comparisons. */ #define BSON_VERSION_HEX (BSON_MAJOR_VERSION << 24 | \ BSON_MINOR_VERSION << 16 | \ BSON_MICRO_VERSION << 8) /** * BSON_CHECK_VERSION: * @major: required major version * @minor: required minor version * @micro: required micro version * * Compile-time version checking. Evaluates to %TRUE if the version * of BSON is greater than the required one. */ #define BSON_CHECK_VERSION(major,minor,micro) \ (BSON_MAJOR_VERSION > (major) || \ (BSON_MAJOR_VERSION == (major) && BSON_MINOR_VERSION > (minor)) || \ (BSON_MAJOR_VERSION == (major) && BSON_MINOR_VERSION == (minor) && \ BSON_MICRO_VERSION >= (micro))) /** * bson_get_major_version: * * Helper function to return the runtime major version of the library. */ int bson_get_major_version (void); /** * bson_get_minor_version: * * Helper function to return the runtime minor version of the library. */ int bson_get_minor_version (void); /** * bson_get_micro_version: * * Helper function to return the runtime micro version of the library. */ int bson_get_micro_version (void); #endif /* BSON_VERSION_H */ MongoDB-v1.2.2/bson/bson-writer.c000644 000765 000024 00000015563 12651754051 017021 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include "bson-private.h" #include "bson-writer.h" struct _bson_writer_t { bool ready; uint8_t **buf; size_t *buflen; size_t offset; bson_realloc_func realloc_func; void *realloc_func_ctx; bson_t b; }; /* *-------------------------------------------------------------------------- * * bson_writer_new -- * * Creates a new instance of bson_writer_t using the buffer, length, * offset, and realloc() function supplied. * * The caller is expected to clean up the structure when finished * using bson_writer_destroy(). 
* * Parameters: * @buf: (inout): A pointer to a target buffer. * @buflen: (inout): A pointer to the buffer length. * @offset: The offset in the target buffer to start from. * @realloc_func: A realloc() style function or NULL. * * Returns: * A newly allocated bson_writer_t that should be freed with * bson_writer_destroy(). * * Side effects: * None. * *-------------------------------------------------------------------------- */ bson_writer_t * bson_writer_new (uint8_t **buf, /* IN */ size_t *buflen, /* IN */ size_t offset, /* IN */ bson_realloc_func realloc_func, /* IN */ void *realloc_func_ctx) /* IN */ { bson_writer_t *writer; writer = bson_malloc0 (sizeof *writer); writer->buf = buf; writer->buflen = buflen; writer->offset = offset; writer->realloc_func = realloc_func; writer->realloc_func_ctx = realloc_func_ctx; writer->ready = true; return writer; } /* *-------------------------------------------------------------------------- * * bson_writer_destroy -- * * Cleanup after @writer and release any allocated memory. Note that * the buffer supplied to bson_writer_new() is NOT freed from this * method. The caller is responsible for that. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_writer_destroy (bson_writer_t *writer) /* IN */ { bson_free (writer); } /* *-------------------------------------------------------------------------- * * bson_writer_get_length -- * * Fetches the current length of the content written by the buffer * (including the initial offset). This includes a partly written * document currently being written. * * This is useful if you want to check to see if you've passed a given * memory boundry that cannot be sent in a packet. See * bson_writer_rollback() to abort the current document being written. * * Returns: * The number of bytes written plus initial offset. * * Side effects: * None. * *-------------------------------------------------------------------------- */ size_t bson_writer_get_length (bson_writer_t *writer) /* IN */ { return writer->offset + writer->b.len; } /* *-------------------------------------------------------------------------- * * bson_writer_begin -- * * Begins writing a new document. The caller may use the bson * structure to write out a new BSON document. When completed, the * caller must call either bson_writer_end() or * bson_writer_rollback(). * * Parameters: * @writer: A bson_writer_t. * @bson: (out): A location for a bson_t*. * * Returns: * true if the underlying realloc was successful; otherwise false. * * Side effects: * @bson is initialized if true is returned. 
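 *
 * Typical call pattern (a sketch; BSON_APPEND_INT32 is the convenience
 * macro from bson.h, and MY_PACKET_LIMIT stands in for whatever size limit
 * the caller enforces):
 *
 *   bson_t *doc;
 *
 *   if (bson_writer_begin (writer, &doc)) {
 *      BSON_APPEND_INT32 (doc, "n", 42);
 *      if (bson_writer_get_length (writer) > MY_PACKET_LIMIT) {
 *         bson_writer_rollback (writer);   // discard the document
 *      } else {
 *         bson_writer_end (writer);        // commit it to the buffer
 *      }
 *   }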
* *-------------------------------------------------------------------------- */ bool bson_writer_begin (bson_writer_t *writer, /* IN */ bson_t **bson) /* OUT */ { bson_impl_alloc_t *b; bool grown = false; bson_return_val_if_fail (writer, false); bson_return_val_if_fail (writer->ready, false); bson_return_val_if_fail (bson, false); writer->ready = false; memset (&writer->b, 0, sizeof (bson_t)); b = (bson_impl_alloc_t *)&writer->b; b->flags = BSON_FLAG_STATIC | BSON_FLAG_NO_FREE; b->len = 5; b->parent = NULL; b->buf = writer->buf; b->buflen = writer->buflen; b->offset = writer->offset; b->alloc = NULL; b->alloclen = 0; b->realloc = writer->realloc_func; b->realloc_func_ctx = writer->realloc_func_ctx; while ((writer->offset + writer->b.len) > *writer->buflen) { if (!writer->realloc_func) { memset (&writer->b, 0, sizeof (bson_t)); writer->ready = true; return false; } grown = true; if (!*writer->buflen) { *writer->buflen = 64; } else { (*writer->buflen) *= 2; } } if (grown) { *writer->buf = writer->realloc_func (*writer->buf, *writer->buflen, writer->realloc_func_ctx); } memset ((*writer->buf) + writer->offset + 1, 0, 5); (*writer->buf)[writer->offset] = 5; *bson = &writer->b; return true; } /* *-------------------------------------------------------------------------- * * bson_writer_end -- * * Complete writing of a bson_writer_t to the buffer supplied. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_writer_end (bson_writer_t *writer) /* IN */ { bson_return_if_fail (writer); bson_return_if_fail (!writer->ready); writer->offset += writer->b.len; memset (&writer->b, 0, sizeof (bson_t)); writer->ready = true; } /* *-------------------------------------------------------------------------- * * bson_writer_rollback -- * * Abort the appending of the current bson_t to the memory region * managed by @writer. This is useful if you detected that you went * past a particular memory limit. For example, MongoDB has 48MB * message limits. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ void bson_writer_rollback (bson_writer_t *writer) /* IN */ { bson_return_if_fail (writer); if (writer->b.len) { memset (&writer->b, 0, sizeof (bson_t)); } writer->ready = true; } MongoDB-v1.2.2/bson/bson-writer.h000644 000765 000024 00000003640 12651754051 017017 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_WRITER_H #define BSON_WRITER_H #include "bson.h" BSON_BEGIN_DECLS /** * bson_writer_t: * * The bson_writer_t structure is a helper for writing a series of BSON * documents to a single malloc() buffer. You can provide a realloc() style * function to grow the buffer as you go. * * This is useful if you want to build a series of BSON documents right into * the target buffer for an outgoing packet. The offset parameter allows you to * start at an offset of the target buffer. 
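 *
 * A minimal lifecycle sketch (bson_realloc_ctx and bson_free come from
 * bson-memory.h, BSON_APPEND_UTF8 from bson.h; error handling omitted):
 *
 *   uint8_t       *buf    = NULL;
 *   size_t         buflen = 0;
 *   bson_writer_t *writer;
 *   bson_t        *doc;
 *
 *   writer = bson_writer_new (&buf, &buflen, 0, bson_realloc_ctx, NULL);
 *   if (bson_writer_begin (writer, &doc)) {
 *      BSON_APPEND_UTF8 (doc, "hello", "world");
 *      bson_writer_end (writer);
 *   }
 *   bson_writer_destroy (writer);
 *   bson_free (buf);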
*/ typedef struct _bson_writer_t bson_writer_t; bson_writer_t *bson_writer_new (uint8_t **buf, size_t *buflen, size_t offset, bson_realloc_func realloc_func, void *realloc_func_ctx); void bson_writer_destroy (bson_writer_t *writer); size_t bson_writer_get_length (bson_writer_t *writer); bool bson_writer_begin (bson_writer_t *writer, bson_t **bson); void bson_writer_end (bson_writer_t *writer); void bson_writer_rollback (bson_writer_t *writer); BSON_END_DECLS #endif /* BSON_WRITER_H */ MongoDB-v1.2.2/bson/bson.c000644 000765 000024 00000175666 12651754051 015522 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include #include #include "b64_ntop.h" #include "bson.h" #include "bson-private.h" #include "bson-string.h" #ifndef BSON_MAX_RECURSION # define BSON_MAX_RECURSION 100 #endif typedef enum { BSON_VALIDATE_PHASE_START, BSON_VALIDATE_PHASE_TOP, BSON_VALIDATE_PHASE_LF_REF_KEY, BSON_VALIDATE_PHASE_LF_REF_UTF8, BSON_VALIDATE_PHASE_LF_ID_KEY, BSON_VALIDATE_PHASE_LF_DB_KEY, BSON_VALIDATE_PHASE_LF_DB_UTF8, BSON_VALIDATE_PHASE_NOT_DBREF, } bson_validate_phase_t; /* * Structures. */ typedef struct { bson_validate_flags_t flags; ssize_t err_offset; bson_validate_phase_t phase; } bson_validate_state_t; /* * Globals. */ static const uint8_t gZero; /* *-------------------------------------------------------------------------- * * _bson_impl_inline_grow -- * * Document growth implementation for documents that currently * contain stack based buffers. The document may be switched to * a malloc based buffer. * * Returns: * true if successful; otherwise false indicating INT_MAX overflow. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static bool _bson_impl_inline_grow (bson_impl_inline_t *impl, /* IN */ size_t size) /* IN */ { bson_impl_alloc_t *alloc = (bson_impl_alloc_t *)impl; uint8_t *data; size_t req; BSON_ASSERT (impl); BSON_ASSERT (!(impl->flags & BSON_FLAG_RDONLY)); BSON_ASSERT (!(impl->flags & BSON_FLAG_CHILD)); if (((size_t)impl->len + size) <= sizeof impl->data) { return true; } req = bson_next_power_of_two (impl->len + size); if (req <= INT32_MAX) { data = bson_malloc (req); memcpy (data, impl->data, impl->len); alloc->flags &= ~BSON_FLAG_INLINE; alloc->parent = NULL; alloc->depth = 0; alloc->buf = &alloc->alloc; alloc->buflen = &alloc->alloclen; alloc->offset = 0; alloc->alloc = data; alloc->alloclen = req; alloc->realloc = bson_realloc_ctx; alloc->realloc_func_ctx = NULL; return true; } return false; } /* *-------------------------------------------------------------------------- * * _bson_impl_alloc_grow -- * * Document growth implementation for documents containing malloc * based buffers. * * Returns: * true if successful; otherwise false indicating INT_MAX overflow. * * Side effects: * None. 
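 *
 * (Worked example, ignoring the offset/depth bookkeeping: a document that
 * has filled its 600-byte buffer and needs 100 more bytes asks for
 * bson_next_power_of_two (700) == 1024, so the buffer roughly doubles
 * instead of being reallocated for every small append.)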
* *-------------------------------------------------------------------------- */ static bool _bson_impl_alloc_grow (bson_impl_alloc_t *impl, /* IN */ size_t size) /* IN */ { size_t req; BSON_ASSERT (impl); /* * Determine how many bytes we need for this document in the buffer * including necessary trailing bytes for parent documents. */ req = (impl->offset + impl->len + size + impl->depth); if (req <= *impl->buflen) { return true; } req = bson_next_power_of_two (req); if ((req <= INT32_MAX) && impl->realloc) { *impl->buf = impl->realloc (*impl->buf, req, impl->realloc_func_ctx); *impl->buflen = req; return true; } return false; } /* *-------------------------------------------------------------------------- * * _bson_grow -- * * Grows the bson_t structure to be large enough to contain @size * bytes. * * Returns: * true if successful, false if the size would overflow. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static bool _bson_grow (bson_t *bson, /* IN */ uint32_t size) /* IN */ { BSON_ASSERT (bson); BSON_ASSERT (!(bson->flags & BSON_FLAG_RDONLY)); if ((bson->flags & BSON_FLAG_INLINE)) { return _bson_impl_inline_grow ((bson_impl_inline_t *)bson, size); } return _bson_impl_alloc_grow ((bson_impl_alloc_t *)bson, size); } /* *-------------------------------------------------------------------------- * * _bson_data -- * * A helper function to return the contents of the bson document * taking into account the polymorphic nature of bson_t. * * Returns: * A buffer which should not be modified or freed. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static BSON_INLINE uint8_t * _bson_data (const bson_t *bson) /* IN */ { if ((bson->flags & BSON_FLAG_INLINE)) { return ((bson_impl_inline_t *)bson)->data; } else { bson_impl_alloc_t *impl = (bson_impl_alloc_t *)bson; return (*impl->buf) + impl->offset; } } /* *-------------------------------------------------------------------------- * * _bson_encode_length -- * * Helper to encode the length of the bson_t in the first 4 bytes * of the bson document. Little endian format is used as specified * by bsonspec. * * Returns: * None. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static BSON_INLINE void _bson_encode_length (bson_t *bson) /* IN */ { #if BSON_BYTE_ORDER == BSON_LITTLE_ENDIAN memcpy (_bson_data (bson), &bson->len, sizeof (bson->len)); #else uint32_t length_le = BSON_UINT32_TO_LE (bson->len); memcpy (_bson_data (bson), &length_le, sizeof (length_le)); #endif } /* *-------------------------------------------------------------------------- * * _bson_append_va -- * * Appends the length,buffer pairs to the bson_t. @n_bytes is an * optimization to perform one array growth rather than many small * growths. * * @bson: A bson_t * @n_bytes: The number of bytes to append to the document. * @n_pairs: The number of length,buffer pairs. * @first_len: Length of first buffer. * @first_data: First buffer. * @args: va_list of additional tuples. * * Returns: * true if the bytes were appended successfully. * false if it bson would overflow INT_MAX. * * Side effects: * None. 
* *-------------------------------------------------------------------------- */ static BSON_INLINE bool _bson_append_va (bson_t *bson, /* IN */ uint32_t n_bytes, /* IN */ uint32_t n_pairs, /* IN */ uint32_t first_len, /* IN */ const uint8_t *first_data, /* IN */ va_list args) /* IN */ { const uint8_t *data; uint32_t data_len; uint8_t *buf; BSON_ASSERT (bson); BSON_ASSERT (!(bson->flags & BSON_FLAG_IN_CHILD)); BSON_ASSERT (!(bson->flags & BSON_FLAG_RDONLY)); BSON_ASSERT (n_pairs); BSON_ASSERT (first_len); BSON_ASSERT (first_data); if (BSON_UNLIKELY (!_bson_grow (bson, n_bytes))) { return false; } data = first_data; data_len = first_len; buf = _bson_data (bson) + bson->len - 1; do { n_pairs--; memcpy (buf, data, data_len); bson->len += data_len; buf += data_len; if (n_pairs) { data_len = va_arg (args, uint32_t); data = va_arg (args, const uint8_t *); } } while (n_pairs); _bson_encode_length (bson); *buf = '\0'; return true; } /* *-------------------------------------------------------------------------- * * _bson_append -- * * Variadic function to append length,buffer pairs to a bson_t. If the * append would cause the bson_t to overflow a 32-bit length, it will * return false and no append will have occurred. * * Parameters: * @bson: A bson_t. * @n_pairs: Number of length,buffer pairs. * @n_bytes: the total number of bytes being appended. * @first_len: Length of first buffer. * @first_data: First buffer. * * Returns: * true if successful; otherwise false indicating INT_MAX overflow. * * Side effects: * None. * *-------------------------------------------------------------------------- */ static bool _bson_append (bson_t *bson, /* IN */ uint32_t n_pairs, /* IN */ uint32_t n_bytes, /* IN */ uint32_t first_len, /* IN */ const uint8_t *first_data, /* IN */ ...) { va_list args; bool ok; BSON_ASSERT (bson); BSON_ASSERT (n_pairs); BSON_ASSERT (first_len); BSON_ASSERT (first_data); /* * Check to see if this append would overflow 32-bit signed integer. I know * what you're thinking. BSON uses a signed 32-bit length field? Yeah. It * does. */ if (BSON_UNLIKELY (n_bytes > (BSON_MAX_SIZE - bson->len))) { return false; } va_start (args, first_data); ok = _bson_append_va (bson, n_bytes, n_pairs, first_len, first_data, args); va_end (args); return ok; } /* *-------------------------------------------------------------------------- * * _bson_append_bson_begin -- * * Begin appending a subdocument or subarray to the document using * the key provided by @key. * * If @key_length is < 0, then strlen() will be called on @key * to determine the length. * * @key_type MUST be either BSON_TYPE_DOCUMENT or BSON_TYPE_ARRAY. * * Returns: * true if successful; otherwise false indiciating INT_MAX overflow. * * Side effects: * @child is initialized if true is returned. 
* *-------------------------------------------------------------------------- */ static bool _bson_append_bson_begin (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ bson_type_t child_type, /* IN */ bson_t *child) /* OUT */ { const uint8_t type = child_type; const uint8_t empty[5] = { 5 }; bson_impl_alloc_t *aparent = (bson_impl_alloc_t *)bson; bson_impl_alloc_t *achild = (bson_impl_alloc_t *)child; BSON_ASSERT (bson); BSON_ASSERT (!(bson->flags & BSON_FLAG_RDONLY)); BSON_ASSERT (!(bson->flags & BSON_FLAG_IN_CHILD)); BSON_ASSERT (key); BSON_ASSERT ((child_type == BSON_TYPE_DOCUMENT) || (child_type == BSON_TYPE_ARRAY)); BSON_ASSERT (child); if (key_length < 0) { key_length = (int)strlen (key); } /* * If the parent is an inline bson_t, then we need to convert * it to a heap allocated buffer. This makes extending buffers * of child bson documents much simpler logic, as they can just * realloc the *buf pointer. */ if ((bson->flags & BSON_FLAG_INLINE)) { BSON_ASSERT (bson->len <= 120); if (!_bson_grow (bson, 128 - bson->len)) { return false; } BSON_ASSERT (!(bson->flags & BSON_FLAG_INLINE)); } /* * Append the type and key for the field. */ if (!_bson_append (bson, 4, (1 + key_length + 1 + 5), 1, &type, key_length, key, 1, &gZero, 5, empty)) { return false; } /* * Mark the document as working on a child document so that no * further modifications can happen until the caller has called * bson_append_{document,array}_end(). */ bson->flags |= BSON_FLAG_IN_CHILD; /* * Initialize the child bson_t structure and point it at the parents * buffers. This allows us to realloc directly from the child without * walking up to the parent bson_t. */ achild->flags = (BSON_FLAG_CHILD | BSON_FLAG_NO_FREE | BSON_FLAG_STATIC); if ((bson->flags & BSON_FLAG_CHILD)) { achild->depth = ((bson_impl_alloc_t *)bson)->depth + 1; } else { achild->depth = 1; } achild->parent = bson; achild->buf = aparent->buf; achild->buflen = aparent->buflen; achild->offset = aparent->offset + aparent->len - 1 - 5; achild->len = 5; achild->alloc = NULL; achild->alloclen = 0; achild->realloc = aparent->realloc; achild->realloc_func_ctx = aparent->realloc_func_ctx; return true; } /* *-------------------------------------------------------------------------- * * _bson_append_bson_end -- * * Complete a call to _bson_append_bson_begin. * * Returns: * true if successful; otherwise false indiciating INT_MAX overflow. * * Side effects: * @child is destroyed and no longer valid after calling this * function. * *-------------------------------------------------------------------------- */ static bool _bson_append_bson_end (bson_t *bson, /* IN */ bson_t *child) /* IN */ { BSON_ASSERT (bson); BSON_ASSERT ((bson->flags & BSON_FLAG_IN_CHILD)); BSON_ASSERT (!(child->flags & BSON_FLAG_IN_CHILD)); /* * Unmark the IN_CHILD flag. */ bson->flags &= ~BSON_FLAG_IN_CHILD; /* * Now that we are done building the sub-document, add the size to the * parent, not including the default 5 byte empty document already added. */ bson->len = (bson->len + child->len - 5); /* * Ensure we have a \0 byte at the end and proper length encoded at * the beginning of the document. */ _bson_data (bson)[bson->len - 1] = '\0'; _bson_encode_length (bson); return true; } /* *-------------------------------------------------------------------------- * * bson_append_array_begin -- * * Start appending a new array. * * Use @child to append to the data area for the given field. 
* * It is a programming error to call any other bson function on * @bson until bson_append_array_end() has been called. It is * valid to call bson_append*() functions on @child. * * This function is useful to allow building nested documents using * a single buffer owned by the top-level bson document. * * Returns: * true if successful; otherwise false and @child is invalid. * * Side effects: * @child is initialized if true is returned. * *-------------------------------------------------------------------------- */ bool bson_append_array_begin (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ bson_t *child) /* IN */ { bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (child, false); return _bson_append_bson_begin (bson, key, key_length, BSON_TYPE_ARRAY, child); } /* *-------------------------------------------------------------------------- * * bson_append_array_end -- * * Complete a call to bson_append_array_begin(). * * It is safe to append other fields to @bson after calling this * function. * * Returns: * true if successful; otherwise false indiciating INT_MAX overflow. * * Side effects: * @child is invalid after calling this function. * *-------------------------------------------------------------------------- */ bool bson_append_array_end (bson_t *bson, /* IN */ bson_t *child) /* IN */ { bson_return_val_if_fail (bson, false); bson_return_val_if_fail (child, false); return _bson_append_bson_end (bson, child); } /* *-------------------------------------------------------------------------- * * bson_append_document_begin -- * * Start appending a new document. * * Use @child to append to the data area for the given field. * * It is a programming error to call any other bson function on * @bson until bson_append_document_end() has been called. It is * valid to call bson_append*() functions on @child. * * This function is useful to allow building nested documents using * a single buffer owned by the top-level bson document. * * Returns: * true if successful; otherwise false and @child is invalid. * * Side effects: * @child is initialized if true is returned. * *-------------------------------------------------------------------------- */ bool bson_append_document_begin (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ bson_t *child) /* IN */ { bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (child, false); return _bson_append_bson_begin (bson, key, key_length, BSON_TYPE_DOCUMENT, child); } /* *-------------------------------------------------------------------------- * * bson_append_document_end -- * * Complete a call to bson_append_document_begin(). * * It is safe to append new fields to @bson after calling this * function, if true is returned. * * Returns: * true if successful; otherwise false indicating INT_MAX overflow. * * Side effects: * @child is destroyed and invalid after calling this function. * *-------------------------------------------------------------------------- */ bool bson_append_document_end (bson_t *bson, /* IN */ bson_t *child) /* IN */ { bson_return_val_if_fail (bson, false); bson_return_val_if_fail (child, false); return _bson_append_bson_end (bson, child); } /* *-------------------------------------------------------------------------- * * bson_append_array -- * * Append an array to @bson. * * Generally, bson_append_array_begin() will result in faster code * since few buffers need to be malloced. 
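 *
 * A short sketch of both forms, assuming an already initialized bson_t
 * *doc; the keys and values are only illustrative:
 *
 *    bson_t items;
 *    bson_t child;
 *
 *    bson_init (&items);
 *    BSON_APPEND_UTF8 (&items, "0", "red");
 *    BSON_APPEND_UTF8 (&items, "1", "blue");
 *    bson_append_array (doc, "colors", -1, &items);
 *    bson_destroy (&items);
 *
 *    bson_append_array_begin (doc, "sizes", -1, &child);
 *    BSON_APPEND_INT32 (&child, "0", 1);
 *    bson_append_array_end (doc, &child);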
* * Returns: * true if successful; otherwise false indiciating INT_MAX overflow. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_append_array (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ const bson_t *array) /* IN */ { static const uint8_t type = BSON_TYPE_ARRAY; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (array, false); if (key_length < 0) { key_length = (int)strlen (key); } /* * Let's be a bit pedantic and ensure the array has properly formatted key * names. We will verify this simply by checking the first element for "0" * if the array is non-empty. */ if (array && !bson_empty (array)) { bson_iter_t iter; if (bson_iter_init (&iter, array) && bson_iter_next (&iter)) { if (0 != strcmp ("0", bson_iter_key (&iter))) { fprintf (stderr, "%s(): invalid array detected. first element of array " "parameter is not \"0\".\n", __FUNCTION__); } } } return _bson_append (bson, 4, (1 + key_length + 1 + array->len), 1, &type, key_length, key, 1, &gZero, array->len, _bson_data (array)); } /* *-------------------------------------------------------------------------- * * bson_append_binary -- * * Append binary data to @bson. The field will have the * BSON_TYPE_BINARY type. * * Parameters: * @subtype: the BSON Binary Subtype. See bsonspec.org for more * information. * @binary: a pointer to the raw binary data. * @length: the size of @binary in bytes. * * Returns: * true if successful; otherwise false. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_append_binary (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ bson_subtype_t subtype, /* IN */ const uint8_t *binary, /* IN */ uint32_t length) /* IN */ { static const uint8_t type = BSON_TYPE_BINARY; uint32_t length_le; uint32_t deprecated_length_le; uint8_t subtype8 = 0; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (binary, false); if (key_length < 0) { key_length = (int)strlen (key); } subtype8 = subtype; if (subtype == BSON_SUBTYPE_BINARY_DEPRECATED) { length_le = BSON_UINT32_TO_LE (length + 4); deprecated_length_le = BSON_UINT32_TO_LE (length); return _bson_append (bson, 7, (1 + key_length + 1 + 4 + 1 + 4 + length), 1, &type, key_length, key, 1, &gZero, 4, &length_le, 1, &subtype8, 4, &deprecated_length_le, length, binary); } else { length_le = BSON_UINT32_TO_LE (length); return _bson_append (bson, 6, (1 + key_length + 1 + 4 + 1 + length), 1, &type, key_length, key, 1, &gZero, 4, &length_le, 1, &subtype8, length, binary); } } /* *-------------------------------------------------------------------------- * * bson_append_bool -- * * Append a new field to @bson with the name @key. The value is * a boolean indicated by @value. * * Returns: * true if succesful; otherwise false. * * Side effects: * None. 
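 *
 * Example sketch, assuming an initialized bson_t *doc; the key is
 * illustrative:
 *
 *    bson_append_bool (doc, "active", -1, true);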
* *-------------------------------------------------------------------------- */ bool bson_append_bool (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ bool value) /* IN */ { static const uint8_t type = BSON_TYPE_BOOL; uint8_t byte = !!value; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } return _bson_append (bson, 4, (1 + key_length + 1 + 1), 1, &type, key_length, key, 1, &gZero, 1, &byte); } /* *-------------------------------------------------------------------------- * * bson_append_code -- * * Append a new field to @bson containing javascript code. * * @javascript MUST be a zero terminated UTF-8 string. It MUST NOT * containing embedded \0 characters. * * Returns: * true if successful; otherwise false. * * Side effects: * None. * * See also: * bson_append_code_with_scope(). * *-------------------------------------------------------------------------- */ bool bson_append_code (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ const char *javascript) /* IN */ { static const uint8_t type = BSON_TYPE_CODE; uint32_t length; uint32_t length_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (javascript, false); if (key_length < 0) { key_length = (int)strlen (key); } length = (int)strlen (javascript) + 1; length_le = BSON_UINT32_TO_LE (length); return _bson_append (bson, 5, (1 + key_length + 1 + 4 + length), 1, &type, key_length, key, 1, &gZero, 4, &length_le, length, javascript); } /* *-------------------------------------------------------------------------- * * bson_append_code_with_scope -- * * Append a new field to @bson containing javascript code with * supplied scope. * * Returns: * true if successful; otherwise false. * * Side effects: * None. * *-------------------------------------------------------------------------- */ bool bson_append_code_with_scope (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ const char *javascript, /* IN */ const bson_t *scope) /* IN */ { static const uint8_t type = BSON_TYPE_CODEWSCOPE; uint32_t codews_length_le; uint32_t codews_length; uint32_t js_length_le; uint32_t js_length; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (javascript, false); if (bson_empty0 (scope)) { return bson_append_code (bson, key, key_length, javascript); } if (key_length < 0) { key_length = (int)strlen (key); } js_length = (int)strlen (javascript) + 1; js_length_le = BSON_UINT32_TO_LE (js_length); codews_length = 4 + 4 + js_length + scope->len; codews_length_le = BSON_UINT32_TO_LE (codews_length); return _bson_append (bson, 7, (1 + key_length + 1 + 4 + 4 + js_length + scope->len), 1, &type, key_length, key, 1, &gZero, 4, &codews_length_le, 4, &js_length_le, js_length, javascript, scope->len, _bson_data (scope)); } /* *-------------------------------------------------------------------------- * * bson_append_dbpointer -- * * This BSON data type is DEPRECATED. * * Append a BSON dbpointer field to @bson. * * Returns: * true if successful; otherwise false. * * Side effects: * None. 
* *-------------------------------------------------------------------------- */ bool bson_append_dbpointer (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ const char *collection, /* IN */ const bson_oid_t *oid) { static const uint8_t type = BSON_TYPE_DBPOINTER; uint32_t length; uint32_t length_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (collection, false); bson_return_val_if_fail (oid, false); if (key_length < 0) { key_length = (int)strlen (key); } length = (int)strlen (collection) + 1; length_le = BSON_UINT32_TO_LE (length); return _bson_append (bson, 6, (1 + key_length + 1 + 4 + length + 12), 1, &type, key_length, key, 1, &gZero, 4, &length_le, length, collection, 12, oid); } /* *-------------------------------------------------------------------------- * * bson_append_document -- * * Append a new field to @bson containing a BSON document. * * In general, using bson_append_document_begin() results in faster * code and less memory fragmentation. * * Returns: * true if successful; otherwise false. * * Side effects: * None. * * See also: * bson_append_document_begin(). * *-------------------------------------------------------------------------- */ bool bson_append_document (bson_t *bson, /* IN */ const char *key, /* IN */ int key_length, /* IN */ const bson_t *value) /* IN */ { static const uint8_t type = BSON_TYPE_DOCUMENT; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (value, false); if (key_length < 0) { key_length = (int)strlen (key); } return _bson_append (bson, 4, (1 + key_length + 1 + value->len), 1, &type, key_length, key, 1, &gZero, value->len, _bson_data (value)); } bool bson_append_double (bson_t *bson, const char *key, int key_length, double value) { static const uint8_t type = BSON_TYPE_DOUBLE; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } #if BSON_BYTE_ORDER == BSON_BIG_ENDIAN value = BSON_DOUBLE_TO_LE (value); #endif return _bson_append (bson, 4, (1 + key_length + 1 + 8), 1, &type, key_length, key, 1, &gZero, 8, &value); } bool bson_append_int32 (bson_t *bson, const char *key, int key_length, int32_t value) { static const uint8_t type = BSON_TYPE_INT32; uint32_t value_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } value_le = BSON_UINT32_TO_LE (value); return _bson_append (bson, 4, (1 + key_length + 1 + 4), 1, &type, key_length, key, 1, &gZero, 4, &value_le); } bool bson_append_int64 (bson_t *bson, const char *key, int key_length, int64_t value) { static const uint8_t type = BSON_TYPE_INT64; uint64_t value_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } value_le = BSON_UINT64_TO_LE (value); return _bson_append (bson, 4, (1 + key_length + 1 + 8), 1, &type, key_length, key, 1, &gZero, 8, &value_le); } bool bson_append_iter (bson_t *bson, const char *key, int key_length, const bson_iter_t *iter) { bool ret = false; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (iter, false); if (!key) { key = bson_iter_key (iter); key_length = -1; } switch (bson_iter_type_unsafe (iter)) { case BSON_TYPE_EOD: return false; case BSON_TYPE_DOUBLE: ret = bson_append_double (bson, key, key_length, bson_iter_double (iter)); break; case BSON_TYPE_UTF8: { uint32_t len = 0; const char 
*str; str = bson_iter_utf8 (iter, &len); ret = bson_append_utf8 (bson, key, key_length, str, len); } break; case BSON_TYPE_DOCUMENT: { const uint8_t *buf = NULL; uint32_t len = 0; bson_t doc; bson_iter_document (iter, &len, &buf); if (bson_init_static (&doc, buf, len)) { ret = bson_append_document (bson, key, key_length, &doc); bson_destroy (&doc); } } break; case BSON_TYPE_ARRAY: { const uint8_t *buf = NULL; uint32_t len = 0; bson_t doc; bson_iter_array (iter, &len, &buf); if (bson_init_static (&doc, buf, len)) { ret = bson_append_array (bson, key, key_length, &doc); bson_destroy (&doc); } } break; case BSON_TYPE_BINARY: { const uint8_t *binary = NULL; bson_subtype_t subtype = BSON_SUBTYPE_BINARY; uint32_t len = 0; bson_iter_binary (iter, &subtype, &len, &binary); ret = bson_append_binary (bson, key, key_length, subtype, binary, len); } break; case BSON_TYPE_UNDEFINED: ret = bson_append_undefined (bson, key, key_length); break; case BSON_TYPE_OID: ret = bson_append_oid (bson, key, key_length, bson_iter_oid (iter)); break; case BSON_TYPE_BOOL: ret = bson_append_bool (bson, key, key_length, bson_iter_bool (iter)); break; case BSON_TYPE_DATE_TIME: ret = bson_append_date_time (bson, key, key_length, bson_iter_date_time (iter)); break; case BSON_TYPE_NULL: ret = bson_append_null (bson, key, key_length); break; case BSON_TYPE_REGEX: { const char *regex; const char *options; regex = bson_iter_regex (iter, &options); ret = bson_append_regex (bson, key, key_length, regex, options); } break; case BSON_TYPE_DBPOINTER: { const bson_oid_t *oid; uint32_t len; const char *collection; bson_iter_dbpointer (iter, &len, &collection, &oid); ret = bson_append_dbpointer (bson, key, key_length, collection, oid); } break; case BSON_TYPE_CODE: { uint32_t len; const char *code; code = bson_iter_code (iter, &len); ret = bson_append_code (bson, key, key_length, code); } break; case BSON_TYPE_SYMBOL: { uint32_t len; const char *symbol; symbol = bson_iter_symbol (iter, &len); ret = bson_append_symbol (bson, key, key_length, symbol, len); } break; case BSON_TYPE_CODEWSCOPE: { const uint8_t *scope = NULL; uint32_t scope_len = 0; uint32_t len = 0; const char *javascript = NULL; bson_t doc; javascript = bson_iter_codewscope (iter, &len, &scope_len, &scope); if (bson_init_static (&doc, scope, scope_len)) { ret = bson_append_code_with_scope (bson, key, key_length, javascript, &doc); bson_destroy (&doc); } } break; case BSON_TYPE_INT32: ret = bson_append_int32 (bson, key, key_length, bson_iter_int32 (iter)); break; case BSON_TYPE_TIMESTAMP: { uint32_t ts; uint32_t inc; bson_iter_timestamp (iter, &ts, &inc); ret = bson_append_timestamp (bson, key, key_length, ts, inc); } break; case BSON_TYPE_INT64: ret = bson_append_int64 (bson, key, key_length, bson_iter_int64 (iter)); break; case BSON_TYPE_MAXKEY: ret = bson_append_maxkey (bson, key, key_length); break; case BSON_TYPE_MINKEY: ret = bson_append_minkey (bson, key, key_length); break; default: break; } return ret; } bool bson_append_maxkey (bson_t *bson, const char *key, int key_length) { static const uint8_t type = BSON_TYPE_MAXKEY; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } return _bson_append (bson, 3, (1 + key_length + 1), 1, &type, key_length, key, 1, &gZero); } bool bson_append_minkey (bson_t *bson, const char *key, int key_length) { static const uint8_t type = BSON_TYPE_MINKEY; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { 
key_length = (int)strlen (key); } return _bson_append (bson, 3, (1 + key_length + 1), 1, &type, key_length, key, 1, &gZero); } bool bson_append_null (bson_t *bson, const char *key, int key_length) { static const uint8_t type = BSON_TYPE_NULL; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } return _bson_append (bson, 3, (1 + key_length + 1), 1, &type, key_length, key, 1, &gZero); } bool bson_append_oid (bson_t *bson, const char *key, int key_length, const bson_oid_t *value) { static const uint8_t type = BSON_TYPE_OID; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (value, false); if (key_length < 0) { key_length = (int)strlen (key); } return _bson_append (bson, 4, (1 + key_length + 1 + 12), 1, &type, key_length, key, 1, &gZero, 12, value); } bool bson_append_regex (bson_t *bson, const char *key, int key_length, const char *regex, const char *options) { static const uint8_t type = BSON_TYPE_REGEX; uint32_t regex_len; uint32_t options_len; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length = (int)strlen (key); } if (!regex) { regex = ""; } if (!options) { options = ""; } regex_len = (int)strlen (regex) + 1; options_len = (int)strlen (options) + 1; return _bson_append (bson, 5, (1 + key_length + 1 + regex_len + options_len), 1, &type, key_length, key, 1, &gZero, regex_len, regex, options_len, options); } bool bson_append_utf8 (bson_t *bson, const char *key, int key_length, const char *value, int length) { static const uint8_t type = BSON_TYPE_UTF8; uint32_t length_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (BSON_UNLIKELY (!value)) { return bson_append_null (bson, key, key_length); } if (BSON_UNLIKELY (key_length < 0)) { key_length = (int)strlen (key); } if (BSON_UNLIKELY (length < 0)) { length = (int)strlen (value); } length_le = BSON_UINT32_TO_LE (length + 1); return _bson_append (bson, 6, (1 + key_length + 1 + 4 + length + 1), 1, &type, key_length, key, 1, &gZero, 4, &length_le, length, value, 1, &gZero); } bool bson_append_symbol (bson_t *bson, const char *key, int key_length, const char *value, int length) { static const uint8_t type = BSON_TYPE_SYMBOL; uint32_t length_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (!value) { return bson_append_null (bson, key, key_length); } if (key_length < 0) { key_length = (int)strlen (key); } if (length < 0) { length =(int)strlen (value); } length_le = BSON_UINT32_TO_LE (length + 1); return _bson_append (bson, 6, (1 + key_length + 1 + 4 + length + 1), 1, &type, key_length, key, 1, &gZero, 4, &length_le, length, value, 1, &gZero); } bool bson_append_time_t (bson_t *bson, const char *key, int key_length, time_t value) { #ifdef BSON_OS_WIN32 struct timeval tv = { (long)value, 0 }; #else struct timeval tv = { value, 0 }; #endif bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); return bson_append_timeval (bson, key, key_length, &tv); } bool bson_append_timestamp (bson_t *bson, const char *key, int key_length, uint32_t timestamp, uint32_t increment) { static const uint8_t type = BSON_TYPE_TIMESTAMP; uint64_t value; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length =(int)strlen (key); } value = ((((uint64_t)timestamp) << 32) | ((uint64_t)increment)); value = BSON_UINT64_TO_LE (value); return 
_bson_append (bson, 4, (1 + key_length + 1 + 8), 1, &type, key_length, key, 1, &gZero, 8, &value); } bool bson_append_now_utc (bson_t *bson, const char *key, int key_length) { bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (key_length >= -1, false); return bson_append_time_t (bson, key, key_length, time (NULL)); } bool bson_append_date_time (bson_t *bson, const char *key, int key_length, int64_t value) { static const uint8_t type = BSON_TYPE_DATE_TIME; uint64_t value_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length =(int)strlen (key); } value_le = BSON_UINT64_TO_LE (value); return _bson_append (bson, 4, (1 + key_length + 1 + 8), 1, &type, key_length, key, 1, &gZero, 8, &value_le); } bool bson_append_timeval (bson_t *bson, const char *key, int key_length, struct timeval *value) { uint64_t unix_msec; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (value, false); unix_msec = (((uint64_t)value->tv_sec) * 1000UL) + (value->tv_usec / 1000UL); return bson_append_date_time (bson, key, key_length, unix_msec); } bool bson_append_undefined (bson_t *bson, const char *key, int key_length) { static const uint8_t type = BSON_TYPE_UNDEFINED; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (key_length < 0) { key_length =(int)strlen (key); } return _bson_append (bson, 3, (1 + key_length + 1), 1, &type, key_length, key, 1, &gZero); } bool bson_append_value (bson_t *bson, const char *key, int key_length, const bson_value_t *value) { bson_t local; bool ret = false; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); bson_return_val_if_fail (value, false); switch (value->value_type) { case BSON_TYPE_DOUBLE: ret = bson_append_double (bson, key, key_length, value->value.v_double); break; case BSON_TYPE_UTF8: ret = bson_append_utf8 (bson, key, key_length, value->value.v_utf8.str, value->value.v_utf8.len); break; case BSON_TYPE_DOCUMENT: if (bson_init_static (&local, value->value.v_doc.data, value->value.v_doc.data_len)) { ret = bson_append_document (bson, key, key_length, &local); bson_destroy (&local); } break; case BSON_TYPE_ARRAY: if (bson_init_static (&local, value->value.v_doc.data, value->value.v_doc.data_len)) { ret = bson_append_array (bson, key, key_length, &local); bson_destroy (&local); } break; case BSON_TYPE_BINARY: ret = bson_append_binary (bson, key, key_length, value->value.v_binary.subtype, value->value.v_binary.data, value->value.v_binary.data_len); break; case BSON_TYPE_UNDEFINED: ret = bson_append_undefined (bson, key, key_length); break; case BSON_TYPE_OID: ret = bson_append_oid (bson, key, key_length, &value->value.v_oid); break; case BSON_TYPE_BOOL: ret = bson_append_bool (bson, key, key_length, value->value.v_bool); break; case BSON_TYPE_DATE_TIME: ret = bson_append_date_time (bson, key, key_length, value->value.v_datetime); break; case BSON_TYPE_NULL: ret = bson_append_null (bson, key, key_length); break; case BSON_TYPE_REGEX: ret = bson_append_regex (bson, key, key_length, value->value.v_regex.regex, value->value.v_regex.options); break; case BSON_TYPE_DBPOINTER: ret = bson_append_dbpointer (bson, key, key_length, value->value.v_dbpointer.collection, &value->value.v_dbpointer.oid); break; case BSON_TYPE_CODE: ret = bson_append_code (bson, key, key_length, value->value.v_code.code); break; case BSON_TYPE_SYMBOL: ret = bson_append_symbol (bson, key, key_length, 
value->value.v_symbol.symbol, value->value.v_symbol.len); break; case BSON_TYPE_CODEWSCOPE: if (bson_init_static (&local, value->value.v_codewscope.scope_data, value->value.v_codewscope.scope_len)) { ret = bson_append_code_with_scope (bson, key, key_length, value->value.v_codewscope.code, &local); bson_destroy (&local); } break; case BSON_TYPE_INT32: ret = bson_append_int32 (bson, key, key_length, value->value.v_int32); break; case BSON_TYPE_TIMESTAMP: ret = bson_append_timestamp (bson, key, key_length, value->value.v_timestamp.timestamp, value->value.v_timestamp.increment); break; case BSON_TYPE_INT64: ret = bson_append_int64 (bson, key, key_length, value->value.v_int64); break; case BSON_TYPE_MAXKEY: ret = bson_append_maxkey (bson, key, key_length); break; case BSON_TYPE_MINKEY: ret = bson_append_minkey (bson, key, key_length); break; case BSON_TYPE_EOD: default: break; } return ret; } void bson_init (bson_t *bson) { bson_impl_inline_t *impl = (bson_impl_inline_t *)bson; bson_return_if_fail (bson); impl->flags = BSON_FLAG_INLINE | BSON_FLAG_STATIC; impl->len = 5; impl->data[0] = 5; impl->data[1] = 0; impl->data[2] = 0; impl->data[3] = 0; impl->data[4] = 0; } void bson_reinit (bson_t *bson) { uint8_t *data; bson_return_if_fail (bson); data = _bson_data (bson); bson->len = 5; data [0] = 5; data [1] = 0; data [2] = 0; data [3] = 0; data [4] = 0; } bool bson_init_static (bson_t *bson, const uint8_t *data, size_t length) { bson_impl_alloc_t *impl = (bson_impl_alloc_t *)bson; uint32_t len_le; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (data, false); if ((length < 5) || (length > INT_MAX)) { return false; } memcpy (&len_le, data, sizeof (len_le)); if ((size_t)BSON_UINT32_FROM_LE (len_le) != length) { return false; } if (data[length - 1]) { return false; } impl->flags = BSON_FLAG_STATIC | BSON_FLAG_RDONLY; impl->len = (uint32_t)length; impl->parent = NULL; impl->depth = 0; impl->buf = &impl->alloc; impl->buflen = &impl->alloclen; impl->offset = 0; impl->alloc = (uint8_t *)data; impl->alloclen = length; impl->realloc = NULL; impl->realloc_func_ctx = NULL; return true; } bson_t * bson_new (void) { bson_impl_inline_t *impl; bson_t *bson; bson = bson_malloc (sizeof *bson); impl = (bson_impl_inline_t *)bson; impl->flags = BSON_FLAG_INLINE; impl->len = 5; impl->data[0] = 5; impl->data[1] = 0; impl->data[2] = 0; impl->data[3] = 0; impl->data[4] = 0; return bson; } bson_t * bson_sized_new (size_t size) { bson_impl_alloc_t *impl_a; bson_impl_inline_t *impl_i; bson_t *b; bson_return_val_if_fail (size <= INT32_MAX, NULL); b = bson_malloc (sizeof *b); impl_a = (bson_impl_alloc_t *)b; impl_i = (bson_impl_inline_t *)b; if (size <= sizeof impl_i->data) { bson_init (b); b->flags &= ~BSON_FLAG_STATIC; } else { impl_a->flags = BSON_FLAG_NONE; impl_a->len = 5; impl_a->parent = NULL; impl_a->depth = 0; impl_a->buf = &impl_a->alloc; impl_a->buflen = &impl_a->alloclen; impl_a->offset = 0; impl_a->alloclen = BSON_MAX (5, size); impl_a->alloc = bson_malloc (impl_a->alloclen); impl_a->alloc[0] = 5; impl_a->alloc[1] = 0; impl_a->alloc[2] = 0; impl_a->alloc[3] = 0; impl_a->alloc[4] = 0; impl_a->realloc = bson_realloc_ctx; impl_a->realloc_func_ctx = NULL; } return b; } bson_t * bson_new_from_data (const uint8_t *data, size_t length) { uint32_t len_le; bson_t *bson; bson_return_val_if_fail (data, NULL); if ((length < 5) || (length > INT_MAX) || data [length - 1]) { return NULL; } memcpy (&len_le, data, sizeof (len_le)); if (length != (size_t)BSON_UINT32_FROM_LE (len_le)) { return NULL; } bson = 
bson_sized_new (length); memcpy (_bson_data (bson), data, length); bson->len = (uint32_t)length; return bson; } bson_t * bson_new_from_buffer (uint8_t **buf, size_t *buf_len, bson_realloc_func realloc_func, void *realloc_func_ctx) { bson_impl_alloc_t *impl; uint32_t len_le; uint32_t length; bson_t *bson; bson_return_val_if_fail (buf, NULL); bson_return_val_if_fail (buf_len, NULL); if (!realloc_func) { realloc_func = bson_realloc_ctx; } bson = bson_malloc0 (sizeof *bson); impl = (bson_impl_alloc_t *)bson; if (!*buf) { length = 5; len_le = BSON_UINT32_TO_LE (length); *buf_len = 5; *buf = realloc_func (*buf, *buf_len, realloc_func_ctx); memcpy (*buf, &len_le, sizeof (len_le)); (*buf) [4] = '\0'; } else { if ((*buf_len < 5) || (*buf_len > INT_MAX)) { bson_free (bson); return NULL; } memcpy (&len_le, *buf, sizeof (len_le)); length = BSON_UINT32_FROM_LE(len_le); } if ((*buf)[length - 1]) { bson_free (bson); return NULL; } impl->flags = BSON_FLAG_NO_FREE; impl->len = length; impl->buf = buf; impl->buflen = buf_len; impl->realloc = realloc_func; impl->realloc_func_ctx = realloc_func_ctx; return bson; } bson_t * bson_copy (const bson_t *bson) { const uint8_t *data; bson_return_val_if_fail (bson, NULL); data = _bson_data (bson); return bson_new_from_data (data, bson->len); } void bson_copy_to (const bson_t *src, bson_t *dst) { const uint8_t *data; bson_impl_alloc_t *adst; size_t len; bson_return_if_fail (src); bson_return_if_fail (dst); if ((src->flags & BSON_FLAG_INLINE)) { memcpy (dst, src, sizeof *dst); dst->flags = (BSON_FLAG_STATIC | BSON_FLAG_INLINE); return; } data = _bson_data (src); len = bson_next_power_of_two ((size_t)src->len); adst = (bson_impl_alloc_t *)dst; adst->flags = BSON_FLAG_STATIC; adst->len = src->len; adst->parent = NULL; adst->depth = 0; adst->buf = &adst->alloc; adst->buflen = &adst->alloclen; adst->offset = 0; adst->alloc = bson_malloc (len); adst->alloclen = len; adst->realloc = bson_realloc_ctx; adst->realloc_func_ctx = NULL; memcpy (adst->alloc, data, src->len); } static bool should_ignore (const char *first_exclude, va_list args, const char *name) { bool ret = false; const char *exclude = first_exclude; va_list args_copy; va_copy (args_copy, args); do { if (!strcmp (name, exclude)) { ret = true; break; } } while ((exclude = va_arg (args_copy, const char *))); va_end (args_copy); return ret; } static void _bson_copy_to_excluding_va (const bson_t *src, bson_t *dst, const char *first_exclude, va_list args) { bson_iter_t iter; if (bson_iter_init (&iter, src)) { while (bson_iter_next (&iter)) { if (!should_ignore (first_exclude, args, bson_iter_key (&iter))) { if (!bson_append_iter (dst, NULL, 0, &iter)) { /* * This should not be able to happen since we are copying * from within a valid bson_t. */ BSON_ASSERT (false); return; } } } } } void bson_copy_to_excluding (const bson_t *src, bson_t *dst, const char *first_exclude, ...) { va_list args; bson_return_if_fail (src); bson_return_if_fail (dst); bson_return_if_fail (first_exclude); bson_init (dst); va_start (args, first_exclude); _bson_copy_to_excluding_va (src, dst, first_exclude, args); va_end (args); } void bson_copy_to_excluding_noinit (const bson_t *src, bson_t *dst, const char *first_exclude, ...) 
{ va_list args; bson_return_if_fail (src); bson_return_if_fail (dst); bson_return_if_fail (first_exclude); va_start (args, first_exclude); _bson_copy_to_excluding_va (src, dst, first_exclude, args); va_end (args); } void bson_destroy (bson_t *bson) { BSON_ASSERT (bson); if (!(bson->flags & (BSON_FLAG_RDONLY | BSON_FLAG_INLINE | BSON_FLAG_NO_FREE))) { bson_free (*((bson_impl_alloc_t *)bson)->buf); } if (!(bson->flags & BSON_FLAG_STATIC)) { bson_free (bson); } } uint8_t * bson_destroy_with_steal (bson_t *bson, bool steal, uint32_t *length) { uint8_t *ret = NULL; bson_return_val_if_fail (bson, NULL); if (length) { *length = bson->len; } if (!steal) { bson_destroy (bson); return NULL; } if ((bson->flags & (BSON_FLAG_CHILD | BSON_FLAG_IN_CHILD | BSON_FLAG_RDONLY))) { /* Do nothing */ } else if ((bson->flags & BSON_FLAG_INLINE)) { bson_impl_inline_t *inl; inl = (bson_impl_inline_t *)bson; ret = bson_malloc (bson->len); memcpy (ret, inl->data, bson->len); } else { bson_impl_alloc_t *alloc; alloc = (bson_impl_alloc_t *)bson; ret = *alloc->buf; *alloc->buf = NULL; } bson_destroy (bson); return ret; } const uint8_t * bson_get_data (const bson_t *bson) { bson_return_val_if_fail (bson, NULL); return _bson_data (bson); } uint32_t bson_count_keys (const bson_t *bson) { uint32_t count = 0; bson_iter_t iter; bson_return_val_if_fail (bson, 0); if (bson_iter_init (&iter, bson)) { while (bson_iter_next (&iter)) { count++; } } return count; } bool bson_has_field (const bson_t *bson, const char *key) { bson_iter_t iter; bson_iter_t child; bson_return_val_if_fail (bson, false); bson_return_val_if_fail (key, false); if (NULL != strchr (key, '.')) { return (bson_iter_init (&iter, bson) && bson_iter_find_descendant (&iter, key, &child)); } return bson_iter_init_find (&iter, bson, key); } int bson_compare (const bson_t *bson, const bson_t *other) { const uint8_t *data1; const uint8_t *data2; size_t len1; size_t len2; int64_t ret; data1 = _bson_data (bson) + 4; len1 = bson->len - 4; data2 = _bson_data (other) + 4; len2 = other->len - 4; if (len1 == len2) { return memcmp (data1, data2, len1); } ret = memcmp (data1, data2, BSON_MIN (len1, len2)); if (ret == 0) { ret = len1 - len2; } return (ret < 0) ? 
-1 : (ret > 0); } bool bson_equal (const bson_t *bson, const bson_t *other) { return !bson_compare (bson, other); } static bool _bson_iter_validate_utf8 (const bson_iter_t *iter, const char *key, size_t v_utf8_len, const char *v_utf8, void *data) { bson_validate_state_t *state = data; bool allow_null; if ((state->flags & BSON_VALIDATE_UTF8)) { allow_null = !!(state->flags & BSON_VALIDATE_UTF8_ALLOW_NULL); if (!bson_utf8_validate (v_utf8, v_utf8_len, allow_null)) { state->err_offset = iter->off; return true; } } if ((state->flags & BSON_VALIDATE_DOLLAR_KEYS)) { if (state->phase == BSON_VALIDATE_PHASE_LF_REF_UTF8) { state->phase = BSON_VALIDATE_PHASE_LF_ID_KEY; } else if (state->phase == BSON_VALIDATE_PHASE_LF_DB_UTF8) { state->phase = BSON_VALIDATE_PHASE_NOT_DBREF; } } return false; } static void _bson_iter_validate_corrupt (const bson_iter_t *iter, void *data) { bson_validate_state_t *state = data; state->err_offset = iter->err_off; } static bool _bson_iter_validate_before (const bson_iter_t *iter, const char *key, void *data) { bson_validate_state_t *state = data; if ((state->flags & BSON_VALIDATE_DOLLAR_KEYS)) { if (key[0] == '$') { if (state->phase == BSON_VALIDATE_PHASE_LF_REF_KEY && strcmp (key, "$ref") == 0) { state->phase = BSON_VALIDATE_PHASE_LF_REF_UTF8; } else if (state->phase == BSON_VALIDATE_PHASE_LF_ID_KEY && strcmp (key, "$id") == 0) { state->phase = BSON_VALIDATE_PHASE_LF_DB_KEY; } else if (state->phase == BSON_VALIDATE_PHASE_LF_DB_KEY && strcmp (key, "$db") == 0) { state->phase = BSON_VALIDATE_PHASE_LF_DB_UTF8; } else { state->err_offset = iter->off; return true; } } else if (state->phase == BSON_VALIDATE_PHASE_LF_ID_KEY || state->phase == BSON_VALIDATE_PHASE_LF_REF_UTF8 || state->phase == BSON_VALIDATE_PHASE_LF_DB_UTF8) { state->err_offset = iter->off; return true; } else { state->phase = BSON_VALIDATE_PHASE_NOT_DBREF; } } if ((state->flags & BSON_VALIDATE_DOT_KEYS)) { if (strstr (key, ".")) { state->err_offset = iter->off; return true; } } return false; } static bool _bson_iter_validate_codewscope (const bson_iter_t *iter, const char *key, size_t v_code_len, const char *v_code, const bson_t *v_scope, void *data) { bson_validate_state_t *state = data; size_t offset; if (!bson_validate (v_scope, state->flags, &offset)) { state->err_offset = iter->off + offset; return false; } return true; } static bool _bson_iter_validate_document (const bson_iter_t *iter, const char *key, const bson_t *v_document, void *data); static const bson_visitor_t bson_validate_funcs = { _bson_iter_validate_before, NULL, /* visit_after */ _bson_iter_validate_corrupt, NULL, /* visit_double */ _bson_iter_validate_utf8, _bson_iter_validate_document, _bson_iter_validate_document, /* visit_array */ NULL, /* visit_binary */ NULL, /* visit_undefined */ NULL, /* visit_oid */ NULL, /* visit_bool */ NULL, /* visit_date_time */ NULL, /* visit_null */ NULL, /* visit_regex */ NULL, /* visit_dbpoint */ NULL, /* visit_code */ NULL, /* visit_symbol */ _bson_iter_validate_codewscope, }; static bool _bson_iter_validate_document (const bson_iter_t *iter, const char *key, const bson_t *v_document, void *data) { bson_validate_state_t *state = data; bson_iter_t child; bson_validate_phase_t phase = state->phase; if (!bson_iter_init (&child, v_document)) { state->err_offset = iter->off; return true; } if (state->phase == BSON_VALIDATE_PHASE_START) { state->phase = BSON_VALIDATE_PHASE_TOP; } else { state->phase = BSON_VALIDATE_PHASE_LF_REF_KEY; } bson_iter_visit_all (&child, &bson_validate_funcs, state); if (state->phase == 
BSON_VALIDATE_PHASE_LF_ID_KEY || state->phase == BSON_VALIDATE_PHASE_LF_REF_UTF8 || state->phase == BSON_VALIDATE_PHASE_LF_DB_UTF8) { state->err_offset = iter->off; return true; } state->phase = phase; return false; } bool bson_validate (const bson_t *bson, bson_validate_flags_t flags, size_t *offset) { bson_validate_state_t state = { flags, -1, BSON_VALIDATE_PHASE_START }; bson_iter_t iter; if (!bson_iter_init (&iter, bson)) { state.err_offset = 0; goto failure; } _bson_iter_validate_document (&iter, NULL, bson, &state); failure: if (offset) { *offset = state.err_offset; } return state.err_offset < 0; } bool bson_concat (bson_t *dst, const bson_t *src) { BSON_ASSERT (dst); BSON_ASSERT (src); if (!bson_empty (src)) { return _bson_append (dst, 1, src->len - 5, src->len - 5, _bson_data (src) + 4); } return true; } MongoDB-v1.2.2/bson/bson.h000644 000765 000024 00000070313 12651754051 015506 0ustar00davidstaff000000 000000 /* * Copyright 2013 MongoDB, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #ifndef BSON_H #define BSON_H #define BSON_INSIDE #include "bson-compat.h" #include #include #include "bson-macros.h" #include "bson-config.h" #include "bson-atomic.h" #include "bson-context.h" #include "bson-clock.h" #include "bson-error.h" #include "bson-iter.h" #include "bson-keys.h" #include "bson-md5.h" #include "bson-memory.h" #include "bson-oid.h" #include "bson-reader.h" #include "bson-string.h" #include "bson-types.h" #include "bson-utf8.h" #include "bson-value.h" #include "bson-version.h" #include "bson-writer.h" #undef BSON_INSIDE BSON_BEGIN_DECLS /** * bson_empty: * @b: a bson_t. * * Checks to see if @b is an empty BSON document. An empty BSON document is * a 5 byte document which contains the length (4 bytes) and a single NUL * byte indicating end of fields. */ #define bson_empty(b) (((b)->len == 5) || !bson_get_data ((b))[4]) /** * bson_empty0: * * Like bson_empty() but treats NULL the same as an empty bson_t document. */ #define bson_empty0(b) (!(b) || bson_empty (b)) /** * bson_clear: * * Easily free a bson document and set it to NULL. Use like: * * bson_t *doc = bson_new(); * bson_clear (&doc); * assert (doc == NULL); */ #define bson_clear(bptr) \ do { \ if (*(bptr)) { \ bson_destroy (*(bptr)); \ *(bptr) = NULL; \ } \ } while (0) /** * BSON_MAX_SIZE: * * The maximum size in bytes of a BSON document. 
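 *
 * For example, a caller receiving documents from elsewhere might guard
 * against oversized input before processing it (a sketch; 'doc' is
 * illustrative):
 *
 *    if (doc->len > BSON_MAX_SIZE) {
 *       return false;
 *    }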
*/ #define BSON_MAX_SIZE ((size_t)((1U << 31) - 1)) #define BSON_APPEND_ARRAY(b,key,val) \ bson_append_array (b, key, (int)strlen (key), val) #define BSON_APPEND_ARRAY_BEGIN(b,key,child) \ bson_append_array_begin (b, key, (int)strlen (key), child) #define BSON_APPEND_BINARY(b,key,subtype,val,len) \ bson_append_binary (b, key, (int) strlen (key), subtype, val, len) #define BSON_APPEND_BOOL(b,key,val) \ bson_append_bool (b, key, (int) strlen (key), val) #define BSON_APPEND_CODE(b,key,val) \ bson_append_code (b, key, (int) strlen (key), val) #define BSON_APPEND_CODE_WITH_SCOPE(b,key,val,scope) \ bson_append_code_with_scope (b, key, (int) strlen (key), val, scope) #define BSON_APPEND_DBPOINTER(b,key,coll,oid) \ bson_append_dbpointer (b, key, (int) strlen (key), coll, oid) #define BSON_APPEND_DOCUMENT_BEGIN(b,key,child) \ bson_append_document_begin (b, key, (int)strlen (key), child) #define BSON_APPEND_DOUBLE(b,key,val) \ bson_append_double (b, key, (int) strlen (key), val) #define BSON_APPEND_DOCUMENT(b,key,val) \ bson_append_document (b, key, (int) strlen (key), val) #define BSON_APPEND_INT32(b,key,val) \ bson_append_int32 (b, key, (int) strlen (key), val) #define BSON_APPEND_INT64(b,key,val) \ bson_append_int64 (b, key, (int) strlen (key), val) #define BSON_APPEND_MINKEY(b,key) \ bson_append_minkey (b, key, (int) strlen (key)) #define BSON_APPEND_MAXKEY(b,key) \ bson_append_maxkey (b, key, (int) strlen (key)) #define BSON_APPEND_NULL(b,key) \ bson_append_null (b, key, (int) strlen (key)) #define BSON_APPEND_OID(b,key,val) \ bson_append_oid (b, key, (int) strlen (key), val) #define BSON_APPEND_REGEX(b,key,val,opt) \ bson_append_regex (b, key, (int) strlen (key), val, opt) #define BSON_APPEND_UTF8(b,key,val) \ bson_append_utf8 (b, key, (int) strlen (key), val, (int) strlen (val)) #define BSON_APPEND_SYMBOL(b,key,val) \ bson_append_symbol (b, key, (int) strlen (key), val, (int) strlen (val)) #define BSON_APPEND_TIME_T(b,key,val) \ bson_append_time_t (b, key, (int) strlen (key), val) #define BSON_APPEND_TIMEVAL(b,key,val) \ bson_append_timeval (b, key, (int) strlen (key), val) #define BSON_APPEND_DATE_TIME(b,key,val) \ bson_append_date_time (b, key, (int) strlen (key), val) #define BSON_APPEND_TIMESTAMP(b,key,val,inc) \ bson_append_timestamp (b, key, (int) strlen (key), val, inc) #define BSON_APPEND_UNDEFINED(b,key) \ bson_append_undefined (b, key, (int) strlen (key)) #define BSON_APPEND_VALUE(b,key,val) \ bson_append_value (b, key, (int) strlen (key), (val)) /** * bson_new: * * Allocates a new bson_t structure. Call the various bson_append_*() * functions to add fields to the bson. You can iterate the bson_t at any * time using a bson_iter_t and bson_iter_init(). * * Returns: A newly allocated bson_t that should be freed with bson_destroy(). */ bson_t * bson_new (void); /** * bson_init_static: * @b: A pointer to a bson_t. * @data: The data buffer to use. * @length: The length of @data. * * Initializes a bson_t using @data and @length. This is ideal if you would * like to use a stack allocation for your bson and do not need to grow the * buffer. @data must be valid for the life of @b. * * Returns: true if initialized successfully; otherwise false. */ bool bson_init_static (bson_t *b, const uint8_t *data, size_t length); /** * bson_init: * @b: A pointer to a bson_t. * * Initializes a bson_t for use. This function is useful to those that want a * stack allocated bson_t. The usefulness of a stack allocated bson_t is * marginal as the target buffer for content will still require heap * allocations. 
It can help reduce heap fragmentation on allocators that do * not employ SLAB/magazine semantics. * * You must call bson_destroy() with @b to release resources when you are done * using @b. */ void bson_init (bson_t *b); /** * bson_reinit: * @b: (inout): A bson_t. * * This is equivalent to calling bson_destroy() and bson_init() on a #bson_t. * However, it will try to persist the existing malloc'd buffer if one exists. * This is useful in cases where you want to reduce malloc overhead while * building many documents. */ void bson_reinit (bson_t *b); /** * bson_new_from_data: * @data: A buffer containing a serialized bson document. * @length: The length of the document in bytes. * * Creates a new bson_t structure using the data provided. @data should contain * at least @length bytes that can be copied into the new bson_t structure. * * Returns: A newly allocated bson_t that should be freed with bson_destroy(). * If the first four bytes (little-endian) of data do not match @length, * then NULL will be returned. */ bson_t * bson_new_from_data (const uint8_t *data, size_t length); /** * bson_new_from_buffer: * @buf: A pointer to a buffer containing a serialized bson document. Or null * @buf_len: The length of the buffer in bytes. * @realloc_fun: a realloc like function * @realloc_fun_ctx: a context for the realloc function * * Creates a new bson_t structure using the data provided. @buf should contain * a bson document, or null pointer should be passed for new allocations. * * Returns: A newly allocated bson_t that should be freed with bson_destroy(). * The underlying buffer will be used and not be freed in destroy. */ bson_t * bson_new_from_buffer (uint8_t **buf, size_t *buf_len, bson_realloc_func realloc_func, void *realloc_func_ctx); /** * bson_sized_new: * @size: A size_t containing the number of bytes to allocate. * * This will allocate a new bson_t with enough bytes to hold a buffer * sized @size. @size must be smaller than INT_MAX bytes. * * Returns: A newly allocated bson_t that should be freed with bson_destroy(). */ bson_t * bson_sized_new (size_t size); /** * bson_copy: * @bson: A bson_t. * * Copies @bson into a newly allocated bson_t. You must call bson_destroy() * when you are done with the resulting value to free its resources. * * Returns: A newly allocated bson_t that should be free'd with bson_destroy() */ bson_t * bson_copy (const bson_t *bson); /** * bson_copy_to: * @src: The source bson_t. * @dst: The destination bson_t. * * Initializes @dst and copies the content from @src into @dst. */ void bson_copy_to (const bson_t *src, bson_t *dst); /** * bson_copy_to_excluding: * @src: A bson_t. * @dst: A bson_t to initialize and copy into. * @first_exclude: First field name to exclude. * * Copies @src into @dst excluding any field that is provided. * This is handy for situations when you need to remove one or * more fields in a bson_t. Note that bson_init() will be called * on dst. */ void bson_copy_to_excluding (const bson_t *src, bson_t *dst, const char *first_exclude, ...) BSON_GNUC_NULL_TERMINATED BSON_GNUC_DEPRECATED_FOR(bson_copy_to_excluding_noinit); /** * bson_copy_to_excluding_noinit: * @src: A bson_t. * @dst: A bson_t to initialize and copy into. * @first_exclude: First field name to exclude. * * The same as bson_copy_to_excluding, but does not call bson_init() * on the dst. This version should be preferred in new code, but the * old function is left for backwards compatibility. 
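 *
 * A sketch of typical use, assuming an existing bson_t src; the excluded
 * key is illustrative and the argument list must be NULL terminated:
 *
 *    bson_t dst;
 *
 *    bson_init (&dst);
 *    bson_copy_to_excluding_noinit (&src, &dst, "_id", NULL);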
*/ void bson_copy_to_excluding_noinit (const bson_t *src, bson_t *dst, const char *first_exclude, ...) BSON_GNUC_NULL_TERMINATED; /** * bson_destroy: * @bson: A bson_t. * * Frees the resources associated with @bson. */ void bson_destroy (bson_t *bson); /** * bson_destroy_with_steal: * @bson: A #bson_t. * @steal: If ownership of the data buffer should be transfered to caller. * @length: (out): location for the length of the buffer. * * Destroys @bson similar to calling bson_destroy() except that the underlying * buffer will be returned and ownership transfered to the caller if @steal * is non-zero. * * If length is non-NULL, the length of @bson will be stored in @length. * * It is a programming error to call this function with any bson that has * been initialized static, or is being used to create a subdocument with * functions such as bson_append_document_begin() or bson_append_array_begin(). * * Returns: a buffer owned by the caller if @steal is true. Otherwise NULL. * If there was an error, NULL is returned. */ uint8_t * bson_destroy_with_steal (bson_t *bson, bool steal, uint32_t *length); /** * bson_get_data: * @bson: A bson_t. * * Fetched the data buffer for @bson of @bson->len bytes in length. * * Returns: A buffer that should not be modified or freed. */ const uint8_t * bson_get_data (const bson_t *bson); /** * bson_count_keys: * @bson: A bson_t. * * Counts the number of elements found in @bson. */ uint32_t bson_count_keys (const bson_t *bson); /** * bson_has_field: * @bson: A bson_t. * @key: The key to lookup. * * Checks to see if @bson contains a field named @key. * * This function is case-sensitive. * * Returns: true if @key exists in @bson; otherwise false. */ bool bson_has_field (const bson_t *bson, const char *key); /** * bson_compare: * @bson: A bson_t. * @other: A bson_t. * * Compares @bson to @other in a qsort() style comparison. * See qsort() for information on how this function works. * * Returns: Less than zero, zero, or greater than zero. */ int bson_compare (const bson_t *bson, const bson_t *other); /* * bson_compare: * @bson: A bson_t. * @other: A bson_t. * * Checks to see if @bson and @other are equal. * * Returns: true if equal; otherwise false. */ bool bson_equal (const bson_t *bson, const bson_t *other); /** * bson_validate: * @bson: A bson_t. * @offset: A location for the error offset. * * Validates a BSON document by walking through the document and inspecting * the fields for valid content. * * Returns: true if @bson is valid; otherwise false and @offset is set. */ bool bson_validate (const bson_t *bson, bson_validate_flags_t flags, size_t *offset); bool bson_append_value (bson_t *bson, const char *key, int key_length, const bson_value_t *value); /** * bson_append_array: * @bson: A bson_t. * @key: The key for the field. * @array: A bson_t containing the array. * * Appends a BSON array to @bson. BSON arrays are like documents where the * key is the string version of the index. For example, the first item of the * array would have the key "0". The second item would have the index "1". * * Returns: true if successful; false if append would overflow max size. */ bool bson_append_array (bson_t *bson, const char *key, int key_length, const bson_t *array); /** * bson_append_binary: * @bson: A bson_t to append. * @key: The key for the field. * @subtype: The bson_subtype_t of the binary. * @binary: The binary buffer to append. * @length: The length of @binary. * * Appends a binary buffer to the BSON document. 
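 *
 * Example sketch, assuming an initialized bson_t *doc; the key and payload
 * are illustrative:
 *
 *    const uint8_t bytes[] = { 0xde, 0xad, 0xbe, 0xef };
 *
 *    bson_append_binary (doc, "payload", -1, BSON_SUBTYPE_BINARY,
 *                        bytes, sizeof bytes);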
/**
 * bson_append_bool:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: The boolean value.
 *
 * Appends a new field to @bson of type BSON_TYPE_BOOL.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_bool (bson_t *bson,
                  const char *key,
                  int key_length,
                  bool value);


/**
 * bson_append_code:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @javascript: JavaScript code to be executed.
 *
 * Appends a field of type BSON_TYPE_CODE to the BSON document. @javascript
 * should contain a script in JavaScript to be executed.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_code (bson_t *bson,
                  const char *key,
                  int key_length,
                  const char *javascript);


/**
 * bson_append_code_with_scope:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @javascript: JavaScript code to be executed.
 * @scope: A bson_t containing the scope for @javascript.
 *
 * Appends a field of type BSON_TYPE_CODEWSCOPE to the BSON document.
 * @javascript should contain a script in JavaScript to be executed.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_code_with_scope (bson_t *bson,
                             const char *key,
                             int key_length,
                             const char *javascript,
                             const bson_t *scope);


/**
 * bson_append_dbpointer:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @collection: The collection name.
 * @oid: The oid of the referenced document.
 *
 * Appends a new field of type BSON_TYPE_DBPOINTER. This datum type is
 * deprecated in the BSON spec and should not be used in new code.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_dbpointer (bson_t *bson,
                       const char *key,
                       int key_length,
                       const char *collection,
                       const bson_oid_t *oid);


/**
 * bson_append_double:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: The double value to append.
 *
 * Appends a new field to @bson of the type BSON_TYPE_DOUBLE.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_double (bson_t *bson,
                    const char *key,
                    int key_length,
                    double value);


/**
 * bson_append_document:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: A bson_t containing the subdocument.
 *
 * Appends a new field to @bson of the type BSON_TYPE_DOCUMENT.
 * The document's contents will be copied into @bson.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_document (bson_t *bson,
                      const char *key,
                      int key_length,
                      const bson_t *value);
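/*
 * Illustrative usage sketch (not part of the original header): building a
 * small document with the scalar bson_append_*() helpers and embedding a
 * separately built subdocument with bson_append_document(), which copies the
 * child. Passing -1 as the key length means the key is NUL-terminated.
 * bson_append_int32() is declared further down in this header; return values
 * are ignored for brevity and the function name is made up for the example.
 */
#if 0
#include <bson.h>

static void
append_example (void)
{
   bson_t doc, child;

   bson_init (&child);
   bson_append_bool (&child, "active", -1, true);
   bson_append_double (&child, "score", -1, 99.5);

   bson_init (&doc);
   bson_append_int32 (&doc, "version", -1, 1);
   bson_append_document (&doc, "details", -1, &child);   /* copies @child */

   bson_destroy (&child);   /* safe: its contents were copied into doc */
   bson_destroy (&doc);
}
#endif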
/**
 * bson_append_document_begin:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @key_length: The length of @key in bytes, not including the NUL, or -1
 *   if @key is NUL terminated.
 * @child: A location to an uninitialized bson_t.
 *
 * Appends a new field named @key to @bson. The field is, however,
 * incomplete. @child will be initialized so that you may add fields to the
 * child document. @child will use a memory buffer owned by @bson and
 * therefore grow the parent buffer as additional space is used. This allows
 * a single malloc'd buffer to be used when building documents, which can help
 * reduce memory fragmentation.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_document_begin (bson_t *bson,
                            const char *key,
                            int key_length,
                            bson_t *child);


/**
 * bson_append_document_end:
 * @bson: A bson_t.
 * @child: A bson_t supplied to bson_append_document_begin().
 *
 * Finishes the appending of a document to @bson. @child is considered
 * disposed after this call and should not be used any further.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_document_end (bson_t *bson, bson_t *child);


/**
 * bson_append_array_begin:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @key_length: The length of @key in bytes, not including the NUL, or -1
 *   if @key is NUL terminated.
 * @child: A location to an uninitialized bson_t.
 *
 * Appends a new field named @key to @bson. The field is, however,
 * incomplete. @child will be initialized so that you may add fields to the
 * child array. @child will use a memory buffer owned by @bson and
 * therefore grow the parent buffer as additional space is used. This allows
 * a single malloc'd buffer to be used when building arrays, which can help
 * reduce memory fragmentation.
 *
 * The type of @child will be BSON_TYPE_ARRAY and therefore the keys inside
 * of it MUST be "0", "1", etc.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_array_begin (bson_t *bson,
                         const char *key,
                         int key_length,
                         bson_t *child);


/**
 * bson_append_array_end:
 * @bson: A bson_t.
 * @child: A bson_t supplied to bson_append_array_begin().
 *
 * Finishes the appending of an array to @bson. @child is considered
 * disposed after this call and should not be used any further.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_array_end (bson_t *bson, bson_t *child);


/**
 * bson_append_int32:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: The int32_t 32-bit integer value.
 *
 * Appends a new field of type BSON_TYPE_INT32 to @bson.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_int32 (bson_t *bson,
                   const char *key,
                   int key_length,
                   int32_t value);


/**
 * bson_append_int64:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: The int64_t 64-bit integer value.
 *
 * Appends a new field of type BSON_TYPE_INT64 to @bson.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_int64 (bson_t *bson,
                   const char *key,
                   int key_length,
                   int64_t value);


/**
 * bson_append_iter:
 * @bson: A bson_t to append to.
 * @key: The key name or %NULL to take the current key from @iter.
 * @key_length: The key length or -1 to use strlen().
 * @iter: The iter located on the position of the element to append.
 *
 * Appends a new field to @bson that is equivalent to the field currently
 * pointed to by @iter.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_iter (bson_t *bson,
                  const char *key,
                  int key_length,
                  const bson_iter_t *iter);


/**
 * bson_append_minkey:
 * @bson: A bson_t.
 * @key: The key for the field.
 *
 * Appends a new field of type BSON_TYPE_MINKEY to @bson. This is a special
 * type that compares lower than all other possible BSON element values.
 *
 * See http://bsonspec.org for more information on this type.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_minkey (bson_t *bson, const char *key, int key_length);
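/*
 * Illustrative usage sketch (not part of the original header): building an
 * embedded array in place with bson_append_array_begin()/bson_append_array_end()
 * so that the child shares the parent's buffer. Array keys must be "0", "1",
 * and so on; snprintf() is used here to produce them. The function name is
 * made up for the example and the block is not compiled.
 */
#if 0
#include <bson.h>
#include <stdio.h>

static void
array_builder_example (void)
{
   bson_t doc, tags;
   char key[16];
   int i;

   bson_init (&doc);
   bson_append_array_begin (&doc, "tags", -1, &tags);
   for (i = 0; i < 3; i++) {
      snprintf (key, sizeof key, "%d", i);     /* "0", "1", "2" */
      bson_append_int32 (&tags, key, -1, i);
   }
   bson_append_array_end (&doc, &tags);   /* @tags is disposed after this */

   bson_destroy (&doc);
}
#endif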
/**
 * bson_append_maxkey:
 * @bson: A bson_t.
 * @key: The key for the field.
 *
 * Appends a new field of type BSON_TYPE_MAXKEY to @bson. This is a special
 * type that compares higher than all other possible BSON element values.
 *
 * See http://bsonspec.org for more information on this type.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_maxkey (bson_t *bson, const char *key, int key_length);


/**
 * bson_append_null:
 * @bson: A bson_t.
 * @key: The key for the field.
 *
 * Appends a new field to @bson with NULL for the value.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_null (bson_t *bson, const char *key, int key_length);


/**
 * bson_append_oid:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @oid: A bson_oid_t.
 *
 * Appends a new field to @bson of type BSON_TYPE_OID using the contents of
 * @oid.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_oid (bson_t *bson,
                 const char *key,
                 int key_length,
                 const bson_oid_t *oid);


/**
 * bson_append_regex:
 * @bson: A bson_t.
 * @key: The key of the field.
 * @regex: The regex to append to the bson.
 * @options: Options for @regex.
 *
 * Appends a new field to @bson of type BSON_TYPE_REGEX. @regex should
 * be the regex string. @options should contain the options for the regex.
 *
 * Valid options for @options are:
 *
 *   'i' for case-insensitive.
 *   'm' for multiple matching.
 *   'x' for verbose mode.
 *   'l' to make \w and \W locale dependent.
 *   's' for dotall mode ('.' matches everything)
 *   'u' to make \w and \W match Unicode.
 *
 * For more information on what comprises a BSON regex, see bsonspec.org.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_regex (bson_t *bson,
                   const char *key,
                   int key_length,
                   const char *regex,
                   const char *options);


/**
 * bson_append_utf8:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: A UTF-8 encoded string.
 * @length: The length of @value or -1 if it is NUL terminated.
 *
 * Appends a new field to @bson using @key as the key and @value as the UTF-8
 * encoded value.
 *
 * It is the caller's responsibility to ensure @value is valid UTF-8. You can
 * use bson_utf8_validate() to perform this check.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_utf8 (bson_t *bson,
                  const char *key,
                  int key_length,
                  const char *value,
                  int length);


/**
 * bson_append_symbol:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: The symbol as a string.
 * @length: The length of @value or -1 if NUL-terminated.
 *
 * Appends a new field to @bson of type BSON_TYPE_SYMBOL. This BSON type is
 * deprecated and should not be used in new code.
 *
 * See http://bsonspec.org for more information on this type.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_symbol (bson_t *bson,
                    const char *key,
                    int key_length,
                    const char *value,
                    int length);


/**
 * bson_append_time_t:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: A time_t.
 *
 * Appends a BSON_TYPE_DATE_TIME field to @bson using the time_t @value for the
 * number of seconds since the UNIX epoch in UTC.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_time_t (bson_t *bson,
                    const char *key,
                    int key_length,
                    time_t value);
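/*
 * Illustrative usage sketch (not part of the original header): appending a
 * UTF-8 string (checked first with bson_utf8_validate(), which is declared
 * elsewhere in libbson), a regular expression, and a creation time. Return
 * values are ignored for brevity and the function name is made up for the
 * example; the block is not compiled.
 */
#if 0
#include <bson.h>
#include <string.h>
#include <time.h>

static void
string_and_regex_example (const char *name)
{
   bson_t doc;

   bson_init (&doc);

   if (bson_utf8_validate (name, strlen (name), false)) {
      bson_append_utf8 (&doc, "name", -1, name, -1);
   }

   bson_append_regex (&doc, "pattern", -1, "^foo", "i");
   bson_append_time_t (&doc, "created", -1, time (NULL));

   bson_destroy (&doc);
}
#endif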
/**
 * bson_append_timeval:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @value: A struct timeval containing the date and time.
 *
 * Appends a BSON_TYPE_DATE_TIME field to @bson using the struct timeval
 * provided. The time is persisted in milliseconds since the UNIX epoch in UTC.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_timeval (bson_t *bson,
                     const char *key,
                     int key_length,
                     struct timeval *value);


/**
 * bson_append_date_time:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @key_length: The length of @key in bytes or -1 if \0 terminated.
 * @value: The number of milliseconds elapsed since the UNIX epoch.
 *
 * Appends a new field to @bson of type BSON_TYPE_DATE_TIME.
 *
 * Returns: true if successful; otherwise false.
 */
bool
bson_append_date_time (bson_t *bson,
                       const char *key,
                       int key_length,
                       int64_t value);


/**
 * bson_append_now_utc:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @key_length: The length of @key or -1 if it is NUL terminated.
 *
 * Appends a BSON_TYPE_DATE_TIME field to @bson using the current time in UTC
 * as the field value.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_now_utc (bson_t *bson, const char *key, int key_length);


/**
 * bson_append_timestamp:
 * @bson: A bson_t.
 * @key: The key for the field.
 * @timestamp: 4 byte timestamp.
 * @increment: 4 byte increment for timestamp.
 *
 * Appends a field of type BSON_TYPE_TIMESTAMP to @bson. This is a special type
 * used by MongoDB replication and sharding. If you need generic time and date
 * fields use bson_append_time_t() or bson_append_timeval().
 *
 * Setting @increment and @timestamp to zero has special semantics. See
 * http://bsonspec.org for more information on this field type.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_timestamp (bson_t *bson,
                       const char *key,
                       int key_length,
                       uint32_t timestamp,
                       uint32_t increment);


/**
 * bson_append_undefined:
 * @bson: A bson_t.
 * @key: The key for the field.
 *
 * Appends a field of type BSON_TYPE_UNDEFINED. This type is deprecated in the
 * spec and should not be used for new code. However, it is provided for those
 * needing to interact with legacy systems.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_append_undefined (bson_t *bson, const char *key, int key_length);


/**
 * bson_concat:
 * @dst: The destination bson_t.
 * @src: The source bson_t.
 *
 * Appends the fields of @src to @dst.
 *
 * Returns: true if successful; false if append would overflow max size.
 */
bool
bson_concat (bson_t *dst, const bson_t *src);


BSON_END_DECLS


#endif /* BSON_H */
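/*
 * Illustrative usage sketch (not part of the original header): stamping a
 * document with the current time via bson_append_now_utc() and merging the
 * fields of one document into another with bson_concat(). The function name
 * is made up for the example and the block is not compiled.
 */
#if 0
#include <bson.h>

static void
concat_example (void)
{
   bson_t base, extra;

   bson_init (&base);
   bson_append_now_utc (&base, "updated_at", -1);

   bson_init (&extra);
   bson_append_int64 (&extra, "counter", -1, 42);

   bson_concat (&base, &extra);   /* copies extra's fields onto base */

   bson_destroy (&extra);
   bson_destroy (&base);
}
#endif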