NAME
    Search::Elasticsearch - The official client for Elasticsearch

VERSION
    version 5.01

SYNOPSIS
        use Search::Elasticsearch;

        # Connect to localhost:9200:
        my $e = Search::Elasticsearch->new();

        # Round-robin between two nodes:
        my $e = Search::Elasticsearch->new(
            nodes => [ 'search1:9200', 'search2:9200' ]
        );

        # Connect to cluster at search1:9200, sniff all nodes and round-robin between them:
        my $e = Search::Elasticsearch->new(
            nodes    => 'search1:9200',
            cxn_pool => 'Sniff'
        );

        # Index a document:
        $e->index(
            index => 'my_app',
            type  => 'blog_post',
            id    => 1,
            body  => {
                title   => 'Elasticsearch clients',
                content => 'Interesting content...',
                date    => '2013-09-24'
            }
        );

        # Get the document:
        my $doc = $e->get( index => 'my_app', type => 'blog_post', id => 1 );

        # Search:
        my $results = $e->search(
            index => 'my_app',
            body  => {
                query => { match => { title => 'elasticsearch' } }
            }
        );

        # Cluster requests:
        $info       = $e->cluster->info;
        $health     = $e->cluster->health;
        $node_stats = $e->cluster->node_stats;

        # Index requests:
        $e->indices->create( index => 'my_index' );
        $e->indices->delete( index => 'my_index' );

DESCRIPTION
    Search::Elasticsearch is the official Perl client for Elasticsearch,
    supported by elastic.co. Elasticsearch itself is a flexible and
    powerful open source, distributed, real-time search and analytics
    engine for the cloud. You can read more about it on elastic.co.

PREVIOUS VERSIONS OF ELASTICSEARCH
    This version of the client supports the Elasticsearch 5.0 branch,
    which is not backwards compatible with earlier branches.

    If you need to talk to a version of Elasticsearch before 5.0.0,
    please install one of the following packages:

    *   Search::Elasticsearch::Client::2_0

    *   Search::Elasticsearch::Client::1_0

    *   Search::Elasticsearch::Client::0_90

  Motivation
    *The greatest deception men suffer is from their own opinions.*

    Leonardo da Vinci

    All of us have opinions, especially when it comes to designing APIs.
    Unfortunately, the opinions of programmers seldom coincide. The
    intention of this client, and of the officially supported clients
    available for other languages, is to provide robust support for the
    full native Elasticsearch API with as few opinions as possible: you
    should be able to read the Elasticsearch reference documentation and
    understand how to use this client, or any of the other official
    clients.

    Should you decide that you want to customize the API, then this
    client provides the basis for your code. It does the hard stuff for
    you, allowing you to build on top of it.

  Features
    This client provides:

    *   Full support for all Elasticsearch APIs

    *   HTTP backend (for an async backend using Promises, see
        Search::Elasticsearch::Async)

    *   Robust networking support which handles load balancing, failure
        detection and failover

    *   Good defaults

    *   Helper utilities for more complex operations, such as bulk
        indexing and scrolled searches

    *   Logging support via Log::Any

    *   Compatibility with the official clients for Python, Ruby, PHP and
        JavaScript

    *   Easy extensibility

INSTALLING ELASTICSEARCH
    You can download the latest version of Elasticsearch from the
    downloads page on elastic.co. See the installation instructions there
    for details. You will need to have a recent version of Java
    installed, preferably Java v8 from Oracle.

CREATING A NEW INSTANCE
    The "new()" method returns a new client which can be used to run
    requests against the Elasticsearch cluster.
        use Search::Elasticsearch;
        my $e = Search::Elasticsearch->new( %params );

    The most important arguments to "new()" are the following:

  "nodes"
    The "nodes" parameter tells the client which Elasticsearch nodes it
    should talk to. It can be a single node, multiple nodes or, if not
    specified, will default to "localhost:9200":

        # default: localhost:9200
        $e = Search::Elasticsearch->new();

        # single
        $e = Search::Elasticsearch->new( nodes => 'search_1:9200' );

        # multiple
        $e = Search::Elasticsearch->new(
            nodes => [ 'search_1:9200', 'search_2:9200' ]
        );

    Each "node" can be a URL including a scheme, host, port, path and
    userinfo (for authentication). For instance, this would be a valid
    node:

        https://username:password@search.domain.com:443/prefix/path

    See "node" in Search::Elasticsearch::Role::Cxn for more on node
    specification.

  "cxn_pool"
    The CxnPool modules manage connections to nodes in the Elasticsearch
    cluster. They handle the load balancing between nodes and failover
    when nodes fail. Which "CxnPool" you should use depends on where your
    cluster is. There are three choices:

    *   "Static"

            $e = Search::Elasticsearch->new(
                cxn_pool => 'Static',    # default
                nodes    => [
                    'search1.domain.com:9200',
                    'search2.domain.com:9200'
                ],
            );

        The Static connection pool, which is the default, should be used
        when you don't have direct access to the Elasticsearch cluster,
        eg when you are accessing the cluster through a proxy. See
        Search::Elasticsearch::CxnPool::Static for more.

    *   "Sniff"

            $e = Search::Elasticsearch->new(
                cxn_pool => 'Sniff',
                nodes    => [ 'search1:9200', 'search2:9200' ],
            );

        The Sniff connection pool should be used when you do have direct
        access to the Elasticsearch cluster, eg when your web servers and
        Elasticsearch servers are on the same network. The nodes that you
        specify are used to *discover* the cluster, which is then
        *sniffed* to find the current list of live nodes that the cluster
        knows about. See Search::Elasticsearch::CxnPool::Sniff.

    *   "Static::NoPing"

            $e = Search::Elasticsearch->new(
                cxn_pool => 'Static::NoPing',
                nodes    => [ 'proxy1.domain.com:80', 'proxy2.domain.com:80' ],
            );

        The Static::NoPing connection pool should be used when your
        access to a remote cluster is so limited that you cannot ping
        individual nodes with a "HEAD /" request. See
        Search::Elasticsearch::CxnPool::Static::NoPing for more.

  "trace_to"
    For debugging purposes, it is useful to be able to dump the actual
    HTTP requests which are sent to the cluster, and the responses that
    are received. This can be enabled with the "trace_to" parameter, as
    follows:

        # To STDERR
        $e = Search::Elasticsearch->new( trace_to => 'Stderr' );

        # To a file
        $e = Search::Elasticsearch->new(
            trace_to => [ 'File', '/path/to/filename' ]
        );

    Logging is handled by Log::Any. See
    Search::Elasticsearch::Logger::LogAny for more information.

  Other
    Other arguments are explained in the respective module docs.

RUNNING REQUESTS
    When you create a new instance of Search::Elasticsearch, it returns a
    client object, which can be used for running requests.

        use Search::Elasticsearch;
        my $e = Search::Elasticsearch->new( %params );

        # create an index
        $e->indices->create( index => 'my_index' );

        # index a document
        $e->index(
            index => 'my_index',
            type  => 'blog_post',
            id    => 1,
            body  => {
                title   => 'Elasticsearch clients',
                content => 'Interesting content...',
                date    => '2013-09-24'
            }
        );

    See Search::Elasticsearch::Client::5_0::Direct for more details about
    the requests that can be run.
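    For more complex operations, the client also ships with helper
    classes for bulk indexing and scrolled searches. The following is a
    minimal sketch using the "bulk_helper()" and "scroll_helper()"
    methods provided by the 5_0::Direct client; the index, type and
    document fields shown here are illustrative only:

        use Search::Elasticsearch;
        my $e = Search::Elasticsearch->new( nodes => 'localhost:9200' );

        # Bulk indexing: actions are buffered and flushed automatically
        # once max_count actions have been queued.
        my $bulk = $e->bulk_helper(
            index     => 'my_index',
            type      => 'blog_post',
            max_count => 500,
        );
        $bulk->index( { id => $_, source => { title => "Post $_" } } )
            for 1 .. 1000;
        $bulk->flush;    # flush any remaining buffered actions

        # Scrolled search: iterate over every matching document without
        # loading the whole result set into memory.
        my $scroll = $e->scroll_helper(
            index => 'my_index',
            body  => { query => { match_all => {} } },
        );
        while ( my $doc = $scroll->next ) {
            print $doc->{_source}{title}, "\n";
        }

    See Search::Elasticsearch::Client::5_0::Bulk and
    Search::Elasticsearch::Client::5_0::Scroll for the full set of
    options these helpers accept.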
MODULES
    Each chunk of functionality is handled by a different module, which
    can be specified in the call to new() as shown in "cxn_pool" above.
    For instance, the following will use the
    Search::Elasticsearch::CxnPool::Sniff module for the connection pool.

        $e = Search::Elasticsearch->new(
            cxn_pool => 'Sniff'
        );

    Custom modules can be named with the appropriate prefix, eg
    "Search::Elasticsearch::CxnPool::", or by prefixing the full class
    name with "+":

        $e = Search::Elasticsearch->new(
            cxn_pool => '+My::Custom::CxnClass'
        );

    The modules that you can override are specified with the following
    arguments to "new()":

  "client"
    The class to use for the client functionality, which provides methods
    that can be called to execute requests, such as "search()", "index()"
    or "delete()". The client parses the user's requests and passes them
    to the "transport" class to be executed.

    The default version of the client is "5_0::Direct", which can be
    explicitly specified as follows:

        $e = Search::Elasticsearch->new(
            client => '5_0::Direct'
        );

  "transport"
    The Transport class accepts a parsed request from the "client" class,
    fetches a "cxn" from its "cxn_pool" and tries to execute the request,
    retrying after failure where appropriate. See:

    *   Search::Elasticsearch::Transport

  "cxn"
    The class which handles raw requests to Elasticsearch nodes. See:

    *   Search::Elasticsearch::Cxn::HTTPTiny (default)

    *   Search::Elasticsearch::Cxn::Hijk

    *   Search::Elasticsearch::Cxn::LWP

    *   Search::Elasticsearch::Cxn::NetCurl

  "cxn_factory"
    The class which the "cxn_pool" uses to create new "cxn" objects. See:

    *   Search::Elasticsearch::Cxn::Factory

  "cxn_pool"
    The class to use for the connection pool functionality. It calls the
    "cxn_factory" class to create new "cxn" objects when appropriate.
    See:

    *   Search::Elasticsearch::CxnPool::Static (default)

    *   Search::Elasticsearch::CxnPool::Sniff

    *   Search::Elasticsearch::CxnPool::Static::NoPing

  "logger"
    The class to use for logging events and tracing HTTP
    requests/responses. See:

    *   Search::Elasticsearch::Logger::LogAny

  "serializer"
    The class to use for serializing request bodies and deserializing
    response bodies. See:

    *   Search::Elasticsearch::Serializer::JSON (default)

    *   Search::Elasticsearch::Serializer::JSON::Cpanel

    *   Search::Elasticsearch::Serializer::JSON::XS

    *   Search::Elasticsearch::Serializer::JSON::PP

BUGS
    This is a stable API but this implementation is new. Watch this space
    for new releases.

    If you have any suggestions for improvements, or find any bugs,
    please report them to the issue tracker at
    https://github.com/elastic/elasticsearch-perl/issues. I will be
    notified, and then you'll automatically be notified of progress on
    your bug as I make changes.

SUPPORT
    You can find documentation for this module with the perldoc command.

        perldoc Search::Elasticsearch

    You can also look for information at:

    *   GitHub: https://github.com/elastic/elasticsearch-perl

    *   CPAN Ratings

    *   Search MetaCPAN

    *   IRC: the #elasticsearch channel on "irc.freenode.net"

    *   Mailing list: the main Elasticsearch mailing list

TEST SUITE
    The full test suite requires a live Elasticsearch node to run, and
    should be run as:

        perl Makefile.PL
        ES=localhost:9200 make test

    TESTS RUN IN THIS WAY ARE DESTRUCTIVE! DO NOT RUN AGAINST A CLUSTER
    WITH DATA YOU WANT TO KEEP!

    You can change the Cxn class which is used by setting the "ES_CXN"
    environment variable:

        ES_CXN=Hijk ES=localhost:9200 make test

AUTHOR
    Clinton Gormley

COPYRIGHT AND LICENSE
    This software is Copyright (c) 2016 by Elasticsearch BV.
This is free software, licensed under: The Apache License, Version 2.0, January 2004 Changes100644000765000024 3272113001720020 16730 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01Revision history for Search::Elasticsearch 5.01 2016-10-19 Doc fixes 5.00 2016-10-19 This version adds Elasticsearch 5.x compatibility, and makes it the default. It also adds deprecation logging which logs to STDERR by default. The Hijk backend will not work with Elasticsearch 5.x until this bug is fixed: https://rt.cpan.org/Ticket/Display.html?id=118425 BREAKING CHANGES: * The 0.90, 1.x, and 2.x compatible clients no longer ship by default. You should install one of the following: * Search::Elasticsearch::Client::2_0 * Search::Elasticsearch::Client::2_0::Async * Search::Elasticsearch::Client::1_0 * Search::Elasticsearch::Client::1_0::Async * Search::Elasticsearch::Client::0_90 * Search::Elasticsearch::Client::0_90::Async * The code has been reorganised so that all client-related modules are under the S::E::API_VERSION::Client namespace. This includes S::E::Bulk and S::E::Scroll. * Plugin authors note: the format for the API in ...Role::API has changed. * S::E::Cxn::HTTP has been rolled into S::E::Cxn as Elasticsearch no longer supports other protocols. 2.03 2016-05-24 Added register_qs() to allow plugins to expand known query string params Added api_version() to API roles so that plugins can determine which API version they should load. 2.02 2016-04-20 Bug fix: Sniffed http addresses may or may not have the hostname/ prepended 2.01 2016-04-15 Updated for Elasticsearch 2.3.0 New features: * Added delete_by_query(), reindex(), and update_by_query() * Added tasks.list() and tasks.cancel() * Added ignore_unavailable to cat.snapshots() * Added attributes and explain to indices.analyze() * Added verbose to indices.segments() * S::ES::Error now implements TO_JSON * TestServer can now be used with forked processes Bug fixes: * Search::Elasticsearch::Error shouldn't be a Moo class * Search::Elasticsearch::Scroll can now be used in a forked process * Hijk is now optional as it doesn't work on windows * cat.snapshots requires a repository 2.00 2015-10-28 The default client is now '2_0::Direct', for use with Elasticsearch 2.x. Specify client '1_0::Direct' if using with Elasticsearch 1.x. Breaking: * The field parameter to indices.get_field_mapping() has been renamed to fields New features: * Added fields param to Bulk helper * The name parameter to indices.get_template() can accept multiple options * Added indices.forcemerge() and deprecated indices.optimize() * The index parameter to indices.open() and indices.close() is required * Added allow_no_indices, expand_wildcards, and ignore_unavailable params to indices.flush_synced() * Added the timeout param to cluster.stats(), nodes.hot_threads(), nodes.stats(), and nodes.info() * cluster.health() can accept multiple indices * Added cat.repositories() and cat.snapshots() * Added detect_noop param to update() * search_shards() accepts multi values for index/type * delete_template() requires an id * Add fork protection to Scroll and Async::Scroll Bug fix: * Added missing debug QS param 1.99 2015-08-26 This release provides support for Elasticsearch 2.0.0-beta1 and above, but the default client is still '1_0::Direct' and will remain so until version 2.00 is released. 
New features: * Added default_qs_params, which will be added to every request * Added max_time to the Bulk helper, to flush after a max elapsed time * Added filter_path parameter to all methods which return JSON * Added indices.flush_synced() * Added render_search_template() * Added cat.nodeattrs() * Added human flag to indices.get and indices.get_settings * Added rewrite flag to indices.validate_query * Added rewrite flag to indices.analyze * Added fields param to bulk() * Added update_all_types to indices.create and indices.put_mapping * Added request_cache to indices.put_warmer and indices.stats * Added request to indices.clear_cache * Added RequestTimeout exception for server-side timeouts * Updated Plugin::Watcher with 1.0 API Removed: * Removed id and id_cache from indices.clear_cache * Removed filter and filter_cache from indices.clear_cache * Removed ignore_conflict from indices.put_mapping Bugfixes: * Fixed error handling in Hijk * Fixed live test to non-existent IP address 1.20 2015-05-17 Deprecated: * Search::Elasticsearch::Client::Direct in favour of Search::Elasticsearch::Client::1_0::Direct New features: * Added support for structured JSON exceptions in Elasticsearch 2.0 * Added support for plugins * Added Search::Elasticsearch::Client::2_0::Direct for the upcoming Elasticsearch 2.0 with these changes: * removed delete_by_query() * removed termvector() * removed indices.delete_mapping() * removed nodes.shutdown() * removed indices.status() * added terminate_after param to search() * added dfs param to termvectors() * removed filter_keys param from indices.clear_cache() * removed full param from indices.flush() * removed force param from indics.optmize() * removed replication param from all CRUD methods * removed mlt() method Bug fix: * The bulk buffer was being cleared on a NoNodes exception Added class: Added methods: * field_stats() Added params: * allow_no_indices, expand_wildcards, ignore_unavailable to cluster.state() * fielddata_fields to search() * master_timeout to indices.get_template() and indices.exists_template() * detect_noop to update() * only_ancient_segments to upgrade() * analyze_wildcards, analyzer, default_operator, df, lenient, lowercase_expanded_terms, and q to count(), search_exists() and indices.validate_query() Removed methods: * benchmark.* - never released in Elasticsearch Also: * arrays of enum query string params are now flattened as CSV * enum expand_wildcards also accepts: none, all * Search::Elasticsearch is no longer a Moo class * Updated elasticsearch.org URLs to use elastic.co instead * the request body is retained in exceptions * upgraded Hijk to 0.20 1.19 2015-01-15 Added method: * cat.segments() Added exceptions: * Unauthorized - for invalid user creds * SSL - for invalid SSL certs Renamed exception: * ClusterBlock -> Forbidden Also: * Simplified SSL support for HTTP::Tiny, LWP and improved instructions * Added optional tests for https/authz/authen 1.17 2014-12-29 Bug fix: * handle_args were not being passed to all backends, meaning that (eg) cookies could not be used Dependency bump: * Log::Any 1.02 broke bwc - fixed to work with new version Added params: * op_type, version, version_type to indices.put_template * version, version_type to indices.delete_template * version, version_type to termvectors * master_timeout, timeout to cluster.put_settings * ignore_idle_threads to nodes.hot_threads * terminate_after to search Deprecated: * termvector in favour of termvectors (but old method still works for now) 1.16 2014-11-15 Added dependency on 
Pod::Simple, which was causing installation on perl 5.8 to fail Added params: * percolate_preference and percolate_routing to percolate() Bug fix: * the index param is now required for indices.delete() 1.15 2014-11-05 Enhancements: * All backends (except Hijk) now default to not verifying SSL identities, but accept ssl_options to allow backend-specific configuration * Improved Mojo exceptions Bug fix: * is_https() didn't work Changed: * index param to put_alias() is now required Added methods: * index.get() * search_exists() * indices.upgrade() * indices.get_upgrade() * snapshot.verify_repository() Added parameters: * query_cache to search(), clear_cache(), stats() * wait_if_ongoing to flush() * script_id and scripted_upsert to update() * version and version_type to put_script(), get_script(), delete_script(), put_template(), get_template(), and delete_template() * op_type to put_script() and put_template() * metric to cluster_reroute() * realtime to termvector() and mtermvector() * dfs to termvector() Removed parameters: * filter_metadata from cluster_reroute() * search_query_hint from mlt() Bumped versions: JSON::XS 2.26 Package::Stash 0.34 Log::Any 0.15 1.14 2014-07-24 Added support for indexed scripts and indexed templates. 1.13 2014-06-13 Breaking change: The Scroll helper used to pass the scroll ID to scroll() and clear_scroll() in the query string by default, with the scroll_in_body parameter to change the behaviour. This was causing frequent errors with long scroll IDs, so the new default behaviour is to pass the scroll ID in the body, with the scroll_in_qs parameter to change that behaviour. All Search::Elasticsearch HTTP backends are now fork safe. Added track_scores param to search() Added create param to indices.put_template() Removed index_templates param from cluster.state() Removed indices_boost param from search() Added percolate_format param to percolate() Added cat.fielddata() 1.12 2014-05-09 Fixed bug when trying to reindex from a subref Added search_shards() Added char_filters to indices.analyze() Removed index_templates from cluster.state() Added conf to TestServer for passing arbitrary config 1.11 2014-04-23 Switched default Serializer::JSON to use JSON::MaybeXS, and added Serializer backends for Cpanel::JSON::XS, JSON::XS and JSON::PP Added scroll_in_body flag for Scroll helper Added support for: * search_template() * snapshot->status() * indices->recovery() * benchmark() * list_benchmarks() * abort_benchmark() 1.10 2014-03-05 Moved all modules to Search::Elasticsearch namespace. See https://github.com/elasticsearch/elasticsearch-perl/issues/20 1.05 2014-03-05 Deprecated the Elasticsearch namespace in favour of Search::Elasticsearch. See https://github.com/elasticsearch/elasticsearch-perl/issues/20 Improved the Bulk->reindex() API. Now accepts a remote $es object. Improved documentation. Added Hijk backend. 1.04 2014-02-27 Changed the default Cxn to HTTPTiny v0.043. Now provides persistent connections and is a lot faster than LWP. Changed ES::Scroll to pass the scroll_id in the URL instead of the body. Better support for older versions and servers behind caching proxies. 1.03 2014-02-12 Fixed node sniffing to work across 0.90 and 1.0 1.02 2014-02-11 Fixed bug in Elasticsearch::Scroll::next when called in scalar context 1.01 2014-02-09 Fixed plugin loader to work with latest version of Module::Runtime which complains about undefined versions 1.00 2014-02-07 API updated to be compatible with v1.x branch of Elasticsearch. 
BACKWARDS COMPATIBILITY: To use this client with versions of Elasticsearch before 1.x, specify the client version as: $es = Elasticsearch->new( client => '0_90::Direct' ); 0.76 2013-12-02 Added support for send_get_body_as GET/POST/source Added timeout to bulk API 0.75 2013-10-24 Fixed the sniff regex to accommodate hostnames when present 0.74 2013-10-03 Fixed a timeout bug in LWP with persistent connections and bad params when using https 0.73 2013-10-02 Added Elasticsearch::Cxn::LWP Added Elasticsearch::TestServer Die with explanation if a user on a case-insensitive file system loads this module instead of ElasticSearch 0.72 2013-09-29 Added Elasticsearch::Bulk and Elasticsearch::Scroll Changed `https` to `use_https` for compatibility with elasticsearch-py Numerous fixes for different Perl versions, and Moo 1.003 now required 0.71 2013-09-24 Fixed dist.ini to list dependencies correctly 0.70 2013-09-24 Bumped version numbers because CPAN clashes with ElasticSearch.pm 0.04 2013-09-23 First release LICENSE100644000765000024 2636013001720020 16444 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
META.yml100644000765000024 276113001720020 16667 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01--- abstract: 'The official client for Elasticsearch' author: - 'Clinton Gormley ' build_requires: IO::Socket::SSL: '0' Log::Any::Adapter::Callback: '0.09' Test::Deep: '0' Test::Exception: '0' Test::More: '0.98' Test::SharedFork: '0' lib: '0' strict: '0' configure_requires: ExtUtils::MakeMaker: '0' dynamic_config: 0 generated_by: 'Dist::Zilla version 5.041, CPAN::Meta::Converter version 2.150001' license: apache meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: '1.4' name: Search-Elasticsearch recommends: Hijk: '0.25' IO::Socket::IP: '0.37' URI::Escape::XS: '0' requires: Any::URI::Escape: '0' Data::Dumper: '0' Devel::GlobalDestruction: '0' Encode: '0' File::Temp: '0' HTTP::Headers: '0' HTTP::Request: '0' HTTP::Tiny: '0.043' IO::Compress::Deflate: '0' IO::Compress::Gzip: '0' IO::Select: '0' IO::Socket: '0' IO::Uncompress::Gunzip: '0' IO::Uncompress::Inflate: '0' JSON::MaybeXS: '1.002002' JSON::PP: '0' LWP::UserAgent: '0' List::Util: '0' Log::Any: '1.02' Log::Any::Adapter: '0' MIME::Base64: '0' Module::Runtime: '0' Moo: '1.003' Moo::Role: '0' POSIX: '0' Package::Stash: '0.34' Scalar::Util: '0' Sub::Exporter: '0' Time::HiRes: '0' Try::Tiny: '0' URI: '0' namespace::clean: '0' overload: '0' warnings: '0' resources: bugtracker: https://github.com/elastic/elasticsearch-perl/issues repository: git://github.com/elastic/elasticsearch-perl.git version: '5.01' MANIFEST100644000765000024 1137113001720020 16564 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01# This file was automatically generated by Dist::Zilla::Plugin::Manifest v5.041. Changes LICENSE MANIFEST META.json META.yml Makefile.PL README lib/Search/Elasticsearch.pm lib/Search/Elasticsearch/Client/5_0.pm lib/Search/Elasticsearch/Client/5_0/Bulk.pm lib/Search/Elasticsearch/Client/5_0/Direct.pm lib/Search/Elasticsearch/Client/5_0/Direct/Cat.pm lib/Search/Elasticsearch/Client/5_0/Direct/Cluster.pm lib/Search/Elasticsearch/Client/5_0/Direct/Indices.pm lib/Search/Elasticsearch/Client/5_0/Direct/Ingest.pm lib/Search/Elasticsearch/Client/5_0/Direct/Nodes.pm lib/Search/Elasticsearch/Client/5_0/Direct/Snapshot.pm lib/Search/Elasticsearch/Client/5_0/Direct/Tasks.pm lib/Search/Elasticsearch/Client/5_0/Role/API.pm lib/Search/Elasticsearch/Client/5_0/Role/Bulk.pm lib/Search/Elasticsearch/Client/5_0/Role/Scroll.pm lib/Search/Elasticsearch/Client/5_0/Scroll.pm lib/Search/Elasticsearch/Cxn/Factory.pm lib/Search/Elasticsearch/Cxn/HTTPTiny.pm lib/Search/Elasticsearch/Cxn/Hijk.pm lib/Search/Elasticsearch/Cxn/LWP.pm lib/Search/Elasticsearch/CxnPool/Sniff.pm lib/Search/Elasticsearch/CxnPool/Static.pm lib/Search/Elasticsearch/CxnPool/Static/NoPing.pm lib/Search/Elasticsearch/Error.pm lib/Search/Elasticsearch/Logger/LogAny.pm lib/Search/Elasticsearch/Role/API.pm lib/Search/Elasticsearch/Role/Client.pm lib/Search/Elasticsearch/Role/Client/Direct.pm lib/Search/Elasticsearch/Role/Cxn.pm lib/Search/Elasticsearch/Role/CxnPool.pm lib/Search/Elasticsearch/Role/CxnPool/Sniff.pm lib/Search/Elasticsearch/Role/CxnPool/Static.pm lib/Search/Elasticsearch/Role/CxnPool/Static/NoPing.pm lib/Search/Elasticsearch/Role/Is_Sync.pm lib/Search/Elasticsearch/Role/Logger.pm lib/Search/Elasticsearch/Role/Serializer.pm lib/Search/Elasticsearch/Role/Serializer/JSON.pm lib/Search/Elasticsearch/Role/Transport.pm lib/Search/Elasticsearch/Serializer/JSON.pm lib/Search/Elasticsearch/Serializer/JSON/Cpanel.pm 
lib/Search/Elasticsearch/Serializer/JSON/PP.pm lib/Search/Elasticsearch/Serializer/JSON/XS.pm lib/Search/Elasticsearch/TestServer.pm lib/Search/Elasticsearch/Transport.pm lib/Search/Elasticsearch/Util.pm t/10_Basic/10_load.t t/20_Serializer/10_load_cpanel.t t/20_Serializer/11_load_xs.t t/20_Serializer/12_load_pp.t t/20_Serializer/13_preload_cpanel.t t/20_Serializer/14_preload_xs.t t/20_Serializer/20_xs_encode_decode.t t/20_Serializer/21_xs_encode_bulk.t t/20_Serializer/22_xs_encode_pretty.t t/20_Serializer/30_cpanel_encode_decode.t t/20_Serializer/31_cpanel_encode_bulk.t t/20_Serializer/32_cpanel_encode_pretty.t t/20_Serializer/40_pp_encode_decode.t t/20_Serializer/41_pp_encode_bulk.t t/20_Serializer/42_pp_encode_pretty.t t/20_Serializer/encode_bulk.pl t/20_Serializer/encode_decode.pl t/20_Serializer/encode_pretty.pl t/30_Logger/10_explicit.t t/30_Logger/20_implicit.t t/30_Logger/30_log_methods.t t/30_Logger/40_trace_request.t t/30_Logger/50_trace_response.t t/30_Logger/60_trace_error.t t/30_Logger/70_trace_comment.t t/30_Logger/80_deprecation_methods.t t/30_Logger/90_error_json.t t/40_Transport/10_tidy_request.t t/40_Transport/20_send_body_as.t t/40_Transport/30_perform_request.t t/50_Cxn_Pool/10_static_normal.t t/50_Cxn_Pool/11_static_node_missing.t t/50_Cxn_Pool/12_static_node_fails.t t/50_Cxn_Pool/13_static_node_timesout.t t/50_Cxn_Pool/14_static_both_nodes_timeout.t t/50_Cxn_Pool/15_static_both_nodes_fail.t t/50_Cxn_Pool/16_static_nodes_starting.t t/50_Cxn_Pool/17_static_runaway_nodes.t t/50_Cxn_Pool/30_sniff_normal.t t/50_Cxn_Pool/31_sniff_new_nodes.t t/50_Cxn_Pool/32_sniff_node_fails.t t/50_Cxn_Pool/33_sniff_both_nodes_fail.t t/50_Cxn_Pool/34_sniff_node_timeout.t t/50_Cxn_Pool/35_sniff_both_nodes_timeout.t t/50_Cxn_Pool/36_sniff_nodes_starting.t t/50_Cxn_Pool/37_sniff_runaway_nodes.t t/50_Cxn_Pool/38_bad_sniff.t t/50_Cxn_Pool/39_sniff_max_content.t t/50_Cxn_Pool/40_sniff_extract_host.t t/50_Cxn_Pool/50_noping_normal.t t/50_Cxn_Pool/51_noping_node_fails.t t/50_Cxn_Pool/52_noping_node_timesout.t t/50_Cxn_Pool/53_noping_all_nodes_fail.t t/50_Cxn_Pool/54_noping_nodes_starting.t t/50_Cxn_Pool/55_noping_runaway_nodes.t t/50_Cxn_Pool/56_max_retries.t t/60_Cxn/10_basic.t t/60_Cxn/20_process_response.t t/60_Cxn/30_http.t t/95_TestServer/00_test_server.t t/95_TestServer/10_test_server_fork.t t/Client_5_0/00_print_version.t t/Client_5_0/10_live.t t/Client_5_0/15_conflict.t t/Client_5_0/20_fork_httptiny.t t/Client_5_0/21_fork_lwp.t t/Client_5_0/22_fork_hijk.t t/Client_5_0/30_bulk_add_action.t t/Client_5_0/31_bulk_helpers.t t/Client_5_0/32_bulk_flush.t t/Client_5_0/33_bulk_errors.t t/Client_5_0/34_bulk_cxn_errors.t t/Client_5_0/40_scroll.t t/Client_5_0/60_auth_httptiny.t t/Client_5_0/61_auth_lwp.t t/author-eol.t t/author-no-tabs.t t/author-pod-syntax.t t/lib/LogCallback.pl t/lib/MockCxn.pm t/lib/bad_cacert.pem t/lib/default_cxn.pl t/lib/es_sync.pl t/lib/es_sync_auth.pl t/lib/es_sync_fork.pl t/lib/index_test_data.pl META.json100644000765000024 573313001720020 17041 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01{ "abstract" : "The official client for Elasticsearch", "author" : [ "Clinton Gormley " ], "dynamic_config" : 0, "generated_by" : "Dist::Zilla version 5.041, CPAN::Meta::Converter version 2.150001", "license" : [ "apache_2_0" ], "meta-spec" : { "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec", "version" : 2 }, "name" : "Search-Elasticsearch", "prereqs" : { "configure" : { "requires" : { "ExtUtils::MakeMaker" : "0" } }, "develop" : { "requires" : { "Test::EOL" : "0", 
"Test::More" : "0.88", "Test::NoTabs" : "0", "Test::Pod" : "1.41" } }, "runtime" : { "recommends" : { "Hijk" : "0.25", "IO::Socket::IP" : "0.37", "URI::Escape::XS" : "0" }, "requires" : { "Any::URI::Escape" : "0", "Data::Dumper" : "0", "Devel::GlobalDestruction" : "0", "Encode" : "0", "File::Temp" : "0", "HTTP::Headers" : "0", "HTTP::Request" : "0", "HTTP::Tiny" : "0.043", "IO::Compress::Deflate" : "0", "IO::Compress::Gzip" : "0", "IO::Select" : "0", "IO::Socket" : "0", "IO::Uncompress::Gunzip" : "0", "IO::Uncompress::Inflate" : "0", "JSON::MaybeXS" : "1.002002", "JSON::PP" : "0", "LWP::UserAgent" : "0", "List::Util" : "0", "Log::Any" : "1.02", "Log::Any::Adapter" : "0", "MIME::Base64" : "0", "Module::Runtime" : "0", "Moo" : "1.003", "Moo::Role" : "0", "POSIX" : "0", "Package::Stash" : "0.34", "Scalar::Util" : "0", "Sub::Exporter" : "0", "Time::HiRes" : "0", "Try::Tiny" : "0", "URI" : "0", "namespace::clean" : "0", "overload" : "0", "warnings" : "0" } }, "test" : { "recommends" : { "Cpanel::JSON::XS" : "0", "JSON::XS" : "0", "Mojo::IOLoop" : "0", "Mojo::UserAgent" : "0" }, "requires" : { "IO::Socket::SSL" : "0", "Log::Any::Adapter::Callback" : "0.09", "Test::Deep" : "0", "Test::Exception" : "0", "Test::More" : "0.98", "Test::SharedFork" : "0", "lib" : "0", "strict" : "0" } } }, "release_status" : "stable", "resources" : { "bugtracker" : { "web" : "https://github.com/elastic/elasticsearch-perl/issues" }, "repository" : { "type" : "git", "url" : "git://github.com/elastic/elasticsearch-perl.git", "web" : "https://github.com/elastic/elasticsearch-perl" } }, "version" : "5.01" } Makefile.PL100644000765000024 616613001720020 17373 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01# This file was automatically generated by Dist::Zilla::Plugin::MakeMaker v5.041. 
use strict; use warnings; use ExtUtils::MakeMaker; my %WriteMakefileArgs = ( "ABSTRACT" => "The official client for Elasticsearch", "AUTHOR" => "Clinton Gormley ", "CONFIGURE_REQUIRES" => { "ExtUtils::MakeMaker" => 0 }, "DISTNAME" => "Search-Elasticsearch", "LICENSE" => "apache", "NAME" => "Search::Elasticsearch", "PREREQ_PM" => { "Any::URI::Escape" => 0, "Data::Dumper" => 0, "Devel::GlobalDestruction" => 0, "Encode" => 0, "File::Temp" => 0, "HTTP::Headers" => 0, "HTTP::Request" => 0, "HTTP::Tiny" => "0.043", "IO::Compress::Deflate" => 0, "IO::Compress::Gzip" => 0, "IO::Select" => 0, "IO::Socket" => 0, "IO::Uncompress::Gunzip" => 0, "IO::Uncompress::Inflate" => 0, "JSON::MaybeXS" => "1.002002", "JSON::PP" => 0, "LWP::UserAgent" => 0, "List::Util" => 0, "Log::Any" => "1.02", "Log::Any::Adapter" => 0, "MIME::Base64" => 0, "Module::Runtime" => 0, "Moo" => "1.003", "Moo::Role" => 0, "POSIX" => 0, "Package::Stash" => "0.34", "Scalar::Util" => 0, "Sub::Exporter" => 0, "Time::HiRes" => 0, "Try::Tiny" => 0, "URI" => 0, "namespace::clean" => 0, "overload" => 0, "warnings" => 0 }, "TEST_REQUIRES" => { "IO::Socket::SSL" => 0, "Log::Any::Adapter::Callback" => "0.09", "Test::Deep" => 0, "Test::Exception" => 0, "Test::More" => "0.98", "Test::SharedFork" => 0, "lib" => 0, "strict" => 0 }, "VERSION" => "5.01", "test" => { "TESTS" => "t/*.t t/10_Basic/*.t t/20_Serializer/*.t t/30_Logger/*.t t/40_Transport/*.t t/50_Cxn_Pool/*.t t/60_Cxn/*.t t/95_TestServer/*.t t/Client_5_0/*.t" } ); my %FallbackPrereqs = ( "Any::URI::Escape" => 0, "Data::Dumper" => 0, "Devel::GlobalDestruction" => 0, "Encode" => 0, "File::Temp" => 0, "HTTP::Headers" => 0, "HTTP::Request" => 0, "HTTP::Tiny" => "0.043", "IO::Compress::Deflate" => 0, "IO::Compress::Gzip" => 0, "IO::Select" => 0, "IO::Socket" => 0, "IO::Socket::SSL" => 0, "IO::Uncompress::Gunzip" => 0, "IO::Uncompress::Inflate" => 0, "JSON::MaybeXS" => "1.002002", "JSON::PP" => 0, "LWP::UserAgent" => 0, "List::Util" => 0, "Log::Any" => "1.02", "Log::Any::Adapter" => 0, "Log::Any::Adapter::Callback" => "0.09", "MIME::Base64" => 0, "Module::Runtime" => 0, "Moo" => "1.003", "Moo::Role" => 0, "POSIX" => 0, "Package::Stash" => "0.34", "Scalar::Util" => 0, "Sub::Exporter" => 0, "Test::Deep" => 0, "Test::Exception" => 0, "Test::More" => "0.98", "Test::SharedFork" => 0, "Time::HiRes" => 0, "Try::Tiny" => 0, "URI" => 0, "lib" => 0, "namespace::clean" => 0, "overload" => 0, "strict" => 0, "warnings" => 0 ); unless ( eval { ExtUtils::MakeMaker->VERSION(6.63_03) } ) { delete $WriteMakefileArgs{TEST_REQUIRES}; delete $WriteMakefileArgs{BUILD_REQUIRES}; $WriteMakefileArgs{PREREQ_PM} = \%FallbackPrereqs; } delete $WriteMakefileArgs{CONFIGURE_REQUIRES} unless eval { ExtUtils::MakeMaker->VERSION(6.52) }; WriteMakefile(%WriteMakefileArgs); t000755000765000024 013001720020 15513 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01author-eol.t100644000765000024 1356113001720020 20145 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t BEGIN { unless ($ENV{AUTHOR_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for testing by the author'); } } use strict; use warnings; # this test was generated with Dist::Zilla::Plugin::Test::EOL 0.18 use Test::More 0.88; use Test::EOL; my @files = ( 'lib/Search/Elasticsearch.pm', 'lib/Search/Elasticsearch/Client/5_0.pm', 'lib/Search/Elasticsearch/Client/5_0/Bulk.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Cat.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Cluster.pm', 
'lib/Search/Elasticsearch/Client/5_0/Direct/Indices.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Ingest.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Nodes.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Snapshot.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Tasks.pm', 'lib/Search/Elasticsearch/Client/5_0/Role/API.pm', 'lib/Search/Elasticsearch/Client/5_0/Role/Bulk.pm', 'lib/Search/Elasticsearch/Client/5_0/Role/Scroll.pm', 'lib/Search/Elasticsearch/Client/5_0/Scroll.pm', 'lib/Search/Elasticsearch/Cxn/Factory.pm', 'lib/Search/Elasticsearch/Cxn/HTTPTiny.pm', 'lib/Search/Elasticsearch/Cxn/Hijk.pm', 'lib/Search/Elasticsearch/Cxn/LWP.pm', 'lib/Search/Elasticsearch/CxnPool/Sniff.pm', 'lib/Search/Elasticsearch/CxnPool/Static.pm', 'lib/Search/Elasticsearch/CxnPool/Static/NoPing.pm', 'lib/Search/Elasticsearch/Error.pm', 'lib/Search/Elasticsearch/Logger/LogAny.pm', 'lib/Search/Elasticsearch/Role/API.pm', 'lib/Search/Elasticsearch/Role/Client.pm', 'lib/Search/Elasticsearch/Role/Client/Direct.pm', 'lib/Search/Elasticsearch/Role/Cxn.pm', 'lib/Search/Elasticsearch/Role/CxnPool.pm', 'lib/Search/Elasticsearch/Role/CxnPool/Sniff.pm', 'lib/Search/Elasticsearch/Role/CxnPool/Static.pm', 'lib/Search/Elasticsearch/Role/CxnPool/Static/NoPing.pm', 'lib/Search/Elasticsearch/Role/Is_Sync.pm', 'lib/Search/Elasticsearch/Role/Logger.pm', 'lib/Search/Elasticsearch/Role/Serializer.pm', 'lib/Search/Elasticsearch/Role/Serializer/JSON.pm', 'lib/Search/Elasticsearch/Role/Transport.pm', 'lib/Search/Elasticsearch/Serializer/JSON.pm', 'lib/Search/Elasticsearch/Serializer/JSON/Cpanel.pm', 'lib/Search/Elasticsearch/Serializer/JSON/PP.pm', 'lib/Search/Elasticsearch/Serializer/JSON/XS.pm', 'lib/Search/Elasticsearch/TestServer.pm', 'lib/Search/Elasticsearch/Transport.pm', 'lib/Search/Elasticsearch/Util.pm', 't/10_Basic/10_load.t', 't/20_Serializer/10_load_cpanel.t', 't/20_Serializer/11_load_xs.t', 't/20_Serializer/12_load_pp.t', 't/20_Serializer/13_preload_cpanel.t', 't/20_Serializer/14_preload_xs.t', 't/20_Serializer/20_xs_encode_decode.t', 't/20_Serializer/21_xs_encode_bulk.t', 't/20_Serializer/22_xs_encode_pretty.t', 't/20_Serializer/30_cpanel_encode_decode.t', 't/20_Serializer/31_cpanel_encode_bulk.t', 't/20_Serializer/32_cpanel_encode_pretty.t', 't/20_Serializer/40_pp_encode_decode.t', 't/20_Serializer/41_pp_encode_bulk.t', 't/20_Serializer/42_pp_encode_pretty.t', 't/20_Serializer/encode_bulk.pl', 't/20_Serializer/encode_decode.pl', 't/20_Serializer/encode_pretty.pl', 't/30_Logger/10_explicit.t', 't/30_Logger/20_implicit.t', 't/30_Logger/30_log_methods.t', 't/30_Logger/40_trace_request.t', 't/30_Logger/50_trace_response.t', 't/30_Logger/60_trace_error.t', 't/30_Logger/70_trace_comment.t', 't/30_Logger/80_deprecation_methods.t', 't/30_Logger/90_error_json.t', 't/40_Transport/10_tidy_request.t', 't/40_Transport/20_send_body_as.t', 't/40_Transport/30_perform_request.t', 't/50_Cxn_Pool/10_static_normal.t', 't/50_Cxn_Pool/11_static_node_missing.t', 't/50_Cxn_Pool/12_static_node_fails.t', 't/50_Cxn_Pool/13_static_node_timesout.t', 't/50_Cxn_Pool/14_static_both_nodes_timeout.t', 't/50_Cxn_Pool/15_static_both_nodes_fail.t', 't/50_Cxn_Pool/16_static_nodes_starting.t', 't/50_Cxn_Pool/17_static_runaway_nodes.t', 't/50_Cxn_Pool/30_sniff_normal.t', 't/50_Cxn_Pool/31_sniff_new_nodes.t', 't/50_Cxn_Pool/32_sniff_node_fails.t', 't/50_Cxn_Pool/33_sniff_both_nodes_fail.t', 't/50_Cxn_Pool/34_sniff_node_timeout.t', 't/50_Cxn_Pool/35_sniff_both_nodes_timeout.t', 't/50_Cxn_Pool/36_sniff_nodes_starting.t', 
't/50_Cxn_Pool/37_sniff_runaway_nodes.t', 't/50_Cxn_Pool/38_bad_sniff.t', 't/50_Cxn_Pool/39_sniff_max_content.t', 't/50_Cxn_Pool/40_sniff_extract_host.t', 't/50_Cxn_Pool/50_noping_normal.t', 't/50_Cxn_Pool/51_noping_node_fails.t', 't/50_Cxn_Pool/52_noping_node_timesout.t', 't/50_Cxn_Pool/53_noping_all_nodes_fail.t', 't/50_Cxn_Pool/54_noping_nodes_starting.t', 't/50_Cxn_Pool/55_noping_runaway_nodes.t', 't/50_Cxn_Pool/56_max_retries.t', 't/60_Cxn/10_basic.t', 't/60_Cxn/20_process_response.t', 't/60_Cxn/30_http.t', 't/95_TestServer/00_test_server.t', 't/95_TestServer/10_test_server_fork.t', 't/Client_5_0/00_print_version.t', 't/Client_5_0/10_live.t', 't/Client_5_0/15_conflict.t', 't/Client_5_0/20_fork_httptiny.t', 't/Client_5_0/21_fork_lwp.t', 't/Client_5_0/22_fork_hijk.t', 't/Client_5_0/30_bulk_add_action.t', 't/Client_5_0/31_bulk_helpers.t', 't/Client_5_0/32_bulk_flush.t', 't/Client_5_0/33_bulk_errors.t', 't/Client_5_0/34_bulk_cxn_errors.t', 't/Client_5_0/40_scroll.t', 't/Client_5_0/60_auth_httptiny.t', 't/Client_5_0/61_auth_lwp.t', 't/author-eol.t', 't/author-no-tabs.t', 't/author-pod-syntax.t', 't/lib/LogCallback.pl', 't/lib/MockCxn.pm', 't/lib/bad_cacert.pem', 't/lib/default_cxn.pl', 't/lib/es_sync.pl', 't/lib/es_sync_auth.pl', 't/lib/es_sync_fork.pl', 't/lib/index_test_data.pl' ); eol_unix_ok($_, { trailing_whitespace => 1 }) foreach @files; done_testing; lib000755000765000024 013001720020 16261 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/tes_sync.pl100644000765000024 306313001720020 20423 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/libuse Search::Elasticsearch; use Test::More; use strict; use warnings; my $trace = !$ENV{TRACE} ? undef : $ENV{TRACE} eq '1' ? 'Stderr' : [ 'File', $ENV{TRACE} ]; die 'No $ENV{ES_VERSION} specified' unless $ENV{ES_VERSION}; my $api = "$ENV{ES_VERSION}::Direct"; my $body = $ENV{ES_BODY} || 'GET'; my $cxn = $ENV{ES_CXN} || do "default_cxn.pl" || die( $@ || $! 
); my $cxn_pool = $ENV{ES_CXN_POOL} || 'Static'; my $timeout = $ENV{ES_TIMEOUT} || 30; my @plugins = split /,/, ( $ENV{ES_PLUGINS} || '' ); our %Auth; my $es; if ( $ENV{ES} ) { eval { $es = Search::Elasticsearch->new( nodes => $ENV{ES}, trace_to => $trace, cxn => $cxn, cxn_pool => $cxn_pool, client => $api, send_get_body_as => $body, request_timeout => $timeout, plugins => \@plugins, %Auth ); $es->ping unless $ENV{ES_SKIP_PING}; 1; } || do { diag $@; undef $es; }; } unless ($es) { plan skip_all => 'No Elasticsearch test node available'; exit; } unless ( $ENV{ES_SKIP_PING} ) { my $version = $es->info->{version}{number}; my $api = $es->api_version; unless ( $api eq '0_90' && $version =~ /^0\.9/ || substr( $api, 0, 1 ) eq substr( $version, 0, 1 ) ) { plan skip_all => "Tests are for API version $api but Elasticsearch is version $version\n"; exit; } } return $es; MockCxn.pm100644000765000024 656013001720020 20330 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/libpackage MockCxn; use strict; use warnings; our $VERSION = $Search::Elasticsearch::VERSION; use Data::Dumper; use Moo; with 'Search::Elasticsearch::Role::Cxn', 'Search::Elasticsearch::Role::Is_Sync'; use Sub::Exporter -setup => { exports => [ qw( mock_static_client mock_sniff_client mock_noping_client ) ] }; our $i = 0; has 'mock_responses' => ( is => 'rw', required => 1 ); has 'marked_live' => ( is => 'rw', default => sub {0} ); has 'node_num' => ( is => 'ro', default => sub { ++$i } ); #=================================== sub BUILD { #=================================== my $self = shift; $self->logger->debugf( "[%s-%s] CREATED", $self->node_num, $self->host ); } #=================================== sub error_from_text { return $_[2] } #=================================== #=================================== sub perform_request { #=================================== my $self = shift; my $params = shift; my $response = shift @{ $self->mock_responses } or die "Mock responses exhausted"; if ( my $node = $response->{node} ) { die "Mock response handled by wrong node [" . $self->node_num . "]: " . Dumper($response) unless $node eq $self->node_num; } my $log_msg; # Sniff request if ( my $nodes = $response->{sniff} ) { $log_msg = "SNIFF: [" . ( join ", ", @$nodes ) . "]"; $response->{code} ||= 200; my $i = 1; unless ( $response->{error} ) { $response->{content} = $self->serializer->encode( { nodes => { map { 'node_' . $i++ => { http_address => "inet[/$_]" } } @$nodes } } ); } } # Normal request elsif ( $response->{code} ) { $log_msg = "REQUEST: " . ( $response->{error} || $response->{code} ); } # Ping request else { $log_msg = "PING: " . ( $response->{ping} ? 'OK' : 'NOT_OK' ); $response = $response->{ping} ? { code => 200 } : { code => 500, error => 'Cxn' }; } $self->logger->debugf( "[%s-%s] %s", $self->node_num, $self->host, $log_msg ); return $self->process_response( $params, # request $response->{code}, # code $response->{error}, # msg $response->{content}, # body { 'content-type' => 'application/json' } ); } #### EXPORTS ### my $trace = !$ENV{TRACE} ? undef : $ENV{TRACE} eq '1' ? 
'Stderr' : [ 'File', $ENV{TRACE} ]; #=================================== sub mock_static_client { _mock_client( 'Static', @_ ) } sub mock_sniff_client { _mock_client( 'Sniff', @_ ) } sub mock_noping_client { _mock_client( 'Static::NoPing', @_ ) } #=================================== #=================================== sub _mock_client { #=================================== my $pool = shift; my $params = shift; $i = 0; return Search::Elasticsearch->new( cxn => '+MockCxn', cxn_pool => $pool, mock_responses => \@_, randomize_cxns => 0, log_to => $trace, %$params, )->transport; } 1 60_Cxn000755000765000024 013001720020 16550 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t30_http.t100644000765000024 1116113001720020 20376 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/60_Cxnuse Test::More; use Test::Exception; use Test::Deep; use Search::Elasticsearch; sub is_cxn(@); ### Scalar nodes ### is_cxn "Default", new_cxn(), {}; is_cxn "Host", new_cxn( nodes => 'foo' ), { host => 'foo', port => '80', uri => 'http://foo:80' }; is_cxn "Host:Port", new_cxn( nodes => 'foo:1000' ), { host => 'foo', port => '1000', uri => 'http://foo:1000' }; is_cxn "HTTPS", new_cxn( nodes => 'https://foo' ), { scheme => 'https', host => 'foo', port => '443', uri => 'https://foo:443' }; is_cxn "Path", new_cxn( nodes => 'foo/bar' ), { host => 'foo', port => '80', uri => 'http://foo:80/bar' }; is_cxn "Userinfo", new_cxn( nodes => 'http://foo:bar@localhost/' ), { port => '80', uri => 'http://localhost:80', default_headers => { Authorization => 'Basic Zm9vOmJhcg==' }, userinfo => 'foo:bar' }; ### Options with scalar ### is_cxn "HTTPS option", new_cxn( nodes => 'foo', use_https => 1 ), { scheme => 'https', host => 'foo', port => '443', uri => 'https://foo:443' }; is_cxn "HTTPS option with settings", new_cxn( nodes => 'http://foo', use_https => 1 ), { scheme => 'http', host => 'foo', port => '80', uri => 'http://foo:80' }; is_cxn "Port option", new_cxn( nodes => 'foo', port => 456 ), { host => 'foo', port => '456', uri => 'http://foo:456' }; is_cxn "Port option with settings", new_cxn( nodes => 'foo:123', port => 456 ), { host => 'foo', port => '123', uri => 'http://foo:123' }; is_cxn "Path option", new_cxn( nodes => 'foo', path_prefix => '/bar/' ), { host => 'foo', port => 80, uri => 'http://foo:80/bar' }; is_cxn "Path option with settings", new_cxn( nodes => 'foo/baz/', path_prefix => '/bar/' ), { host => 'foo', port => 80, uri => 'http://foo:80/baz' }; is_cxn "Userinfo option", new_cxn( nodes => 'foo', userinfo => 'foo:bar' ), { host => 'foo', port => 80, uri => 'http://foo:80', default_headers => { Authorization => 'Basic Zm9vOmJhcg==' }, userinfo => 'foo:bar' }; is_cxn "Userinfo option with settings", new_cxn( nodes => 'foo:bar@foo', userinfo => 'foo:baz' ), { host => 'foo', port => 80, uri => 'http://foo:80', default_headers => { Authorization => 'Basic Zm9vOmJhcg==' }, userinfo => 'foo:bar' }; is_cxn "Deflate option", new_cxn( deflate => 1 ), { default_headers => { 'Accept-Encoding' => 'deflate' } }; ### Hash ### is_cxn "Hash host", new_cxn( nodes => { host => 'foo' } ), { host => 'foo', port => 80, uri => 'http://foo:80' }; is_cxn "Hash port", new_cxn( nodes => { port => '123' } ), { port => 123, uri => 'http://localhost:123' }; is_cxn "Hash path", new_cxn( nodes => { path => 'baz' } ), { port => 80, uri => 'http://localhost:80/baz' }; # Build URI is new_cxn()->build_uri( { path => '/' } ), 'http://localhost:9200/', "Default URI"; is new_cxn( { nodes => 'http://localhost:9200/foo' } ) ->build_uri( 
{ path => '/_search' } ), 'http://localhost:9200/foo/_search', "URI with path"; is new_cxn( { default_qs_params => { session => 'key' } } ) ->build_uri( { path => '/_search' } ), 'http://localhost:9200/_search?session=key', "default_qs_params"; my $uri = new_cxn( { default_qs_params => { session => 'key' } } ) ->build_uri( { path => '/_search', qs => { foo => 'bar' } } ); like $uri, qr{^http://localhost:9200/_search?}, "default_qs_params and qs - 1"; like $uri, qr{session=key}, "default_qs_params and qs - 2"; like $uri, qr{foo=bar}, "default_qs_params and qs - 3"; is new_cxn( { default_qs_params => { session => 'key' } } ) ->build_uri( { path => '/_search', qs => { session => 'bar' } } ), 'http://localhost:9200/_search?session=bar', "default_qs_params overwritten"; done_testing; #=================================== sub is_cxn (@) { #=================================== my ( $title, $cxn, $params ) = @_; my %params = ( host => 'localhost', port => '9200', scheme => 'http', uri => 'http://localhost:9200', default_headers => {}, userinfo => '', %$params ); for my $key ( sort keys %params ) { my $val = $cxn->$key; $val = "$val" unless ref $val eq 'HASH'; cmp_deeply $val, $params{$key}, "$title - $key"; } } #=================================== sub new_cxn { #=================================== return Search::Elasticsearch->new(@_)->transport->cxn_pool->cxns->[0]; } author-no-tabs.t100644000765000024 1352713001720020 20733 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t BEGIN { unless ($ENV{AUTHOR_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for testing by the author'); } } use strict; use warnings; # this test was generated with Dist::Zilla::Plugin::Test::NoTabs 0.15 use Test::More 0.88; use Test::NoTabs; my @files = ( 'lib/Search/Elasticsearch.pm', 'lib/Search/Elasticsearch/Client/5_0.pm', 'lib/Search/Elasticsearch/Client/5_0/Bulk.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Cat.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Cluster.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Indices.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Ingest.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Nodes.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Snapshot.pm', 'lib/Search/Elasticsearch/Client/5_0/Direct/Tasks.pm', 'lib/Search/Elasticsearch/Client/5_0/Role/API.pm', 'lib/Search/Elasticsearch/Client/5_0/Role/Bulk.pm', 'lib/Search/Elasticsearch/Client/5_0/Role/Scroll.pm', 'lib/Search/Elasticsearch/Client/5_0/Scroll.pm', 'lib/Search/Elasticsearch/Cxn/Factory.pm', 'lib/Search/Elasticsearch/Cxn/HTTPTiny.pm', 'lib/Search/Elasticsearch/Cxn/Hijk.pm', 'lib/Search/Elasticsearch/Cxn/LWP.pm', 'lib/Search/Elasticsearch/CxnPool/Sniff.pm', 'lib/Search/Elasticsearch/CxnPool/Static.pm', 'lib/Search/Elasticsearch/CxnPool/Static/NoPing.pm', 'lib/Search/Elasticsearch/Error.pm', 'lib/Search/Elasticsearch/Logger/LogAny.pm', 'lib/Search/Elasticsearch/Role/API.pm', 'lib/Search/Elasticsearch/Role/Client.pm', 'lib/Search/Elasticsearch/Role/Client/Direct.pm', 'lib/Search/Elasticsearch/Role/Cxn.pm', 'lib/Search/Elasticsearch/Role/CxnPool.pm', 'lib/Search/Elasticsearch/Role/CxnPool/Sniff.pm', 'lib/Search/Elasticsearch/Role/CxnPool/Static.pm', 'lib/Search/Elasticsearch/Role/CxnPool/Static/NoPing.pm', 'lib/Search/Elasticsearch/Role/Is_Sync.pm', 'lib/Search/Elasticsearch/Role/Logger.pm', 'lib/Search/Elasticsearch/Role/Serializer.pm', 'lib/Search/Elasticsearch/Role/Serializer/JSON.pm', 'lib/Search/Elasticsearch/Role/Transport.pm', 
'lib/Search/Elasticsearch/Serializer/JSON.pm', 'lib/Search/Elasticsearch/Serializer/JSON/Cpanel.pm', 'lib/Search/Elasticsearch/Serializer/JSON/PP.pm', 'lib/Search/Elasticsearch/Serializer/JSON/XS.pm', 'lib/Search/Elasticsearch/TestServer.pm', 'lib/Search/Elasticsearch/Transport.pm', 'lib/Search/Elasticsearch/Util.pm', 't/10_Basic/10_load.t', 't/20_Serializer/10_load_cpanel.t', 't/20_Serializer/11_load_xs.t', 't/20_Serializer/12_load_pp.t', 't/20_Serializer/13_preload_cpanel.t', 't/20_Serializer/14_preload_xs.t', 't/20_Serializer/20_xs_encode_decode.t', 't/20_Serializer/21_xs_encode_bulk.t', 't/20_Serializer/22_xs_encode_pretty.t', 't/20_Serializer/30_cpanel_encode_decode.t', 't/20_Serializer/31_cpanel_encode_bulk.t', 't/20_Serializer/32_cpanel_encode_pretty.t', 't/20_Serializer/40_pp_encode_decode.t', 't/20_Serializer/41_pp_encode_bulk.t', 't/20_Serializer/42_pp_encode_pretty.t', 't/20_Serializer/encode_bulk.pl', 't/20_Serializer/encode_decode.pl', 't/20_Serializer/encode_pretty.pl', 't/30_Logger/10_explicit.t', 't/30_Logger/20_implicit.t', 't/30_Logger/30_log_methods.t', 't/30_Logger/40_trace_request.t', 't/30_Logger/50_trace_response.t', 't/30_Logger/60_trace_error.t', 't/30_Logger/70_trace_comment.t', 't/30_Logger/80_deprecation_methods.t', 't/30_Logger/90_error_json.t', 't/40_Transport/10_tidy_request.t', 't/40_Transport/20_send_body_as.t', 't/40_Transport/30_perform_request.t', 't/50_Cxn_Pool/10_static_normal.t', 't/50_Cxn_Pool/11_static_node_missing.t', 't/50_Cxn_Pool/12_static_node_fails.t', 't/50_Cxn_Pool/13_static_node_timesout.t', 't/50_Cxn_Pool/14_static_both_nodes_timeout.t', 't/50_Cxn_Pool/15_static_both_nodes_fail.t', 't/50_Cxn_Pool/16_static_nodes_starting.t', 't/50_Cxn_Pool/17_static_runaway_nodes.t', 't/50_Cxn_Pool/30_sniff_normal.t', 't/50_Cxn_Pool/31_sniff_new_nodes.t', 't/50_Cxn_Pool/32_sniff_node_fails.t', 't/50_Cxn_Pool/33_sniff_both_nodes_fail.t', 't/50_Cxn_Pool/34_sniff_node_timeout.t', 't/50_Cxn_Pool/35_sniff_both_nodes_timeout.t', 't/50_Cxn_Pool/36_sniff_nodes_starting.t', 't/50_Cxn_Pool/37_sniff_runaway_nodes.t', 't/50_Cxn_Pool/38_bad_sniff.t', 't/50_Cxn_Pool/39_sniff_max_content.t', 't/50_Cxn_Pool/40_sniff_extract_host.t', 't/50_Cxn_Pool/50_noping_normal.t', 't/50_Cxn_Pool/51_noping_node_fails.t', 't/50_Cxn_Pool/52_noping_node_timesout.t', 't/50_Cxn_Pool/53_noping_all_nodes_fail.t', 't/50_Cxn_Pool/54_noping_nodes_starting.t', 't/50_Cxn_Pool/55_noping_runaway_nodes.t', 't/50_Cxn_Pool/56_max_retries.t', 't/60_Cxn/10_basic.t', 't/60_Cxn/20_process_response.t', 't/60_Cxn/30_http.t', 't/95_TestServer/00_test_server.t', 't/95_TestServer/10_test_server_fork.t', 't/Client_5_0/00_print_version.t', 't/Client_5_0/10_live.t', 't/Client_5_0/15_conflict.t', 't/Client_5_0/20_fork_httptiny.t', 't/Client_5_0/21_fork_lwp.t', 't/Client_5_0/22_fork_hijk.t', 't/Client_5_0/30_bulk_add_action.t', 't/Client_5_0/31_bulk_helpers.t', 't/Client_5_0/32_bulk_flush.t', 't/Client_5_0/33_bulk_errors.t', 't/Client_5_0/34_bulk_cxn_errors.t', 't/Client_5_0/40_scroll.t', 't/Client_5_0/60_auth_httptiny.t', 't/Client_5_0/61_auth_lwp.t', 't/author-eol.t', 't/author-no-tabs.t', 't/author-pod-syntax.t', 't/lib/LogCallback.pl', 't/lib/MockCxn.pm', 't/lib/bad_cacert.pem', 't/lib/default_cxn.pl', 't/lib/es_sync.pl', 't/lib/es_sync_auth.pl', 't/lib/es_sync_fork.pl', 't/lib/index_test_data.pl' ); notabs_ok($_) foreach @files; done_testing; 10_basic.t100644000765000024 233413001720020 20460 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/60_Cxnuse Test::More; use Search::Elasticsearch; my $c = 
Search::Elasticsearch->new->transport->cxn_pool->cxns->[0]; ok $c->does('Search::Elasticsearch::Role::Cxn'), 'Does Search::Elasticsearch::Role::Cxn'; # MARK LIVE $c->mark_live; ok $c->is_live, "Cxn is live"; is $c->ping_failures, 0, "No ping failures"; is $c->next_ping, 0, "No ping scheduled"; # MARK DEAD $c->mark_dead; ok $c->is_dead, "Cxn is dead"; is $c->ping_failures, 1, "Has ping failure"; ok $c->next_ping > time(), "Ping scheduled"; ok $c->next_ping <= time() + $c->dead_timeout, "Dead timeout x 1"; $c->mark_dead; ok $c->is_dead, "Cxn still dead"; is $c->ping_failures, 2, "Has 2 ping failures"; ok $c->next_ping > time(), "Ping scheduled"; ok $c->next_ping <= time() + 2 * $c->dead_timeout, "Dead timeout x 2"; $c->mark_dead for 1 .. 100; ok $c->is_dead, "Cxn still dead"; is $c->ping_failures, 102, "Has 102 ping failures"; ok $c->next_ping > time(), "Ping scheduled"; ok $c->next_ping <= time() + $c->max_dead_timeout, "Max dead timeout"; # FORCE PING $c->force_ping; ok $c->is_dead, "Cxn is dead after force ping"; is $c->ping_failures, 0, "Force ping has no ping failures"; is $c->next_ping, -1, "Next ping scheduled for now"; done_testing; 10_Basic000755000765000024 013001720020 17034 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t10_load.t100644000765000024 110313001720020 20573 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/10_Basicuse Test::More; BEGIN { use_ok('Search::Elasticsearch') } my ( $e, $p, $t ); ok $e = Search::Elasticsearch->new(), "new client"; ok $e->does('Search::Elasticsearch::Role::Client::Direct'), "client does Search::Elasticsearch::Role::Client::Direct"; isa_ok $t = $e->transport, 'Search::Elasticsearch::Transport', "transport"; isa_ok $p = $t->cxn_pool, 'Search::Elasticsearch::CxnPool::Static', "cxn_pool"; isa_ok $p->cxn_factory, 'Search::Elasticsearch::Cxn::Factory', "cxn_factory"; isa_ok $e->logger, 'Search::Elasticsearch::Logger::LogAny', "logger"; done_testing; bad_cacert.pem100644000765000024 241113001720020 21171 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/lib-----BEGIN CERTIFICATE----- MIIDijCCAvOgAwIBAgIJAIFQM5672YHcMA0GCSqGSIb3DQEBBQUAMIGLMRcwFQYD sQQKEw5LZXZpbiBUZXN0IE9yZzEbMBkGCSqGSIb3DQEJARYMa2V2aW5AZXMub3Jn MRIwEAYDVQQHEwlBbXN0ZXJkYW0xEjAQBgNVBAgTCUFtc3RlcmRhbTELMAkGA1UE BhMCTkwxHjAcBgNVBAMTFUtldmlucyBob3VzZSBvZiBjZXJ0czAeFw0xNDEwMTcy MzIyMjlaFw0xNTEwMTcyMzIyMjlaMIGLMRcwFQYDVQQKEw5LZXZpbiBUZXN0IE9y ZzEbMBkGCSqGSIb3DQEJARYMa2V2aW5AZXMub3JnMRIwEAYDVQQHEwlBbXN0ZXJk YW0xEjAQBgNVBAgTCUFtc3RlcmRhbTELMAkGA1UEBhMCTkwxHjAcBgNVBAMTFUtl dmlucyBob3VzZSBvZiBjZXJ0czCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA 9xG5d4JaJ2vFuyKGbzvAlHpAeIiOFuCOum9UXsUIeCCQn/q/BNlIaF+UQ+Y/yNJr 3zraL9oboVSJZph8CIN7dKmLSnnAe83cjlQQNosS1heUTSyVWC7dWCj3djO3xeT9 qTfhAj4a2OfvLHk2yT5Mp2cZYUnEKqCwhC98R7jKGtsCAwEAAaOB8zCB8DAMBgNV HRMEBTADAQH/MB0GA1UdDgQWBBQUtCQRtRzPojRpZ/3hanfZN3nxwjCBwAYDVR0j BIG4MIG1gBQUtCQRtRzPojRpZ/3hanfZN3nxwqGBkaSBjjCBizEXMBUGA1UEChMO S2V2aW4gVGVzdCBPcmcxGzAZBgkqhkiG9w0BCQEWDGtldmluQGVzLm9yZzESMBAG A1UEBxMJQW1zdGVyZGFtMRIwEAYDVQQIEwlBbXN0ZXJkYW0xCzAJBgNVBAYTAk5M MR4wHAYDVQQDExVLZXZpbnMgaG91c2Ugb2YgY2VydHOCCQCBUDOeu9mB3DANBgkq hkiG9w0BAQUFAAOBgQDF2nfTTrM7cviLiExF6iQP/HwigXiHhotcBtyjfPvXhRe0 k96MwEWS+87XsLERF1FPkEzW4TjF6f4pRxAYbTA3frWZ4vFwM7CflI/9ca9HlRux WTG7ZMdyKE1Z2Vip2W1kVtVb/Gd/qWzxEoCwuHWo5dRZ8nrZ27U+Ij3CAFWEhQ== -----END CERTIFICATE----- default_cxn.pl100644000765000024 2313001720020 21205 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/libreturn 'HTTPTiny'; LogCallback.pl100644000765000024 44313001720020 21075 
0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/libuse Log::Any::Adapter::Callback 0.09; use Log::Any::Adapter; our ( $method, $format ); Log::Any::Adapter->set( 'Callback', min_level => 'trace', logging_cb => sub { ( $method, undef, $format ) = @_; }, detection_cb => sub { $method = shift; } ); 1 es_sync_auth.pl100644000765000024 406113001720020 21443 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/lib#! perl use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; our $Throws_SSL; unless ( $ENV{ES_SSL} ) { plan skip_all => "$ENV{ES_CXN} - No https server specified in ES_SSL"; exit; } unless ( $ENV{ES_USERINFO} ) { plan skip_all => "$ENV{ES_CXN} - No user/pass specified in ES_USERINFO"; exit; } unless ( $ENV{ES_CA_PATH} ) { plan skip_all => "$ENV{ES_CXN} - No cacert specified in ES_CA_PATH"; exit; } $ENV{ES} = $ENV{ES_SSL}; $ENV{ES_SKIP_PING} = 1; our %Auth = ( use_https => 1, userinfo => $ENV{ES_USERINFO} ); # Test https connection with correct auth, without cacert $ENV{ES_CXN_POOL} = 'Static'; my $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Non-cert HTTPS with auth, cxn static"; $ENV{ES_CXN_POOL} = 'Sniff'; $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Non-cert HTTPS with auth, cxn sniff"; $ENV{ES_CXN_POOL} = 'Static::NoPing'; $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Non-cert HTTPS with auth, cxn noping"; # Test forbidden action throws_ok { $es->nodes->shutdown } "Search::Elasticsearch::Error::Forbidden", "$ENV{ES_CXN} - Forbidden action"; # Test https connection with correct auth, with valid cacert $Auth{ssl_options} = ssl_options( $ENV{ES_CA_PATH} ); $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Valid cert HTTPS with auth"; # Test invalid user credentials %Auth = ( userinfo => 'foobar:baz' ); $es = do "es_sync.pl" or die( $@ || $! ); throws_ok { $es->cluster->health } "Search::Elasticsearch::Error::Unauthorized", "$ENV{ES_CXN} - Bad userinfo"; # Test https connection with correct auth, with invalid cacert $Auth{ssl_options} = ssl_options('t/lib/bad_cacert.pem'); $ENV{ES} = "https://www.google.com"; $es = do "es_sync.pl" or die( $@ || $! ); throws_ok { $es->cluster->health } "Search::Elasticsearch::Error::$Throws_SSL", "$ENV{ES_CXN} - Invalid cert throws $Throws_SSL"; done_testing; es_sync_fork.pl100644000765000024 140013001720020 21435 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/libuse Test::More; use POSIX ":sys_wait_h"; my $es = do "es_sync.pl" or die( $@ || $! ); my $cxn_class = ref $es->transport->cxn_pool->cxns->[0]; ok $es->info, "$cxn_class - Info before fork"; my $Kids = 4; my %pids; for my $child ( 1 .. $Kids ) { my $pid = fork(); if ($pid) { $pids{$pid} = $child; next; } if ( !defined $pid ) { skip "fork() not supported"; done_testing; last; } for ( 1 .. 100 ) { $es->info; } exit; } my $ok = 0; for ( 1 .. 10 ) { my $pid = waitpid( -1, WNOHANG ); if ( $pid > 0 ) { delete $pids{$pid}; $ok++ unless $?; redo; } last unless keys %pids; sleep 1; } is $ok, $Kids, "$cxn_class - Fork"; done_testing; author-pod-syntax.t100644000765000024 50313001720020 21424 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t#!perl BEGIN { unless ($ENV{AUTHOR_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for testing by the author'); } } # This file was automatically generated by Dist::Zilla::Plugin::PodSyntaxTests. 
use strict; use warnings; use Test::More; use Test::Pod 1.41; all_pod_files_ok(); Client_5_0000755000765000024 013001720020 17374 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t10_live.t100644000765000024 136113001720020 21161 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; my $es; $ENV{ES_VERSION} = '5_0'; local $ENV{ES_CXN_POOL}; $ENV{ES_CXN_POOL} = 'Static'; $es = do "es_sync.pl" or die( $@ || $! ); is $es->info->{tagline}, "You Know, for Search", 'CxnPool::Static'; $ENV{ES_CXN_POOL} = 'Static::NoPing'; $es = do "es_sync.pl" or die( $@ || $! ); is $es->info->{tagline}, "You Know, for Search", 'CxnPool::Static::NoPing'; $ENV{ES_CXN_POOL} = 'Sniff'; $es = do "es_sync.pl" or die( $@ || $! ); is $es->info->{tagline}, "You Know, for Search", 'CxnPool::Sniff'; my ($node) = values %{ $es->transport->cxn_pool->next_cxn->sniff }; ok $node->{http}{max_content_length_in_bytes}, 'Sniffs max_content length'; done_testing; 40_scroll.t100644000765000024 1234213001720020 21544 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use Test::Exception; use lib 't/lib'; use strict; use warnings; $ENV{ES_VERSION} = '5_0'; our $es = do "es_sync.pl" or die( $@ || $! ); $es->indices->delete( index => '_all', ignore => 404 ); test_scroll( "No indices", {}, total => 0, max_score => 0, steps => [ is_finished => 1, next => [0], refill_buffer => 0, drain_buffer => [0], ] ); do "index_test_data.pl" or die( $@ || $! ); test_scroll( "Match all", {}, total => 100, max_score => 1, steps => [ is_finished => '', buffer_size => 10, next => [1], drain_buffer => [9], refill_buffer => 10, refill_buffer => 20, is_finished => '', next_81 => [81], next_20 => [9], next => [0], is_finished => 1, ] ); test_scroll( "Query", { body => { query => { term => { color => 'red' } }, suggest => { mysuggest => { text => 'green', term => { field => 'color' } } }, aggs => { switch => { terms => { field => 'switch' } } }, } }, total => 50, max_score => num( 1, 0.5 ), aggs => bool(1), suggest => bool(1), steps => [ next => [1], next_50 => [49], is_finished => 1, ] ); test_scroll( "Scroll in qs", { scroll_in_qs => 1, body => { query => { term => { color => 'red' } }, suggest => { mysuggest => { text => 'green', term => { field => 'color' } } }, aggs => { switch => { terms => { field => 'switch' } } }, } }, total => 50, max_score => num( 1.0, 0.5 ), aggs => bool(1), suggest => bool(1), steps => [ next => [1], next_50 => [49], is_finished => 1, ] ); test_scroll( "Finish", {}, total => 100, max_score => 1, steps => [ is_finished => '', next => [1], finish => 1, is_finished => 1, buffer_size => 0, next => [0] ] ); my $s = $es->scroll_helper; my $d = $s->next; ok ref $d && $d->{_source}, 'next() in scalar context'; { # Test auto finish fork protection. my $s = $es->scroll_helper( size => 5 ); my $pid = fork(); unless ( defined($pid) ) { die "Cannot fork. Lack of resources?"; } unless ($pid) { # Child. Call finish check that its not finished # (the call to finish did nothing). $s->finish(); exit; } else { # Wait for children waitpid( $pid, 0 ); is $?, 0, "Child exited without errors"; } ok !$s->is_finished(), "Our Scroll is not finished"; my $count = 0; while ( $s->next ) { $count++ } is $count, 100, "All documents retrieved"; ok $s->is_finished, "Our scroll is finished"; } { # Test Scroll usage attempt in a different process. 
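# Fork protection: a scroll helper created in the parent process should not
# be usable from a forked child. The child below expects next() to throw an
# 'Illegal' error and reports the outcome through its exit code (123 on the
# expected exception), which the parent then asserts.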
my $s = $es->scroll_helper( size => 5 ); my $pid = fork(); unless ( defined($pid) ) { die "Cannot fork. Lack of resources?"; } unless ($pid) { # Calling this next should crash, not exiting this process with 0 eval { while ( $s->next ) { } }; my $err = $@; exit( eval { $err->is('Illegal') && 123 } || 999 ); } else { # Wait for children waitpid( $pid, 0 ); is $? >> 8, 123, "Child threw Illegal exception"; } } { # Test valid Scroll usage after initial fork my $pid = fork(); unless ( defined($pid) ) { die "Cannot fork. Lack of resources?"; } unless ($pid) { my $s = $es->scroll_helper( size => 5 ); while ( $s->next ) { } exit 0; } else { # Wait for children waitpid( $pid, 0 ); is $? , 0, "Scroll completed successfully"; } } done_testing; $es->indices->delete( index => 'test' ); #=================================== sub test_scroll { #=================================== my ( $title, $params, %tests ) = @_; subtest $title => sub { my $s = $es->scroll_helper($params); is $s->total, $tests{total}, "$title - total"; cmp_deeply $s->max_score, $tests{max_score}, "$title - max_score"; cmp_deeply $s->suggest, $tests{suggest}, "$title - suggest"; cmp_deeply $s->aggregations, $tests{aggs}, "$title - aggs"; my $i = 1; my @steps = @{ $tests{steps} }; while ( my $name = shift @steps ) { my $expect = shift @steps; my ( $method, $result, @p ); if ( $name =~ /next(?:_(\d+))?/ ) { $method = 'next'; @p = $1; } else { $method = $name; } if ( ref $expect eq 'ARRAY' ) { my @result = $s->$method(@p); $result = 0 + @result; $expect = $expect->[0]; } else { $result = $s->$method(@p); } is $result, $expect, "$title - Step $i: $name"; $i++; } } } index_test_data.pl100644000765000024 663613001720020 22130 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/libuse strict; use warnings; use lib 't/lib'; local $ENV{ES_CXN}; local $ENV{ES_CXN_POOL}; my $es = do 'es_sync.pl' or die( $@ || $! ); $es->indices->delete( index => 'test', ignore => 404 ); $es->indices->create( index => 'test' ); $es->cluster->health( wait_for_status => 'yellow' ); my $b = $es->bulk_helper( index => 'test', type => 'test' ); my $i = 1; for ( names() ) { $b->index( { id => $i, source => { name => $_, count => $i, color => ( $i % 2 ? 'red' : 'green' ), switch => ( $i % 2 ? 
1 : 2 ) } } ); $i++; } $b->flush; $es->indices->refresh; #=================================== sub names { #=================================== return ( 'Adaptoid', 'Alpha Ray', 'Alysande Stuart', 'Americop', 'Andrew Chord', 'Android Man', 'Ani-Mator', 'Aqueduct', 'Archangel', 'Arena', 'Auric', 'Barton, Clint', 'Behemoth', 'Bereet', 'Black Death', 'Black King', 'Blaze', 'Cancer', 'Charlie-27', 'Christians, Isaac', 'Clea', 'Contemplator', 'Copperhead', 'Darkdevil', 'Deathbird', 'Diablo', 'Doctor Arthur Nagan', 'Doctor Droom', 'Doctor Octopus', 'Epoch', 'Eternity', 'Feline', 'Firestar', 'Flex', 'Garokk the Petrified Man', 'Gill, Donald "Donny"', 'Glitch', 'Golden Girl', 'Grandmaster', 'Grey, Elaine', 'Halloween Jack', 'Hannibal King', 'Hero for Hire', 'Hrimhari', 'Ikonn', 'Infinity', 'Jack-in-the-Box', 'Jim Hammond', 'Joe Cartelli', 'Juarez, Bonita', 'Judd, Eugene', 'Korrek', 'Krang', 'Kukulcan', 'Lizard', 'Machinesmith', 'Master Man', 'Match', 'Maur-Konn', 'Mekano', 'Miguel Espinosa', 'Mister Sinister', 'Mogul of the Mystic Mountain', 'Mutant Master', 'Night Thrasher', 'Nital, Taj', 'Obituary', 'Ogre', 'Owl', 'Ozone', 'Paris', 'Phastos', 'Piper', 'Prodigy', 'Quagmire', 'Quasar', 'Radioactive Man', 'Rankin, Calvin', 'Scarlet Scarab', 'Scarlet Witch', 'Seth', 'Slug', 'Sluggo', 'Smallwood, Marrina', 'Smith, Tabitha', 'St. Croix, Claudette', 'Stacy X', 'Stallior', 'Star-Dancer', 'Stitch', 'Storm, Susan', 'Summers, Gabriel', 'Thane Ector', 'Toad-In-Waiting', 'Ultron', 'Urich, Phil', 'Vibro', 'Victorius', 'Wolfsbane', 'Yandroth' ); } 30_Logger000755000765000024 013001720020 17234 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t10_explicit.t100644000765000024 240113001720020 21677 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Search::Elasticsearch; use File::Temp; my $file = File::Temp->new( EXLOCK => 0 ); # default isa_ok my $l = Search::Elasticsearch->new->logger, 'Search::Elasticsearch::Logger::LogAny', 'Default Logger'; is $l->log_as, 'elasticsearch.event', 'Log as'; is $l->trace_as, 'elasticsearch.trace', 'Trace as'; isa_ok $l->log_handle->adapter, 'Log::Any::Adapter::Null', 'Default - Log to NULL'; isa_ok $l->trace_handle->adapter, 'Log::Any::Adapter::Null', 'Default - Trace to NULL'; # stdout/stderr isa_ok $l = Search::Elasticsearch->new( log_to => 'Stderr', trace_to => 'Stdout' ) ->logger, 'Search::Elasticsearch::Logger::LogAny', 'Std Logger'; isa_ok $l->log_handle->adapter, 'Log::Any::Adapter::Stderr', 'Std - Log to Stderr'; isa_ok $l->trace_handle->adapter, 'Log::Any::Adapter::Stdout', 'Std - Trace to Stdout'; # file isa_ok $l = Search::Elasticsearch->new( log_to => [ 'File', $file->filename ], trace_to => [ 'File', $file->filename ] )->logger, 'Search::Elasticsearch::Logger::LogAny', 'File Logger'; isa_ok $l->log_handle->adapter, 'Log::Any::Adapter::File', 'File - Log to file'; isa_ok $l->trace_handle->adapter, 'Log::Any::Adapter::File', 'File - Trace to file'; done_testing; 20_implicit.t100644000765000024 163513001720020 21701 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Search::Elasticsearch; use Log::Any::Adapter; Log::Any::Adapter->set( { category => 'elasticsearch.event' }, 'Stdout' ); Log::Any::Adapter->set( { category => 'elasticsearch.trace' }, 'Stderr' ); # default isa_ok my $l = Search::Elasticsearch->new->logger, 'Search::Elasticsearch::Logger::LogAny', 'Default Logger'; isa_ok $l->log_handle->adapter, 'Log::Any::Adapter::Stdout', 'Default - Log to Stdout'; isa_ok 
$l->trace_handle->adapter, 'Log::Any::Adapter::Stderr', 'Default - Trace to Stderr'; # override isa_ok $l = Search::Elasticsearch->new( log_to => 'Stderr', trace_to => 'Stdout' ) ->logger, 'Search::Elasticsearch::Logger::LogAny', 'Override Logger'; isa_ok $l->log_handle->adapter, 'Log::Any::Adapter::Stderr', 'Override - Log to Stderr'; isa_ok $l->trace_handle->adapter, 'Log::Any::Adapter::Stdout', 'Override - Trace to Stdout'; done_testing; 15_conflict.t100644000765000024 106613001720020 22032 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; my $es = do "es_sync.pl" or die( $@ || $! ); $es->indices->delete( index => '_all' ); $es->index( index => 'test', type => 'test', id => 1, body => {} ); my $error; eval { $es->index( index => 'test', type => 'test', id => 1, body => {}, version => 2 ); 1; } or $error = $@; ok $error->is('Conflict'), 'Conflict Exception'; is $error->{vars}{current_version}, 1, "Error has current version v1"; done_testing; 21_fork_lwp.t100644000765000024 15413001720020 22026 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; $ENV{ES_CXN} = 'LWP'; do "es_sync_fork.pl" or die( $@ || $! ); 61_auth_lwp.t100644000765000024 35513001720020 22035 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; $ENV{ES_CXN} = 'LWP'; our $Throws_SSL = "Cxn"; sub ssl_options { return { verify_hostname => 1, SSL_ca_file => $_[0] }; } do "es_sync_auth.pl" or die( $@ || $! ); Search000755000765000024 013001720020 17223 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/libElasticsearch.pm100644000765000024 3410213001720020 22513 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Searchpackage Search::Elasticsearch; use Moo 1.003 (); use Search::Elasticsearch::Util qw(parse_params load_plugin); use namespace::clean; our $VERSION = '5.01'; my %Default_Plugins = ( client => [ 'Search::Elasticsearch::Client', '5_0::Direct' ], cxn_factory => [ 'Search::Elasticsearch::Cxn::Factory', '' ], cxn_pool => [ 'Search::Elasticsearch::CxnPool', 'Static' ], logger => [ 'Search::Elasticsearch::Logger', 'LogAny' ], serializer => [ 'Search::Elasticsearch::Serializer', 'JSON' ], transport => [ 'Search::Elasticsearch::Transport', '' ], ); my @Load_Order = qw( serializer logger cxn_factory cxn_pool transport client ); #=================================== sub new { #=================================== my ( $class, $params ) = parse_params(@_); $params->{cxn} ||= 'HTTPTiny'; my $plugins = delete $params->{plugins} || []; $plugins = [$plugins] unless ref $plugins eq 'ARRAY'; for my $name (@Load_Order) { my ( $base, $default ) = @{ $Default_Plugins{$name} }; my $sub_class = $params->{$name} || $default; my $plugin_class = load_plugin( $base, $sub_class ); $params->{$name} = $plugin_class->new($params); } for my $name (@$plugins) { my $plugin_class = load_plugin( 'Search::Elasticsearch::Plugin', $name ); $plugin_class->_init_plugin($params); } return $params->{client}; } 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch - The official client for Elasticsearch =head1 VERSION version 5.01 =head1 SYNOPSIS use Search::Elasticsearch; # Connect to localhost:9200: my $e = Search::Elasticsearch->new(); # Round-robin between two nodes: my $e = Search::Elasticsearch->new( nodes => [ 'search1:9200', 'search2:9200' ] ); # Connect to cluster at search1:9200, sniff all nodes and 
round-robin between them: my $e = Search::Elasticsearch->new( nodes => 'search1:9200', cxn_pool => 'Sniff' ); # Index a document: $e->index( index => 'my_app', type => 'blog_post', id => 1, body => { title => 'Elasticsearch clients', content => 'Interesting content...', date => '2013-09-24' } ); # Get the document: my $doc = $e->get( index => 'my_app', type => 'blog_post', id => 1 ); # Search: my $results = $e->search( index => 'my_app', body => { query => { match => { title => 'elasticsearch' } } } ); # Cluster requests: $info = $e->cluster->info; $health = $e->cluster->health; $node_stats = $e->cluster->node_stats; # Index requests: $e->indices->create(index=>'my_index'); $e->indices->delete(index=>'my_index'); =head1 DESCRIPTION L is the official Perl client for Elasticsearch, supported by L. Elasticsearch itself is a flexible and powerful open source, distributed real-time search and analytics engine for the cloud. You can read more about it on L. =head1 PREVIOUS VERSIONS OF ELASTICSEARCH This version of the client supports the Elasticsearch 5.0 branch, which is not backwards compatible with earlier branches. If you need to talk to a version of Elasticsearch before 5.0.0, please install one of the following packages: =over =item * L =item * L =item * L =back =head2 Motivation =over I Leonardo da Vinci =back All of us have opinions, especially when it comes to designing APIs. Unfortunately, the opinions of programmers seldom coincide. The intention of this client, and of the officially supported clients available for other languages, is to provide robust support for the full native Elasticsearch API with as few opinions as possible: you should be able to read the L and understand how to use this client, or any of the other official clients. Should you decide that you want to customize the API, then this client provides the basis for your code. It does the hard stuff for you, allowing you to build on top of it. =head2 Features This client provides: =over =item * Full support for all Elasticsearch APIs =item * HTTP backend (for an async backend using L, see L) =item * Robust networking support which handles load balancing, failure detection and failover =item * Good defaults =item * Helper utilities for more complex operations, such as L, and L =item * Logging support via L =item * Compatibility with the official clients for Python, Ruby, PHP, and Javascript =item * Easy extensibility =back =head1 INSTALLING ELASTICSEARCH You can download the latest version of Elasticsearch from L. See the L for details. You will need to have a recent version of Java installed, preferably the Java v8 from Sun. =head1 CREATING A NEW INSTANCE The L method returns a new L which can be used to run requests against the Elasticsearch cluster. use Search::Elasticsearch; my $e = Search::Elasticsearch->new( %params ); The most important arguments to L are the following: =head2 C The C parameter tells the client which Elasticsearch nodes it should talk to. It can be a single node, multiples nodes or, if not specified, will default to C: # default: localhost:9200 $e = Search::Elasticsearch->new(); # single $e = Search::Elasticsearch->new( nodes => 'search_1:9200'); # multiple $e = Search::Elasticsearch->new( nodes => [ 'search_1:9200', 'search_2:9200' ] ); Each C can be a URL including a scheme, host, port, path and userinfo (for authentication). For instance, this would be a valid node: https://username:password@search.domain.com:443/prefix/path See L for more on node specification. 
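As described there, a node can also be specified as a hash reference instead of a URL string. A minimal sketch, assuming the hash keys documented in Search::Elasticsearch::Role::Cxn (scheme, host, port, path and userinfo):

    $e = Search::Elasticsearch->new(
        nodes => [
            {   scheme   => 'https',
                host     => 'search.domain.com',
                port     => 443,
                path     => '/prefix/path',
                userinfo => 'username:password',
            }
        ]
    );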
=head2 C The L modules manage connections to nodes in the Elasticsearch cluster. They handle the load balancing between nodes and failover when nodes fail. Which C you should use depends on where your cluster is. There are three choices: =over =item * C $e = Search::Elasticsearch->new( cxn_pool => 'Static' # default nodes => [ 'search1.domain.com:9200', 'search2.domain.com:9200' ], ); The L connection pool, which is the default, should be used when you don't have direct access to the Elasticsearch cluster, eg when you are accessing the cluster through a proxy. See L for more. =item * C $e = Search::Elasticsearch->new( cxn_pool => 'Sniff', nodes => [ 'search1:9200', 'search2:9200' ], ); The L connection pool should be used when you B have direct access to the Elasticsearch cluster, eg when your web servers and Elasticsearch servers are on the same network. The nodes that you specify are used to I the cluster, which is then I to find the current list of live nodes that the cluster knows about. See L. =item * C $e = Search::Elasticsearch->new( cxn_pool => 'Static::NoPing' nodes => [ 'proxy1.domain.com:80', 'proxy2.domain.com:80' ], ); The L connection pool should be used when your access to a remote cluster is so limited that you cannot ping individual nodes with a C request. See L for more. =back =head2 C For debugging purposes, it is useful to be able to dump the actual HTTP requests which are sent to the cluster, and the response that is received. This can be enabled with the C parameter, as follows: # To STDERR $e = Search::Elasticsearch->new( trace_to => 'Stderr' ); # To a file $e = Search::Elasticsearch->new( trace_to => ['File','/path/to/filename'] ); Logging is handled by L. See L for more information. =head2 Other Other arguments are explained in the respective L. =head1 RUNNING REQUESTS When you create a new instance of Search::Elasticsearch, it returns a L object, which can be used for running requests. use Search::Elasticsearch; my $e = Search::Elasticsearch->new( %params ); # create an index $e->indices->create( index => 'my_index' ); # index a document $e->index( index => 'my_index', type => 'blog_post', id => 1, body => { title => 'Elasticsearch clients', content => 'Interesting content...', date => '2013-09-24' } ); See L for more details about the requests that can be run. =head1 MODULES Each chunk of functionality is handled by a different module, which can be specified in the call to L as shown in L above. For instance, the following will use the L module for the connection pool. $e = Search::Elasticsearch->new( cxn_pool => 'Sniff' ); Custom modules can be named with the appropriate prefix, eg C, or by prefixing the full class name with C<+>: $e = Search::Elasticsearch->new( cxn_pool => '+My::Custom::CxnClass' ); The modules that you can override are specified with the following arguments to L: =head2 C The class to use for the client functionality, which provides methods that can be called to execute requests, such as C, C or C. The client parses the user's requests and passes them to the L class to be executed. The default version of the client is C<5_0::Direct>, which can be explicitly specified as follows: $e = Search::Elasticsearch->new( client => '5_0::Direct' ); =head2 C The Transport class accepts a parsed request from the L class, fetches a L from its L and tries to execute the request, retrying after failure where appropriate. See: =over =item * L =back =head2 C The class which handles raw requests to Elasticsearch nodes. 
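The backend is chosen with the cxn argument to new() and defaults to HTTPTiny. For example, to use the LWP backend instead (assuming LWP and that backend's prerequisites are installed):

    $e = Search::Elasticsearch->new( cxn => 'LWP' );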
See: =over =item * L (default) =item * L =item * L =item * L =back =head2 C The class which the L uses to create new L objects. See: =over =item * L =back =head2 C (2) The class to use for the L functionality. It calls the L class to create new L objects when appropriate. See: =over =item * L (default) =item * L =item * L =back =head2 C The class to use for logging events and tracing HTTP requests/responses. See: =over =item * L =back =head2 C The class to use for serializing request bodies and deserializing response bodies. See: =over =item * L (default) =item * L =item * L =item * L =back =head1 BUGS This is a stable API but this implementation is new. Watch this space for new releases. If you have any suggestions for improvements, or find any bugs, please report them to L. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. =head1 SUPPORT You can find documentation for this module with the perldoc command. perldoc Search::Elasticsearch You can also look for information at: =over 4 =item * GitHub L =item * CPAN Ratings L =item * Search MetaCPAN L =item * IRC The L<#elasticsearch|irc://irc.freenode.net/elasticsearch> channel on C. =item * Mailing list The main L. =back =head1 TEST SUITE The full test suite requires a live Elasticsearch node to run, and should be run as : perl Makefile.PL ES=localhost:9200 make test B You can change the Cxn class which is used by setting the C environment variable: ES_CXN=Hijk ES=localhost:9200 make test =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # ABSTRACT: The official client for Elasticsearch 90_error_json.t100644000765000024 100213001720020 22244 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More tests => 3; use Test::Exception; use lib 't/lib'; use_ok('Search::Elasticsearch::Error'); eval 'use JSON::PP;'; SKIP: { skip 'JSON::PP module not installed', 2 if $@; ok( my $es_error = Search::Elasticsearch::Error->new( 'Missing', "Foo missing", { code => 404 } ), 'Create test error' ); like( JSON::PP->new->convert_blessed(1)->encode( { eserr => $es_error } ), qr/Foo missing/, 'encode_json', ); } 22_fork_hijk.t100644000765000024 25113001720020 22150 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use lib 't/lib'; use Test::More skip_all => "Hijk doesn't work with Netty4"; $ENV{ES_VERSION} = '5_0'; $ENV{ES_CXN} = 'Hijk'; do "es_sync_fork.pl" or die( $@ || $! 
); 20_Serializer000755000765000024 013001720020 20125 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t11_load_xs.t100644000765000024 54213001720020 22365 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; use lib sub { die "No Cpanel" if $_[1] =~ m{Cpanel/JSON/XS.pm$}; return undef; }; use Search::Elasticsearch; my $s = Search::Elasticsearch->new()->transport->serializer->JSON; SKIP: { skip 'JSON::XS not installed' => 1 unless eval { require JSON::XS; 1 }; isa_ok $s, "JSON::XS", 'JSON::XS'; } done_testing; 12_load_pp.t100644000765000024 62613001720020 22356 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; use lib sub { die "No Cpanel" if $_[1] =~ m{Cpanel/JSON/XS.pm$}; die "No JSON::XS" if $_[1] =~ m{JSON/XS.pm$}; return undef; }; use Search::Elasticsearch; my $s = Search::Elasticsearch->new()->transport->serializer->JSON; SKIP: { skip 'JSON::PP not installed' => 1 unless eval { require JSON::PP; 1 }; isa_ok $s, "JSON::PP", 'JSON::PP'; } done_testing; 30_log_methods.t100644000765000024 273213001720020 22373 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; do 'LogCallback.pl' or die( $@ || $! ); isa_ok my $l = Search::Elasticsearch->new->logger, 'Search::Elasticsearch::Logger::LogAny', 'Logger'; test_level($_) for qw(debug info warning error critical trace); test_throw($_) for qw(error critical); done_testing; #=================================== sub test_level { #=================================== my $level = shift; my $levelf = $level . 'f'; my $is_level = 'is_' . $level; # ->debug ( $method, $format ) = (); ok $l->$level("foo"), "$level"; is $method, $level, "$level - method"; is $format, "foo", "$level - format"; # ->debugf ( $method, $format ) = (); ok $l->$levelf( "foo %s", "bar" ), "$levelf"; is $method, $level, "$levelf - method"; is $format, "foo bar", "$levelf - format"; # ->is_debug ( $method, $format ) = (); ok $l->$is_level(), "$is_level"; is $method, $is_level, "$is_level - method"; is $format, undef, "$is_level - format"; } #=================================== sub test_throw { #=================================== my $level = shift; my $throw = 'throw_' . $level; my $re = qr/\[Request\] \*\* Foo/; ( $method, $format ) = (); throws_ok { $l->$throw( 'Request', 'Foo', 42 ) } $re, $throw; is $@->{vars}, 42, "$throw - vars"; is $method, $level, "$throw - method"; like $format, $re, "$throw - format"; } 60_trace_error.t100644000765000024 205213001720020 22374 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; do 'LogCallback.pl' or die( $@ || $! 
); ok my $e = Search::Elasticsearch->new( nodes => 'https://foo.bar:444/some/path' ), 'Client'; isa_ok my $l = $e->logger, 'Search::Elasticsearch::Logger::LogAny', 'Logger'; my $c = $e->transport->cxn_pool->cxns->[0]; ok $c->does('Search::Elasticsearch::Role::Cxn'), 'Does Search::Elasticsearch::Role::Cxn'; # No body ok $l->trace_error( $c, Search::Elasticsearch::Error->new( 'Missing', "Foo missing", { code => 404 } ) ), 'No body'; is $format, <<"RESPONSE", 'No body - format'; # ERROR: Search::Elasticsearch::Error::Missing Foo missing #\x20 RESPONSE # Body ok $l->trace_error( $c, Search::Elasticsearch::Error->new( 'Missing', "Foo missing", { code => 404, body => { foo => 'bar' } } ) ), 'Body'; is $format, <<"RESPONSE", 'Body - format'; # ERROR: Search::Elasticsearch::Error::Missing Foo missing # { # "foo" : "bar" # } RESPONSE done_testing; 50_Cxn_Pool000755000765000024 013001720020 17540 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t38_bad_sniff.t100644000765000024 63613001720020 22277 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## For whatever reason, sniffing returns bad data my $t = mock_sniff_client( { nodes => ['one'] }, { node => 1, code => 200, content => '{"nodes":{"one":{}}}' }, # throw NoNodes ); ok !eval { $t->perform_request } && $@ =~ /NoNodes/, "Missing http_address"; done_testing; 32_bulk_flush.t100644000765000024 407113001720020 22365 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; my $es = do "es_sync.pl" or die( $@ || $! ); $es->indices->delete( index => '_all' ); test_flush( "max count", # { max_count => 3 }, # 1, 2, 0, 1, 2, 0, 1, 2, 0, 1 ); test_flush( "max size", # { max_size => 95 }, # 1, 2, 3, 0, 1, 2, 3, 0, 1, 2 ); test_flush( "max size > max_count", { max_size => 95, max_count => 3 }, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1 ); test_flush( "max size < max_count", { max_size => 95, max_count => 5 }, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2 ); test_flush( "max size = 0, max_count", { max_size => 0, max_count => 5 }, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0 ); test_flush( "max count = 0, max_size", { max_size => 95, max_count => 0 }, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2 ); test_flush( "max count = 0, max_size = 0", { max_size => 0, max_count => 0 }, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ); test_flush( "max_count = 5, max_time = 5", { max_count => 5, max_time => 5 }, 1, 2, 0, 1, 2, 3, 4, 0, 0, 1 ); done_testing; $es->indices->delete( index => 'test' ); #=================================== sub test_flush { #=================================== my $title = shift; my $params = shift; my $b = $es->bulk_helper( %$params, index => 'test', type => 'test' ); my @seq = @_; $es->indices->delete( index => 'test', ignore => 404 ); $es->indices->create( index => 'test' ); $es->cluster->health( wait_for_status => 'yellow' ); for my $i ( 10 .. 19 ) { # sleep on 12 or 18 if max_time specified if ( $params->{max_time} && ( $i == 12 || $i == 18 ) ) { $b->_last_flush( time - $params->{max_time} - 1 ); } $b->index( { id => $i, source => {} } ); is $b->_buffer_count, shift @seq, "$title - " . 
( $i - 9 ); } $b->flush; is $b->_buffer_count, 0, "$title - final flush"; $es->indices->refresh; is $es->count->{count}, 10, "$title - all docs indexed"; } 33_bulk_errors.t100644000765000024 1112213001720020 22574 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; use Log::Any::Adapter; $ENV{ES_VERSION} = '5_0'; my $es = do "es_sync.pl" or die( $@ || $! ); my $TRUE = $es->transport->serializer->decode('{"true":true}')->{true}; $es->indices->delete( index => '_all' ); my @Std = ( { id => 1, source => { count => 1 } }, { id => 1, source => { count => 'foo' } }, { id => 1, version => 10, source => {} }, ); my ( $b, $success_count, $error_count, $custom_count, $conflict_count ); ## Default error handling $b = bulk( { index => 'test', type => 'test' }, @Std ); test_flush( "Default", 0, 2, 0, 0 ); ## Custom error handling $b = bulk( { index => 'test', type => 'test', on_error => sub { $custom_count++ } }, @Std ); test_flush( "Custom error", 0, 0, 2, 0 ); # Conflict errors $b = bulk( { index => 'test', type => 'test', on_conflict => sub { $conflict_count++ } }, @Std ); test_flush( "Conflict error", 0, 1, 0, 1 ); # Both error handling $b = bulk( { index => 'test', type => 'test', on_conflict => sub { $conflict_count++ }, on_error => sub { $custom_count++ } }, @Std ); test_flush( "Conflict and custom", 0, 0, 1, 1 ); # Conflict disable error $b = bulk( { index => 'test', type => 'test', on_conflict => sub { $conflict_count++ }, on_error => undef }, @Std ); test_flush( "Conflict, error undef", 0, 0, 0, 1 ); # Disable both $b = bulk( { index => 'test', type => 'test', on_conflict => undef, on_error => undef }, @Std ); test_flush( "Both undef", 0, 0, 0, 0 ); # Success $b = bulk( { index => 'test', type => 'test', on_success => sub { $success_count++ }, }, @Std ); test_flush( "Success", 1, 2, 0, 0 ); # cbs have correct params $b = bulk( { index => 'test', type => 'test', on_success => test_params( 'on_success', { _index => 'test', _type => 'test', _id => 1, _version => 1, status => 201, created => $TRUE, result => 'created', _shards => { successful => 1, total => 2, failed => 0 }, }, 0 ), on_error => test_params( 'on_error', { _index => 'test', _type => 'test', _id => 1, error => any( re('MapperParsingException'), superhashof( { type => 'mapper_parsing_exception' } ) ), status => 400, }, 1 ), on_conflict => test_params( 'on_conflict', { _index => 'test', _type => 'test', _id => 1, error => any( re('version conflict'), superhashof( { type => 'version_conflict_engine_exception' } ) ), status => 409, }, 2, 1 ), }, @Std ); $b->flush; done_testing; $es->indices->delete( index => 'test' ); #=================================== sub bulk { #=================================== my $params = shift; my $b = $es->bulk_helper($params); $es->indices->delete( index => 'test', ignore => 404 ); $es->indices->create( index => 'test' ); $es->cluster->health( wait_for_status => 'yellow' ); $b->index(@_); return $b; } #=================================== sub test_flush { #=================================== my ( $title, $success, $default, $custom, $conflict ) = @_; $success_count = $custom_count = $error_count = $conflict_count = 0; { local $SIG{__WARN__} = sub { $error_count++ }; $b->flush; } is $success_count, $success, "$title - success"; is $error_count, $default, "$title - default"; is $custom_count, $custom, "$title - custom"; is $conflict_count, $conflict, "$title - conflict"; } 
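# test_params() returns a callback for the on_success / on_error / on_conflict
# handlers: it checks the action name ('index'), that the reported result is a
# sub-hash of the expected fields, the zero-based position of the document in
# the bulk request, and (for conflicts) the version passed as the fourth argument.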
#=================================== sub test_params { #=================================== my ( $type, $result, $j, $version ) = @_; return sub { is $_[0], 'index', "$type - action"; cmp_deeply $_[1], subhashof($result), "$type - result"; is $_[2], $j, "$type - array index"; is $_[3], $version, "$type - version"; }; } encode_bulk.pl100644000765000024 271613001720020 23102 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; use Test::Deep; use Test::Exception; use Search::Elasticsearch; our $JSON_BACKEND; my $utf8_bytes = "彈性搜索"; my $utf8_str = $utf8_bytes; utf8::decode($utf8_str); my $hash = { "foo" => "$utf8_str" }; my $arr = [ $hash, $hash ]; my $json_hash = qq({"foo":"$utf8_bytes"}); my $json_arr = qq($json_hash\n$json_hash\n); isa_ok my $s = Search::Elasticsearch->new( serializer => $JSON_BACKEND ) ->transport->serializer, "Search::Elasticsearch::Serializer::$JSON_BACKEND", 'Serializer'; is $s->encode_bulk(), undef, # 'Enc - No args returns undef'; is $s->encode_bulk(undef), undef, # 'Enc - Undef returns undef'; is $s->encode_bulk(''), '', # 'Enc - Empty string returns same'; is $s->encode_bulk('foo'), 'foo', # 'Enc - String returns same'; is $s->encode_bulk($utf8_str), $utf8_bytes, # 'Enc - Unicode string returns encoded'; is $s->encode_bulk($utf8_bytes), $utf8_bytes, # 'Enc - Unicode bytes returns same'; is $s->encode_bulk($arr), $json_arr, # 'Enc - Array returns JSON'; is $s->encode_bulk( [ $json_hash, $json_hash ] ), $json_arr, # 'Enc - Array of strings'; throws_ok { $s->encode_bulk($hash) } qr/must be an array/, # 'Enc - Hash dies'; throws_ok { $s->encode_bulk( \$utf8_str ) } qr/Serializer/, # 'Enc - scalar ref dies'; throws_ok { $s->encode_bulk( [ \$utf8_str ] ) } qr/Serializer/, # 'Enc - array of scalar ref dies'; done_testing; 40_trace_request.t100644000765000024 445013001720020 22735 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; do 'LogCallback.pl' or die( $@ || $! 
); ok my $e = Search::Elasticsearch->new( nodes => 'https://foo.bar:444/some/path' ), 'Client'; isa_ok my $l = $e->logger, 'Search::Elasticsearch::Logger::LogAny', 'Logger'; my $c = $e->transport->cxn_pool->cxns->[0]; ok $c->does('Search::Elasticsearch::Role::Cxn'), 'Does Search::Elasticsearch::Role::Cxn'; # No body ok $l->trace_request( $c, { method => 'POST', qs => { foo => 'bar' }, serialize => 'std', path => '/xyz' } ), 'No body'; is $format, <<'REQUEST', 'No body - format'; # Request to: https://foo.bar:444/some/path curl -XPOST 'http://localhost:9200/xyz?foo=bar&pretty=1' REQUEST # Std body ok $l->trace_request( $c, { method => 'POST', qs => { foo => 'bar' }, serialize => 'std', path => '/xyz', body => { foo => qq(bar\n'baz) }, data => qq({"foo":"bar\n'baz"}), } ), 'Body'; is $format, <<'REQUEST', 'Body - format'; # Request to: https://foo.bar:444/some/path curl -XPOST 'http://localhost:9200/xyz?foo=bar&pretty=1' -d ' { "foo" : "bar\n\u0027baz" } ' REQUEST # Bulk body ok $l->trace_request( $c, { method => 'POST', qs => { foo => 'bar' }, serialize => 'bulk', path => '/xyz', body => [ { foo => qq(bar\n'baz) }, { foo => qq(bar\n'baz) } ], data => qq({"foo":"bar\\n\\u0027baz"}\n{"foo":"bar\\n\\u0027baz"}\n), } ), 'Bulk'; is $format, <<'REQUEST', 'Bulk - format'; # Request to: https://foo.bar:444/some/path curl -XPOST 'http://localhost:9200/xyz?foo=bar&pretty=1' -d ' {"foo":"bar\n\u0027baz"} {"foo":"bar\n\u0027baz"} ' REQUEST # String body ok $l->trace_request( $c, { method => 'POST', qs => { foo => 'bar' }, serialize => 'std', path => '/xyz', body => qq(The quick brown fox\njumped over the lazy dog's basket), } ), 'Body string'; is $format, <<'REQUEST', 'Body string - format'; # Request to: https://foo.bar:444/some/path curl -XPOST 'http://localhost:9200/xyz?foo=bar&pretty=1' -d ' The quick brown fox jumped over the lazy dog\u0027s basket' REQUEST done_testing; 70_trace_comment.t100644000765000024 114713001720020 22712 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; do 'LogCallback.pl' or die( $@ || $! 
); our $format; ok my $e = Search::Elasticsearch->new( nodes => 'https://foo.bar:444/some/path' ), 'Client'; isa_ok my $l = $e->logger, 'Search::Elasticsearch::Logger::LogAny', 'Logger'; my $c = $e->transport->cxn_pool->cxns->[0]; ok $c->does('Search::Elasticsearch::Role::Cxn'), 'Does Search::Elasticsearch::Role::Cxn'; ok $l->trace_comment("The quick fox\njumped"), 'Comment'; is $format, <<"COMMENT", 'Comment - format'; # *** The quick fox # *** jumped COMMENT done_testing; 56_max_retries.t100644000765000024 150313001720020 22720 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## Max retries my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ], max_retries => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, { node => 1, code => 509, error => 'Unavailable' }, { node => 2, code => 509, error => 'Unavailable' }, # throws unavailable { node => 3, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && !eval { $t->perform_request } && $@ =~ /Unavailable/ && $t->perform_request && $t->perform_request, 'Max retries'; done_testing; 20_process_response.t100644000765000024 462113001720020 22775 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/60_Cxnuse Test::More; use Test::Exception; use Test::Deep; use Search::Elasticsearch; my $c = Search::Elasticsearch->new->transport->cxn_pool->cxns->[0]; ok $c->does('Search::Elasticsearch::Role::Cxn'), 'Does Search::Elasticsearch::Role::Cxn'; my ( $code, $response ); ### OK GET ( $code, $response ) = $c->process_response( { method => 'GET', ignore => [] }, 200, "OK", '{"ok":1}', { 'content-type' => 'application/json' } ); is $code, 200, "OK GET - code"; cmp_deeply $response, { ok => 1 }, "OK GET - body"; ### OK GET - Text body ( $code, $response ) = $c->process_response( { method => 'GET', ignore => [] }, 200, "OK", 'Foo', { 'content-type' => 'text/plain' } ); is $code, 200, "OK GET Text body - code"; cmp_deeply $response, 'Foo', "OK GET Text body - body"; ### OK GET - Empty body ( $code, $response ) = $c->process_response( { method => 'GET', ignore => [] }, 200, "OK", '' ); is $code, 200, "OK GET Empty body - code"; cmp_deeply $response, '', "OK GET Empty body - body"; ### OK HEAD ( $code, $response ) = $c->process_response( { method => 'HEAD', ignore => [] }, 200, "OK" ); is $code, 200, "OK HEAD - code"; is $response, 1, "OK HEAD - body"; ### Missing GET throws_ok { $c->process_response( { method => 'GET', ignore => [] }, 404, "Missing", '{"error": "Something is missing"}', { 'content-type' => 'application/json' } ); } qr/Missing/, "Missing GET"; ### Missing GET ignore ( $code, $response ) = $c->process_response( { method => 'GET', ignore => [404] }, 404, "Missing", '{"error": "Something is missing"}', { 'content-type' => 'application/json' } ); is $code, 404, "Missing GET - code"; is $response, undef, "Missing GET - body"; ### Missing HEAD ( $code, $response ) = $c->process_response( { method => 'HEAD', ignore => [] }, 404, "Missing" ); is $code, 404, "Missing HEAD - code"; is $response, undef, "Missing HEAD - body"; ### Request error throws_ok { $c->process_response( { method => 'GET', ignore => [] }, 400, "Request", '{"error":"error in body"}', { 'content-type' => 'application/json' } ); } qr/\[400\] error in body/, "Request 
error"; ### Timeout error throws_ok { $c->process_response( { method => 'GET', ignore => [] }, 509, "28: Timed out,read timeout" ); } qr/Timeout/, "Timeout error"; done_testing; 31_bulk_helpers.t100644000765000024 2121613001720020 22725 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; my $es = do "es_sync.pl" or die( $@ || $! ); my $b = $es->bulk_helper( index => 'i', type => 't' ); my $s = $b->_serializer; $s->_set_canonical; ## INDEX ## ok $b->index(), 'Empty index'; ok $b->index( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Index'; cmp_deeply $b->_buffer, [ q({"index":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"index":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Index in buffer"; is $b->_buffer_size, 336, "Index buffer size"; is $b->_buffer_count, 2, "Index buffer count"; $b->clear_buffer; ## CREATE ## ok $b->create(), 'Create empty'; ok $b->create( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Create'; cmp_deeply $b->_buffer, [ q({"create":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"create":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Create in buffer"; is $b->_buffer_size, 338, "Create buffer size"; is $b->_buffer_count, 2, "Create buffer count"; $b->clear_buffer; ## CREATE DOCS## ok $b->create_docs(), 'Create_docs empty'; ok $b->create_docs( { foo => 'bar' }, { foo => 'baz' } ), 'Create docs'; cmp_deeply $b->_buffer, [ q({"create":{}}), q({"foo":"bar"}), q({"create":{}}), q({"foo":"baz"}) ], "Create docs in buffer"; is $b->_buffer_size, 56, "Create docs buffer size"; is $b->_buffer_count, 2, "Create docs buffer count"; $b->clear_buffer; ## DELETE ## ok $b->delete(), 'Delete empty'; ok $b->delete( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, version => 1, version_type => 'external', }, { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _version => 1, _version_type => 'external', } ), 'Delete'; cmp_deeply $b->_buffer, [ q({"delete":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_type":"bar","_version":1,"_version_type":"external"}}), q({"delete":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_type":"bar","_version":1,"_version_type":"external"}}), ], "Delete in buffer"; is $b->_buffer_size, 230, "Delete buffer size"; is $b->_buffer_count, 2, "Delete 
buffer count"; $b->clear_buffer; ## DELETE IDS ## ok $b->delete_ids(), 'Delete IDs empty'; ok $b->delete_ids( 1, 2, 3 ), 'Delete IDs'; cmp_deeply $b->_buffer, [ q({"delete":{"_id":1}}), q({"delete":{"_id":2}}), q({"delete":{"_id":3}}), ], "Delete IDs in buffer"; is $b->_buffer_size, 63, "Delete IDs buffer size"; is $b->_buffer_count, 3, "Delete IDS buffer count"; $b->clear_buffer; ## UPDATE ACTIONS ## ok $b->update(), 'Update empty'; ok $b->update( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, detect_noop => 1, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, detect_noop => 1, }, ), 'Update'; cmp_deeply $b->_buffer, [ q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"detect_noop":1,"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"detect_noop":1,"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), ], "Update in buffer"; is $b->_buffer_size, 1370, "Update buffer size"; is $b->_buffer_count, 6, "Update buffer count"; $b->clear_buffer; done_testing; 
14_preload_xs.t100644000765000024 47013001720020 23077 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; BEGIN { eval { require JSON::XS; 1 } or do { plan skip_all => 'JSON::XS not installed'; done_testing; exit; } } use Search::Elasticsearch; my $s = Search::Elasticsearch->new()->transport->serializer->JSON; isa_ok $s, "JSON::XS", 'JSON::XS'; done_testing; 50_trace_response.t100644000765000024 144413001720020 23104 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; do 'LogCallback.pl' or die( $@ || $! ); ok my $e = Search::Elasticsearch->new( nodes => 'https://foo.bar:444/some/path' ), 'Client'; isa_ok my $l = $e->logger, 'Search::Elasticsearch::Logger::LogAny', 'Logger'; my $c = $e->transport->cxn_pool->cxns->[0]; ok $c->does('Search::Elasticsearch::Role::Cxn'), 'Does Search::Elasticsearch::Role::Cxn'; # No body ok $l->trace_response( $c, 200, undef, 0.123 ), 'No body'; is $format, <<"RESPONSE", 'No body - format'; # Response: 200, Took: 123 ms #\x20 RESPONSE # Body ok $l->trace_response( $c, 200, { foo => 'bar' }, 0.123 ), 'Body'; is $format, <<'RESPONSE', 'Body - format'; # Response: 200, Took: 123 ms # { # "foo" : "bar" # } RESPONSE done_testing; 30_sniff_normal.t100644000765000024 114013001720020 23040 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Both nodes respond - check ping before first use my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { sniff => [ 'one', 'two' ] }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && $t->perform_request, 'Sniff before first use'; done_testing; 00_print_version.t100644000765000024 107013001720020 23117 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; my $es = do "es_sync.pl" or die( $@ || $! ); eval { my $v = $es->info->{version}; diag ""; diag ""; diag "Testing against Elasticsearch v" . $v->{number}; for ( sort keys %$v ) { diag sprintf "%-20s: %s", $_, $v->{$_}; } diag ""; diag "Client: " . ref($es); diag "Cxn: " . $es->transport->cxn_pool->cxn_factory->cxn_class; diag "GET Body: " . $es->transport->send_get_body_as; diag ""; pass "ES Version"; } or fail "ES Version"; done_testing; 20_fork_httptiny.t100644000765000024 16113001720020 23104 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; $ENV{ES_CXN} = 'HTTPTiny'; do "es_sync_fork.pl" or die( $@ || $! ); 60_auth_httptiny.t100644000765000024 42513001720020 23113 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use IO::Socket::SSL; use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; $ENV{ES_CXN} = 'HTTPTiny'; our $Throws_SSL = "SSL"; sub ssl_options { return { SSL_verify_mode => SSL_VERIFY_PEER, SSL_ca_file => $_[0] }; } do "es_sync_auth.pl" or die( $@ || $! 
); Elasticsearch000755000765000024 013001720020 21775 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/SearchUtil.pm100644000765000024 611513001720020 23413 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearchpackage Search::Elasticsearch::Util; $Search::Elasticsearch::Util::VERSION = '5.01'; use Moo; use Search::Elasticsearch::Error(); use Scalar::Util qw(blessed); use Module::Runtime qw(compose_module_name is_module_name use_module); use Sub::Exporter -setup => { exports => [ qw( parse_params to_list load_plugin new_error throw upgrade_error is_compat ) ] }; #=================================== sub to_list { #=================================== grep {defined} ref $_[0] eq 'ARRAY' ? @{ $_[0] } : @_; } #=================================== sub parse_params { #=================================== my $self = shift; my %params; if ( @_ % 2 ) { throw( "Param", 'Expecting a HASH ref or a list of key-value pairs', { params => \@_ } ) unless ref $_[0] eq 'HASH'; %params = %{ shift() }; } else { %params = @_; } return ( $self, \%params ); } #=================================== sub load_plugin { #=================================== my ( $base, $spec ) = @_; $spec ||= "+$base"; return $spec if blessed $spec; my ( $class, $version ); if ( ref $spec eq 'ARRAY' ) { ( $class, $version ) = @$spec; } else { $class = $spec; } unless ( $class =~ s/\A\+// ) { $class = compose_module_name( $base, $class ); } $version ? use_module( $class, $version ) : use_module($class); } #=================================== sub throw { #=================================== my ( $type, $msg, $vars ) = @_; die Search::Elasticsearch::Error->new( $type, $msg, $vars, 1 ); } #=================================== sub new_error { #=================================== my ( $type, $msg, $vars ) = @_; return Search::Elasticsearch::Error->new( $type, $msg, $vars, 1 ); } #=================================== sub upgrade_error { #=================================== my ( $error, $vars ) = @_; return ref($error) && $error->isa('Search::Elasticsearch::Error') ? $error : Search::Elasticsearch::Error->new( "Internal", $error, $vars || {}, 1 ); } #=================================== sub is_compat { #=================================== my ( $attr, $one, $two ) = @_; my $role = $one->does('Search::Elasticsearch::Role::Is_Sync') ? 'Search::Elasticsearch::Role::Is_Sync' : 'Search::Elasticsearch::Role::Is_Async'; return if eval { $two->does($role); }; my $class = ref($two) || $two; die "$attr ($class) does not do $role"; } 1; # ABSTRACT: A utility class for internal use by Search::Elasticsearch __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Util - A utility class for internal use by Search::Elasticsearch =head1 VERSION version 5.01 =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 10_load_cpanel.t100644000765000024 43513001720020 23175 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; use Search::Elasticsearch; my $s = Search::Elasticsearch->new()->transport->serializer->JSON; SKIP: { skip 'Cpanel::JSON::XS not installed' => 1 unless eval { require Cpanel::JSON::XS; 1 }; isa_ok $s, "Cpanel::JSON::XS", 'Cpanel'; } done_testing; encode_decode.pl100644000765000024 343113001720020 23363 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; use Test::Deep; use Test::Exception; use Search::Elasticsearch; our $JSON_BACKEND; my $utf8_bytes = "彈性搜索"; my $utf8_str = $utf8_bytes; utf8::decode($utf8_str); my $hash = { "foo" => "$utf8_str" }; my $arr = [$hash]; my $json_hash = qq({"foo":"$utf8_bytes"}); my $json_arr = qq([$json_hash]); isa_ok my $s = Search::Elasticsearch->new( serializer => $JSON_BACKEND ) ->transport->serializer, "Search::Elasticsearch::Serializer::$JSON_BACKEND", 'Serializer'; is $s->mime_type, 'application/json', 'Mime type is JSON'; # encode is $s->encode(), undef, 'Enc - No args returns undef'; is $s->encode(undef), undef, 'Enc - Undef returns undef'; is $s->encode(''), '', 'Enc - Empty string returns same'; is $s->encode('foo'), 'foo', 'Enc - String returns same'; is $s->encode($utf8_str), $utf8_bytes, 'Enc - Unicode string returns encoded'; is $s->encode($utf8_bytes), $utf8_bytes, 'Enc - Unicode bytes returns same'; is $s->encode($hash), $json_hash, 'Enc - Hash returns JSON'; is $s->encode($arr), $json_arr, 'Enc - Array returns JSON'; throws_ok { $s->encode( \$utf8_str ) } qr/Serializer/, 'Enc - scalar ref dies'; # decode is $s->decode(), undef, 'Dec - No args returns undef'; is $s->decode(undef), undef, 'Dec - Undef returns undef'; is $s->decode(''), '', 'Dec - Empty string returns same'; is $s->decode('foo'), 'foo', 'Dec - String returns same'; is $s->decode($utf8_bytes), $utf8_str, 'Dec - Unicode bytes returns decoded'; is $s->decode($utf8_str), $utf8_str, 'Dec - Unicode string returns same'; cmp_deeply $s->decode($json_hash), $hash, 'Dec - JSON returns hash'; cmp_deeply $s->decode($json_arr), $arr, 'Dec - JSON returns array'; throws_ok { $s->decode('{') } qr/Serializer/, 'Dec - invalid JSON dies'; done_testing; encode_pretty.pl100644000765000024 271313001720020 23471 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; use Test::Deep; use Test::Exception; use Search::Elasticsearch; our $JSON_BACKEND; my $utf8_bytes = "彈性搜索"; my $utf8_str = $utf8_bytes; utf8::decode($utf8_str); my $hash = { "foo" => "$utf8_str" }; my $arr = [$hash]; my $json_hash = <new( serializer => $JSON_BACKEND ) ->transport->serializer, "Search::Elasticsearch::Serializer::$JSON_BACKEND", 'Serializer'; # encode is_pretty( [], undef, 'Enc - No args returns undef' ); is_pretty( [undef], undef, 'Enc - Undef returns undef' ); is_pretty( [''], '', 'Enc - Empty string returns same' ); is_pretty( ['foo'], 'foo', 'Enc - String returns same' ); is_pretty( [$utf8_str], $utf8_bytes, 'Enc - Unicode string returns encoded' ); is_pretty( [$utf8_bytes], $utf8_bytes, 'Enc - Unicode bytes returns same' ); is_pretty( [$hash], $json_hash, 'Enc - Hash returns JSON' ); is_pretty( [$arr], $json_arr, 'Enc - Array returns JSON' ); throws_ok { $s->encode_pretty( \$utf8_str ) } qr/Serializer/, # 'Enc - scalar ref dies'; sub is_pretty { my ( $arg, $expect, $desc ) = @_; my $got = $s->encode_pretty(@$arg); 
defined $got and $got =~ s/^\s+//gm; defined $expect and $expect =~ s/^\s+//gm; is $got, $expect, $desc; } done_testing; 40_Transport000755000765000024 013001720020 20012 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t10_tidy_request.t100644000765000024 343713001720020 23367 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/40_Transportuse Test::More; use Test::Deep; use Search::Elasticsearch; isa_ok my $t = Search::Elasticsearch->new->transport, 'Search::Elasticsearch::Transport'; test_tidy( 'Empty', {}, {} ); test_tidy( 'Method', { method => 'POST' }, { method => 'POST' } ); test_tidy( 'Path', { path => '/foo' }, { path => '/foo' } ); test_tidy( 'QS', { qs => { foo => 'bar' } }, { qs => { foo => 'bar' } } ); test_tidy( 'Body - Str', { body => 'foo' }, { body => 'foo', data => 'foo', serialize => 'std', mime_type => 'application/json', } ); test_tidy( 'Body - Hash', { body => { foo => 'bar' } }, { body => { foo => 'bar' }, data => '{"foo":"bar"}', serialize => 'std', mime_type => 'application/json', } ); test_tidy( 'Body - Array', { body => [ { foo => 'bar' } ] }, { body => [ { foo => 'bar' } ], data => '[{"foo":"bar"}]', serialize => 'std', mime_type => 'application/json', } ); test_tidy( 'Body - Bulk', { body => [ { foo => 'bar' } ], serialize => 'bulk' }, { body => [ { foo => 'bar' } ], data => qq({"foo":"bar"}\n), serialize => 'bulk', mime_type => 'application/json', } ); test_tidy( 'MimeType', { mime_type => 'text/plain', body => 'foo' }, { mime_type => 'text/plain', body => 'foo', data => 'foo', serialize => 'std' } ); #=================================== sub test_tidy { #=================================== my ( $title, $params, $test ) = @_; $test = { method => 'GET', path => '/', qs => {}, ignore => [], %$test }; cmp_deeply $t->tidy_request($params), $test, $title; } done_testing; 20_send_body_as.t100644000765000024 304113001720020 23267 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/40_Transportuse Test::More; use Test::Deep; use Search::Elasticsearch; my $t = Search::Elasticsearch->new( send_get_body_as => 'GET' )->transport; test_tidy( 'GET-empty', { path => '/_search' }, {} ); test_tidy( 'GET-body', { path => '/_search', body => { foo => 'bar' } }, { body => { foo => 'bar' }, data => '{"foo":"bar"}', method => 'GET', mime_type => 'application/json', serialize => 'std', } ); $t = Search::Elasticsearch->new( send_get_body_as => 'POST' )->transport; test_tidy( 'POST-empty', { path => '/_search' }, {} ); test_tidy( 'POST-eody', { path => '/_search', body => { foo => 'bar' } }, { body => { foo => 'bar' }, data => '{"foo":"bar"}', method => 'POST', mime_type => 'application/json', serialize => 'std', } ); $t = Search::Elasticsearch->new( send_get_body_as => 'source' )->transport; test_tidy( 'source-empty', { path => '/_search' }, {} ); test_tidy( 'source-body', { path => '/_search', body => { foo => 'bar' } }, { method => 'GET', qs => { source => '{"foo":"bar"}' }, mime_type => 'application/json', serialize => 'std', } ); #=================================== sub test_tidy { #=================================== my ( $title, $params, $test ) = @_; $test = { method => 'GET', path => '/_search', qs => {}, ignore => [], %$test }; cmp_deeply $t->tidy_request($params), $test, $title; } done_testing; 10_static_normal.t100644000765000024 117313001720020 23226 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## Both nodes 
respond - check ping before first use my $t = mock_static_client( { nodes => [ 'one', 'two' ] }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && $t->perform_request, 'Ping before first use'; done_testing; 50_noping_normal.t100644000765000024 126113001720020 23233 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## All nodes respond my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ] }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request, 'Round robin'; done_testing; 95_TestServer000755000765000024 013001720020 20136 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t00_test_server.t100644000765000024 304013001720020 23324 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/95_TestServeruse strict; use warnings; use Test::More; use File::Temp; use POSIX ":sys_wait_h"; use Search::Elasticsearch; use Search::Elasticsearch::TestServer; my @pids; SKIP: { skip 'ES_HOME not set', 7 unless $ENV{ES_HOME}; my $tempdir = File::Temp->newdir( 'testserver-XXXXX', DIR => '/tmp' ); my $server = Search::Elasticsearch::TestServer->new( es_home => $ENV{ES_HOME}, conf => [ "path.data=$tempdir", "path.logs=$tempdir", ] ); my $nodes = $server->start(); ok( $nodes, "server->start returned nodes" ) or diag explain { server => $server }; ok( defined( $server->pids ), "server->pids defined" ); cmp_ok( scalar @{ $server->pids }, '>', 0, "more than 0 pids" ); @pids = @{ $server->pids }; subtest 'ES pids are alive' => sub { verify_pids_alive(@pids); }; $server->shutdown; note 'sleep to give ES time to die'; sleep 5; subtest 'ES pids are dead after shutdown' => sub { verify_pids_dead(@pids); }; eval { $server->shutdown }; is( $@, '', "second shutdown did not set error" ); subtest 'ES pids are dead after second shutdown' => sub { verify_pids_dead(@pids); }; } done_testing; #important to waitpid or kill0 will return true for zombies. 
sub verify_pids_alive { for my $pid (@_) { waitpid( $pid, WNOHANG ); ok( kill( 0, $pid ), "pid $pid is alive" ); } } sub verify_pids_dead { for my $pid (@_) { waitpid( $pid, WNOHANG ); ok( !kill( 0, $pid ), "pid $pid is dead" ); } } Error.pm100644000765000024 2230013001720020 23601 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearchpackage Search::Elasticsearch::Error; $Search::Elasticsearch::Error::VERSION = '5.01'; our $DEBUG = 0; @Search::Elasticsearch::Error::Internal::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Param::ISA = __PACKAGE__; @Search::Elasticsearch::Error::NoNodes::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Unauthorized::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Forbidden::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Illegal::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Request::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Timeout::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Cxn::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Serializer::ISA = __PACKAGE__; @Search::Elasticsearch::Error::Conflict::ISA = ( 'Search::Elasticsearch::Error::Request', __PACKAGE__ ); @Search::Elasticsearch::Error::Missing::ISA = ( 'Search::Elasticsearch::Error::Request', __PACKAGE__ ); @Search::Elasticsearch::Error::RequestTimeout::ISA = ( 'Search::Elasticsearch::Error::Request', __PACKAGE__ ); @Search::Elasticsearch::Error::ContentLength::ISA = ( __PACKAGE__, 'Search::Elasticsearch::Error::Request' ); @Search::Elasticsearch::Error::SSL::ISA = ( __PACKAGE__, 'Search::Elasticsearch::Error::Cxn' ); @Search::Elasticsearch::Error::BadGateway::ISA = ( 'Search::Elasticsearch::Error::Cxn', __PACKAGE__ ); @Search::Elasticsearch::Error::Unavailable::ISA = ( 'Search::Elasticsearch::Error::Cxn', __PACKAGE__ ); @Search::Elasticsearch::Error::GatewayTimeout::ISA = ( 'Search::Elasticsearch::Error::Cxn', __PACKAGE__ ); use overload ( '""' => '_stringify', 'cmp' => '_compare', ); use Data::Dumper(); #=================================== sub new { #=================================== my ( $class, $type, $msg, $vars, $caller ) = @_; return $type if ref $type; $caller ||= 0; my $error_class = 'Search::Elasticsearch::Error::' . 
$type; $msg = 'Unknown error' unless defined $msg; local $DEBUG = 2 if $type eq 'Internal'; my $stack = $class->_stack; my $self = bless { type => $type, text => $msg, vars => $vars, stack => $stack, }, $error_class; return $self; } #=================================== sub is { #=================================== my $self = shift; for (@_) { return 1 if $self->isa("Search::Elasticsearch::Error::$_"); } return 0; } #=================================== sub _stringify { #=================================== my $self = shift; local $Data::Dumper::Terse = 1; local $Data::Dumper::Indent = !!$DEBUG; unless ( $self->{msg} ) { my $stack = $self->{stack}; my $caller = $stack->[0]; $self->{msg} = sprintf( "[%s] ** %s, called from sub %s at %s line %d.", $self->{type}, $self->{text}, @{$caller}[ 3, 1, 2 ] ); if ( $self->{vars} ) { $self->{msg} .= sprintf( " With vars: %s\n", Data::Dumper::Dumper $self->{vars} ); } if ( @$stack > 1 ) { $self->{msg} .= sprintf( "Stacktrace:\n%s\n", $self->stacktrace($stack) ); } } return $self->{msg}; } #=================================== sub _compare { #=================================== my ( $self, $other, $swap ) = @_; $self .= ''; ( $self, $other ) = ( $other, $self ) if $swap; return $self cmp $other; } #=================================== sub _stack { #=================================== my $self = shift; my $caller = shift() || 2; my @stack; while ( my @caller = caller( ++$caller ) ) { next if $caller[0] eq 'Try::Tiny'; if ( $caller[3] =~ /^(.+)::__ANON__\[(.+):(\d+)\]$/ ) { @caller = ( $1, $2, $3, '(ANON)' ); } elsif ( $caller[1] =~ /^\(eval \d+\)/ ) { $caller[3] = "modified(" . $caller[3] . ")"; } next if $caller[0] =~ /^Search::Elasticsearch/ and ( $DEBUG < 2 or $caller[3] eq 'Try::Tiny::try' ); push @stack, [ @caller[ 0, 1, 2, 3 ] ]; last unless $DEBUG > 1; } return \@stack; } #=================================== sub stacktrace { #=================================== my $self = shift; my $stack = shift || $self->_stack(); my $o = sprintf "%s\n%-4s %-50s %-5s %s\n%s\n", '-' x 80, '#', 'Package', 'Line', 'Sub-routine', '-' x 80; my $i = 1; for (@$stack) { $o .= sprintf "%-4d %-50s %4d %s\n", $i++, @{$_}[ 0, 2, 3 ]; } return $o .= ( '-' x 80 ) . "\n"; } #=================================== sub TO_JSON { #=================================== my $self = shift; return $self->_stringify; } 1; # ABSTRACT: Errors thrown by Search::Elasticsearch __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Error - Errors thrown by Search::Elasticsearch =head1 VERSION version 5.01 =head1 DESCRIPTION Errors thrown by Search::Elasticsearch are error objects, which can include a stack trace and information to help debug problems. An error object consists of the following: { type => $type, # eg Missing text => 'Error message', vars => {...}, # vars which may help to explain the error stack => [...], # a stack trace } The C<$Search::Elasticsearch::Error::DEBUG> variable can be set to C<1> or C<2> to increase the verbosity of errors. Error objects stringify to a human readable error message when used in text context (for example: C). They also support the C method to support conversion to JSON when L is enabled. =head1 ERROR CLASSES The following error classes are defined: =over =item * C A bad parameter has been passed to a method. =item * C There was some generic error performing your request in Elasticsearch. This error is triggered by HTTP status codes C<400> and C<500>. 
This class has the following sub-classes: =over =item * C Invalid (or no) username/password provided as C for a password protected service. These errors are triggered by the C<401> HTTP status code. =item * C A resource that you requested was not found. These errors are triggered by the C<404> HTTP status code. =item * C Your request could not be performed because of some conflict. For instance, if you try to delete a document with a particular version number, and the document has already changed, it will throw a C error. If it can, it will include the C in the error vars. This error is triggered by the C<409> HTTP status code. =item * C The request body was longer than the L. =item * C The request took longer than the specified C. Currently only applies to the L request. =back =item * C The request timed out. =item * C There was an error connecting to a node in the cluster. This error indicates node failure and will be retried on another node. This error has the following sub-classes: =over =item * C The current node is unable to handle your request at the moment. Your request will be retried on another node. This error is triggered by the C<503> HTTP status code. =item * C A proxy between the client and Elasticsearch is unable to connect to Elasticsearch. This error is triggered by the C<502> HTTP status code. =item * C A proxy between the client and Elasticsearch is unable to connect to Elasticsearch within its own timeout. This error is triggered by the C<504> HTTP status code. =item * C There was a problem validating the SSL certificate. Not all backends support this error type. =back =item * C Either the cluster was unable to process the request because it is currently blocking, eg there are not enough master nodes to form a cluster, or because the authenticated user is trying to perform an unauthorized action. This error is triggered by the C<403> HTTP status code. =item * C You have attempted to perform an illegal operation. For instance, you attempted to use a Scroll helper in a different process after forking. =item * C There was an error serializing a variable or deserializing a string. =item * C An internal error occurred - please report this as a bug in this module. =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 30_bulk_add_action.t100644000765000024 2140413001720020 23346 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '5_0'; my $es = do "es_sync.pl" or die( $@ || $! 
); my $b = $es->bulk_helper; $b->_serializer->_set_canonical; ## EMPTY ok $b->add_action(), 'Empty add action'; ## INDEX ACTIONS ## ok $b->add_action( index => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, index => { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Add index actions'; cmp_deeply $b->_buffer, [ q({"index":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"index":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Index actions in buffer"; is $b->_buffer_size, 336, "Index actions buffer size"; is $b->_buffer_count, 2, "Index actions buffer count"; $b->clear_buffer; ## CREATE ACTIONS ## ok $b->add_action( create => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, create => { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Add create actions'; cmp_deeply $b->_buffer, [ q({"create":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"create":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Create actions in buffer"; is $b->_buffer_size, 338, "Create actions buffer size"; is $b->_buffer_count, 2, "Create actions buffer count"; $b->clear_buffer; ## DELETE ACTIONS ## ok $b->add_action( delete => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, version => 1, version_type => 'external', }, delete => { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _version => 1, _version_type => 'external', } ), 'Add delete actions'; cmp_deeply $b->_buffer, [ q({"delete":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_type":"bar","_version":1,"_version_type":"external"}}), q({"delete":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_type":"bar","_version":1,"_version_type":"external"}}), ], "Delete actions in buffer"; is $b->_buffer_size, 230, "Delete actions buffer size"; is $b->_buffer_count, 2, "Delete actions buffer count"; $b->clear_buffer; ## UPDATE ACTIONS ## ok $b->add_action( update => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, update => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => 
{ foo => 'bar' }, doc_as_upsert => 1, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, detect_noop => 1, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, detect_noop => 1, }, ), 'Add update actions'; cmp_deeply $b->_buffer, [ q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"detect_noop":1,"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"detect_noop":1,"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), ], "Update actions in buffer"; is $b->_buffer_size, 1370, "Update actions buffer size"; is $b->_buffer_count, 6, "Update actions buffer count"; $b->clear_buffer; ## ERRORS ## throws_ok { $b->add_action( 'foo' => {} ) } qr/Unrecognised action/, 'Bad action'; throws_ok { $b->add_action( 'index', 'bar' ) } qr/Missing /, 'Missing params'; throws_ok { $b->add_action( index => { type => 't' } ) } qr/Missing .*/, 'Missing index'; throws_ok { $b->add_action( index => { index => 'i' } ) } qr/Missing .*/, 'Missing type'; throws_ok { $b->add_action( index => { index => 'i', type => 't' } ) } qr/Missing /, 'Missing source'; throws_ok { $b->add_action( index => { index => 'i', type => 't', _source => {}, foo => 1 } ); } qr/Unknown params/, 'Unknown params'; done_testing; 34_bulk_cxn_errors.t100644000765000024 167013001720020 23434 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/Client_5_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; use Log::Any::Adapter; $ENV{ES_VERSION} = '5_0'; $ENV{ES} = '10.255.255.1:9200'; $ENV{ES_SKIP_PING} = 1; $ENV{ES_CXN_POOL} = 'Static'; $ENV{ES_TIMEOUT} = 1; my $es = do "es_sync.pl" or die( $@ || $! 
); SKIP: { skip "IO::Socket::IP doesn't respect timeout: https://rt.cpan.org/Ticket/Display.html?id=103878", 3 if $es->transport->cxn_pool->cxn_factory->cxn_class eq 'Search::Elasticsearch::Cxn::HTTPTiny' && $^V =~ /^v5.2\d/; # Check that the buffer is not cleared on a NoNodes exception my $b = $es->bulk_helper( index => 'foo', type => 'bar' ); $b->create_docs( { foo => 'bar' } ); is $b->_buffer_count, 1, "Buffer count pre-flush"; throws_ok { $b->flush } 'Search::Elasticsearch::Error::NoNodes'; is $b->_buffer_count, 1, "Buffer count post-flush"; } done_testing; 31_sniff_new_nodes.t100644000765000024 160713001720020 23542 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Sniff new nodes my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { node => 1, sniff => [], code => 509, error => 'Cxn' }, { node => 2, sniff => [ 'two', 'three' ] }, { node => 3, code => 200, content => 1 }, { node => 4, code => 200, content => 1 }, # force sniff { node => 3, sniff => [ 'one', 'two', 'three' ] }, { node => 5, code => 200, content => 1 }, { node => 6, code => 200, content => 1 }, { node => 7, code => 200, content => 1 }, { node => 5, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->cxn_pool->schedule_check && $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request, 'Sniff new nodes'; done_testing; Cxn000755000765000024 013001720020 22525 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/ElasticsearchLWP.pm100644000765000024 1706013001720020 23711 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Cxnpackage Search::Elasticsearch::Cxn::LWP; $Search::Elasticsearch::Cxn::LWP::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Role::Cxn', 'Search::Elasticsearch::Role::Is_Sync'; use LWP::UserAgent(); use HTTP::Headers(); use HTTP::Request(); my $Cxn_Error = qr/ Can't.connect | Server.closed.connection | Connection.refused /x; use namespace::clean; #=================================== sub perform_request { #=================================== my ( $self, $params ) = @_; my $uri = $self->build_uri($params); my $method = $params->{method}; my %headers; if ( $params->{data} ) { $headers{'Content-Type'} = $params->{mime_type}; $headers{'Content-Encoding'} = $params->{encoding} if $params->{encoding}; } my $request = HTTP::Request->new( $method => $uri, [ %headers, %{ $self->default_headers }, ], $params->{data} ); my $ua = $self->handle; my $timeout = $params->{timeout} || $self->request_timeout; if ( $timeout ne $ua->timeout ) { $ua->conn_cache->drop; $ua->timeout($timeout); } my $response = $ua->request($request); return $self->process_response( $params, # request $response->code, # code $response->message, # msg $response->content, # body $response->headers # headers ); } #=================================== sub error_from_text { #=================================== local $_ = $_[2]; return /read timeout/ ? 'Timeout' : /write failed: Connection reset by peer/ ? 'ContentLength' : /$Cxn_Error/ ? 'Cxn' : 'Request'; } #=================================== sub _build_handle { #=================================== my $self = shift; my %args = ( keep_alive => 1, parse_head => 0 ); if ( $self->is_https ) { $args{ssl_opts} = $self->has_ssl_options ? 
$self->ssl_options : { verify_hostname => 0 }; } return LWP::UserAgent->new( %args, %{ $self->handle_args } ); } 1; # ABSTRACT: A Cxn implementation which uses LWP __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Cxn::LWP - A Cxn implementation which uses LWP =head1 VERSION version 5.01 =head1 DESCRIPTION Provides an HTTP Cxn class and based on L. The LWP backend uses pure Perl and persistent connections. This class does L, whose documentation provides more information, and L. =head1 CONFIGURATION =head2 Inherited configuration From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 SSL/TLS L uses L to support HTTPS. By default, no validation of the remote host is performed. This behaviour can be changed by passing the C parameter with any options accepted by L. For instance, to check that the remote host has a trusted certificate, and to avoid man-in-the-middle attacks, you could do the following: use Search::Elasticsearch; my $es = Search::Elasticsearch->new( cxn => 'LWP', nodes => [ "https://node1.mydomain.com:9200", "https://node2.mydomain.com:9200", ], ssl_options => { verify_hostname => 1, SSL_ca_file => '/path/to/cacert.pem' } ); If the remote server cannot be verified, an L will be thrown - LWP does not allow us to detect that the connection error was due to invalid SSL. If you want your client to present its own certificate to the remote server, then use: use Search::Elasticsearch; my $es = Search::Elasticsearch->new( cxn => 'LWP', nodes => [ "https://node1.mydomain.com:9200", "https://node2.mydomain.com:9200", ], ssl_options => { verify_hostname => 1, SSL_ca_file => '/path/to/cacert.pem', SSL_use_cert => 1, SSL_cert_file => '/path/to/client.pem', SSL_key_file => '/path/to/client.pem', } ); =head1 METHODS =head2 C ($status,$body) = $self->perform_request({ # required method => 'GET|HEAD|POST|PUT|DELETE', path => '/path/of/request', qs => \%query_string_params, # optional data => $body_as_string, mime_type => 'application/json', timeout => $timeout }); Sends the request to the associated Elasticsearch node and returns a C<$status> code and the decoded response C<$body>, or throws an error if the request failed. =head2 Inherited methods From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 SEE ALSO =over =item * L =item * L =item * L =item * L =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 13_preload_cpanel.t100644000765000024 51613001720020 23707 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; BEGIN { eval { require Cpanel::JSON::XS; 1 } or do { plan skip_all => 'Cpanel::JSON::XS not installed'; done_testing; exit; } } use Search::Elasticsearch; my $s = Search::Elasticsearch->new()->transport->serializer->JSON; isa_ok $s, "Cpanel::JSON::XS", 'Cpanel'; done_testing; 21_xs_encode_bulk.t100644000765000024 32613001720020 23721 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require JSON::XS; 1 } or do { plan skip_all => 'JSON::XS not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::XS'; do 't/20_Serializer/encode_decode.pl' or die( $@ || $! 
); 41_pp_encode_bulk.t100644000765000024 32613001720020 23710 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require JSON::PP; 1 } or do { plan skip_all => 'JSON::PP not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::PP'; do 't/20_Serializer/encode_decode.pl' or die( $@ || $! ); 30_perform_request.t100644000765000024 322013001720020 24060 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/40_Transportuse Test::More; use Test::Deep; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); our $t; # good request $t = mock_static_client( { nodes => ['one'] }, # { node => 1, ping => 1 }, # { node => 1, code => '200', content => 1 } ); ok $t->perform_request, 'Simple request'; # Request error $t = mock_static_client( { nodes => ['one'] }, { node => 1, ping => 1 }, { node => 1, code => '404', error => 'NotFound' } ); throws_ok { $t->perform_request } qr/Missing/, 'Request error'; # Timeout error $t = mock_static_client( { nodes => ['one'] }, { node => 1, ping => 1 }, { node => 1, code => '509', error => 'Timeout' }, { node => 1, ping => 1 }, { node => 1, code => '200', content => 1 } ); throws_ok { $t->perform_request } qr/Timeout/, 'Timeout error'; ok $t->perform_request, 'Timeout resolved'; # Cxn error $t = mock_static_client( { nodes => ['one'] }, { node => 1, ping => 1 }, { node => 1, code => '509', error => 'Cxn' }, { node => 1, ping => 1 }, { node => 1, code => '200', content => 1 } ); ok $t->perform_request, 'Retried connection error'; # NoNodes from failure $t = mock_static_client( { nodes => ['one'] }, { node => 1, ping => 1 }, { node => 1, code => '509', error => 'Cxn' }, { node => 1, ping => 0 }, ); throws_ok { $t->perform_request } qr/NoNodes/, 'Cxn then bad ping'; # NoNodes reachable $t = mock_static_client( { nodes => ['one'] }, # { node => 1, ping => 0 }, ); throws_ok { $t->perform_request } qr/NoNodes/, 'Initial bad ping'; done_testing; 32_sniff_node_fails.t100644000765000024 152413001720020 23663 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Sniff node failures my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { node => 1, sniff => [ 'one', 'two' ] }, { node => 2, code => 200, content => 1 }, { node => 3, code => 509, error => 'Cxn' }, { node => 2, sniff => ['one'] }, { node => 4, code => 200, content => 1 }, { node => 4, code => 200, content => 1 }, # force sniff { node => 4, sniff => [ 'one', 'two' ] }, { node => 5, code => 200, content => 1 }, { node => 6, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && $t->cxn_pool->schedule_check && $t->perform_request && $t->perform_request, 'Sniff after failure'; done_testing; Hijk.pm100644000765000024 1667113001720020 24143 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Cxnpackage Search::Elasticsearch::Cxn::Hijk; $Search::Elasticsearch::Cxn::Hijk::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Role::Cxn', 'Search::Elasticsearch::Role::Is_Sync'; use Hijk; use Try::Tiny; use namespace::clean; has 'connect_timeout' => ( is => 'ro', default => 2 ); has '_socket_cache' => ( is => 'rw', default => sub { {} } ); my $Cxn_Error = qr/ Connection.(?:timed.out|re(?:set|fused)) | connect:.timeout | Host.is.down | No.route.to.host | temporarily.unavailable | Socket.is.not.connected | 
Broken.pipe | Failed.to | select\(2\) | connect\(2\) | send.error | zombie.error /x; #=================================== sub perform_request { #=================================== my ( $self, $params ) = @_; my $uri = $self->build_uri($params); my $method = $params->{method}; my $cache = $self->_socket_cache; my %args = ( host => $uri->host, port => $uri->port, socket_cache => $self->_socket_cache, connect_timeout => $self->request_timeout, read_timeout => $params->{timeout} || $self->request_timeout, method => $method, path => $uri->path, query_string => $uri->query, %{ $self->handle_args } ); if ( defined $params->{data} ) { $args{body} = $params->{data}; $args{head} = [ 'Content-Type', $params->{mime_type} ]; push @{ $args{headers} }, ( 'Content-Encoding', $params->{encoding} ) if $params->{encoding}; } my $response; try { local $SIG{PIPE} = sub { die $! }; $response = Hijk::request( \%args ); } catch { $response = { status => 500, error => $_ || 'Unknown error' }; }; my $head = $response->{head} || {}; my %head = map { lc($_) => $head->{$_} } keys %$head; return $self->process_response( $params, # request $response->{status} || 500, # code $response->{error}, # msg $response->{body} || $response->{error_message} || $response->{errno_string}, # body \%head # headers ); } #=================================== sub clear_handle { #=================================== my $self = shift; $self->_socket_cache( {} ); } #=================================== sub error_from_text { #=================================== local $_ = $_[2]; no warnings 'numeric'; my $type = 0 + $_ & Hijk::Error::TIMEOUT ? 'Timeout' : 0 + $_ & Hijk::Error::CANNOT_RESOLVE ? 'Cxn' : 0 + $_ & Hijk::Error::REQUEST_ERROR ? 'Cxn' : 0 + $_ & Hijk::Error::RESPONSE_ERROR ? 'Cxn' : /Connection reset by peer/ ? 'ContentLength' : m/$Cxn_Error/ ? 'Cxn' : 'Request'; if ( $type eq 'Cxn' || $type eq 'Timeout' ) { %{ $_[0]->_socket_cache } = (); } return $type; } 1; # ABSTRACT: A Cxn implementation which uses Hijk __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Cxn::Hijk - A Cxn implementation which uses Hijk =head1 VERSION version 5.01 =head1 DESCRIPTION Provides an HTTP Cxn class based on L. The Hijk backend is pure Perl and is very fast, faster even that L, but doesn't provide support for https or proxies. This class does L, whose documentation provides more information, and L. =head1 CONFIGURATION =head2 C Unlike most HTTP backends, L accepts a separate C parameter, which defaults to C<2> seconds but can be reduced in an environment with low network latency. =head2 Inherited configuration From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 SSL/TLS L does not support HTTPS. =head1 METHODS =head2 C ($status,$body) = $self->perform_request({ # required method => 'GET|HEAD|POST|PUT|DELETE', path => '/path/of/request', qs => \%query_string_params, # optional data => $body_as_string, mime_type => 'application/json', timeout => $timeout }); Sends the request to the associated Elasticsearch node and returns a C<$status> code and the decoded response C<$body>, or throws an error if the request failed. 
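
In practice this method is called for you by the transport layer; a caller normally just selects the backend when constructing the client and traps any thrown error object. A minimal sketch (assuming that passing cxn => 'Hijk' selects this class, mirroring the cxn => 'LWP' example in the LWP backend documentation):

    # Sketch only: 'Hijk' as a cxn name is an assumption based on the
    # naming pattern used for the other Cxn backends in this distribution.
    use Search::Elasticsearch;
    use Try::Tiny;

    my $es = Search::Elasticsearch->new(
        cxn   => 'Hijk',
        nodes => [ 'localhost:9200' ],
    );

    try {
        my $info = $es->info;
    }
    catch {
        # Errors stringify to a readable message and can be classified
        # with $_->is('Timeout'), $_->is('Cxn'), etc.
        warn "Request failed: $_";
    };
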
=head2 Inherited methods From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 SEE ALSO =over =item * L =item * L =item * L =item * L =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut Role000755000765000024 013001720020 22676 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/ElasticsearchAPI.pm100644000765000024 315213001720020 24006 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::API; $Search::Elasticsearch::Role::API::VERSION = '5.01'; use Moo::Role; requires 'api_version'; requires 'api'; use Search::Elasticsearch::Util qw(throw); use namespace::clean; our %Handler = ( string => sub {"$_[0]"}, list => sub { ref $_[0] eq 'ARRAY' ? join( ',', @{ shift() } ) : shift(); }, boolean => sub { $_[0] && $_[0] ne 'false' ? 'true' : 'false'; }, enum => sub { ref $_[0] eq 'ARRAY' ? join( ',', @{ shift() } ) : shift(); }, number => sub { 0 + $_[0] }, time => sub {"$_[0]"} ); #=================================== sub _qs_init { #=================================== my $class = shift; my $API = shift; for my $spec ( keys %$API ) { my $qs = $API->{$spec}{qs}; for my $param ( keys %$qs ) { my $handler = $Handler{ $qs->{$param} } or throw( "Internal", "Unknown type <" . $qs->{$param} . "> for param <$param> in API <$spec>" ); $qs->{$param} = $handler; } } } 1; # ABSTRACT: Provides common functionality for API implementations __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::API - Provides common functionality for API implementations =head1 VERSION version 5.01 =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut Cxn.pm100644000765000024 5525213001720020 24155 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::Cxn; $Search::Elasticsearch::Role::Cxn::VERSION = '5.01'; use Moo::Role; use Search::Elasticsearch::Util qw(parse_params throw to_list); use List::Util qw(min); use Try::Tiny; use URI(); use IO::Compress::Deflate(); use IO::Uncompress::Inflate(); use IO::Compress::Gzip(); use IO::Uncompress::Gunzip qw(gunzip $GunzipError); use Search::Elasticsearch::Util qw(to_list); use namespace::clean; requires qw(perform_request error_from_text handle); has 'host' => ( is => 'ro', required => 1 ); has 'port' => ( is => 'ro', required => 1 ); has 'uri' => ( is => 'ro', required => 1 ); has 'request_timeout' => ( is => 'ro', default => 30 ); has 'ping_timeout' => ( is => 'ro', default => 2 ); has 'sniff_timeout' => ( is => 'ro', default => 1 ); has 'sniff_request_timeout' => ( is => 'ro', default => 2 ); has 'next_ping' => ( is => 'rw', default => 0 ); has 'ping_failures' => ( is => 'rw', default => 0 ); has 'dead_timeout' => ( is => 'ro', default => 60 ); has 'max_dead_timeout' => ( is => 'ro', default => 3600 ); has 'serializer' => ( is => 'ro', required => 1 ); has 'logger' => ( is => 'ro', required => 1 ); has 'handle_args' => ( is => 'ro', default => sub { {} } ); has 'default_qs_params' => ( is => 'ro', default => sub { {} } ); has 'scheme' => ( is => 'ro' ); has 'is_https' => ( is => 'ro' ); has 'userinfo' => ( is => 'ro' ); has 'max_content_length' => ( is => 'ro' ); has 'default_headers' => ( is => 'ro' ); has 'deflate' => ( is => 'ro' ); has 'gzip' => ( is => 'ro' ); has 'ssl_options' => ( is => 'ro', predicate => 'has_ssl_options' ); has 'handle' => ( is => 'lazy', clearer => 1 ); has '_pid' => ( is => 'rw', default => $$ ); my %Code_To_Error = ( 400 => 'Request', 401 => 'Unauthorized', 403 => 'Forbidden', 404 => 'Missing', 408 => 'RequestTimeout', 409 => 'Conflict', 502 => 'BadGateway', 503 => 'Unavailable', 504 => 'GatewayTimeout' ); #=================================== sub stringify { shift->uri . '' } #=================================== #=================================== sub BUILDARGS { #=================================== my ( $class, $params ) = parse_params(@_); my $node = $params->{node} || { host => 'localhost', port => '9200' }; unless ( ref $node eq 'HASH' ) { unless ( $node =~ m{^http(s)?://} ) { $node = ( $params->{use_https} ? 'https://' : 'http://' ) . $node; } if ( $params->{port} && $node !~ m{//[^/]+:\d+} ) { $node =~ s{(//[^/]+)}{$1:$params->{port}}; } my $uri = URI->new($node); $node = { scheme => $uri->scheme, host => $uri->host, port => $uri->port, path => $uri->path, userinfo => $uri->userinfo }; } my $host = $node->{host} || 'localhost'; my $userinfo = $node->{userinfo} || $params->{userinfo} || ''; my $scheme = $node->{scheme} || ( $params->{use_https} ? 'https' : 'http' ); my $port = $node->{port} || $params->{port} || ( $scheme eq 'http' ? 
80 : 443 ); my $path = $node->{path} || $params->{path_prefix} || ''; $path =~ s{^/?}{/}g; $path =~ s{/+$}{}; my %default_headers = %{ $params->{default_headers} || {} }; if ($userinfo) { require MIME::Base64; my $auth = MIME::Base64::encode_base64($userinfo); chomp $auth; $default_headers{Authorization} = "Basic $auth"; } if ( $params->{gzip} ) { $default_headers{'Accept-Encoding'} = "gzip"; } elsif ( $params->{deflate} ) { $default_headers{'Accept-Encoding'} = "deflate"; } $params->{scheme} = $scheme; $params->{is_https} = $scheme eq 'https'; $params->{host} = $host; $params->{port} = $port; $params->{path} = $path; $params->{userinfo} = $userinfo; $params->{uri} = URI->new("$scheme://$host:$port$path"); $params->{default_headers} = \%default_headers; return $params; } #=================================== before 'handle' => sub { #=================================== my $self = shift; if ( $$ != $self->_pid ) { $self->clear_handle; $self->_pid($$); } }; #=================================== sub is_live { !shift->next_ping } sub is_dead { !!shift->next_ping } #=================================== #=================================== sub mark_live { #=================================== my $self = shift; $self->ping_failures(0); $self->next_ping(0); } #=================================== sub mark_dead { #=================================== my $self = shift; my $fails = $self->ping_failures; $self->ping_failures( $fails + 1 ); my $timeout = min( $self->dead_timeout * 2**$fails, $self->max_dead_timeout ); my $next = $self->next_ping( time() + $timeout ); $self->logger->infof( 'Marking [%s] as dead. Next ping at: %s', $self->stringify, scalar localtime($next) ); } #=================================== sub force_ping { #=================================== my $self = shift; $self->ping_failures(0); $self->next_ping(-1); } #=================================== sub pings_ok { #=================================== my $self = shift; $self->logger->infof( 'Pinging [%s]', $self->stringify ); return try { $self->perform_request( { method => 'HEAD', path => '/', timeout => $self->ping_timeout, } ); $self->logger->infof( 'Marking [%s] as live', $self->stringify ); $self->mark_live; 1; } catch { $self->logger->debug("$_"); $self->mark_dead; 0; }; } #=================================== sub sniff { #=================================== my $self = shift; $self->logger->infof( 'Sniffing [%s]', $self->stringify ); return try { $self->perform_request( { method => 'GET', path => '/_nodes/http', qs => { timeout => $self->sniff_timeout . 's' }, timeout => $self->sniff_request_timeout, } )->{nodes}; } catch { $self->logger->debug($_); return; }; } #=================================== sub build_uri { #=================================== my ( $self, $params ) = @_; my $uri = $self->uri->clone; $uri->path( $uri->path . 
$params->{path} ); my %qs = ( %{ $self->default_qs_params }, %{ $params->{qs} || {} } ); $uri->query_form( \%qs ); return $uri; } #=================================== before 'perform_request' => sub { #=================================== my ( $self, $params ) = @_; return unless defined $params->{data}; $self->_compress_body($params); my $max = $self->max_content_length or return; return if length( $params->{data} ) < $max; $self->logger->throw_error( 'ContentLength', "Body is longer than max_content_length ($max)", ); }; #=================================== sub _compress_body { #=================================== my ( $self, $params ) = @_; my $output; if ( $self->gzip ) { IO::Compress::Gzip::gzip( \( $params->{data} ), \$output ) or throw( 'Request', "Couldn't gzip request: $IO::Compress::Gzip::GzipError" ); $params->{data} = $output; $params->{encoding} = 'gzip'; } elsif ( $self->deflate ) { IO::Compress::Deflate::deflate( \( $params->{data} ), \$output ) or throw( 'Request', "Couldn't deflate request: $IO::Compress::Deflate::DeflateError" ); $params->{data} = $output; $params->{encoding} = 'deflate'; } } #=================================== sub _decompress_body { #=================================== my ( $self, $body_ref, $headers ) = @_; if ( my $encoding = $headers->{'content-encoding'} ) { my $output; if ( $encoding eq 'gzip' ) { IO::Uncompress::Gunzip::gunzip( $body_ref, \$output ) or throw( 'Request', "Couldn't gunzip response: $IO::Uncompress::Gunzip::GunzipError" ); } elsif ( $encoding eq 'deflate' ) { IO::Uncompress::Inflate::inflate( $body_ref, \$output, Transparent => 0 ) or throw( 'Request', "Couldn't inflate response: $IO::Uncompress::Inflate::InflateError" ); } else { throw( 'Request', "Unknown content-encoding: $encoding" ); } ${$body_ref} = $output; } } #=================================== sub process_response { #=================================== my ( $self, $params, $code, $msg, $body, $headers ) = @_; $self->_decompress_body( \$body, $headers ); my ($mime_type) = split /\s*;\s*/, ( $headers->{'content-type'} || '' ); my $is_encoded = $mime_type && $mime_type ne 'text/plain'; # Deprecation warnings if (my $warnings = $headers->{warning}) { $warnings = join ("; ",@$warnings) if ref $warnings eq 'ARRAY'; $self->logger->deprecation($warnings,$params); } # Request is successful if ( $code >= 200 and $code <= 209 ) { if ( defined $body and length $body ) { $body = $self->serializer->decode($body) if $is_encoded; return $code, $body; } return ( $code, 1 ) if $params->{method} eq 'HEAD'; return ( $code, '' ); } # Check if the error should be ignored my @ignore = to_list( $params->{ignore} ); push @ignore, 404 if $params->{method} eq 'HEAD'; return ($code) if grep { $_ eq $code } @ignore; # Determine error type my $error_type = $Code_To_Error{$code}; unless ($error_type) { if ( defined $body and length $body ) { $msg = $body; $body = undef; } $error_type = $self->error_from_text( $code, $msg ); } delete $params->{data} if $params->{body}; my %error_args = ( status_code => $code, request => $params ); # Extract error message from the body, if present if ( $body = $self->serializer->decode($body) ) { $error_args{body} = $body; $msg = $self->_munge_elasticsearch_exception($body) || $msg; $error_args{current_version} = $1 if $error_type eq 'Conflict' and $msg =~ /: version conflict, current (?:version )?\[(\d+)\]/; } $msg ||= $error_type; chomp $msg; throw( $error_type, "[" . $self->stringify . 
"]-[$code] $msg", \%error_args ); } #=================================== sub _munge_elasticsearch_exception { #=================================== my ( $self, $body ) = @_; return $body unless ref $body eq 'HASH'; my $error = $body->{error} || return; return $error unless ref $error eq 'HASH'; my $root_causes = $error->{root_cause} || []; unless (@$root_causes) { my $msg = "[" . $error->{type} . "] " if $error->{type}; $msg .= $error->{reason} if $error->{reason}; return $msg; } my $json = $self->serializer; my @msgs; for (@$root_causes) { my %cause = (%$_); my $msg = "[" . ( delete $cause{type} ) . "] " . ( delete $cause{reason} ); if ( keys %cause ) { $msg .= ", with: " . $json->encode( \%cause ); } push @msgs, $msg; } return ( join ", ", @msgs ); } 1; # ABSTRACT: Provides common functionality to HTTP Cxn implementations __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::Cxn - Provides common functionality to HTTP Cxn implementations =head1 VERSION version 5.01 =head1 DESCRIPTION L provides common functionality to Cxn implementations. Cxn instances are created by a L implementation, using the L class. =head1 CONFIGURATION The configuration options are as follows: =head2 C A single C is passed to C by the L class. It can either be a URI or a hash containing each part. For instance: node => 'localhost'; # equiv of 'http://localhost:80' node => 'localhost:9200'; # equiv of 'http://localhost:9200' node => 'http://localhost:9200'; node => 'https://localhost'; # equiv of 'https://localhost:443' node => 'localhost/path'; # equiv of 'http://localhost:80/path' node => 'http://user:pass@localhost'; # equiv of 'http://localhost:80' # with userinfo => 'user:pass' Alternatively, a C can be specified as a hash: { scheme => 'http', host => 'search.domain.com', port => '9200', path => '/path', userinfo => 'user:pass' } Similarly, default values can be specified with C, C, C and C: $e = Search::Elasticsearch->new( port => 9201, path_prefix => '/path', userinfo => 'user:pass', use_https => 1, nodes => [ 'search1', 'search2' ] ) =head2 C By default, all backends that support HTTPS disable verification of the host they are connecting to. Use C to configure the type of verification that you would like the client to perform, or to configure the client to present its own certificate. The values accepted by C depend on the C class. See the documentation for the C class that you are using. =head2 C By default, Elasticsearch nodes accept a maximum post body of 100MB or C<104_857_600> bytes. This client enforces that limit. The limit can be customised with the C parameter (specified in bytes). If you're using the L module, then the C will be automatically retrieved from the live cluster, unless you specify a custom C: # max_content_length retrieved from cluster $e = Search::Elasticsearch->new( cxn_pool => 'Sniff' ); # max_content_length fixed at 10,000 bytes $e = Search::Elasticsearch->new( cxn_pool => 'Sniff', max_content_length => 10_000 ); =head2 C Enable Gzip compression of requests to and responses from Elasticsearch as follows: $e = Search::Elasticsearch->new( gzip => 1 ); =head2 C Enable Inflate/Deflate compression of requests to and responses from Elasticsearch as follows: $e = Search::Elasticsearch->new( deflate => 1 ); B The L, L, L, and L parameters default to values that allow this module to function with low powered hardware and slow networks. When you use Elasticsearch in production, you will probably want to reduce these timeout parameters to values that suit your environment. 
The configuration parameters are as follows: =head2 C $e = Search::Elasticsearch->new( request_timeout => 30 ); How long a normal request (ie not a ping or sniff request) should wait before throwing a C error. Defaults to C<30> seconds. B In production, no CRUD or search request should take 30 seconds to run, although admin tasks like C, C, or snapshot C may take much longer. A more reasonable value for production would be C<10> seconds or lower. =head2 C $e = Search::Elasticsearch->new( ping_timeout => 2 ); How long a ping request should wait before throwing a C error. Defaults to C<2> seconds. The L module pings nodes on first use, after any failure, and periodically to ensure that nodes are healthy. The C should be long enough to allow nodes respond in time, but not so long that sick nodes cause delays. A reasonable value for use in production on reasonable hardware would be C<0.3>-C<1> seconds. =head2 C $e = Search::Elasticsearch->new( dead_timeout => 60 ); How long a Cxn should be considered to be I (not used to serve requests), before it is retried. The default is C<60> seconds. This value is increased by powers of 2 for each time a request fails. In other words, the delay after each failure is as follows: Failure Delay 1 60 * 1 = 60 seconds 2 60 * 2 = 120 seconds 3 60 * 4 = 240 seconds 4 60 * 8 = 480 seconds 5 60 * 16 = 960 seconds =head2 C $e = Search::Elasticsearch->new( max_dead_timeout => 3600 ); The maximum delay that should be applied to a failed node. If the L calculation results in a delay greater than C (default C<3,600> seconds) then the C is used instead. In other words, dead nodes will be retried at least once every hour by default. =head2 C $e = Search::Elasticsearch->new( sniff_request_timeout => 2 ); How long a sniff request should wait before throwing a C error. Defaults to C<2> seconds. A reasonable value for production would be C<0.5>-C<2> seconds. =head2 C $e = Search::Elasticsearch->new( sniff_timeout => 1 ); How long the node being sniffed should wait for responses from other nodes before responding to the client. Defaults to C<1> second. A reasonable value in production would be C<0.3>-C<1> seconds. B The C is distinct from the L. For example, let's say you have a cluster with 5 nodes, 2 of which are unhealthy (taking a long time to respond): =over =item * If you sniff an unhealthy node, the request will throw a C error after C seconds. =item * If you sniff a healthy node, it will gather responses from the other nodes, and give up after C seconds, returning just the information it has managed to gather from the healthy nodes. =back B The C must be longer than the C to ensure that you get information about healthy nodes from the cluster. =head2 C Any default arguments which should be passed when creating a new instance of the class which handles the network transport, eg L. =head2 C $e = Search::Elasticsearch->new( default_qs_params => { session_key => 'my_session_key' } ); Any values passed to C will be added to the query string of every request. Also see L. =head1 METHODS None of the methods listed below are useful to the user. They are documented for those who are writing alternative implementations only. =head2 C $scheme = $cxn->scheme; Returns the scheme of the connection, ie C or C. =head2 C $bool = $cxn->is_https; Returns C or C depending on whether the C is C or not. =head2 C $userinfo = $cxn->userinfo Returns the username and password of the cxn, if any, eg C<"user:pass">. If C is provided, then a Basic Authorization header is added to each request. 
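For instance (a sketch; the credentials and host are placeholders), specifying userinfo as part of a node URL means that every request sent to that node carries the corresponding Basic Authorization header:

    $e = Search::Elasticsearch->new(
        nodes => 'https://user:pass@search.domain.com:9200'
    );

    # $cxn->userinfo returns "user:pass" for this node
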
=head2 C $headers = $cxn->default_headers The default headers that are passed with each request. This includes the C header if C is true, and the C header if C has a value. Also see L. =head2 C $int = $cxn->max_content_length; Returns the maximum length in bytes that the HTTP body can have. =head2 C $uri = $cxn->build_uri({ path => '/_search', qs => { size => 10 }}); Returns the HTTP URI to use for a particular request, combining the passed in C parameter with any defined C, and adding the query-string parameters. =head1 METHODS None of the methods listed below are useful to the user. They are documented for those who are writing alternative implementations only. =head2 C $host = $cxn->host; The value of the C parameter, eg C. =head2 C $port = $cxn->port; The value of the C parameter, eg C<9200>. =head2 C $uri = $cxn->uri; A L object representing the node, eg C. =head2 C $bool = $cxn->is_dead Is the current node marked as I. =head2 C $bool = $cxn->is_live Is the current node marked as I. =head2 C $time = $cxn->next_ping($time) Get/set the time for the next scheduled ping. If zero, no ping is scheduled and the cxn is considered to be alive. If -1, a ping is scheduled before the next use. =head2 C $num = $cxn->ping_failures($num) The number of times that a cxn has been marked as dead. =head2 C $cxn->mark_dead Mark the cxn as I, set L and increment L. =head2 C Mark the cxn as I, set L and L to zero. =head2 C Set L to -1 (ie before next use) and L to zero. =head2 C $bool = $cxn->pings_ok Try to ping the node and call L or L depending on the success or failure of the ping. =head2 C $response = $cxn->sniff; Send a sniff request to the node and return the response. =head2 C ($code,$result) = $cxn->process_response($params, $code, $msg, $body ); Processes the response received from an Elasticsearch node and either returns the HTTP status code and the response body (deserialized from JSON) or throws an error of the appropriate type. The C<$params> are the original params passed to L, the C<$code> is the HTTP status code, the C<$msg> is the error message returned by the backend library and the C<$body> is the HTTP response body returned by Elasticsearch. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 80_deprecation_methods.t100644000765000024 71513001720020 24073 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/30_Loggeruse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; do 'LogCallback.pl' or die( $@ || $! ); isa_ok my $l = Search::Elasticsearch->new->logger, 'Search::Elasticsearch::Logger::LogAny', 'Logger'; ( $method, $format ) = (); ok $l->deprecation( "foo", { foo => 1 } ), "deprecation"; is $method, "warning", "deprecation - method"; is $format, "[DEPRECATION] foo. 
In request: {foo => 1}", "deprecation - format"; done_testing; 12_static_node_fails.t100644000765000024 206513001720020 24044 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## One node fails with a Cxn error, then rejoins my $t = mock_static_client( { nodes => [ 'one', 'two' ] }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, code => 509, error => 'Cxn' }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, # force ping on missing node { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, ); ok $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request, 'One node throws Cxn'; # force ping on missing node $t->cxn_pool->cxns->[0]->next_ping(-1); ok $t->perform_request && $t->perform_request && $t->perform_request, 'Failed node recovers'; done_testing; 39_sniff_max_content.t100644000765000024 223213001720020 24103 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Dynamic max content length my $response = < ['one'] }, { node => 1, code => 200, content => $response }, { node => 2, code => 200, content => 1 }, ); is $t->perform_request && $t->cxn_pool->next_cxn->max_content_length, 200, "Dynamic max content length"; $t = mock_sniff_client( { nodes => ['one'], max_content_length => 1000 }, { node => 1, code => 200, content => $response }, { node => 2, code => 200, content => 1 }, ); is $t->perform_request && $t->cxn_pool->next_cxn->max_content_length, 1000, "Dynamic max content length"; done_testing; 51_noping_node_fails.t100644000765000024 157313001720020 24055 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## Node fails and recover my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ] }, { node => 1, code => 200, content => 1 }, { node => 2, code => 509, error => 'Cxn' }, { node => 3, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, # force check { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && $t->perform_request && $t->cxn_pool->cxns->[1]->force_ping && $t->perform_request && $t->perform_request && $t->perform_request, 'Node fails and recovers'; done_testing; Transport.pm100644000765000024 1034513001720020 24512 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearchpackage Search::Elasticsearch::Transport; $Search::Elasticsearch::Transport::VERSION = '5.01'; use Moo; use URI(); use Time::HiRes qw(time); use Try::Tiny; use Search::Elasticsearch::Util qw(upgrade_error); use namespace::clean; with 'Search::Elasticsearch::Role::Is_Sync', 'Search::Elasticsearch::Role::Transport'; #=================================== sub perform_request { #=================================== my $self = shift; my $params = $self->tidy_request(@_); my $pool = 
$self->cxn_pool; my $logger = $self->logger; my ( $code, $response, $cxn, $error ); try { $cxn = $pool->next_cxn; my $start = time(); $logger->trace_request( $cxn, $params ); ( $code, $response ) = $cxn->perform_request($params); $pool->request_ok($cxn); $logger->trace_response( $cxn, $code, $response, time() - $start ); } catch { $error = upgrade_error( $_, { request => $params, status_code => $code, body => $response } ); }; if ($error) { if ( $pool->request_failed( $cxn, $error ) ) { $logger->debugf( "[%s] %s", $cxn->stringify, "$error" ); $logger->info('Retrying request on a new cxn'); return $self->perform_request($params); } $logger->trace_error( $cxn, $error ); $error->is('NoNodes') ? $logger->throw_critical($error) : $logger->throw_error($error); } return $response; } 1; #ABSTRACT: Provides interface between the client class and the Elasticsearch cluster __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Transport - Provides interface between the client class and the Elasticsearch cluster =head1 VERSION version 5.01 =head1 DESCRIPTION The Transport class manages the request cycle. It receives parsed requests from the (user-facing) client class, and tries to execute the request on a node in the cluster, retrying a request if necessary. This class does L and L. =head1 CONFIGURATION =head2 C $e = Search::Elasticsearch->new( send_get_body_as => 'POST' ); Certain endpoints like L default to using a C method, even when they include a request body. Some proxy servers do not support C requests with a body. To work around this, the C parameter accepts the following: =over =item * C The default. Request bodies are sent as C requests. =item * C The method is changed to C when a body is present. =item * C The body is encoded as JSON and added to the query string as the C parameter. This has the advantage of still being a C request (for those filtering on request method) but has the disadvantage of being restricted in size. The limit depends on the proxies between the client and Elasticsearch, but usually is around 4kB. =back =head1 METHODS =head2 C Raw requests can be executed using the transport class as follows: $result = $e->transport->perform_request( method => 'POST', path => '/_search', qs => { from => 0, size => 10 }, body => { query => { match => { title => "Elasticsearch clients" } } } ); Other than the C, C, C and C parameters, which should be self-explanatory, it also accepts: =over =item C The HTTP error codes which should be ignored instead of throwing an error, eg C<404 NOT FOUND>: $result = $e->transport->perform_request( method => 'GET', path => '/index/type/id' ignore => [404], ); =item C Whether the C should be serialized in the standard way (as plain JSON) or using the special I format: C<"std"> or C<"bulk">. =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 20_xs_encode_decode.t100644000765000024 32613001720020 24206 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require JSON::XS; 1 } or do { plan skip_all => 'JSON::XS not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::XS'; do 't/20_Serializer/encode_pretty.pl' or die( $@ || $! 
); 22_xs_encode_pretty.t100644000765000024 32613001720020 24314 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require JSON::XS; 1 } or do { plan skip_all => 'JSON::XS not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::XS'; do 't/20_Serializer/encode_decode.pl' or die( $@ || $! ); 40_pp_encode_decode.t100644000765000024 32613001720020 24175 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require JSON::PP; 1 } or do { plan skip_all => 'JSON::PP not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::PP'; do 't/20_Serializer/encode_pretty.pl' or die( $@ || $! ); 42_pp_encode_pretty.t100644000765000024 32613001720020 24303 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require JSON::PP; 1 } or do { plan skip_all => 'JSON::PP not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::PP'; do 't/20_Serializer/encode_decode.pl' or die( $@ || $! ); 34_sniff_node_timeout.t100644000765000024 165713001720020 24264 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Sniff after Timeout error my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { node => 1, sniff => [ 'one', 'two' ] }, { node => 2, code => 200, content => 1 }, { node => 3, code => 509, error => 'Timeout' }, # throws Timeout { node => 2, sniff => ['one'] }, { node => 4, code => 200, content => 1 }, { node => 4, code => 200, content => 1 }, # force sniff { node => 4, sniff => [ 'one', 'two' ] }, { node => 5, code => 200, content => 1 }, { node => 6, code => 200, content => 1 }, ); ok $t->perform_request() && !eval { $t->perform_request } && $@ =~ /Timeout/ && $t->perform_request && $t->perform_request && $t->cxn_pool->schedule_check && $t->perform_request && $t->perform_request, 'Sniff after timeout'; done_testing; 40_sniff_extract_host.t100644000765000024 114713001720020 24267 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Search::Elasticsearch; use lib 't/lib'; my $pool = Search::Elasticsearch->new( cxn_pool => 'Sniff' )->transport->cxn_pool; is $pool->_extract_host('127.0.0.1:9200'), '127.0.0.1:9200', "IP"; is $pool->_extract_host('myhost/127.0.0.1:9200'), '127.0.0.1:9200', "Host/IP"; is $pool->_extract_host('inet[127.0.0.1:9200]'), '127.0.0.1:9200', "inet[IP]"; is $pool->_extract_host('inet[myhost/127.0.0.1:9200]'), '127.0.0.1:9200', "inet[Host/IP]"; is $pool->_extract_host('inet[/127.0.0.1:9200]'), '127.0.0.1:9200', "inet[/IP]"; ok !$pool->_extract_host(), "Undefined"; done_testing; 10_test_server_fork.t100644000765000024 364213001720020 24356 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/95_TestServeruse strict; use warnings; use Test::More; use Test::SharedFork; use File::Temp; use POSIX ":sys_wait_h"; use Search::Elasticsearch; use Search::Elasticsearch::TestServer; my $pids = []; SKIP: { skip 'ES_HOME not set', 8 unless $ENV{ES_HOME}; my $tempdir = File::Temp->newdir( 'testserver-XXXXX', DIR => '/tmp' ); my $server = Search::Elasticsearch::TestServer->new( es_home => $ENV{ES_HOME}, conf => [ "path.data=$tempdir", "path.logs=$tempdir", ] ); my $nodes = $server->start(); ok( $nodes, "server->start returned nodes" ) or diag explain { server => $server }; ok( defined( $server->pids ), "server->pids defined" ); cmp_ok( scalar @{ $server->pids }, '>', 0, "more than 0 pids" ); 
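# The PIDs reported by the server are checked from both the parent and a
# forked child, and again after shutdown, to confirm that the Elasticsearch
# processes survive a fork but are really gone once shutdown() is called.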
$pids = \@{ $server->pids }; verify_pids_alive( $pids, 'ES pids are alive' ); { my $pid = fork; die "cannot fork" unless defined $pid; if ( $pid == 0 ) { verify_pids_alive( $pids, 'ES pids are alive in child' ); exit 0; } else { verify_pids_alive( $pids, 'ES pids are alive in parent' ); waitpid( $pid, 0 ); sleep 5; verify_pids_alive( $pids, 'ES pids are alive in parent after child dies' ); } } $server->shutdown; note 'sleep to give ES time to die'; sleep 5; verify_pids_dead( $pids, 'ES pids are dead after shutdown' ); } done_testing; #important to waitpid or kill0 will return true for zombies. sub verify_pids_alive { my ( $pids, $msg ) = @_; $msg = '' if !defined $msg; for my $pid (@$pids) { waitpid( $pid, WNOHANG ); ok( kill( 0, $pid ), "$msg: pid $pid is alive" ); } } sub verify_pids_dead { my ( $pids, $msg ) = @_; $msg = '' if !defined $msg; for my $pid (@$pids) { waitpid( $pid, WNOHANG ); ok( !kill( 0, $pid ), "$msg: pid $pid is dead" ); } } TestServer.pm100644000765000024 1645513001720020 24634 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearchpackage Search::Elasticsearch::TestServer; $Search::Elasticsearch::TestServer::VERSION = '5.01'; use Moo; use Search::Elasticsearch(); use POSIX 'setsid'; use File::Temp(); use IO::Socket(); use HTTP::Tiny; use Search::Elasticsearch::Util qw(parse_params throw); use namespace::clean; has 'es_home' => ( is => 'ro', required => 1 ); has 'instances' => ( is => 'ro', default => 1 ); has 'http_port' => ( is => 'ro', default => 9600 ); has 'es_port' => ( is => 'ro', default => 9700 ); has 'pids' => ( is => 'ro', default => sub { [] }, clearer => 1, predicate => 1 ); has 'dir' => ( is => 'ro', clearer => 1 ); has 'conf' => ( is => 'ro', default => sub { [] } ); has '_starter_pid' => ( is => 'rw', required => 0, predicate => 1 ); #=================================== sub start { #=================================== my $self = shift; my $home = $self->es_home or throw( 'Param', "Missing required param " ); my $instances = $self->instances; my $port = $self->http_port; my $es_port = $self->es_port; my @http = map { $port++ } ( 1 .. $instances ); my @transport = map { $es_port++ } ( 1 .. $instances ); $self->_check_ports( @http, @transport ); my $old_SIGINT = $SIG{INT}; $SIG{INT} = sub { $self->shutdown; if ( ref $old_SIGINT eq 'CODE' ) { return $old_SIGINT->(); } exit(1); }; my $dir = File::Temp->newdir(); for ( 0 .. $instances - 1 ) { print "Starting node: http://127.0.0.1:$http[$_]\n"; $self->_start_node( $dir, $transport[$_], $http[$_] ); } $self->_check_nodes(@http); return [ map {"http://127.0.0.1:$_"} @http ]; } #=================================== sub _check_ports { #=================================== my $self = shift; for my $port (@_) { next unless IO::Socket::INET->new("127.0.0.1:$port"); throw( 'Param', "There is already a service running on 127.0.0.1:$port. " . 
"Please shut it down before starting the test server" ); } } #=================================== sub _check_nodes { #=================================== my $self = shift; my $http = HTTP::Tiny->new; for my $node (@_) { print "Checking node: http://127.0.0.1:$node\n"; my $i = 20; while (1) { last if $http->head("http://127.0.0.1:$node/")->{status} == 200; throw( 'Cxn', "Couldn't connect to http://127.0.0.1:$node" ) unless $i--; sleep 1; } } } #=================================== sub _start_node { #=================================== my ( $self, $dir, $transport, $http ) = @_; my $pid_file = File::Temp->new; my @config = $self->_command_line( $pid_file, $dir, $transport, $http ); my $int_caught = 0; { local $SIG{INT} = sub { $int_caught++; }; defined( my $pid = fork ) or throw( 'Internal', "Couldn't fork a new process: $!" ); if ( $pid == 0 ) { throw( 'Internal', "Can't start a new session: $!" ) if setsid == -1; exec(@config) or die "Couldn't execute @config: $!"; } else { for ( 1 .. 5 ) { last if -s $pid_file->filename(); sleep 1; } open my $pid_fh, '<', $pid_file->filename; my $pid = <$pid_fh>; throw( 'Internal', "No PID file found for Elasticsearch" ) unless $pid; chomp $pid; push @{ $self->{pids} }, $pid; $self->_starter_pid($$); } } $SIG{INT}->('INT') if $int_caught; } #=================================== sub guarded_shutdown { #=================================== my $self = shift; if ( $self->_has_starter_pid && $$ == $self->_starter_pid ) { $self->shutdown(); } } #=================================== sub shutdown { #=================================== my $self = shift; local $?; return unless $self->has_pids; my $pids = $self->pids; $self->clear_pids; return unless @$pids; kill 9, @$pids; $self->clear_dir; } #=================================== sub _command_line { #=================================== my ( $self, $pid_file, $dir, $transport, $http ) = @_; return ( $self->es_home . '/bin/elasticsearch', '-p', $pid_file->filename, map {"-Des.$_"} ( 'path.data=' . $dir, 'network.host=127.0.0.1', 'cluster.name=es_test', 'discovery.zen.ping.multicast.enabled=false', 'discovery.zen.ping_timeout=1s', 'discovery.zen.ping.unicast.hosts=127.0.0.1:' . $self->es_port, 'transport.tcp.port=' . $transport, 'http.port=' . $http, @{ $self->conf } ) ); } #=================================== sub DEMOLISH { shift->guarded_shutdown } #=================================== 1; # ABSTRACT: A helper class to launch Elasticsearch nodes __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::TestServer - A helper class to launch Elasticsearch nodes =head1 VERSION version 5.01 =head1 SYNOPSIS use Search::Elasticsearch; use Search::Elasticsearch::TestServer; my $server = Search::Elasticsearch::TestServer->new( es_home => '/path/to/elasticsearch', ); my $nodes = $server->start; my $es = Search::Elasticsearch->new( nodes => $nodes ); # run tests $server->shutdown; =head1 DESCRIPTION The L class can be used to launch one or more instances of Elasticsearch for testing purposes. The nodes will be shutdown automatically. =head1 METHODS =head2 C my $server = Search::Elasticsearch::TestServer->new( es_home => '/path/to/elasticsearch', instances => 1, http_port => 9600, es_port => 9700, conf => ['script.disable_dynamic=false'], ); Params: =over =item * C Required. Must point to the Elasticsearch home directory, which contains C<./bin/elasticsearch>. =item * C The number of nodes to start. Defaults to 1 =item * C The port to use for HTTP. 
If multiple instances are started, the C will be incremented for each subsequent instance. Defaults to 9600. =item * C The port to use for Elasticsearch's internal transport. If multiple instances are started, the C will be incremented for each subsequent instance. Defaults to 9700 =item * C An array containing any extra startup options that should be passed to Elasticsearch. =back =head1 C $nodes = $server->start; Starts the required instances and returns an array ref containing the IP and port of each node, suitable for passing to L: $es = Search::Elasticsearch->new( nodes => $nodes ); =head1 C $server->shutdown; Kills the running instances. This will be called automatically when C<$server> goes out of scope or if the program receives a C. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut Client000755000765000024 013001720020 23213 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch5_0.pm100644000765000024 256613001720020 24305 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Clientpackage Search::Elasticsearch::Client::5_0; our $VERSION='5.01'; use Search::Elasticsearch 5.00 (); 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Client::5_0 - Thin client with full support for Elasticsearch 5.x APIs =head1 VERSION version 5.01 =head1 DESCRIPTION The L package provides a client compatible with Elasticsearch 5.x. It should be used in conjunction with L as follows: $e = Search::Elasticsearch->new( client => "5_0::Direct" ); See L for documentation about how to use the client itself. =head1 PREVIOUS VERSIONS OF ELASTICSEARCH This version of the client supports the Elasticsearch 5.0 branch, which is not backwards compatible with earlier branches. If you need to talk to a version of Elasticsearch before 5.0.0, please install one of the following packages: =over =item * L =item * L =item * L =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # ABSTRACT: Thin client with full support for Elasticsearch 5.x APIs 11_static_node_missing.t100644000765000024 162313001720020 24415 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## One node missing at first, then joins later my $t = mock_static_client( { nodes => [ 'one', 'two' ] }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, ping => 0 }, { node => 1, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, # force ping on missing node { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, ); ok $t->perform_request && $t->perform_request && $t->perform_request, 'One node missing'; # force ping on missing node $t->cxn_pool->cxns->[1]->next_ping(-1); ok $t->perform_request && $t->perform_request && $t->perform_request, 'Missing node joined - 2'; done_testing; 37_sniff_runaway_nodes.t100644000765000024 221313001720020 24437 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Runaway nodes (ie wrong HTTP response codes signal node failure, instead of ## request failure) my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { node => 1, sniff => [ 'one', 'two' ] }, { node => 2, code => 200, content => 1 }, { node => 3, code => 503, error => 'Unavailable' }, { node => 2, sniff => [ 'one', 'two' ] }, { node => 4, code => 503, error => 'Unavailable' }, # throw Unavailable: too many retries { node => 5, sniff => [ 'one', 'two' ] }, { node => 6, code => 503, error => 'Unavailable' }, { node => 7, sniff => [ 'one', 'two' ] }, { node => 8, code => 503, error => 'Unavailable' }, # throw Unavailable: too many retries { node => 9, sniff => [ 'one', 'two' ] }, { node => 10, code => 200, content => 1 }, ); ok $t->perform_request && !eval { $t->perform_request } && $@ =~ /Unavailable/ && !eval { $t->perform_request } && $@ =~ /Unavailable/ && $t->perform_request, "Runaway nodes"; done_testing; Factory.pm100644000765000024 365013001720020 24636 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Cxnpackage Search::Elasticsearch::Cxn::Factory; $Search::Elasticsearch::Cxn::Factory::VERSION = '5.01'; use Moo; use Search::Elasticsearch::Util qw(parse_params load_plugin); use namespace::clean; has 'cxn_class' => ( is => 'ro', required => 1 ); has '_factory' => ( is => 'ro', required => 1 ); has 'default_host' => ( is => 'ro', default => 'http://localhost:9200' ); has 'max_content_length' => ( is => 'rw', default => 104_857_600 ); #=================================== sub BUILDARGS { #=================================== my ( $class, $params ) = parse_params(@_); my %args = (%$params); delete $args{nodes}; my $cxn_class = load_plugin( 'Search::Elasticsearch::Cxn', delete $args{cxn} ); $params->{_factory} = sub { my ( $self, $node ) = @_; $cxn_class->new( %args, node => $node, max_content_length => $self->max_content_length ); }; $params->{cxn_args} = \%args; $params->{cxn_class} = $cxn_class; return $params; } #=================================== sub new_cxn { shift->_factory->(@_) } #=================================== 1; =pod =encoding UTF-8 =head1 NAME 
Search::Elasticsearch::Cxn::Factory - Used by CxnPools to create new Cxn instances. =head1 VERSION version 5.01 =head1 DESCRIPTION This class is used by the L implementations to create new L-based instances. It holds on to all the configuration options passed to L so that new Cxns can use them. It contains no user serviceable parts. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # ABSTRACT: Used by CxnPools to create new Cxn instances. Client.pm100644000765000024 260513001720020 24615 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::Client; $Search::Elasticsearch::Role::Client::VERSION = '5.01'; use Moo::Role; use namespace::clean; requires 'parse_request'; has 'transport' => ( is => 'ro', required => 1 ); has 'logger' => ( is => 'ro', required => 1 ); #=================================== sub perform_request { #=================================== my $self = shift; my $request = $self->parse_request(@_); return $self->transport->perform_request($request); } 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::Client - Provides common functionality for Client implementations =head1 VERSION version 5.01 =head1 DESCRIPTION This role provides a common C method for Client implementations. =head1 METHODS =head2 C This method takes whatever arguments it is passed and passes them directly to a C method (which should be provided by Client implementations). The C method should return a request suitable for passing to L. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # ABSTRACT: Provides common functionality for Client implementations Logger.pm100644000765000024 1560213001720020 24637 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::Logger; $Search::Elasticsearch::Role::Logger::VERSION = '5.01'; use Moo::Role; use URI(); use Try::Tiny; use Search::Elasticsearch::Util qw(new_error); use namespace::clean; has 'serializer' => ( is => 'ro', required => 1 ); has 'log_as' => ( is => 'ro', default => 'elasticsearch.event' ); has 'trace_as' => ( is => 'ro', default => 'elasticsearch.trace' ); has 'deprecate_as' => ( is => 'ro', default => 'elasticsearch.deprecation' ); has 'log_to' => ( is => 'ro' ); has 'trace_to' => ( is => 'ro' ); has 'deprecate_to' => ( is => 'ro' ); has 'trace_handle' => ( is => 'lazy', handles => [qw( trace tracef is_trace)] ); has 'log_handle' => ( is => 'lazy', handles => [ qw( debug debugf is_debug info infof is_info warning warningf is_warning error errorf is_error critical criticalf is_critical ) ] ); has 'deprecate_handle' => ( is => 'lazy' ); #=================================== sub throw_error { #=================================== my ( $self, $type, $msg, $vars ) = @_; my $error = new_error( $type, $msg, $vars ); $self->error($error); die $error; } #=================================== sub throw_critical { #=================================== my ( $self, $type, $msg, $vars ) = @_; my $error = new_error( $type, $msg, $vars ); $self->critical($error); die $error; } #=================================== sub trace_request { #=================================== my ( $self, $cxn, $params ) = @_; return unless $self->is_trace; my $uri = URI->new( 'http://localhost:9200' . $params->{path} ); my %qs = ( %{ $params->{qs} }, pretty => 1 ); $uri->query_form( [ map { $_, $qs{$_} } sort keys %qs ] ); my $body = $params->{serialize} eq 'std' ? $self->serializer->encode_pretty( $params->{body} ) : $params->{data}; if ( defined $body ) { $body =~ s/'/\\u0027/g; $body = " -d '\n$body'\n"; } else { $body = "\n" } my $msg = sprintf( "# Request to: %s\n" # . "curl -X%s '%s'%s", # $cxn->stringify, $params->{method}, $uri, $body ); $self->trace($msg); } #=================================== sub trace_response { #=================================== my ( $self, $cxn, $code, $response, $took ) = @_; return unless $self->is_trace; my $body = $self->serializer->encode_pretty($response) || "\n"; $body =~ s/^/# /mg; my $msg = sprintf( "# Response: %s, Took: %d ms\n%s", # $code, $took * 1000, $body ); $self->trace($msg); } #=================================== sub trace_error { #=================================== my ( $self, $cxn, $error ) = @_; return unless $self->is_trace; my $body = $self->serializer->encode_pretty( $error->{vars}{body} || "\n" ); $body =~ s/^/# /mg; my $msg = sprintf( "# ERROR: %s %s\n%s", ref($error), $error->{text}, $body ); $self->trace($msg); } #=================================== sub trace_comment { #=================================== my ( $self, $comment ) = @_; return unless $self->is_trace; $comment =~ s/^/# *** /mg; chomp $comment; $self->trace("$comment\n"); } #=================================== sub deprecation { #=================================== my $self = shift; $self->deprecate_handle->warnf( "[DEPRECATION] %s. 
In request: %s", @_ ); } 1; # ABSTRACT: Provides common functionality to Logger implementations __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::Logger - Provides common functionality to Logger implementations =head1 VERSION version 5.01 =head1 DESCRIPTION This role provides common functionality to Logger implementations, to enable the logging of events and the tracing of request-response conversations with Elasticsearch nodes. See L for the default implementation. =head1 CONFIGURATION =head2 C Parameters passed to C are used by L implementations to setup the L. See L for details. =head2 C By default, events emitted by L, L, L, L and L are logged to the L under the category C<"elasticsearch.event">, which can be configured with C. =head2 C Parameters passed to C are used by L implementations to setup the L. See L for details. =head2 C By default, trace output emitted by L, L, L and L are logged under the category C, which can be configured with C. =head2 C Parameters passed to C are used by L implementations to setup the L. See L for details. =head2 C By default, events emitted by L are logged to the L under the category C<"elasticsearch.deprecation">, which can be configured with C. =head1 METHODS =head2 C Returns an object which can handle the methods: C, C, C, C, C, C, C, C, C, C, C, C, C, C and C. =head2 C Returns an object which can handle the methods: C, C and C. =head2 C Returns an object which can handle the C method. =head2 C $logger->trace_request($cxn,\%request); Accepts a Cxn object and request parameters and logs them if tracing is enabled. =head2 C $logger->trace_response($cxn,$code,$response,$took); Logs a successful HTTP response, where C<$code> is the HTTP status code, C<$response> is the HTTP body and C<$took> is the time the request took in seconds =head2 C $logger->trace_error($cxn,$error); Logs a failed HTTP response, where C<$error> is an L object. =head2 C $logger->trace_comment($comment); Used to insert debugging comments into trace output. =head2 C $logger->deprecation($warning,$request) Issues a deprecation warning to the deprecation logger. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 31_cpanel_encode_bulk.t100644000765000024 35213001720020 24531 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require Cpanel::JSON::XS; 1 } or do { plan skip_all => 'Cpanel::JSON::XS not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::Cpanel'; do 't/20_Serializer/encode_decode.pl' or die( $@ || $! 
); 13_static_node_timesout.t100644000765000024 143413001720020 24617 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## One node fails with a Timeout error, then rejoins my $t = mock_static_client( { nodes => [ 'one', 'two' ] }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, code => 509, error => 'Timeout' }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, ); ok $t->perform_request && $t->perform_request && !eval { $t->perform_request } && $@ =~ /Timeout/ && $t->perform_request, 'One node throws Timeout then recovers'; done_testing; 17_static_runaway_nodes.t100644000765000024 202513001720020 24620 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## Runaway nodes (ie wrong HTTP response codes signal node failure, instead of ## request failure) my $t = mock_static_client( { nodes => 'one' }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 1, code => 503, error => 'Unavailable' }, { node => 1, ping => 1 }, { node => 1, code => 503, error => 'Unavailable' }, # throw Unavailable: too many retries { node => 1, ping => 1 }, { node => 1, code => 503, error => 'Unavailable' }, { node => 1, ping => 1 }, { node => 1, code => 503, error => 'Unavailable' }, # throw Unavailable: too many retries { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, ); ok $t->perform_request && !eval { $t->perform_request } && $@ =~ /Unavailable/ && !eval { $t->perform_request } && $@ =~ /Unavailable/ && $t->perform_request, "Runaway nodes"; done_testing; 36_sniff_nodes_starting.t100644000765000024 212213001720020 24602 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Nodes initially unavailable my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { node => 1, sniff => [], error => 'Cxn', code => 509 }, { node => 2, sniff => [], error => 'Cxn', code => 509 }, # NoNodes { node => 3, sniff => [], error => 'Cxn', code => 509 }, { node => 4, sniff => [], error => 'Cxn', code => 509 }, # NoNodes { node => 5, sniff => ['one'] }, { node => 6, code => 200, content => 1 }, { node => 6, code => 200, content => 1 }, # force sniff { node => 6, sniff => [ 'one', 'two' ] }, { node => 7, code => 200, content => 1 }, { node => 8, code => 200, content => 1 }, ); ok !eval { $t->perform_request } && $@ =~ /NoNodes/ && !eval { $t->perform_request } && $@ =~ /NoNodes/ && $t->perform_request && $t->perform_request && $t->cxn_pool->schedule_check && $t->perform_request && $t->perform_request, 'Sniff unavailable nodes while starting up'; done_testing; 52_noping_node_timesout.t100644000765000024 167613001720020 24635 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## Nodes fail and recover my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ] }, { node => 1, code => 200, content => 1 }, { node => 2, code => 509, error => 'Timeout' }, { node => 3, code 
=> 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, # force check { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && !eval { $t->perform_request } && $@ =~ /Timeout/ && $t->perform_request && $t->perform_request && $t->perform_request && $t->cxn_pool->cxns->[1]->force_ping && $t->perform_request && $t->perform_request && $t->perform_request, 'Node timesout and recovers'; done_testing; 55_noping_runaway_nodes.t100644000765000024 200113001720020 24617 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## Runaway nodes my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ] }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, { node => 1, code => 509, error => 'Unavailable' }, { node => 2, code => 509, error => 'Unavailable' }, { node => 3, code => 509, error => 'Unavailable' }, # throws unavailable { node => 1, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && !eval { $t->perform_request } && $@ =~ /Unavailable/ && $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request, 'Runaway nodes'; done_testing; HTTPTiny.pm100644000765000024 1710013001720020 24665 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Cxnpackage Search::Elasticsearch::Cxn::HTTPTiny; $Search::Elasticsearch::Cxn::HTTPTiny::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Role::Cxn', 'Search::Elasticsearch::Role::Is_Sync'; use HTTP::Tiny 0.043 (); use namespace::clean; my $Cxn_Error = qr/ Connection.(?:timed.out|re(?:set|fused)) | connect:.timeout | Host.is.down | No.route.to.host | temporarily.unavailable /x; #=================================== sub perform_request { #=================================== my ( $self, $params ) = @_; my $uri = $self->build_uri($params); my $method = $params->{method}; my %args; if ( defined $params->{data} ) { $args{content} = $params->{data}; $args{headers}{'Content-Type'} = $params->{mime_type}; $args{headers}{'Content-Encoding'} = $params->{encoding} if $params->{encoding}; } my $handle = $self->handle; $handle->timeout( $params->{timeout} || $self->request_timeout ); my $response = $handle->request( $method, "$uri", \%args ); return $self->process_response( $params, # request $response->{status}, # code $response->{reason}, # msg $response->{content}, # body $response->{headers} # headers ); } #=================================== sub error_from_text { #=================================== local $_ = $_[2]; return /[Tt]imed out/ ? 'Timeout' : /Unexpected end of stream/ ? 'ContentLength' : /SSL connection failed/ ? 'SSL' : /$Cxn_Error/ ? 
'Cxn' : 'Request'; } #=================================== sub _build_handle { #=================================== my $self = shift; my %args = ( default_headers => $self->default_headers ); if ( $self->is_https && $self->has_ssl_options ) { $args{SSL_options} = $self->ssl_options; if ( $args{SSL_options}{SSL_verify_mode} ) { $args{verify_ssl} = 1; } } return HTTP::Tiny->new( %args, %{ $self->handle_args } ); } 1; # ABSTRACT: A Cxn implementation which uses HTTP::Tiny __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Cxn::HTTPTiny - A Cxn implementation which uses HTTP::Tiny =head1 VERSION version 5.01 =head1 DESCRIPTION Provides the default HTTP Cxn class and is based on L. The HTTP::Tiny backend is fast, uses pure Perl, support proxies and https and provides persistent connections. This class does L, whose documentation provides more information, and L. =head1 CONFIGURATION =head2 Inherited configuration From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 SSL/TLS L uses L to support HTTPS. By default, no validation of the remote host is performed. This behaviour can be changed by passing the C parameter with any options accepted by L. For instance, to check that the remote host has a trusted certificate, and to avoid man-in-the-middle attacks, you could do the following: use Search::Elasticsearch; use IO::Socket::SSL; my $es = Search::Elasticsearch->new( nodes => [ "https://node1.mydomain.com:9200", "https://node2.mydomain.com:9200", ], ssl_options => { SSL_verify_mode => SSL_VERIFY_PEER, SSL_ca_file => '/path/to/cacert.pem' } ); If the remote server cannot be verified, an L will be thrown. If you want your client to present its own certificate to the remote server, then use: use Search::Elasticsearch; use IO::Socket::SSL; my $es = Search::Elasticsearch->new( nodes => [ "https://node1.mydomain.com:9200", "https://node2.mydomain.com:9200", ], ssl_options => { SSL_verify_mode => SSL_VERIFY_PEER, SSL_use_cert => 1, SSL_ca_file => '/path/to/cacert.pem', SSL_cert_file => '/path/to/client.pem', SSL_key_file => '/path/to/client.pem', } ); =head1 METHODS =head2 C ($status,$body) = $self->perform_request({ # required method => 'GET|HEAD|POST|PUT|DELETE', path => '/path/of/request', qs => \%query_string_params, # optional data => $body_as_string, mime_type => 'application/json', timeout => $timeout }); Sends the request to the associated Elasticsearch node and returns a C<$status> code and the decoded response C<$body>, or throws an error if the request failed. =head2 Inherited methods From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 SEE ALSO =over =item * L =item * L =item * L =item * L =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut CxnPool.pm100644000765000024 1617613001720020 25011 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::CxnPool; $Search::Elasticsearch::Role::CxnPool::VERSION = '5.01'; use Moo::Role; use Search::Elasticsearch::Util qw(parse_params); use List::Util qw(shuffle); use IO::Select(); use Time::HiRes qw(time sleep); use Search::Elasticsearch::Util qw(to_list); use namespace::clean; requires qw(next_cxn schedule_check); has 'cxn_factory' => ( is => 'ro', required => 1 ); has 'logger' => ( is => 'ro', required => 1 ); has 'serializer' => ( is => 'ro', required => 1 ); has 'current_cxn_num' => ( is => 'rwp', default => 0 ); has 'cxns' => ( is => 'rwp', default => sub { [] } ); has 'seed_nodes' => ( is => 'ro', required => 1 ); has 'retries' => ( is => 'rw', default => 0 ); has 'randomize_cxns' => ( is => 'ro', default => 1 ); #=================================== around BUILDARGS => sub { #=================================== my $orig = shift; my $params = $orig->(@_); my @seed = grep {$_} to_list( delete $params->{nodes} || ('') ); @seed = $params->{cxn_factory}->default_host unless @seed; $params->{seed_nodes} = \@seed; return $params; }; #=================================== sub next_cxn_num { #=================================== my $self = shift; my $cxns = $self->cxns; return unless @$cxns; my $current = $self->current_cxn_num; $self->_set_current_cxn_num( ( $current + 1 ) % @$cxns ); return $current; } #=================================== sub set_cxns { #=================================== my $self = shift; my $factory = $self->cxn_factory; my @cxns = map { $factory->new_cxn($_) } @_; @cxns = shuffle @cxns if $self->randomize_cxns; $self->_set_cxns( \@cxns ); $self->_set_current_cxn_num(0); $self->logger->infof( "Current cxns: %s", [ map { $_->stringify } @cxns ] ); return; } #=================================== sub request_ok { #=================================== my ( $self, $cxn ) = @_; $cxn->mark_live; $self->reset_retries; } #=================================== sub request_failed { #=================================== my ( $self, $cxn, $error ) = @_; if ( $error->is( 'Cxn', 'Timeout' ) ) { $cxn->mark_dead if $self->should_mark_dead($error); $self->schedule_check; if ( $self->should_retry($error) ) { my $retries = $self->retries( $self->retries + 1 ); return 1 if $retries < $self->_max_retries; } } else { $cxn->mark_live if $cxn; } $self->reset_retries; return 0; } #=================================== sub should_retry { #=================================== my ( $self, $error ) = @_; return $error->is('Cxn'); } #=================================== sub should_mark_dead { #=================================== my ( $self, $error ) = @_; return $error->is('Cxn'); } #=================================== sub cxns_str { #=================================== my $self = shift; join ", ", map { $_->stringify } @{ $self->cxns }; } #=================================== sub cxns_seeds_str { #=================================== my $self = shift; join ", ", ( map { $_->stringify } @{ $self->cxns } ), @{ $self->seed_nodes }; } #=================================== sub reset_retries { shift->retries(0) } sub _max_retries {2} #=================================== 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::CxnPool - Provides common functionality to the CxnPool implementations =head1 VERSION version 5.01 =head1 DESCRIPTION See the 
CxnPool implementations: =over =item * L =item * L =item * L =back =head1 CONFIGURATION These configuration options should not be set by the user but are documented here for completeness. =head2 C By default, the order of cxns passed to L is randomized before they are stored. Set C to a false value to disable. =head1 METHODS =head2 C $factory = $cxn_pool->cxn_factory Returns the L object for creating a new C<$cxn> instance. =head2 C $logger = $cxn_pool->logger Returns the L-based object, which defaults to L. =head2 C $serializer = $cxn_pool->serializer Returns the L-based object, which defaults to L. =head2 C $num = $cxn_pool->current_cxn_num Returns the current cxn number, which is an offset into the array of cxns set by L. =head2 C \@cxns = $cxn_pool->cxns; Returns the current list of L-based cxn objects as set by L. =head2 C \@seed_nodes = $cxn_pool->seed_nodes Returns the list of C originally specified when calling L. =head2 C $num = $cxn_pool->next_cxn_num; Returns the number of the next connection, in round-robin fashion. Updates the L. =head2 C $cxn_pool->set_cxns(@nodes); Takes a list of nodes, converts them into L-based objects and makes them accessible via L. =head2 C $cxn_pool->request_ok($cxn); Called when a request by the specified C<$cxn> object has completed successfully. Marks the C<$cxn> as live. =head2 C $should_retry = $cxn_pool->request_failed($cxn,$error); Called when a request by the specified C<$cxn> object has failed. Returns C<1> if the request should be retried or C<0> if it shouldn't. =head2 C $bool = $cxn_pool->should_retry($error); Examines the error to decide whether the request should be retried or not. By default, only L errors are retried. =head2 C $bool = $cxn_pool->should_mark_dead($error); Examines the error to decide whether the C<$cxn> should be marked as dead or not. By default, only L errors cause a C<$cxn> to be marked as dead. =head2 C $str = $cxn_pool->cxns_str Returns all L as a string for logging purposes. =head2 C $str = $cxn_pool->cxns_seeeds_str Returns all L and L as a string for logging purposes. =head2 C $retries = $cxn_pool->retries The number of times the current request has been retried. =head2 C $cxn_pool->reset_retries; Called at the start of a new request to reset the retries count. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ #ABSTRACT: Provides common functionality to the CxnPool implementations Is_Sync.pm100644000765000024 120113001720020 24735 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::Is_Sync; $Search::Elasticsearch::Role::Is_Sync::VERSION = '5.01'; use Moo::Role; use namespace::clean; 1; # ABSTRACT: A role to mark classes which should be used with other sync classes __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::Is_Sync - A role to mark classes which should be used with other sync classes =head1 VERSION version 5.01 =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 16_static_nodes_starting.t100644000765000024 134613001720020 24771 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## Nodes initially unavailable my $t = mock_static_client( { nodes => [ 'one', 'two' ] }, { node => 2, ping => 0 }, { node => 1, ping => 0 }, # NoNodes { node => 2, ping => 0 }, { node => 1, ping => 0 }, # NoNodes { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, ); ok !eval { $t->perform_request } && $@ =~ /NoNodes/ && !eval { $t->perform_request } && $@ =~ /NoNodes/ && $t->perform_request && $t->perform_request, 'Nodes initially unavailable'; done_testing; 33_sniff_both_nodes_fail.t100644000765000024 241313001720020 24676 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_sniff_client); ## Sniff all nodes fail my $t = mock_sniff_client( { nodes => [ 'one', 'two' ] }, { node => 1, sniff => [ 'one', 'two' ] }, { node => 2, code => 200, content => 1 }, { node => 3, code => 509, error => 'Cxn' }, { node => 2, sniff => [], error => 'Cxn', code => 509 }, { node => 3, sniff => [], error => 'Cxn', code => 509 }, { node => 4, sniff => [], error => 'Cxn', code => 509 }, { node => 5, sniff => [], error => 'Cxn', code => 509 }, # throws NoNodes { node => 2, sniff => [], error => 'Cxn', code => 509 }, { node => 3, sniff => [], error => 'Cxn', code => 509 }, { node => 6, sniff => [], error => 'Cxn', code => 509 }, { node => 7, sniff => [], error => 'Cxn', code => 509 }, # throws NoNodes { node => 2, sniff => [ 'one', 'two' ] }, { node => 8, code => 200, content => 1 }, { node => 9, code => 200, content => 1 }, { node => 8, code => 200, content => 1 }, ); ok $t->perform_request() && !eval { $t->perform_request } && $@ =~ /NoNodes/ && !eval { $t->perform_request } && $@ =~ /NoNodes/ && $t->perform_request, 'Sniff after all nodes fail'; done_testing; 53_noping_all_nodes_fail.t100644000765000024 224313001720020 24702 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## All nodes fail and recover my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ] }, { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, code => 509, error => 'Cxn' }, { node => 3, code => 200, content => 1 }, { node => 1, code => 509, error => 'Cxn' }, { node => 3, code => 509, error => 'Cxn' }, { node => 2, code => 200, content => 1 }, # force check { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request && $t->perform_request && $t->cxn_pool->cxns->[0]->force_ping && $t->cxn_pool->cxns->[2]->force_ping && $t->perform_request && $t->perform_request && $t->perform_request, 'All nodes fail and recover'; done_testing; 54_noping_nodes_starting.t100644000765000024 150213001720020 24770 
0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_noping_client); ## Nodes initially unavailable my $t = mock_noping_client( { nodes => [ 'one', 'two', 'three' ] }, { node => 1, code => 509, error => 'Cxn' }, { node => 2, code => 509, error => 'Cxn' }, { node => 3, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, # force check { node => 1, code => 200, content => 1 }, { node => 2, code => 200, content => 1 }, { node => 3, code => 200, content => 1 }, ); ok $t->perform_request() && $t->perform_request && $t->cxn_pool->cxns->[0]->force_ping && $t->cxn_pool->cxns->[1]->force_ping && $t->perform_request && $t->perform_request && $t->perform_request, 'Nodes starting'; done_testing; CxnPool000755000765000024 013001720020 23357 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/ElasticsearchSniff.pm100644000765000024 1440513001720020 25146 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/CxnPoolpackage Search::Elasticsearch::CxnPool::Sniff; $Search::Elasticsearch::CxnPool::Sniff::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Role::CxnPool::Sniff', 'Search::Elasticsearch::Role::Is_Sync'; use Search::Elasticsearch::Util qw(throw); use namespace::clean; #=================================== sub next_cxn { #=================================== my ($self) = @_; $self->sniff if $self->next_sniff <= time(); my $cxns = $self->cxns; my $total = @$cxns; while ( 0 < $total-- ) { my $cxn = $cxns->[ $self->next_cxn_num ]; return $cxn if $cxn->is_live; } throw( "NoNodes", "No nodes are available: [" . $self->cxns_seeds_str . ']' ); } #=================================== sub sniff { #=================================== my $self = shift; my $cxns = $self->cxns; my $total = @$cxns; my @skipped; while ( 0 < $total-- ) { my $cxn = $cxns->[ $self->next_cxn_num ]; if ( $cxn->is_dead ) { push @skipped, $cxn; } else { $self->sniff_cxn($cxn) and return; $cxn->mark_dead; } } for my $cxn (@skipped) { $self->sniff_cxn($cxn) and return; } $self->logger->info("No live nodes available. Trying seed nodes."); for my $seed ( @{ $self->seed_nodes } ) { my $cxn = $self->cxn_factory->new_cxn($seed); $self->sniff_cxn($cxn) and return; } } #=================================== sub sniff_cxn { #=================================== my ( $self, $cxn ) = @_; return $self->parse_sniff( $cxn->sniff ); } 1; # ABSTRACT: A CxnPool for connecting to a local cluster with a dynamic node list __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::CxnPool::Sniff - A CxnPool for connecting to a local cluster with a dynamic node list =head1 VERSION version 5.01 =head1 SYNOPSIS $e = Search::Elasticsearch->new( cxn_pool => 'Sniff', nodes => [ 'search1:9200', 'search2:9200' ], ); =head1 DESCRIPTION The L connection pool should be used when you B have direct access to the Elasticsearch cluster, eg when your web servers and Elasticsearch servers are on the same network. The nodes that you specify are used to I the cluster, which is then I to find the current list of live nodes that the cluster knows about. This sniff process is repeated regularly, or whenever a node fails, to update the list of healthy nodes. So if you add more nodes to your cluster, they will be auto-discovered during a sniff. If all sniffed nodes fail, then it falls back to sniffing the original I nodes that you specified in C. 
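For example (a sketch; the hostname is a placeholder), a single seed node is enough to bootstrap discovery of the rest of the cluster:

    $e = Search::Elasticsearch->new(
        cxn_pool => 'Sniff',
        nodes    => 'search1:9200',   # seed node; live nodes are found by sniffing
    );
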
For L, this module will also dynamically detect the C which the nodes in the cluster will accept. This class does L and L. =head1 CONFIGURATION =head2 C The list of nodes to use to discover the cluster. Can accept a single node, multiple nodes, and defaults to C if no C are specified. See L for details of the node specification. =head2 See also =over =item * L =item * L =item * L =back =head2 Inherited configuration From L =over =item * L =item * L =back From L =over =item * L =back =head1 METHODS =head2 C $cxn = $cxn_pool->next_cxn Returns the next available live node (in round robin fashion), or throws a C error if no nodes can be sniffed from the cluster. =head2 C $cxn_pool->schedule_check Forces a sniff before the next Cxn is returned, to updated the list of healthy nodes in the cluster. =head2 C $bool = $cxn_pool->sniff Sniffs the cluster and returns C if the sniff was successful. =head2 Inherited methods From L =over =item * L =item * L =item * L =back From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut Logger000755000765000024 013001720020 23214 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/ElasticsearchLogAny.pm100644000765000024 745713001720020 25120 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Loggerpackage Search::Elasticsearch::Logger::LogAny; $Search::Elasticsearch::Logger::LogAny::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Role::Logger'; use Search::Elasticsearch::Util qw(parse_params to_list); use namespace::clean; use Log::Any 1.02 (); use Log::Any::Adapter(); #=================================== sub _build_log_handle { #=================================== my $self = shift; if ( my @args = to_list( $self->log_to ) ) { Log::Any::Adapter->set( { category => $self->log_as }, @args ); } Log::Any->get_logger( category => $self->log_as ); } #=================================== sub _build_trace_handle { #=================================== my $self = shift; if ( my @args = to_list( $self->trace_to ) ) { Log::Any::Adapter->set( { category => $self->trace_as }, @args ); } Log::Any->get_logger( category => $self->trace_as ); } #=================================== sub _build_deprecate_handle { #=================================== my $self = shift; if ( my @args = to_list( $self->deprecate_to ) ) { Log::Any::Adapter->set( { category => $self->deprecate_as }, @args ); } Log::Any->get_logger( default_adapter => 'Stderr', category => $self->deprecate_as ); } 1; # ABSTRACT: A Log::Any-based Logger implementation __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Logger::LogAny - A Log::Any-based Logger implementation =head1 VERSION version 5.01 =head1 DESCRIPTION L provides event logging and the tracing of request/response conversations with Elasticsearch nodes via the L module. I refers to log events, such as node failures, pings, sniffs, etc, and should be enabled for monitoring purposes. I refers to the actual HTTP requests and responses sent to Elasticsearch nodes. Tracing can be enabled for debugging purposes, or for generating a pretty-printed C script which can be used for reporting problems. I refers to deprecation warnings returned by Elasticsearch 5.x and above. 
Deprecations are logged to STDERR by default. =head1 CONFIGURATION Logging and tracing can be enabled using L, or by passing options to L. =head2 USING LOG::ANY::ADAPTER Send all logging and tracing to C: use Log::Any::Adapter qw(Stderr); use Search::Elasticsearch; my $e = Search::Elasticsearch->new; Send logging and deprecations to a file, and tracing to Stderr: use Log::Any::Adapter(); Log::Any::Adapter->set( { category => 'elasticsearch.event' }, 'File', '/path/to/file.log' ); Log::Any::Adapter->set( { category => 'elasticsearch.trace' }, 'Stderr' ); Log::Any::Adapter->set( { category => 'elasticsearch.deprecation' }, 'File', '/path/to/deprecations.log' ); use Search::Elasticsearch; my $e = Search::Elasticsearch->new; =head2 USING C, C AND C Send all logging and tracing to C: use Search::Elasticsearch; my $e = Search::Elasticsearch->new( log_to => 'Stderr', trace_to => 'Stderr', deprecate_to => 'Stderr' # default ); Send logging and deprecations to a file, and tracing to Stderr: use Search::Elasticsearch; my $e = Search::Elasticsearch->new( log_to => ['File', '/path/to/file.log'], trace_to => 'Stderr', deprecate_to => ['File', '/path/to/deprecations.log'], ); See L for more. =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 30_cpanel_encode_decode.t100644000765000024 35213001720020 25016 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require Cpanel::JSON::XS; 1 } or do { plan skip_all => 'Cpanel::JSON::XS not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::Cpanel'; do 't/20_Serializer/encode_pretty.pl' or die( $@ || $! ); 32_cpanel_encode_pretty.t100644000765000024 35213001720020 25124 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/20_Serializeruse Test::More; eval { require Cpanel::JSON::XS; 1 } or do { plan skip_all => 'Cpanel::JSON::XS not installed'; done_testing; }; our $JSON_BACKEND = 'JSON::Cpanel'; do 't/20_Serializer/encode_decode.pl' or die( $@ || $! 
); 15_static_both_nodes_fail.t100644000765000024 214113001720020 25056 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/t/50_Cxn_Pooluse Test::More; use Test::Exception; use Search::Elasticsearch; use lib 't/lib'; use MockCxn qw(mock_static_client); ## All nodes fail my $t = mock_static_client( { nodes => [ 'one', 'two' ] }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, code => 509, error => 'Cxn' }, { node => 2, ping => 0 }, { node => 1, ping => 0 }, # NoNodes { node => 2, ping => 0 }, { node => 1, ping => 0 }, # NoNodes { node => 2, ping => 0 }, { node => 1, ping => 0 }, # NoNodes { node => 2, ping => 1 }, { node => 2, code => 200, content => 1 }, { node => 1, ping => 1 }, { node => 1, code => 200, content => 1 }, ); ok $t->perform_request && $t->perform_request && !eval { $t->perform_request } && $@ =~ /NoNodes/ && !eval { $t->perform_request } && $@ =~ /NoNodes/ && !eval { $t->perform_request } && $@ =~ /NoNodes/ && $t->perform_request && $t->perform_request, 'Both nodes fails then recover'; done_testing; Static.pm100644000765000024 1124313001720020 25325 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/CxnPoolpackage Search::Elasticsearch::CxnPool::Static; $Search::Elasticsearch::CxnPool::Static::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Role::CxnPool::Static', 'Search::Elasticsearch::Role::Is_Sync'; use Search::Elasticsearch::Util qw(throw); use namespace::clean; #=================================== sub next_cxn { #=================================== my ($self) = @_; my $cxns = $self->cxns; my $total = @$cxns; my $now = time(); my @skipped; while ( $total-- ) { my $cxn = $cxns->[ $self->next_cxn_num ]; return $cxn if $cxn->is_live; if ( $cxn->next_ping < $now ) { return $cxn if $cxn->pings_ok; } else { push @skipped, $cxn; } } for my $cxn (@skipped) { return $cxn if $cxn->pings_ok; } $_->force_ping for @$cxns; throw( "NoNodes", "No nodes are available: [" . $self->cxns_str . ']' ); } 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::CxnPool::Static - A CxnPool for connecting to a remote cluster with a static list of nodes. =head1 VERSION version 5.01 =head1 SYNOPSIS $e = Search::Elasticsearch->new( cxn_pool => 'Static' # default nodes => [ 'search1:9200', 'search2:9200' ], ); =head1 DESCRIPTION The L connection pool, which is the default, should be used when you don't have direct access to the Elasticsearch cluster, eg when you are accessing the cluster through a proxy. It round-robins through the nodes that you specified, and pings each node before it is used for the first time, to ensure that it is responding. If any node fails, then all nodes are pinged before the next request to ensure that they are still alive and responding. Failed nodes will be pinged regularly to check if they have recovered. This class does L and L. =head1 CONFIGURATION =head2 C The list of nodes to use to serve requests. Can accept a single node, multiple nodes, and defaults to C if no C are specified. See L for details of the node specification. =head2 See also =over =item * L =item * L =item * L =item * L =back =head2 Inherited configuration From L =over =item * L =back =head1 METHODS =head2 C $cxn = $cxn_pool->next_cxn Returns the next available live node (in round robin fashion), or throws a C error if no nodes respond to ping requests. 
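In practice you rarely call C<next_cxn()> yourself; the C<NoNodes> error simply propagates out of whatever request you were running. A minimal sketch of trapping it, using the error object's C<is()> check (the same check the Bulk helper further below uses internally):

    use Search::Elasticsearch;
    use Try::Tiny;

    my $e = Search::Elasticsearch->new(
        nodes => [ 'search1:9200', 'search2:9200' ]
    );

    try {
        $e->cluster->health;
    }
    catch {
        my $error = $_;
        if ( $error->is('NoNodes') ) {
            # none of the configured nodes responded to a ping
            warn "No Elasticsearch nodes are currently available\n";
        }
        else {
            die $error;
        }
    };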
=head2 Inherited methods From L =over =item * L =back From L =over =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =item * L =back =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # ABSTRACT: A CxnPool for connecting to a remote cluster with a static list of nodes. Transport.pm100644000765000024 427513001720020 25400 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Rolepackage Search::Elasticsearch::Role::Transport; $Search::Elasticsearch::Role::Transport::VERSION = '5.01'; use Moo::Role; requires qw(perform_request); use Try::Tiny; use Search::Elasticsearch::Util qw(parse_params is_compat); use namespace::clean; has 'serializer' => ( is => 'ro', required => 1 ); has 'logger' => ( is => 'ro', required => 1 ); has 'send_get_body_as' => ( is => 'ro', default => 'GET' ); has 'cxn_pool' => ( is => 'ro', required => 1 ); #=================================== sub BUILD { #=================================== my $self = shift; my $pool = $self->cxn_pool; is_compat( 'cxn_pool', $self, $pool ); is_compat( 'cxn', $self, $pool->cxn_factory->cxn_class ); return $self; } #=================================== sub tidy_request { #=================================== my ( $self, $params ) = parse_params(@_); $params->{method} ||= 'GET'; $params->{path} ||= '/'; $params->{qs} ||= {}; $params->{ignore} ||= []; my $body = $params->{body}; return $params unless defined $body; $params->{serialize} ||= 'std'; $params->{data} = $params->{serialize} eq 'std' ? $self->serializer->encode($body) : $self->serializer->encode_bulk($body); if ( $params->{method} eq 'GET' ) { my $send_as = $self->send_get_body_as; if ( $send_as eq 'POST' ) { $params->{method} = 'POST'; } elsif ( $send_as eq 'source' ) { $params->{qs}{source} = delete $params->{data}; delete $params->{body}; } } $params->{mime_type} ||= $self->serializer->mime_type; return $params; } 1; #ABSTRACT: Transport role providing interface between the client class and the Elasticsearch cluster __END__ =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Role::Transport - Transport role providing interface between the client class and the Elasticsearch cluster =head1 VERSION version 5.01 =head1 AUTHOR Clinton Gormley =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2016 by Elasticsearch BV. 
This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut 5_0000755000765000024 013001720020 23576 5ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/ClientBulk.pm100644000765000024 2776013001720020 25225 0ustar00clintonstaff000000000000Search-Elasticsearch-5.01/lib/Search/Elasticsearch/Client/5_0package Search::Elasticsearch::Client::5_0::Bulk; $Search::Elasticsearch::Client::5_0::Bulk::VERSION = '5.01'; use Moo; with 'Search::Elasticsearch::Client::5_0::Role::Bulk', 'Search::Elasticsearch::Role::Is_Sync'; use Search::Elasticsearch::Util qw(parse_params throw); use Try::Tiny; use namespace::clean; #=================================== sub add_action { #=================================== my $self = shift; my $buffer = $self->_buffer; my $max_size = $self->max_size; my $max_count = $self->max_count; my $max_time = $self->max_time; while (@_) { my @json = $self->_encode_action( splice( @_, 0, 2 ) ); push @$buffer, @json; my $size = $self->_buffer_size; $size += length($_) + 1 for @json; $self->_buffer_size($size); my $count = $self->_buffer_count( $self->_buffer_count + 1 ); $self->flush if ( $max_size and $size >= $max_size ) || ( $max_count and $count >= $max_count ) || ( $max_time and time >= $self->_last_flush + $max_time ); } return 1; } #=================================== sub flush { #=================================== my $self = shift; $self->_last_flush(time); return { items => [] } unless $self->_buffer_size; if ( $self->verbose ) { local $| = 1; print "."; } my $buffer = $self->_buffer; my $results = try { my $res = $self->es->bulk( %{ $self->_bulk_args }, body => $buffer ); $self->clear_buffer; return $res; } catch { my $error = $_; $self->clear_buffer unless $error->is( 'Cxn', 'NoNodes' ); die $error; }; $self->_report( $buffer, $results ); return defined wantarray ? $results : undef; } 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Client::5_0::Bulk - A helper module for the Bulk API =head1 VERSION version 5.01 =head1 SYNOPSIS use Search::Elasticsearch; my $es = Search::Elasticsearch->new; my $bulk = $es->bulk_helper( index => 'my_index', type => 'my_type' ); # Index docs: $bulk->index({ id => 1, source => { foo => 'bar' }}); $bulk->add_action( index => { id => 1, source => { foo=> 'bar' }}); # Create docs: $bulk->create({ id => 1, source => { foo => 'bar' }}); $bulk->add_action( create => { id => 1, source => { foo=> 'bar' }}); $bulk->create_docs({ foo => 'bar' }) # Delete docs: $bulk->delete({ id => 1}); $bulk->add_action( delete => { id => 1 }); $bulk->delete_ids(1,2,3) # Update docs: $bulk->update({ id => 1, script => '...' }); $bulk->add_action( update => { id => 1, script => '...' }); # Manual flush $bulk->flush; =head1 DESCRIPTION This module provides a wrapper for the L method which makes it easier to run multiple create, index, update or delete actions in a single request. The L module acts as a queue, buffering up actions until it reaches a maximum count of actions, or a maximum size of JSON request body, at which point it issues a C request. Once you have finished adding actions, call L to force the final C request on the items left in the queue. This class does L and L. 
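As a rough illustration of that queue-then-flush lifecycle (the index name, type and document bodies are made up for the example):

    use Search::Elasticsearch;

    my $es   = Search::Elasticsearch->new;
    my $bulk = $es->bulk_helper(
        index     => 'my_index',
        type      => 'my_type',
        max_count => 500,          # auto-flush every 500 buffered actions
    );

    for my $id ( 1 .. 10_000 ) {
        # buffered locally; a bulk request is issued automatically
        # whenever max_count, max_size or max_time is reached
        $bulk->index( { id => $id, source => { counter => $id } } );
    }

    # send whatever is still sitting in the buffer
    $bulk->flush;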
=head1 CREATING A NEW INSTANCE =head2 C my $bulk = $es->bulk_helper( index => 'default_index', # optional type => 'default_type', # optional %other_bulk_params # optional max_count => 1_000, # optional max_size => 1_000_000, # optional max_time => 5, # optional verbose => 0 | 1, # optional on_success => sub {...}, # optional on_error => sub {...}, # optional on_conflict => sub {...}, # optional ); The C method returns a new C<$bulk> object. You must pass your Search::Elasticsearch client as the C argument. The C and C parameters provide default values for C and C, which can be overridden in each action. You can also pass any other values which are accepted by the L method. See L for more information about the other parameters. =head1 FLUSHING THE BUFFER =head2 C $result = $bulk->flush; The C method sends all buffered actions to Elasticsearch using a L request. =head2 Auto-flushing An automatic L is triggered whenever the C, C, or C threshold is breached. This causes all actions in the buffer to be sent to Elasticsearch. =over =item * C The maximum number of actions to allow before triggering a L. This can be disabled by setting C to C<0>. Defaults to C<1,000>. =item * C The maximum size of JSON request body to allow before triggering a L. This can be disabled by setting C to C<0>. Defaults to C<1_000,000> bytes. =item * C The maximum number of seconds to wait before triggering a flush. Defaults to C<0> seconds, which means that it is disabled. B This timeout is only triggered when new items are added to the queue, not in the background. =back =head2 Errors when flushing There are two types of error which can be thrown when L is called, either manually or automatically. =over =item * Temporary Elasticsearch errors A C error like a C error which indicates that your cluster is down. These errors do not clear the buffer, as they can be retried later on. =item * Action errors Individual actions may fail. For instance, a C action will fail if a document with the same C, C and C already exists. These action errors are reported via L. =back =head2 Using callbacks By default, any I (see above) cause warnings to be written to C. However, you can use the C, C and C callbacks for more fine-grained control. All callbacks receive the following arguments: =over =item C<$action> The name of the action, ie C, C, C or C. =item C<$response> The response that Elasticsearch returned for this action. =item C<$i> The index of the action, ie the first action in the flush request will have C<$i> set to C<0>, the second will have C<$i> set to C<1> etc. =back =head3 C my $bulk = $es->bulk_helper( on_success => sub { my ($action,$response,$i) = @_; # do something }, ); The C callback is called for every action that has a successful response. =head3 C my $bulk = $es->bulk_helper( on_conflict => sub { my ($action,$response,$i,$version) = @_; # do something }, ); The C callback is called for actions that have triggered a C error, eg trying to C a document which already exists. The C<$version> argument will contain the version number of the document currently stored in Elasticsearch (if found). =head3 C my $bulk = $es->bulk_helper( on_error => sub { my ($action,$response,$i) = @_; # do something }, ); The C callback is called for any error (unless the C) callback has already been called). 
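Putting the three callbacks together, a minimal sketch (the warnings are only illustrative; you might instead collect the failed responses for a later retry):

    my $bulk = $es->bulk_helper(
        index => 'my_index',
        type  => 'my_type',

        on_success => sub {
            my ( $action, $response, $i ) = @_;
            # e.g. increment a counter of successful actions
        },

        on_conflict => sub {
            my ( $action, $response, $i, $version ) = @_;
            warn "Conflict on action $i ($action), stored version: "
                . ( defined $version ? $version : 'unknown' ) . "\n";
        },

        on_error => sub {
            my ( $action, $response, $i ) = @_;
            warn "Action $i ($action) failed\n";
        },
    );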
=head2 Disabling callbacks and autoflush If you want to be in control of flushing, and you just want to receive the raw response that Elasticsearch sends instead of using callbacks, then you can do so as follows: my $bulk = $es->bulk_helper( max_count => 0, max_size => 0, on_error => undef ); $bulk->add_actions(....); $response = $bulk->flush; =head1 CREATE, INDEX, UPDATE, DELETE =head2 C $bulk->add_action( create => { ...params... }, index => { ...params... }, update => { ...params... }, delete => { ...params... } ); The C method allows you to add multiple C, C, C and C actions to the queue. The first value is the action type, and the second value is the parameters that describe that action. See the individual helper methods below for details. B Parameters like C or C can be specified as C or as C<_index>, so the following two lines are equivalent: index => { index => 'index', type => 'type', id => 1, source => {...}}, index => { _index => 'index', _type => 'type', _id => 1, _source => {...}}, B The C and C parameters can be specified in the params for any action, but if not specified, will default to the C and C values specified in L. These are required parameters: they must be specified either in L or in every action. =head2 C $bulk->create( { index => 'custom_index', source => { doc body }}, { type => 'custom_type', id => 1, source => { doc body }}, ... ); The C helper method allows you to add multiple C actions. It accepts the same parameters as L except that the document body should be passed as the C or C<_source> parameter, instead of as C. =head2 C $bulk->create_docs( { doc body }, { doc body }, ... ); The C helper is a shorter form of L which can be used when you are using the default C and C as set in L and you are not specifying a custom C per document. In this case, you can just pass the individual document bodies. =head2 C $bulk->index( { index => 'custom_index', source => { doc body }}, { type => 'custom_type', id => 1, source => { doc body }}, ... ); The C helper method allows you to add multiple C actions. It accepts the same parameters as L except that the document body should be passed as the C or C<_source> parameter, instead of as C. =head2 C $bulk->delete( { index => 'custom_index', id => 1}, { type => 'custom_type', id => 2}, ... ); The C helper method allows you to add multiple C actions. It accepts the same parameters as L. =head2 C $bulk->delete_ids(1,2,3...) The C helper method can be used when all of the documents you want to delete have the default C and C as set in L. In this case, all you have to do is to pass in a list of IDs. =head2 C $bulk->update( { id => 1, doc => { partial doc }, doc_as_upsert => 1 }, { id => 2, lang => 'mvel', script => { script } upsert => { upsert doc } }, ... ); The C helper method allows you to add multiple C actions. It accepts the same parameters as L. An update can either use a I which gets merged with an existing doc (example 1 above), or can use a C