README

NAME

    Search::Elasticsearch::Client::2_0 - Thin client with full support for
    Elasticsearch 2.x APIs

VERSION

    version 6.81

DESCRIPTION

    The Search::Elasticsearch::Client::2_0 package provides a client
    compatible with Elasticsearch 2.x. It should be used in conjunction
    with Search::Elasticsearch as follows:

        $e = Search::Elasticsearch->new( client => "2_0::Direct" );

    See Search::Elasticsearch::Client::2_0::Direct for documentation about
    how to use the client itself.

AUTHOR

    Enrico Zimuel

COPYRIGHT AND LICENSE

    This software is Copyright (c) 2020 by Elasticsearch BV.

    This is free software, licensed under:

        The Apache License, Version 2.0, January 2004

Changes

Revision history for Search::Elasticsearch::Client::2_0

6.81    2020-06-26
        Bumped to version 6.81

6.80    2020-03-25
        Bumped to version 6.80

6.80_1  2020-03-11
        Bumped to version 6.80

5.02    2017-04-02
        Updated to work with Search::Elasticsearch 5.02

5.01    2016-10-19
        Doc fixes

5.00    2016-10-19
        First release of the 2_0 client module for Search::Elasticsearch 5.00

LICENSE

This software is Copyright (c) 2020 by Elasticsearch BV.

This is free software, licensed under:

    The Apache License, Version 2.0, January 2004

                             Apache License
                       Version 2.0, January 2004
                    http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included
on the same "printed page" as the copyright notice for easier
identification within third-party archives.

    Copyright [yyyy] [name of copyright owner]

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
    implied. See the License for the specific language governing
    permissions and limitations under the License.

MANIFEST

# This file was automatically generated by Dist::Zilla::Plugin::Manifest v6.012.
Changes
LICENSE
MANIFEST
META.json
Makefile.PL
README
lib/Search/Elasticsearch/Client/2_0.pm
lib/Search/Elasticsearch/Client/2_0/Bulk.pm
lib/Search/Elasticsearch/Client/2_0/Direct.pm
lib/Search/Elasticsearch/Client/2_0/Direct/Cat.pm
lib/Search/Elasticsearch/Client/2_0/Direct/Cluster.pm
lib/Search/Elasticsearch/Client/2_0/Direct/Indices.pm
lib/Search/Elasticsearch/Client/2_0/Direct/Nodes.pm
lib/Search/Elasticsearch/Client/2_0/Direct/Snapshot.pm
lib/Search/Elasticsearch/Client/2_0/Direct/Tasks.pm
lib/Search/Elasticsearch/Client/2_0/Role/API.pm
lib/Search/Elasticsearch/Client/2_0/Role/Bulk.pm
lib/Search/Elasticsearch/Client/2_0/Role/Scroll.pm
lib/Search/Elasticsearch/Client/2_0/Scroll.pm
lib/Search/Elasticsearch/Client/2_0/TestServer.pm
t/Client_2_0/00_print_version.t
t/Client_2_0/10_live.t
t/Client_2_0/15_conflict.t
t/Client_2_0/20_fork_httptiny.t
t/Client_2_0/21_fork_lwp.t
t/Client_2_0/22_fork_hijk.t
t/Client_2_0/30_bulk_add_action.t
t/Client_2_0/31_bulk_helpers.t
t/Client_2_0/32_bulk_flush.t
t/Client_2_0/33_bulk_errors.t
t/Client_2_0/34_bulk_cxn_errors.t
t/Client_2_0/40_scroll.t
t/Client_2_0/50_reindex.t
t/Client_2_0/60_auth_httptiny.t
t/Client_2_0/61_auth_lwp.t
t/author-eol.t
t/author-no-tabs.t
t/author-pod-syntax.t
t/lib/LogCallback.pl
t/lib/MockCxn.pm
t/lib/bad_cacert.pem
t/lib/default_cxn.pl
t/lib/es_sync.pl
t/lib/es_sync_auth.pl
t/lib/es_sync_fork.pl
t/lib/index_test_data.pl

META.json

{
   "abstract" : "Thin client with full support for Elasticsearch 2.x APIs",
   "author" : [
      "Enrico Zimuel"
   ],
   "dynamic_config" : 0,
   "generated_by" : "Dist::Zilla version 6.012, CPAN::Meta::Converter version 2.150010",
   "license" : [
      "apache_2_0"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : 2
   },
   "name" : "Search-Elasticsearch-Client-2_0",
   "prereqs" : {
      "configure" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "0"
         }
      },
      "develop" : {
         "requires" : {
            "Test::EOL" : "0",
            "Test::More" : "0.88",
            "Test::NoTabs" : "0",
            "Test::Pod" : "1.41"
         }
      },
      "runtime" : {
         "requires" : {
            "Devel::GlobalDestruction" : "0",
            "Moo" : "0",
            "Moo::Role" : "0",
            "Search::Elasticsearch" : "6.00",
            "Search::Elasticsearch::Role::API" : "0",
            "Search::Elasticsearch::Role::Client::Direct" : "0",
            "Search::Elasticsearch::Role::Is_Sync" : "0",
            "Search::Elasticsearch::Util" : "0",
            "Try::Tiny" : "0",
            "namespace::clean" : "0",
            "strict" : "0",
            "warnings" : "0"
         }
      },
      "test" : {
         "requires" : {
            "Data::Dumper" : "0",
            "IO::Socket::SSL" : "0",
            "Log::Any::Adapter" : "0",
            "Log::Any::Adapter::Callback" : "0.09",
            "POSIX" : "0",
            "Search::Elasticsearch::Role::Cxn" : "0",
            "Sub::Exporter" : "0",
            "Test::Deep" : "0",
            "Test::Exception" : "0",
            "Test::More" : "0.98",
            "lib" : "0"
         }
      }
   },
   "release_status" : "stable",
   "resources" : {
      "bugtracker" : {
         "web" : "https://github.com/elastic/elasticsearch-perl/issues"
      },
      "homepage" : "https://metacpan.org/pod/Search::Elasticsearch",
      "repository" : {
         "type" : "git",
         "url" : "git://github.com/elastic/elasticsearch-perl.git",
         "web" : "https://github.com/elastic/elasticsearch-perl"
      }
   },
   "version" : "6.81",
   "x_generated_by_perl" : "v5.26.1",
   "x_serialization_backend" : "Cpanel::JSON::XS version 4.19"
}

Makefile.PL

# This file was automatically generated by Dist::Zilla::Plugin::MakeMaker v6.012.
use strict;
use warnings;

use ExtUtils::MakeMaker;

my %WriteMakefileArgs = (
    "ABSTRACT" => "Thin client with full support for Elasticsearch 2.x APIs",
    "AUTHOR"   => "Enrico Zimuel",
    "CONFIGURE_REQUIRES" => {
        "ExtUtils::MakeMaker" => 0
    },
    "DISTNAME" => "Search-Elasticsearch-Client-2_0",
    "LICENSE"  => "apache",
    "NAME"     => "Search::Elasticsearch::Client::2_0",
    "PREREQ_PM" => {
        "Devel::GlobalDestruction"                     => 0,
        "Moo"                                          => 0,
        "Moo::Role"                                    => 0,
        "Search::Elasticsearch"                        => "6.00",
        "Search::Elasticsearch::Role::API"             => 0,
        "Search::Elasticsearch::Role::Client::Direct"  => 0,
        "Search::Elasticsearch::Role::Is_Sync"         => 0,
        "Search::Elasticsearch::Util"                  => 0,
        "Try::Tiny"                                    => 0,
        "namespace::clean"                             => 0,
        "strict"                                       => 0,
        "warnings"                                     => 0
    },
    "TEST_REQUIRES" => {
        "Data::Dumper"                      => 0,
        "IO::Socket::SSL"                   => 0,
        "Log::Any::Adapter"                 => 0,
        "Log::Any::Adapter::Callback"       => "0.09",
        "POSIX"                             => 0,
        "Search::Elasticsearch::Role::Cxn"  => 0,
        "Sub::Exporter"                     => 0,
        "Test::Deep"                        => 0,
        "Test::Exception"                   => 0,
        "Test::More"                        => "0.98",
        "lib"                               => 0
    },
    "VERSION" => "6.81",
    "test"    => {
        "TESTS" => "t/*.t t/Client_2_0/*.t"
    }
);

my %FallbackPrereqs = (
    "Data::Dumper"                                 => 0,
    "Devel::GlobalDestruction"                     => 0,
    "IO::Socket::SSL"                              => 0,
    "Log::Any::Adapter"                            => 0,
    "Log::Any::Adapter::Callback"                  => "0.09",
    "Moo"                                          => 0,
    "Moo::Role"                                    => 0,
    "POSIX"                                        => 0,
    "Search::Elasticsearch"                        => "6.00",
    "Search::Elasticsearch::Role::API"             => 0,
    "Search::Elasticsearch::Role::Client::Direct"  => 0,
    "Search::Elasticsearch::Role::Cxn"             => 0,
    "Search::Elasticsearch::Role::Is_Sync"         => 0,
    "Search::Elasticsearch::Util"                  => 0,
    "Sub::Exporter"                                => 0,
    "Test::Deep"                                   => 0,
    "Test::Exception"                              => 0,
    "Test::More"                                   => "0.98",
    "Try::Tiny"                                    => 0,
    "lib"                                          => 0,
    "namespace::clean"                             => 0,
    "strict"                                       => 0,
    "warnings"                                     => 0
);

# Older ExtUtils::MakeMaker releases do not understand TEST_REQUIRES,
# so fall back to merging everything into PREREQ_PM.
unless ( eval { ExtUtils::MakeMaker->VERSION(6.63_03) } ) {
    delete $WriteMakefileArgs{TEST_REQUIRES};
    delete $WriteMakefileArgs{BUILD_REQUIRES};
    $WriteMakefileArgs{PREREQ_PM} = \%FallbackPrereqs;
}

delete $WriteMakefileArgs{CONFIGURE_REQUIRES}
    unless eval { ExtUtils::MakeMaker->VERSION(6.52) };

WriteMakefile(%WriteMakefileArgs);

t/author-eol.t

BEGIN {
    unless ( $ENV{AUTHOR_TESTING} ) {
        print qq{1..0 # SKIP these tests are for testing by the author\n};
        exit;
    }
}

use strict;
use warnings;

# this test was generated with Dist::Zilla::Plugin::Test::EOL 0.19

use Test::More 0.88;
use Test::EOL;

my @files = (
    'lib/Search/Elasticsearch/Client/2_0.pm',
    'lib/Search/Elasticsearch/Client/2_0/Bulk.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct/Cat.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct/Cluster.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct/Indices.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct/Nodes.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct/Snapshot.pm',
    'lib/Search/Elasticsearch/Client/2_0/Direct/Tasks.pm',
    'lib/Search/Elasticsearch/Client/2_0/Role/API.pm',
    'lib/Search/Elasticsearch/Client/2_0/Role/Bulk.pm',
    'lib/Search/Elasticsearch/Client/2_0/Role/Scroll.pm',
    'lib/Search/Elasticsearch/Client/2_0/Scroll.pm',
    'lib/Search/Elasticsearch/Client/2_0/TestServer.pm',
    't/Client_2_0/00_print_version.t',
    't/Client_2_0/10_live.t',
    't/Client_2_0/15_conflict.t',
    't/Client_2_0/20_fork_httptiny.t',
    't/Client_2_0/21_fork_lwp.t',
    't/Client_2_0/22_fork_hijk.t',
    't/Client_2_0/30_bulk_add_action.t',
    't/Client_2_0/31_bulk_helpers.t',
    't/Client_2_0/32_bulk_flush.t',
    't/Client_2_0/33_bulk_errors.t',
    't/Client_2_0/34_bulk_cxn_errors.t',
    't/Client_2_0/40_scroll.t',
    't/Client_2_0/50_reindex.t',
    't/Client_2_0/60_auth_httptiny.t',
    't/Client_2_0/61_auth_lwp.t',
    't/author-eol.t',
    't/author-no-tabs.t',
    't/author-pod-syntax.t',
    't/lib/LogCallback.pl',
    't/lib/MockCxn.pm',
    't/lib/bad_cacert.pem',
    't/lib/default_cxn.pl',
    't/lib/es_sync.pl',
    't/lib/es_sync_auth.pl',
't/lib/es_sync_fork.pl', 't/lib/index_test_data.pl' ); eol_unix_ok($_, { trailing_whitespace => 1 }) foreach @files; done_testing; lib000755001750001750 013675355153 20107 5ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/tMockCxn.pm100644001750001750 656013675355153 22156 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/libpackage MockCxn; use strict; use warnings; our $VERSION = $Search::Elasticsearch::VERSION; use Data::Dumper; use Moo; with 'Search::Elasticsearch::Role::Cxn', 'Search::Elasticsearch::Role::Is_Sync'; use Sub::Exporter -setup => { exports => [ qw( mock_static_client mock_sniff_client mock_noping_client ) ] }; our $i = 0; has 'mock_responses' => ( is => 'rw', required => 1 ); has 'marked_live' => ( is => 'rw', default => sub {0} ); has 'node_num' => ( is => 'ro', default => sub { ++$i } ); #=================================== sub BUILD { #=================================== my $self = shift; $self->logger->debugf( "[%s-%s] CREATED", $self->node_num, $self->host ); } #=================================== sub error_from_text { return $_[2] } #=================================== #=================================== sub perform_request { #=================================== my $self = shift; my $params = shift; my $response = shift @{ $self->mock_responses } or die "Mock responses exhausted"; if ( my $node = $response->{node} ) { die "Mock response handled by wrong node [" . $self->node_num . "]: " . Dumper($response) unless $node eq $self->node_num; } my $log_msg; # Sniff request if ( my $nodes = $response->{sniff} ) { $log_msg = "SNIFF: [" . ( join ", ", @$nodes ) . "]"; $response->{code} ||= 200; my $i = 1; unless ( $response->{error} ) { $response->{content} = $self->serializer->encode( { nodes => { map { 'node_' . $i++ => { http_address => "inet[/$_]" } } @$nodes } } ); } } # Normal request elsif ( $response->{code} ) { $log_msg = "REQUEST: " . 
( $response->{error} || $response->{code} ); } # Ping request else { $log_msg = "PING: " . ( $response->{ping} ? 'OK' : 'NOT_OK' ); $response = $response->{ping} ? { code => 200 } : { code => 500, error => 'Cxn' }; } $self->logger->debugf( "[%s-%s] %s", $self->node_num, $self->host, $log_msg ); return $self->process_response( $params, # request $response->{code}, # code $response->{error}, # msg $response->{content}, # body { 'content-type' => 'application/json' } ); } #### EXPORTS ### my $trace = !$ENV{TRACE} ? undef : $ENV{TRACE} eq '1' ? 'Stderr' : [ 'File', $ENV{TRACE} ]; #=================================== sub mock_static_client { _mock_client( 'Static', @_ ) } sub mock_sniff_client { _mock_client( 'Sniff', @_ ) } sub mock_noping_client { _mock_client( 'Static::NoPing', @_ ) } #=================================== #=================================== sub _mock_client { #=================================== my $pool = shift; my $params = shift; $i = 0; return Search::Elasticsearch->new( cxn => '+MockCxn', cxn_pool => $pool, mock_responses => \@_, randomize_cxns => 0, log_to => $trace, %$params, )->transport; } 1 es_sync.pl100644001750001750 313013675355153 22244 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/libuse Search::Elasticsearch; use Test::More; use strict; use warnings; my $trace = !$ENV{TRACE} ? undef : $ENV{TRACE} eq '1' ? 'Stderr' : [ 'File', $ENV{TRACE} ]; unless ($ENV{CLIENT_VER}) { plan skip_all => 'No $ENV{CLIENT_VER} specified'; exit; } unless ($ENV{ES}) { plan skip_all => 'No Elasticsearch test node available'; exit; } my $api = "$ENV{CLIENT_VER}::Direct"; my $body = $ENV{ES_BODY} || 'GET'; my $cxn = $ENV{ES_CXN} || do "default_cxn.pl" || die( $@ || $! 
); my $cxn_pool = $ENV{ES_CXN_POOL} || 'Static'; my $timeout = $ENV{ES_TIMEOUT} || 30; my @plugins = split /,/, ( $ENV{ES_PLUGINS} || '' ); our %Auth; my $es; if ( $ENV{ES} ) { eval { $es = Search::Elasticsearch->new( nodes => $ENV{ES}, trace_to => $trace, cxn => $cxn, cxn_pool => $cxn_pool, client => $api, send_get_body_as => $body, request_timeout => $timeout, plugins => \@plugins, %Auth ); $es->ping unless $ENV{ES_SKIP_PING}; 1; } || do { diag $@; undef $es; }; } unless ( $ENV{ES_SKIP_PING} ) { my $version = $es->info->{version}{number}; my $api = $es->api_version; unless ( $api eq '0_90' && $version =~ /^0\.9/ || substr( $api, 0, 1 ) eq substr( $version, 0, 1 ) ) { plan skip_all => "Tests are for API version $api but Elasticsearch is version $version\n"; exit; } } return $es; author-no-tabs.t100644001750001750 364613675355153 22542 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t BEGIN { unless ($ENV{AUTHOR_TESTING}) { print qq{1..0 # SKIP these tests are for testing by the author\n}; exit } } use strict; use warnings; # this test was generated with Dist::Zilla::Plugin::Test::NoTabs 0.15 use Test::More 0.88; use Test::NoTabs; my @files = ( 'lib/Search/Elasticsearch/Client/2_0.pm', 'lib/Search/Elasticsearch/Client/2_0/Bulk.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct/Cat.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct/Cluster.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct/Indices.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct/Nodes.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct/Snapshot.pm', 'lib/Search/Elasticsearch/Client/2_0/Direct/Tasks.pm', 'lib/Search/Elasticsearch/Client/2_0/Role/API.pm', 'lib/Search/Elasticsearch/Client/2_0/Role/Bulk.pm', 'lib/Search/Elasticsearch/Client/2_0/Role/Scroll.pm', 'lib/Search/Elasticsearch/Client/2_0/Scroll.pm', 'lib/Search/Elasticsearch/Client/2_0/TestServer.pm', 't/Client_2_0/00_print_version.t', 't/Client_2_0/10_live.t', 
't/Client_2_0/15_conflict.t', 't/Client_2_0/20_fork_httptiny.t', 't/Client_2_0/21_fork_lwp.t', 't/Client_2_0/22_fork_hijk.t', 't/Client_2_0/30_bulk_add_action.t', 't/Client_2_0/31_bulk_helpers.t', 't/Client_2_0/32_bulk_flush.t', 't/Client_2_0/33_bulk_errors.t', 't/Client_2_0/34_bulk_cxn_errors.t', 't/Client_2_0/40_scroll.t', 't/Client_2_0/50_reindex.t', 't/Client_2_0/60_auth_httptiny.t', 't/Client_2_0/61_auth_lwp.t', 't/author-eol.t', 't/author-no-tabs.t', 't/author-pod-syntax.t', 't/lib/LogCallback.pl', 't/lib/MockCxn.pm', 't/lib/bad_cacert.pem', 't/lib/default_cxn.pl', 't/lib/es_sync.pl', 't/lib/es_sync_auth.pl', 't/lib/es_sync_fork.pl', 't/lib/index_test_data.pl' ); notabs_ok($_) foreach @files; done_testing; LogCallback.pl100644001750001750 44313675355153 22723 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/libuse Log::Any::Adapter::Callback 0.09; use Log::Any::Adapter; our ( $method, $format ); Log::Any::Adapter->set( 'Callback', min_level => 'trace', logging_cb => sub { ( $method, undef, $format ) = @_; }, detection_cb => sub { $method = shift; } ); 1 default_cxn.pl100644001750001750 2313675355153 23033 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/libreturn 'HTTPTiny'; bad_cacert.pem100644001750001750 241113675355153 23017 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/lib-----BEGIN CERTIFICATE----- MIIDijCCAvOgAwIBAgIJAIFQM5672YHcMA0GCSqGSIb3DQEBBQUAMIGLMRcwFQYD sQQKEw5LZXZpbiBUZXN0IE9yZzEbMBkGCSqGSIb3DQEJARYMa2V2aW5AZXMub3Jn MRIwEAYDVQQHEwlBbXN0ZXJkYW0xEjAQBgNVBAgTCUFtc3RlcmRhbTELMAkGA1UE BhMCTkwxHjAcBgNVBAMTFUtldmlucyBob3VzZSBvZiBjZXJ0czAeFw0xNDEwMTcy MzIyMjlaFw0xNTEwMTcyMzIyMjlaMIGLMRcwFQYDVQQKEw5LZXZpbiBUZXN0IE9y ZzEbMBkGCSqGSIb3DQEJARYMa2V2aW5AZXMub3JnMRIwEAYDVQQHEwlBbXN0ZXJk YW0xEjAQBgNVBAgTCUFtc3RlcmRhbTELMAkGA1UEBhMCTkwxHjAcBgNVBAMTFUtl dmlucyBob3VzZSBvZiBjZXJ0czCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA 9xG5d4JaJ2vFuyKGbzvAlHpAeIiOFuCOum9UXsUIeCCQn/q/BNlIaF+UQ+Y/yNJr 
3zraL9oboVSJZph8CIN7dKmLSnnAe83cjlQQNosS1heUTSyVWC7dWCj3djO3xeT9 qTfhAj4a2OfvLHk2yT5Mp2cZYUnEKqCwhC98R7jKGtsCAwEAAaOB8zCB8DAMBgNV HRMEBTADAQH/MB0GA1UdDgQWBBQUtCQRtRzPojRpZ/3hanfZN3nxwjCBwAYDVR0j BIG4MIG1gBQUtCQRtRzPojRpZ/3hanfZN3nxwqGBkaSBjjCBizEXMBUGA1UEChMO S2V2aW4gVGVzdCBPcmcxGzAZBgkqhkiG9w0BCQEWDGtldmluQGVzLm9yZzESMBAG A1UEBxMJQW1zdGVyZGFtMRIwEAYDVQQIEwlBbXN0ZXJkYW0xCzAJBgNVBAYTAk5M MR4wHAYDVQQDExVLZXZpbnMgaG91c2Ugb2YgY2VydHOCCQCBUDOeu9mB3DANBgkq hkiG9w0BAQUFAAOBgQDF2nfTTrM7cviLiExF6iQP/HwigXiHhotcBtyjfPvXhRe0 k96MwEWS+87XsLERF1FPkEzW4TjF6f4pRxAYbTA3frWZ4vFwM7CflI/9ca9HlRux WTG7ZMdyKE1Z2Vip2W1kVtVb/Gd/qWzxEoCwuHWo5dRZ8nrZ27U+Ij3CAFWEhQ== -----END CERTIFICATE----- es_sync_auth.pl100644001750001750 406113675355153 23271 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/lib#! perl use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; our $Throws_SSL; unless ( $ENV{ES_SSL} ) { plan skip_all => "$ENV{ES_CXN} - No https server specified in ES_SSL"; exit; } unless ( $ENV{ES_USERINFO} ) { plan skip_all => "$ENV{ES_CXN} - No user/pass specified in ES_USERINFO"; exit; } unless ( $ENV{ES_CA_PATH} ) { plan skip_all => "$ENV{ES_CXN} - No cacert specified in ES_CA_PATH"; exit; } $ENV{ES} = $ENV{ES_SSL}; $ENV{ES_SKIP_PING} = 1; our %Auth = ( use_https => 1, userinfo => $ENV{ES_USERINFO} ); # Test https connection with correct auth, without cacert $ENV{ES_CXN_POOL} = 'Static'; my $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Non-cert HTTPS with auth, cxn static"; $ENV{ES_CXN_POOL} = 'Sniff'; $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Non-cert HTTPS with auth, cxn sniff"; $ENV{ES_CXN_POOL} = 'Static::NoPing'; $es = do "es_sync.pl" or die( $@ || $! 
); ok $es->cluster->health, "$ENV{ES_CXN} - Non-cert HTTPS with auth, cxn noping"; # Test forbidden action throws_ok { $es->nodes->shutdown } "Search::Elasticsearch::Error::Forbidden", "$ENV{ES_CXN} - Forbidden action"; # Test https connection with correct auth, with valid cacert $Auth{ssl_options} = ssl_options( $ENV{ES_CA_PATH} ); $es = do "es_sync.pl" or die( $@ || $! ); ok $es->cluster->health, "$ENV{ES_CXN} - Valid cert HTTPS with auth"; # Test invalid user credentials %Auth = ( userinfo => 'foobar:baz' ); $es = do "es_sync.pl" or die( $@ || $! ); throws_ok { $es->cluster->health } "Search::Elasticsearch::Error::Unauthorized", "$ENV{ES_CXN} - Bad userinfo"; # Test https connection with correct auth, with invalid cacert $Auth{ssl_options} = ssl_options('t/lib/bad_cacert.pem'); $ENV{ES} = "https://www.google.com"; $es = do "es_sync.pl" or die( $@ || $! ); throws_ok { $es->cluster->health } "Search::Elasticsearch::Error::$Throws_SSL", "$ENV{ES_CXN} - Invalid cert throws $Throws_SSL"; done_testing; es_sync_fork.pl100644001750001750 140013675355153 23263 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/libuse Test::More; use POSIX ":sys_wait_h"; my $es = do "es_sync.pl" or die( $@ || $! ); my $cxn_class = ref $es->transport->cxn_pool->cxns->[0]; ok $es->info, "$cxn_class - Info before fork"; my $Kids = 4; my %pids; for my $child ( 1 .. $Kids ) { my $pid = fork(); if ($pid) { $pids{$pid} = $child; next; } if ( !defined $pid ) { skip "fork() not supported"; done_testing; last; } for ( 1 .. 100 ) { $es->info; } exit; } my $ok = 0; for ( 1 .. 
10 ) { my $pid = waitpid( -1, WNOHANG ); if ( $pid > 0 ) { delete $pids{$pid}; $ok++ unless $?; redo; } last unless keys %pids; sleep 1; } is $ok, $Kids, "$cxn_class - Fork"; done_testing; author-pod-syntax.t100644001750001750 45413675355153 23257 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t#!perl BEGIN { unless ($ENV{AUTHOR_TESTING}) { print qq{1..0 # SKIP these tests are for testing by the author\n}; exit } } # This file was automatically generated by Dist::Zilla::Plugin::PodSyntaxTests. use strict; use warnings; use Test::More; use Test::Pod 1.41; all_pod_files_ok(); Client_2_0000755001750001750 013675355153 21217 5ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t10_live.t100644001750001750 136113675355153 23004 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; my $es; $ENV{ES_VERSION} = '2_0'; local $ENV{ES_CXN_POOL}; $ENV{ES_CXN_POOL} = 'Static'; $es = do "es_sync.pl" or die( $@ || $! ); is $es->info->{tagline}, "You Know, for Search", 'CxnPool::Static'; $ENV{ES_CXN_POOL} = 'Static::NoPing'; $es = do "es_sync.pl" or die( $@ || $! ); is $es->info->{tagline}, "You Know, for Search", 'CxnPool::Static::NoPing'; $ENV{ES_CXN_POOL} = 'Sniff'; $es = do "es_sync.pl" or die( $@ || $! ); is $es->info->{tagline}, "You Know, for Search", 'CxnPool::Sniff'; my ($node) = values %{ $es->transport->cxn_pool->next_cxn->sniff }; ok $node->{http}{max_content_length_in_bytes}, 'Sniffs max_content length'; done_testing; index_test_data.pl100644001750001750 663613675355153 23756 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/libuse strict; use warnings; use lib 't/lib'; local $ENV{ES_CXN}; local $ENV{ES_CXN_POOL}; my $es = do 'es_sync.pl' or die( $@ || $! 
); $es->indices->delete( index => 'test', ignore => 404 ); $es->indices->create( index => 'test' ); $es->cluster->health( wait_for_status => 'yellow' ); my $b = $es->bulk_helper( index => 'test', type => 'test' ); my $i = 1; for ( names() ) { $b->index( { id => $i, source => { name => $_, count => $i, color => ( $i % 2 ? 'red' : 'green' ), switch => ( $i % 2 ? 1 : 2 ) } } ); $i++; } $b->flush; $es->indices->refresh; #=================================== sub names { #=================================== return ( 'Adaptoid', 'Alpha Ray', 'Alysande Stuart', 'Americop', 'Andrew Chord', 'Android Man', 'Ani-Mator', 'Aqueduct', 'Archangel', 'Arena', 'Auric', 'Barton, Clint', 'Behemoth', 'Bereet', 'Black Death', 'Black King', 'Blaze', 'Cancer', 'Charlie-27', 'Christians, Isaac', 'Clea', 'Contemplator', 'Copperhead', 'Darkdevil', 'Deathbird', 'Diablo', 'Doctor Arthur Nagan', 'Doctor Droom', 'Doctor Octopus', 'Epoch', 'Eternity', 'Feline', 'Firestar', 'Flex', 'Garokk the Petrified Man', 'Gill, Donald "Donny"', 'Glitch', 'Golden Girl', 'Grandmaster', 'Grey, Elaine', 'Halloween Jack', 'Hannibal King', 'Hero for Hire', 'Hrimhari', 'Ikonn', 'Infinity', 'Jack-in-the-Box', 'Jim Hammond', 'Joe Cartelli', 'Juarez, Bonita', 'Judd, Eugene', 'Korrek', 'Krang', 'Kukulcan', 'Lizard', 'Machinesmith', 'Master Man', 'Match', 'Maur-Konn', 'Mekano', 'Miguel Espinosa', 'Mister Sinister', 'Mogul of the Mystic Mountain', 'Mutant Master', 'Night Thrasher', 'Nital, Taj', 'Obituary', 'Ogre', 'Owl', 'Ozone', 'Paris', 'Phastos', 'Piper', 'Prodigy', 'Quagmire', 'Quasar', 'Radioactive Man', 'Rankin, Calvin', 'Scarlet Scarab', 'Scarlet Witch', 'Seth', 'Slug', 'Sluggo', 'Smallwood, Marrina', 'Smith, Tabitha', 'St. 
Croix, Claudette', 'Stacy X', 'Stallior', 'Star-Dancer', 'Stitch', 'Storm, Susan', 'Summers, Gabriel', 'Thane Ector', 'Toad-In-Waiting', 'Ultron', 'Urich, Phil', 'Vibro', 'Victorius', 'Wolfsbane', 'Yandroth' ); } 40_scroll.t100644001750001750 1324713675355153 23374 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use lib 't/lib'; use strict; use warnings; $ENV{ES_VERSION} = '2_0'; our $es = do "es_sync.pl" or die( $@ || $! ); $es->indices->delete( index => '_all', ignore => 404 ); test_scroll( "No indices", {}, total => 0, max_score => 0, steps => [ is_finished => 1, next => [0], refill_buffer => 0, drain_buffer => [0], ] ); do "index_test_data.pl" or die( $@ || $! ); test_scroll( "Match all", {}, total => 100, max_score => 1, steps => [ is_finished => '', buffer_size => 10, next => [1], drain_buffer => [9], refill_buffer => 10, refill_buffer => 20, is_finished => '', next_81 => [81], next_20 => [9], next => [0], is_finished => 1, ] ); test_scroll( "Query", { body => { query => { term => { color => 'red' } }, suggest => { mysuggest => { text => 'green', term => { field => 'color' } } }, aggs => { switch => { terms => { field => 'switch' } } }, } }, total => 50, max_score => num( 1.6, 0.5 ), aggs => bool(1), suggest => bool(1), steps => [ next => [1], next_50 => [49], is_finished => 1, ] ); test_scroll( "Scroll in qs", { scroll_in_qs => 1, body => { query => { term => { color => 'red' } }, suggest => { mysuggest => { text => 'green', term => { field => 'color' } } }, aggs => { switch => { terms => { field => 'switch' } } }, } }, total => 50, max_score => num( 1.6, 0.5 ), aggs => bool(1), suggest => bool(1), steps => [ next => [1], next_50 => [49], is_finished => 1, ] ); test_scroll( "Scan", { search_type => 'scan', body => { suggest => { mysuggest => { text => 'green', term => { field => 'color' } } }, } }, total => 100, max_score => 0, suggest => bool(1), steps => [ buffer_size => 
0, next => [1], buffer_size => 49, next_100 => [99], is_finished => 1, ] ); test_scroll( "Finish", {}, total => 100, max_score => 1, steps => [ is_finished => '', next => [1], finish => 1, is_finished => 1, buffer_size => 0, next => [0] ] ); my $s = $es->scroll_helper; my $d = $s->next; ok ref $d && $d->{_source}, 'next() in scalar context'; { # Test auto finish fork protection. my $s = $es->scroll_helper( size => 5 ); my $pid = fork(); unless ( defined($pid) ) { die "Cannot fork. Lack of resources?"; } unless ($pid) { # Child. Call finish check that its not finished # (the call to finish did nothing). $s->finish(); exit; } else { # Wait for children waitpid( $pid, 0 ); is $?, 0, "Child exited without errors"; } ok !$s->is_finished(), "Our Scroll is not finished"; my $count = 0; while ( $s->next ) { $count++ } is $count, 100, "All documents retrieved"; ok $s->is_finished, "Our scroll is finished"; } { # Test Scroll usage attempt in a different process. my $s = $es->scroll_helper( size => 5 ); my $pid = fork(); unless ( defined($pid) ) { die "Cannot fork. Lack of resources?"; } unless ($pid) { # Calling this next should crash, not exiting this process with 0 eval { while ( $s->next ) { } }; my $err = $@; exit( eval { $err->is('Illegal') && 123 } || 999 ); } else { # Wait for children waitpid( $pid, 0 ); is $? >> 8, 123, "Child threw Illegal exception"; } } { # Test valid Scroll usage after initial fork my $pid = fork(); unless ( defined($pid) ) { die "Cannot fork. Lack of resources?"; } unless ($pid) { my $s = $es->scroll_helper( size => 5 ); while ( $s->next ) { } exit 0; } else { # Wait for children waitpid( $pid, 0 ); is $? 
, 0, "Scroll completed successfully"; } } done_testing; $es->indices->delete( index => 'test' ); #=================================== sub test_scroll { #=================================== my ( $title, $params, %tests ) = @_; subtest $title => sub { my $s = $es->scroll_helper($params); is $s->total, $tests{total}, "$title - total"; cmp_deeply $s->max_score, $tests{max_score}, "$title - max_score"; cmp_deeply $s->suggest, $tests{suggest}, "$title - suggest"; cmp_deeply $s->aggregations, $tests{aggs}, "$title - aggs"; my $i = 1; my @steps = @{ $tests{steps} }; while ( my $name = shift @steps ) { my $expect = shift @steps; my ( $method, $result, @p ); if ( $name =~ /next(?:_(\d+))?/ ) { $method = 'next'; @p = $1; } else { $method = $name; } if ( ref $expect eq 'ARRAY' ) { my @result = $s->$method(@p); $result = 0 + @result; $expect = $expect->[0]; } else { $result = $s->$method(@p); } is $result, $expect, "$title - Step $i: $name"; $i++; } } } 50_reindex.t100644001750001750 651313675355153 23513 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use lib 't/lib'; use strict; use warnings; $ENV{ES_VERSION} = '2_0'; our $es = do "es_sync.pl" or die( $@ || $! ); $es->indices->delete( index => '_all', ignore => 404 ); do "index_test_data.pl" or die( $@ || $! 
); my $b; # Reindex to new index and new type $b = $es->bulk_helper( index => 'test2', type => 'test2' ); $b->reindex( source => { index => 'test' } ); $es->indices->refresh; is $es->count( index => 'test2', type => 'test2' )->{count}, 100, 'Reindexed to new index and type'; # Reindex to same index $b = $es->bulk_helper(); $b->reindex( source => { index => 'test' } ); $es->indices->refresh; is $es->count( index => 'test', type => 'test' )->{count}, 100, 'Reindexed to same index'; is $es->get( index => 'test', type => 'test', id => 1 )->{_version}, 2, "Reindexed to same index - version updated"; # Reindex from generic source my @docs = map { { _index => 'foo', _type => 'bar', _id => $_, _source => { num => $_ } } } ( 1 .. 10 ); $es->indices->delete( index => 'test2' ); $b = $es->bulk_helper( index => 'test2' ); $b->reindex( index => 'test2', source => sub { shift @docs } ); $es->indices->refresh; is $es->count( index => 'test2', type => 'bar' )->{count}, 10, 'Reindexed from generic source'; # Reindex with transform $es->indices->delete( index => 'test2' ); $b = $es->bulk_helper( index => 'test2' ); $b->reindex( source => { index => 'test' }, transform => sub { my $doc = shift; return if $doc->{_source}{color} eq 'red'; $doc->{_source}{transformed} = 1; return $doc; } ); $es->indices->refresh; is $es->count( index => 'test2', type => 'test' )->{count}, 50, 'Transform - removed docs'; my $query = { query => { bool => { must => [ { term => { color => 'green' } }, { term => { transformed => 1 } } ] } } }; is $es->count( index => 'test2', type => 'test', body => $query, )->{count}, 50, 'Transform - transformed docs'; # Reindex with parent & routing $es->indices->delete( index => '_all', ignore => 404 ); for ( 'test', 'test2' ) { $es->indices->create( index => $_, body => { mappings => { test => { _parent => { type => 'foo' } } } } ); } $es->cluster->health( wait_for_status => 'yellow' ); for ( 1 ..
5 ) { $es->index( index => 'test', type => 'test', version_type => 'external', version => $_, id => $_, parent => 1, routing => 2, body => { count => $_ }, ); } $es->indices->refresh; $b = $es->bulk_helper( index => 'test2' ); ok $b->reindex( version_type => 'external', source => { index => 'test', version => 1, fields => [ '_parent', '_routing', '_source' ] } ), "Advanced"; $es->indices->refresh; my $results = $es->search( index => 'test2', type => 'test', sort => 'count', fields => [ '_parent', '_routing' ], version => 1, )->{hits}{hits}; is $results->[3]{_parent}, 1, "Advanced - parent"; is $results->[3]{_routing}, 2, "Advanced - routing"; is $results->[3]{_version}, 4, "Advanced - version"; done_testing; 61_auth_lwp.t100644001750001750 35513675355153 23660 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; $ENV{ES_CXN} = 'LWP'; our $Throws_SSL = "Cxn"; sub ssl_options { return { verify_hostname => 1, SSL_ca_file => $_[0] }; } do "es_sync_auth.pl" or die( $@ || $! ); 21_fork_lwp.t100644001750001750 15413675355153 23651 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; $ENV{ES_CXN} = 'LWP'; do "es_sync_fork.pl" or die( $@ || $! ); 15_conflict.t100644001750001750 106613675355153 23655 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; my $es = do "es_sync.pl" or die( $@ || $! 
); $es->indices->delete( index => '_all' ); $es->index( index => 'test', type => 'test', id => 1, body => {} ); my $error; eval { $es->index( index => 'test', type => 'test', id => 1, body => {}, version => 2 ); 1; } or $error = $@; ok $error->is('Conflict'), 'Conflict Exception'; is $error->{vars}{current_version}, 1, "Error has current version v1"; done_testing; 22_fork_hijk.t100644001750001750 15513675355153 23776 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; $ENV{ES_CXN} = 'Hijk'; do "es_sync_fork.pl" or die( $@ || $! ); 32_bulk_flush.t100644001750001750 407113675355153 24210 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; my $es = do "es_sync.pl" or die( $@ || $! ); $es->indices->delete( index => '_all' ); test_flush( "max count", # { max_count => 3 }, # 1, 2, 0, 1, 2, 0, 1, 2, 0, 1 ); test_flush( "max size", # { max_size => 95 }, # 1, 2, 3, 0, 1, 2, 3, 0, 1, 2 ); test_flush( "max size > max_count", { max_size => 95, max_count => 3 }, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1 ); test_flush( "max size < max_count", { max_size => 95, max_count => 5 }, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2 ); test_flush( "max size = 0, max_count", { max_size => 0, max_count => 5 }, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0 ); test_flush( "max count = 0, max_size", { max_size => 95, max_count => 0 }, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2 ); test_flush( "max count = 0, max_size = 0", { max_size => 0, max_count => 0 }, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ); test_flush( "max_count = 5, max_time = 5", { max_count => 5, max_time => 5 }, 1, 2, 0, 1, 2, 3, 4, 0, 0, 1 ); done_testing; $es->indices->delete( index => 'test' ); #=================================== sub test_flush { #=================================== my $title = shift; my $params = shift; my $b = $es->bulk_helper( %$params, index => 'test', type => 'test' ); my @seq 
= @_; $es->indices->delete( index => 'test', ignore => 404 ); $es->indices->create( index => 'test' ); $es->cluster->health( wait_for_status => 'yellow' ); for my $i ( 10 .. 19 ) { # sleep on 12 or 18 if max_time specified if ( $params->{max_time} && ( $i == 12 || $i == 18 ) ) { $b->_last_flush( time - $params->{max_time} - 1 ); } $b->index( { id => $i, source => {} } ); is $b->_buffer_count, shift @seq, "$title - " . ( $i - 9 ); } $b->flush; is $b->_buffer_count, 0, "$title - final flush"; $es->indices->refresh; is $es->count->{count}, 10, "$title - all docs indexed"; } 33_bulk_errors.t100644001750001750 1105213675355153 24421 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; use Log::Any::Adapter; $ENV{ES_VERSION} = '2_0'; my $es = do "es_sync.pl" or die( $@ || $! ); my $TRUE = $es->transport->serializer->decode('{"true":true}')->{true}; $es->indices->delete( index => '_all' ); my @Std = ( { id => 1, source => { count => 1 } }, { id => 1, source => { count => 'foo' } }, { id => 1, version => 10, source => {} }, ); my ( $b, $success_count, $error_count, $custom_count, $conflict_count ); ## Default error handling $b = bulk( { index => 'test', type => 'test' }, @Std ); test_flush( "Default", 0, 2, 0, 0 ); ## Custom error handling $b = bulk( { index => 'test', type => 'test', on_error => sub { $custom_count++ } }, @Std ); test_flush( "Custom error", 0, 0, 2, 0 ); # Conflict errors $b = bulk( { index => 'test', type => 'test', on_conflict => sub { $conflict_count++ } }, @Std ); test_flush( "Conflict error", 0, 1, 0, 1 ); # Both error handling $b = bulk( { index => 'test', type => 'test', on_conflict => sub { $conflict_count++ }, on_error => sub { $custom_count++ } }, @Std ); test_flush( "Conflict and custom", 0, 0, 1, 1 ); # Conflict disable error $b = bulk( { index => 'test', type => 'test', on_conflict => sub { $conflict_count++ }, on_error 
=> undef }, @Std ); test_flush( "Conflict, error undef", 0, 0, 0, 1 ); # Disable both $b = bulk( { index => 'test', type => 'test', on_conflict => undef, on_error => undef }, @Std ); test_flush( "Both undef", 0, 0, 0, 0 ); # Success $b = bulk( { index => 'test', type => 'test', on_success => sub { $success_count++ }, }, @Std ); test_flush( "Success", 1, 2, 0, 0 ); # cbs have correct params $b = bulk( { index => 'test', type => 'test', on_success => test_params( 'on_success', { _index => 'test', _type => 'test', _id => 1, _version => 1, status => 201, ok => $TRUE, _shards => { successful => 1, total => 2, failed => 0 } }, 0 ), on_error => test_params( 'on_error', { _index => 'test', _type => 'test', _id => 1, error => any( re('MapperParsingException'), superhashof( { type => 'mapper_parsing_exception' } ) ), status => 400, }, 1 ), on_conflict => test_params( 'on_conflict', { _index => 'test', _type => 'test', _id => 1, error => any( re('version conflict'), superhashof( { type => 'version_conflict_engine_exception' } ) ), status => 409, }, 2, 1 ), }, @Std ); $b->flush; done_testing; $es->indices->delete( index => 'test' ); #=================================== sub bulk { #=================================== my $params = shift; my $b = $es->bulk_helper($params); $es->indices->delete( index => 'test', ignore => 404 ); $es->indices->create( index => 'test' ); $es->cluster->health( wait_for_status => 'yellow' ); $b->index(@_); return $b; } #=================================== sub test_flush { #=================================== my ( $title, $success, $default, $custom, $conflict ) = @_; $success_count = $custom_count = $error_count = $conflict_count = 0; { local $SIG{__WARN__} = sub { $error_count++ }; $b->flush; } is $success_count, $success, "$title - success"; is $error_count, $default, "$title - default"; is $custom_count, $custom, "$title - custom"; is $conflict_count, $conflict, "$title - conflict"; } #=================================== sub test_params { 
#=================================== my ( $type, $result, $j, $version ) = @_; return sub { is $_[0], 'index', "$type - action"; cmp_deeply $_[1], subhashof($result), "$type - result"; is $_[2], $j, "$type - array index"; is $_[3], $version, "$type - version"; }; } 31_bulk_helpers.t100644001750001750 2141413675355153 24550 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; my $es = do "es_sync.pl" or die( $@ || $! ); my $b = $es->bulk_helper( index => 'i', type => 't' ); my $s = $b->_serializer; $s->_set_canonical; ## INDEX ## ok $b->index(), 'Empty index'; ok $b->index( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Index'; cmp_deeply $b->_buffer, [ q({"index":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"index":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Index in buffer"; is $b->_buffer_size, 336, "Index buffer size"; is $b->_buffer_count, 2, "Index buffer count"; $b->clear_buffer; ## CREATE ## ok $b->create(), 'Create empty'; ok $b->create( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', 
_version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Create'; cmp_deeply $b->_buffer, [ q({"create":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"create":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Create in buffer"; is $b->_buffer_size, 338, "Create buffer size"; is $b->_buffer_count, 2, "Create buffer count"; $b->clear_buffer; ## CREATE DOCS## ok $b->create_docs(), 'Create_docs empty'; ok $b->create_docs( { foo => 'bar' }, { foo => 'baz' } ), 'Create docs'; cmp_deeply $b->_buffer, [ q({"create":{}}), q({"foo":"bar"}), q({"create":{}}), q({"foo":"baz"}) ], "Create docs in buffer"; is $b->_buffer_size, 56, "Create docs buffer size"; is $b->_buffer_count, 2, "Create docs buffer count"; $b->clear_buffer; ## DELETE ## ok $b->delete(), 'Delete empty'; ok $b->delete( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, version => 1, version_type => 'external', }, { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _version => 1, _version_type => 'external', } ), 'Delete'; cmp_deeply $b->_buffer, [ q({"delete":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_type":"bar","_version":1,"_version_type":"external"}}), q({"delete":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_type":"bar","_version":1,"_version_type":"external"}}), ], "Delete in buffer"; is $b->_buffer_size, 230, "Delete buffer size"; is $b->_buffer_count, 2, "Delete buffer count"; $b->clear_buffer; ## DELETE IDS ## ok $b->delete_ids(), 'Delete IDs empty'; ok $b->delete_ids( 1, 2, 3 ), 'Delete IDs'; cmp_deeply $b->_buffer, [ q({"delete":{"_id":1}}), q({"delete":{"_id":2}}), q({"delete":{"_id":3}}), ], "Delete IDs in buffer"; is $b->_buffer_size, 63, "Delete IDs buffer size"; is $b->_buffer_count, 3, 
"Delete IDS buffer count"; $b->clear_buffer; ## UPDATE ACTIONS ## ok $b->update(), 'Update empty'; ok $b->update( { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, detect_noop => 1, }, { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, detect_noop => 1, _retry_on_conflict => 3, }, ), 'Update'; cmp_deeply $b->_buffer, [ q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), 
q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"detect_noop":1,"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"_retry_on_conflict":3,"detect_noop":1,"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), ], "Update in buffer"; is $b->_buffer_size, 1393, "Update buffer size"; is $b->_buffer_count, 6, "Update buffer count"; $b->clear_buffer; done_testing; 60_auth_httptiny.t100644001750001750 42513675355153 24736 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use IO::Socket::SSL; use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; $ENV{ES_CXN} = 'HTTPTiny'; our $Throws_SSL = "SSL"; sub ssl_options { return { SSL_verify_mode => SSL_VERIFY_PEER, SSL_ca_file => $_[0] }; } do "es_sync_auth.pl" or die( $@ || $! 
); 20_fork_httptiny.t100644001750001750 16113675355153 24727 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; $ENV{ES_CXN} = 'HTTPTiny'; do "es_sync_fork.pl" or die( $@ || $! ); 00_print_version.t100644001750001750 107013675355153 24742 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; my $es = do "es_sync.pl" or die( $@ || $! ); eval { my $v = $es->info->{version}; diag ""; diag ""; diag "Testing against Elasticsearch v" . $v->{number}; for ( sort keys %$v ) { diag sprintf "%-20s: %s", $_, $v->{$_}; } diag ""; diag "Client: " . ref($es); diag "Cxn: " . $es->transport->cxn_pool->cxn_factory->cxn_class; diag "GET Body: " . $es->transport->send_get_body_as; diag ""; pass "ES Version"; } or fail "ES Version"; done_testing; 34_bulk_cxn_errors.t100644001750001750 167013675355153 25257 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; use Log::Any::Adapter; $ENV{ES_VERSION} = '2_0'; $ENV{ES} = '10.255.255.1:9200'; $ENV{ES_SKIP_PING} = 1; $ENV{ES_CXN_POOL} = 'Static'; $ENV{ES_TIMEOUT} = 1; my $es = do "es_sync.pl" or die( $@ || $! 
); SKIP: { skip "IO::Socket::IP doesn't respect timeout: https://rt.cpan.org/Ticket/Display.html?id=103878", 3 if $es->transport->cxn_pool->cxn_factory->cxn_class eq 'Search::Elasticsearch::Cxn::HTTPTiny' && $^V =~ /^v5.2\d/; # Check that the buffer is not cleared on a NoNodes exception my $b = $es->bulk_helper( index => 'foo', type => 'bar' ); $b->create_docs( { foo => 'bar' } ); is $b->_buffer_count, 1, "Buffer count pre-flush"; throws_ok { $b->flush } 'Search::Elasticsearch::Error::NoNodes'; is $b->_buffer_count, 1, "Buffer count post-flush"; } done_testing; 30_bulk_add_action.t100644001750001750 2160213675355153 25171 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/t/Client_2_0use Test::More; use Test::Deep; use Test::Exception; use strict; use warnings; use lib 't/lib'; $ENV{ES_VERSION} = '2_0'; my $es = do "es_sync.pl" or die( $@ || $! ); my $b = $es->bulk_helper; $b->_serializer->_set_canonical; ## EMPTY ok $b->add_action(), 'Empty add action'; ## INDEX ACTIONS ## ok $b->add_action( index => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, index => { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Add index actions'; cmp_deeply $b->_buffer, [ q({"index":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"index":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Index actions in buffer"; is $b->_buffer_size, 336, "Index actions buffer size"; is $b->_buffer_count, 2, "Index actions buffer count"; $b->clear_buffer; ## CREATE ACTIONS 
## ok $b->add_action( create => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', source => { foo => 'bar' }, }, create => { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', _source => { foo => 'bar' }, } ), 'Add create actions'; cmp_deeply $b->_buffer, [ q({"create":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}), q({"create":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"foo":"bar"}) ], "Create actions in buffer"; is $b->_buffer_size, 338, "Create actions buffer size"; is $b->_buffer_count, 2, "Create actions buffer count"; $b->clear_buffer; ## DELETE ACTIONS ## ok $b->add_action( delete => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, version => 1, version_type => 'external', }, delete => { _index => 'foo', _type => 'bar', _id => 2, _routing => 2, _parent => 2, _version => 1, _version_type => 'external', } ), 'Add delete actions'; cmp_deeply $b->_buffer, [ q({"delete":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_type":"bar","_version":1,"_version_type":"external"}}), q({"delete":{"_id":2,"_index":"foo","_parent":2,"_routing":2,"_type":"bar","_version":1,"_version_type":"external"}}), ], "Delete actions in buffer"; is $b->_buffer_size, 230, "Delete actions buffer size"; is $b->_buffer_count, 2, "Delete actions buffer count"; $b->clear_buffer; ## UPDATE ACTIONS ## ok $b->add_action( update => { index => 'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, update => { index => 
'foo', type => 'bar', id => 1, routing => 1, parent => 1, timestamp => 1380019061000, ttl => '10m', version => 1, version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', doc => { foo => 'bar' }, doc_as_upsert => 1, detect_noop => 1, }, update => { _index => 'foo', _type => 'bar', _id => 1, _routing => 1, _parent => 1, _timestamp => 1380019061000, _ttl => '10m', _version => 1, _version_type => 'external', upsert => { counter => 0 }, script => '_ctx.source.counter+=incr', lang => 'mvel', params => { incr => 1 }, detect_noop => 1, _retry_on_conflict => 3, }, ), 'Add update actions'; cmp_deeply $b->_buffer, [ q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), 
q({"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"detect_noop":1,"doc":{"foo":"bar"},"doc_as_upsert":1}), q({"update":{"_id":1,"_index":"foo","_parent":1,"_routing":1,"_timestamp":1380019061000,"_ttl":"10m","_type":"bar","_version":1,"_version_type":"external"}}), q({"_retry_on_conflict":3,"detect_noop":1,"lang":"mvel","params":{"incr":1},"script":"_ctx.source.counter+=incr","upsert":{"counter":0}}), ], "Update actions in buffer"; is $b->_buffer_size, 1393, "Update actions buffer size"; is $b->_buffer_count, 6, "Update actions buffer count"; $b->clear_buffer; ## ERRORS ## throws_ok { $b->add_action( 'foo' => {} ) } qr/Unrecognised action/, 'Bad action'; throws_ok { $b->add_action( 'index', 'bar' ) } qr/Missing <params>/, 'Missing params'; throws_ok { $b->add_action( index => { type => 't' } ) } qr/Missing .*<index>/, 'Missing index'; throws_ok { $b->add_action( index => { index => 'i' } ) } qr/Missing .*<type>/, 'Missing type'; throws_ok { $b->add_action( index => { index => 'i', type => 't' } ) } qr/Missing <source>/, 'Missing source'; throws_ok { $b->add_action( index => { index => 'i', type => 't', _source => {}, foo => 1 } ); } qr/Unknown params/, 'Unknown params'; done_testing; Client000755001750001750 013675355153 25041 5ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/lib/Search/Elasticsearch2_0.pm100644001750001750 171013675355153 26116 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/lib/Search/Elasticsearch/Clientpackage Search::Elasticsearch::Client::2_0; our $VERSION='6.81'; use Search::Elasticsearch 6.00 (); 1; =pod =encoding UTF-8
=head1 NAME Search::Elasticsearch::Client::2_0 - Thin client with full support for Elasticsearch 2.x APIs =head1 VERSION version 6.81 =head1 DESCRIPTION The L<Search::Elasticsearch::Client::2_0> package provides a client compatible with Elasticsearch 2.x. It should be used in conjunction with L<Search::Elasticsearch> as follows: $e = Search::Elasticsearch->new( client => "2_0::Direct" ); See L<Search::Elasticsearch::Client::2_0::Direct> for documentation about how to use the client itself. =head1 AUTHOR Enrico Zimuel =head1 COPYRIGHT AND LICENSE This software is Copyright (c) 2020 by Elasticsearch BV. This is free software, licensed under: The Apache License, Version 2.0, January 2004 =cut __END__ # ABSTRACT: Thin client with full support for Elasticsearch 2.x APIs 2_0000755001750001750 013675355153 25421 5ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/lib/Search/Elasticsearch/ClientBulk.pm100644001750001750 4413113675355153 27037 0ustar00enricoenrico000000000000Search-Elasticsearch-Client-2_0-6.81/lib/Search/Elasticsearch/Client/2_0package Search::Elasticsearch::Client::2_0::Bulk; $Search::Elasticsearch::Client::2_0::Bulk::VERSION = '6.81'; use Moo; with 'Search::Elasticsearch::Client::2_0::Role::Bulk', 'Search::Elasticsearch::Role::Is_Sync'; use Search::Elasticsearch::Util qw(parse_params throw); use Try::Tiny; use namespace::clean; #=================================== sub add_action { #=================================== my $self = shift; my $buffer = $self->_buffer; my $max_size = $self->max_size; my $max_count = $self->max_count; my $max_time = $self->max_time; while (@_) { my @json = $self->_encode_action( splice( @_, 0, 2 ) ); push @$buffer, @json; my $size = $self->_buffer_size; $size += length($_) + 1 for @json; $self->_buffer_size($size); my $count = $self->_buffer_count( $self->_buffer_count + 1 ); $self->flush if ( $max_size and $size >= $max_size ) || ( $max_count and $count >= $max_count ) || ( $max_time and time >= $self->_last_flush + $max_time ); } return 1; } #=================================== sub flush {
#=================================== my $self = shift; $self->_last_flush(time); return { items => [] } unless $self->_buffer_size; if ( $self->verbose ) { local $| = 1; print "."; } my $buffer = $self->_buffer; my $results = try { my $res = $self->es->bulk( %{ $self->_bulk_args }, body => $buffer ); $self->clear_buffer; return $res; } catch { my $error = $_; $self->clear_buffer unless $error->is( 'Cxn', 'NoNodes' ); die $error; }; $self->_report( $buffer, $results ); return defined wantarray ? $results : undef; } #=================================== sub reindex { #=================================== my ( $self, $params ) = parse_params(@_); my $src = $params->{source} or throw( 'Param', "Missing required param <source>" ); my $transform = $self->_doc_transformer($params); if ( ref $src eq 'HASH' ) { $src = {%$src}; my $es = delete $src->{es} || $self->es; my $scroll = $es->scroll_helper( search_type => 'scan', size => 500, %$src ); $src = sub { $scroll->refill_buffer; $scroll->drain_buffer; }; print "Reindexing " . $scroll->total . " docs\n" if $self->verbose; } while ( my @docs = grep {defined} $src->() ) { $self->index( grep {$_} map { $transform->($_) } @docs ); } $self->flush; return 1; } 1; =pod =encoding UTF-8 =head1 NAME Search::Elasticsearch::Client::2_0::Bulk - A helper module for the Bulk API and for reindexing =head1 VERSION version 6.81 =head1 SYNOPSIS use Search::Elasticsearch; my $es = Search::Elasticsearch->new; my $bulk = $es->bulk_helper( index => 'my_index', type => 'my_type' ); # Index docs: $bulk->index({ id => 1, source => { foo => 'bar' }}); $bulk->add_action( index => { id => 1, source => { foo=> 'bar' }}); # Create docs: $bulk->create({ id => 1, source => { foo => 'bar' }}); $bulk->add_action( create => { id => 1, source => { foo=> 'bar' }}); $bulk->create_docs({ foo => 'bar' }) # Delete docs: $bulk->delete({ id => 1}); $bulk->add_action( delete => { id => 1 }); $bulk->delete_ids(1,2,3) # Update docs: $bulk->update({ id => 1, script => '...'
}); $bulk->add_action( update => { id => 1, script => '...' }); # Manual flush $bulk->flush; # Reindex docs: my $bulk = $es->bulk_helper( index => 'new_index', verbose => 1 ); $bulk->reindex( source => { index => 'old_index' }); =head1 DESCRIPTION This module provides a wrapper for the L<bulk()|Search::Elasticsearch::Client::2_0::Direct/bulk()> method which makes it easier to run multiple create, index, update or delete actions in a single request. It also provides a simple interface for L<reindexing documents|/REINDEXING DOCUMENTS>. The L<Search::Elasticsearch::Client::2_0::Bulk> module acts as a queue, buffering up actions until it reaches a maximum count of actions, or a maximum size of JSON request body, at which point it issues a C<bulk()> request. Once you have finished adding actions, call L</flush()> to force the final C<bulk()> request on the items left in the queue. This class does L<Search::Elasticsearch::Client::2_0::Role::Bulk> and L<Search::Elasticsearch::Role::Is_Sync>. =head1 CREATING A NEW INSTANCE =head2 C<new()> my $bulk = $es->bulk_helper( index => 'default_index', # optional type => 'default_type', # optional %other_bulk_params # optional max_count => 1_000, # optional max_size => 1_000_000, # optional max_time => 5, # optional verbose => 0 | 1, # optional on_success => sub {...}, # optional on_error => sub {...}, # optional on_conflict => sub {...}, # optional ); The C<new()> method returns a new C<$bulk> object. You must pass your Search::Elasticsearch client as the C<es> argument. The C<index> and C<type> parameters provide default values for C<index> and C<type>, which can be overridden in each action. You can also pass any other values which are accepted by the L<bulk()|Search::Elasticsearch::Client::2_0::Direct/bulk()> method. See L</"FLUSHING THE BUFFER"> for more information about the other parameters. =head1 FLUSHING THE BUFFER =head2 C<flush()> $result = $bulk->flush; The C<flush()> method sends all buffered actions to Elasticsearch using a L<bulk()|Search::Elasticsearch::Client::2_0::Direct/bulk()> request. =head2 Auto-flushing An automatic L</flush()> is triggered whenever the C<max_count>, C<max_size>, or C<max_time> threshold is breached. This causes all actions in the buffer to be sent to Elasticsearch. =over =item * C<max_count> The maximum number of actions to allow before triggering a L</flush()>. This can be disabled by setting C<max_count> to C<0>. Defaults to C<1,000>.
=item * C<max_size> The maximum size of JSON request body to allow before triggering a L</flush()>. This can be disabled by setting C<max_size> to C<0>. Defaults to C<1_000_000> bytes. =item * C<max_time> The maximum number of seconds to wait before triggering a flush. Defaults to C<0> seconds, which means that it is disabled. B<Note:> This timeout is only triggered when new items are added to the queue, not in the background. =back =head2 Errors when flushing There are two types of error which can be thrown when L</flush()> is called, either manually or automatically. =over =item * Temporary Elasticsearch errors A C<Cxn> error like a C<NoNodes> error which indicates that your cluster is down. These errors do not clear the buffer, as they can be retried later on. =item * Action errors Individual actions may fail. For instance, a C<create> action will fail if a document with the same C<index>, C<type> and C<id> already exists. These action errors are reported via L<callbacks|/Using callbacks>. =back =head2 Using callbacks By default, any I<action errors> (see above) cause warnings to be written to C<STDERR>. However, you can use the C<on_success>, C<on_conflict> and C<on_error> callbacks for more fine-grained control. All callbacks receive the following arguments: =over =item C<$action> The name of the action, ie C<index>, C<create>, C<update> or C<delete>. =item C<$response> The response that Elasticsearch returned for this action. =item C<$i> The index of the action, ie the first action in the flush request will have C<$i> set to C<0>, the second will have C<$i> set to C<1> etc. =back =head3 C<on_success> my $bulk = $es->bulk_helper( on_success => sub { my ($action,$response,$i) = @_; # do something }, ); The C<on_success> callback is called for every action that has a successful response. =head3 C<on_conflict> my $bulk = $es->bulk_helper( on_conflict => sub { my ($action,$response,$i,$version) = @_; # do something }, ); The C<on_conflict> callback is called for actions that have triggered a C<Conflict> error, eg trying to C<create> a document which already exists. The C<$version> argument will contain the version number of the document currently stored in Elasticsearch (if found).
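For example, the success and conflict callbacks can be combined to keep simple counters while flushing. This is a minimal sketch, not part of the distribution: the index name, document bodies and counter variables are illustrative, and it assumes C<$es> is a Search::Elasticsearch client connected to a reachable cluster:

    my ( $ok, $conflicts ) = ( 0, 0 );
    my $bulk = $es->bulk_helper(
        index       => 'my_index',               # hypothetical index
        type        => 'my_type',
        on_success  => sub { $ok++ },
        on_conflict => sub {
            my ( $action, $response, $i, $version ) = @_;
            $conflicts++;
            warn "Conflict on doc $response->{_id}"
                . ( defined $version ? ", stored version $version" : '' )
                . "\n";
        },
    );
    $bulk->create( { id => 1, source => { foo => 'bar' } } );
    $bulk->create( { id => 1, source => { foo => 'baz' } } );    # same id: conflict
    $bulk->flush;

After the flush, C<$ok> and C<$conflicts> reflect the per-action outcomes of the single C<bulk()> request.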
=head3 C<on_error> my $bulk = $es->bulk_helper( on_error => sub { my ($action,$response,$i) = @_; # do something }, ); The C<on_error> callback is called for any error (unless the C<on_conflict> callback has already been called). =head2 Disabling callbacks and autoflush If you want to be in control of flushing, and you just want to receive the raw response that Elasticsearch sends instead of using callbacks, then you can do so as follows: my $bulk = $es->bulk_helper( max_count => 0, max_size => 0, on_error => undef ); $bulk->add_actions(....); $response = $bulk->flush; =head1 CREATE, INDEX, UPDATE, DELETE =head2 C<add_action()> $bulk->add_action( create => { ...params... }, index => { ...params... }, update => { ...params... }, delete => { ...params... } ); The C<add_action()> method allows you to add multiple C<create>, C<index>, C<update> and C<delete> actions to the queue. The first value is the action type, and the second value is the parameters that describe that action. See the individual helper methods below for details. B<Note:> Parameters like C<index> or C<type> can be specified as C<index> or as C<_index>, so the following two lines are equivalent: index => { index => 'index', type => 'type', id => 1, source => {...}}, index => { _index => 'index', _type => 'type', _id => 1, _source => {...}}, B<Note:> The C<index> and C<type> parameters can be specified in the params for any action, but if not specified, will default to the C<index> and C<type> values specified in L</new()>. These are required parameters: they must be specified either in L</new()> or in every action. =head2 C<create()> $bulk->create( { index => 'custom_index', source => { doc body }}, { type => 'custom_type', id => 1, source => { doc body }}, ... ); The C<create()> helper method allows you to add multiple C<create> actions. It accepts the same parameters as L<Search::Elasticsearch::Client::2_0::Direct/create()> except that the document body should be passed as the C<source> or C<_source> parameter, instead of as C<body>. =head2 C<create_docs()> $bulk->create_docs( { doc body }, { doc body }, ... ); The C<create_docs()> helper is a shorter form of L</create()> which can be used when you are using the default C<index> and C<type> as set in L</new()> and you are not specifying a custom C<id> per document.
In this case, you can just pass the individual document bodies. =head2 C<index()> $bulk->index( { index => 'custom_index', source => { doc body }}, { type => 'custom_type', id => 1, source => { doc body }}, ... ); The C<index()> helper method allows you to add multiple C<index> actions. It accepts the same parameters as L<Search::Elasticsearch::Client::2_0::Direct/index()> except that the document body should be passed as the C<source> or C<_source> parameter, instead of as C<body>. =head2 C<delete()> $bulk->delete( { index => 'custom_index', id => 1}, { type => 'custom_type', id => 2}, ... ); The C<delete()> helper method allows you to add multiple C<delete> actions. It accepts the same parameters as L<Search::Elasticsearch::Client::2_0::Direct/delete()>. =head2 C<delete_ids()> $bulk->delete_ids(1,2,3...) The C<delete_ids()> helper method can be used when all of the documents you want to delete have the default C<index> and C<type> as set in L</new()>. In this case, all you have to do is to pass in a list of IDs. =head2 C<update()> $bulk->update( { id => 1, doc => { partial doc }, doc_as_upsert => 1 }, { id => 2, lang => 'mvel', script => { script }, upsert => { upsert doc } }, ... ); The C<update()> helper method allows you to add multiple C<update> actions. It accepts the same parameters as L<Search::Elasticsearch::Client::2_0::Direct/update()>. An update can either use a I<partial doc> which gets merged with an existing doc (example 1 above), or can use a C<script>