Net-Amazon-S3-0.59000755000765000024 012121037303 13104 5ustar00pfigstaff000000000000README100644000765000024 56112121037303 14027 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59 This archive contains the distribution Net-Amazon-S3, version 0.59: Use the Amazon S3 - Simple Storage Service This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. CHANGES100644000765000024 1372712121037303 14212 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59Revision history for Perl module Net::Amazon::S3: 0.59 Sat Mar 17 09:36:38 GMT 2013 - Added support for security token authentication (patch by Miquel Ruiz) 0.58 Wed Nov 28 16:22:42 GMT 2012 - Added support for multi-part upload and multiple objects deletion (patch by Robert B. Clarke) 0.57 Sun Sep 16 22:38:38 BST 2012 - Added support for Content-disposition (initial patch by Michele Beltrame) - Added sync_up option to s3cl (patch by Leo Lapworth) - Dist::Zilla tidying 0.56 Sun Dec 18 22:22:32 GMT 2011 - Spelling error (RT #69817, patch from Fabrizio Regalli) - Added use_ok tests (RT #72856, suggested by Gregor Herrmann) - Added missing use URI::Escape (RT #72857, patch from Gregor Herrmann) - Bumped dependency on MooseX::StrictConstructor to 0.16 (RT #73229, suggested by Andrew Main) - Enforce dependency versions (RT #73193, suggested by Andrew Main) 0.55 Sat Dec 10 00:55:29 GMT 2011 - Replaced dependency on MooseX::Types::DateTimeX with MooseX::Types::DateTime::MoreCoercions (RT #72472). 0.54 Sat Mar 21 21:23:32 BST 2011 - Fix for a naked qw() warning (patch by David Wheeler) - Fixed path issues (patch by Pavel Karoukin) *WARNING* THIS MIGHT BREAK EXISTING APPS *WARNING* - Author and development information changes 0.53 Tue Mar 30 15:24:19 BST 2010 - fix authenticated urls to work with EU buckets (patch by Edmund von der Burg) - tiny POD fix (patch by Frank Wiegand) - add an exists method to Net::Amazon::S3::Client (suggested by David Golden) - fix max_keys when listing buckets (spotted by Andrew Bryan) - add content_encoding to Net::Amazon::S3::Object (suggested by Egor Korablev) - update s3cl: You need to use the module before you use it, added the mkbucket command, now you can run the help without your AWS secret key, add docs about the env variables you need to run s3cl (patches by Jesse Vincent) 0.52 Thu Jul 2 09:17:11 BST 2009 - increase version prerequisites for some modules so that they are known to work 0.51 Tue May 19 08:31:59 BST 2009 - use MooseX::Types::DateTimeX so that we work with latest Moose (noticed by Ted Zlatanov) 0.50 Wed Jan 21 10:42:00 GMT 2009 - add support for an expires header when putting an object to Net::Amazon::S3::Client::Object - make all the classes immutable - add query_string_authentication_uri() to Net::Amazon::S3::Client::Object, suggested by Meng Wong 0.49 Tue Jan 13 09:04:42 GMT 2009 - add support for listing a bucket with a prefix to Net::Amazon::S3::Client::Bucket 0.48 Thu Dec 4 09:24:23 GMT 2008 - be slightly less strict about bucket names: they can contain uppercase letters, Amazon just doesn't recommend it (noticed by Simon Elliott, fixes Brackup) 0.47 Tue Dec 2 08:03:39 GMT 2008 - fix listing with a prefix (spotted by Nobuo Danjou) 0.46 Mon Nov 24 08:53:18 GMT 2008 - refactor request creation into Net::Amazon::S3::Request and many subclasses - move to Moose - add Net::Amazon::S3::Client 
and subclasses 0.45 Wed Aug 20 17:06:49 BST 2008 - make add_key, head_key etc. return all the headers, not just the X-Amazon ones (patch by Andrew Hanenkamp) - require IO::File 1.14 (noticed by tsw) - remove DateTime::Format::Strptime prerequisite as it was not being used (noticed by Yen-Ming Lee) - do not try and parse non-XML errors (patch by lostlogic) - make it possible to store and delete the key "0" (patch by Joey Hess) - make it possible to store empty files (patch by BDOLAN) - add Copy support (patch by BDOLAN) - add s3cl for command-line access (patch by Leo Lapworth) 0.44 Thu Mar 27 08:35:59 GMT 2008 - fix bug with storing files consisting of "0" (thanks to Martin Atkins) 0.43 Sat Mar 1 10:55:54 GMT 2008 - add binmode() to support Windows (thanks to Gabriel Weinberg) 0.42 Thu Feb 28 06:39:59 GMT 2008 - add exponential backoff upon temporary errors with the new retry option 0.41 Fri Nov 30 10:42:26 GMT 2007 - fix the expensive tests (patch by BDOLAN) - added support for EU buckets (patch by BDOLAN) 0.40 Tue Oct 30 11:40:42 GMT 2007 - fix for content length with empty keys by Mark A. Hershberger - get_key and get_key_filename now return content_length - rewrote synopsis - added support for common prefix (thanks to Andy Grundman) 0.39 Sun Aug 19 14:47:01 BST 2007 - add add_key_filename and get_key_filename which send files directly from disk - good for large files (Jim Blomo) - support UTF8 keys (Jim Blomo) - switch back from Build.PL to Makefile.PL 0.38 Sun Mar 4 16:43:28 GMT 2007 - use http_proxy and https_proxy environment variables for proxy settings (Ask Bjoern Hansen) - don't add the Authorization header if one is already specified when making a request - good for allowing caching to resources that are public. (Ask Bjoern Hansen) 0.37 Fri Oct 13 19:14:57 BST 2006 - added support for ACLs (thanks to Gordon McCreight) 0.36 Sun Sep 10 16:30:39 BST 2006 - remove extra warning 0.35 Sun Sep 10 16:25:44 BST 2006 - added list_bucket_all to stop having to worry about 'marker' 0.34 Sun Sep 10 07:27:06 BST 2006 - added next marker and more docs from Jesse Vincent 0.33 Sat Aug 26 16:26:37 BST 2006 - documentation and test cleanup from Jesse Vincent - use HTTP keep alive (patch by Scott Gifford) - remove unused code in _make_request (patch by Scott Gifford) 0.32 Tue Apr 25 19:51:06 BST 2006 - fix bug with listing buckets with parameters (thanks to karjala) 0.31 Tue Apr 4 21:15:02 BST 2006 - many patches from Brad Fitzpatrick to change the API, make return values sane, add err/errstr, make Bucket object - added a timeout option as suggested by Brad Dixon - it's the Brad release! 0.30 Mon Mar 20 20:20:29 GMT 2006 - initial release LICENSE100644000765000024 4414212121037303 14217 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. Terms of the Perl programming language system itself a) the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version, or b) the "Artistic License" --- The GNU General Public License, Version 1, February 1989 --- This software is Copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. 
This is free software, licensed under: The GNU General Public License, Version 1, February 1989 GNU GENERAL PUBLIC LICENSE Version 1, February 1989 Copyright (C) 1989 Free Software Foundation, Inc. 51 Franklin St, Suite 500, Boston, MA 02110-1335 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The license agreements of most software companies try to keep users at the mercy of those companies. By contrast, our General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. The General Public License applies to the Free Software Foundation's software and to any other program whose authors commit to using it. You can use it for your programs, too. When we speak of free software, we are referring to freedom, not price. Specifically, the General Public License is designed to make sure that you have the freedom to give away or sell copies of free software, that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of a such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must tell them their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any work containing the Program or a portion of it, either verbatim or with modifications. Each licensee is addressed as "you". 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this General Public License and to the absence of any warranty; and give any other recipients of the Program a copy of this General Public License along with the Program. You may charge a fee for the physical act of transferring a copy. 2. 
You may modify your copy or copies of the Program or any portion of it, and copy and distribute such modifications under the terms of Paragraph 1 above, provided that you also do the following: a) cause the modified files to carry prominent notices stating that you changed the files and the date of any change; and b) cause the whole of any work that you distribute or publish, that in whole or in part contains the Program or any part thereof, either with or without modifications, to be licensed at no charge to all third parties under the terms of this General Public License (except that you may choose to grant warranty protection to some or all third parties, at your option). c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the simplest and most usual way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this General Public License. d) You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. Mere aggregation of another independent work with the Program (or its derivative) on a volume of a storage or distribution medium does not bring the other work under the scope of these terms. 3. You may copy and distribute the Program (or a portion or derivative of it, under Paragraph 2) in object code or executable form under the terms of Paragraphs 1 and 2 above provided that you also do one of the following: a) accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Paragraphs 1 and 2 above; or, b) accompany it with a written offer, valid for at least three years, to give any third party free (except for a nominal charge for the cost of distribution) a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Paragraphs 1 and 2 above; or, c) accompany it with the information you received as to where the corresponding source code may be obtained. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form alone.) Source code for a work means the preferred form of the work for making modifications to it. For an executable file, complete source code means all the source code for all modules it contains; but, as a special exception, it need not include source code for modules which are standard libraries that accompany the operating system on which the executable file runs, or for standard header files or definitions files that accompany that operating system. 4. You may not copy, modify, sublicense, distribute or transfer the Program except as expressly provided under this General Public License. Any attempt otherwise to copy, modify, sublicense, distribute or transfer the Program is void, and will automatically terminate your rights to use the Program under this License. However, parties who have received copies, or rights to use copies, from you under this General Public License will not have their licenses terminated so long as such parties remain in full compliance. 5. 
By copying, distributing or modifying the Program (or any work based on the Program) you indicate your acceptance of this license to do so, and all its terms and conditions. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. 7. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of the license which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the license, you may choose any version ever published by the Free Software Foundation. 8. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 10. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS Appendix: How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to humanity, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. 
Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19xx name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (a program to direct compilers to make passes at assemblers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice That's all there is to it! --- The Artistic License 1.0 --- This software is Copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software, licensed under: The Artistic License 1.0 The Artistic License Preamble The intent of this document is to state the conditions under which a Package may be copied, such that the Copyright Holder maintains some semblance of artistic control over the development of the package, while giving the users of the package the right to use and distribute the Package in a more-or-less customary fashion, plus the right to make reasonable modifications. Definitions: - "Package" refers to the collection of files distributed by the Copyright Holder, and derivatives of that collection of files created through textual modification. - "Standard Version" refers to such a Package if it has not been modified, or has been modified in accordance with the wishes of the Copyright Holder. - "Copyright Holder" is whoever is named in the copyright or copyrights for the package. - "You" is you, if you're thinking about copying or distributing this Package. - "Reasonable copying fee" is whatever you can justify on the basis of media cost, duplication charges, time of people involved, and so on. (You will not be required to justify it to the Copyright Holder, but only to the computing community at large as a market that must bear the fee.) - "Freely Available" means that no fee is charged for the item itself, though there may be fees involved in handling the item. It also means that recipients of the item may redistribute it under the same conditions they received it. 1. 
You may make and give away verbatim copies of the source form of the Standard Version of this Package without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. You may apply bug fixes, portability fixes and other modifications derived from the Public Domain or from the Copyright Holder. A Package modified in such a way shall still be considered the Standard Version. 3. You may otherwise modify your copy of this Package in any way, provided that you insert a prominent notice in each changed file stating how and when you changed that file, and provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or placing the modifications on a major archive site such as ftp.uu.net, or by allowing the Copyright Holder to include your modifications in the Standard Version of the Package. b) use the modified Package only within your corporation or organization. c) rename any non-standard executables so the names do not conflict with standard executables, which must also be provided, and provide a separate manual page for each non-standard executable that clearly documents how it differs from the Standard Version. d) make other distribution arrangements with the Copyright Holder. 4. You may distribute the programs of this Package in object code or executable form, provided that you do at least ONE of the following: a) distribute a Standard Version of the executables and library files, together with instructions (in the manual page or equivalent) on where to get the Standard Version. b) accompany the distribution with the machine-readable source of the Package with your modifications. c) accompany any non-standard executables with their corresponding Standard Version executables, giving the non-standard executables non-standard names, and clearly documenting the differences in manual pages (or equivalent), together with instructions on where to get the Standard Version. d) make other distribution arrangements with the Copyright Holder. 5. You may charge a reasonable copying fee for any distribution of this Package. You may charge any fee you choose for support of this Package. You may not charge a fee for this Package itself. However, you may distribute this Package in aggregate with other (possibly commercial) programs as part of a larger (possibly commercial) software distribution provided that you do not advertise this Package as a product of your own. 6. The scripts and library files supplied as input to or produced as output from the programs of this Package do not automatically fall under the copyright of this Package, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this Package. 7. C or perl subroutines supplied by you and linked into this Package shall not be considered part of this Package. 8. The name of the Copyright Holder may not be used to endorse or promote products derived from this software without specific prior written permission. 9. THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE. 
The End dist.ini100644000765000024 135612121037303 14636 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59name = Net-Amazon-S3 author = Pedro Figueiredo license = Perl_5 copyright_holder = Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo version = 0.59 [@Filter] -bundle = @Basic option = for_basic [MetaJSON] [MetaResources] bugtracker.web = https://github.com/pfig/net-amazon-s3/issues repository.url = https://github.com/pfig/net-amazon-s3.git repository.web = https://github.com/pfig/net-amazon-s3 repository.type = git [AutoPrereqs] [CPANFile] [CheckChangeLog] [PodWeaver] [ReadmeMarkdownFromPod] [PkgVersion] [NextRelease] filename = CHANGES format = %-4v %{ccc MMM dd HH:mm:ss vvv yyyy}d [@TestingMania] disable = Test::Kwalitee disable = Test::Pod::LinkCheck disable = PodCoverageTests bin000755000765000024 012121037303 13575 5ustar00pfigstaff000000000000Net-Amazon-S3-0.59s3cl100755000765000024 3037212121037303 14554 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/bin#!/usr/bin/env perl use strict; use warnings; use Getopt::Long; use Pod::Usage; use Path::Class; use File::Find::Rule; use Digest::MD5 qw(md5_hex); use Net::Amazon::S3; use MIME::Types qw(by_suffix); use Term::ProgressBar::Simple; # PODNAME: s3cl # ABSTRACT: Command line for Amazon s3 cloud storage my $s3; my %args; my %commands = ( mkbucket => \&mk_bucket, buckets => \&buckets, ls => \&ls, rm => \&rm, cp => \&cp, sync => \&sync, sync_up => \&sync_up, help => \&helper, ); main(); sub main { terminal(); get_options(); init_s3(); my $command = shift @ARGV || "help"; $commands{$command} or helper("Unknown command: $command"); $commands{$command}->(); } sub init_s3 { # TODO: read key_id and secret from config file? # use AppConfig; # TODO: probably nicer to put all of this in Net::Amazon::S3::CommandLine # and have simple call to that from here. my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, } ); } sub sync { my $dest = $args{dest_or_source} || ''; helper("No destination supplied") if $dest eq ''; helper("Can not write to: $args{dest_or_source}") unless -w $dest; my $bucket = _get_bucket(); my $list = ls('data'); foreach my $key ( @{ $list->{keys} } ) { my $source = file( $key->{key} ); my $destination = file( $dest, $source ); $destination->dir->mkpath(); warn "$source -> $destination"; my $response = $bucket->get_key_filename( $source->stringify, 'GET', $destination->stringify ) or die $s3->err . ": " . $s3->errstr; } } sub sync_up { my $source = $args{dest_or_source} || ''; my $prefix = $args{prefix_or_key} || ''; my $acl_short = $args{acl_short}; helper("No source supplied") if $source eq ''; helper("Can not read directory: $args{dest_or_source}") unless -d $source; # Work out our local files my @files = File::Find::Rule->file()->in( ($source) ); my $progress = Term::ProgressBar::Simple->new( scalar(@files) ); my $bucket = _get_bucket(); # Get a list of all the remote files my $remote_file_list = $bucket->list_all( { prefix => $prefix } ) or die $s3->err . ": " . 
$s3->errstr; # Now hash, so we can look up a specific key to find the etag my %remote_files; foreach my $key_meta ( @{ $remote_file_list->{keys} } ) { my $key = $key_meta->{key}; $remote_files{$key} = $key_meta; } my $dir = dir($source); my $dir_string = $dir->stringify; my $mimetypes = MIME::Types->new; foreach my $f (@files) { my $file = file($f); my ( $mediatype, $encoding ) = by_suffix $file->basename(); # Assume plain text unless we can work it out unless ($mediatype) { if ( -T $file ) { $mediatype = 'text/plain'; } else { $progress++; $progress->message("$f - NOT uploading"); warn "Not uploading: $file"; warn "Unknown mime type, submit patch to MIME::Types"; next; } } my $content = $file->slurp(); my $md5 = md5_hex($content); my $key = $file->stringify; $key =~ s/$dir_string//; # remove our local path for the dir $key =~ s{^/}{}; # remove the leading slash $key = "$prefix$key"; # Add the prefix if there is one if ( my $remote = $remote_files{$key} ) { if ( $remote->{etag} eq $md5 ) { $progress->message("$key - $mediatype - not changed"); next; } } $bucket->add_key_filename( $key, $f, { content_type => $mediatype, }, ) or die $s3->err . ": " . $s3->errstr; if ($acl_short) { $bucket->set_acl( { key => $key, acl_short => $acl_short, } ) || die $s3->err . ": " . $s3->errstr; } $progress->message("$key - $mediatype - uploaded"); $progress++; } } sub cp { my $dest = $args{dest_or_source} || ''; helper("No destination supplied") if $dest eq ''; my $key = $args{prefix_or_key} || helper("No key supplied"); if ( -d $dest ) { # If we have a directory we need to add the file name $dest = file( $dest, file($key)->basename ); } my $bucket = _get_bucket(); unless ( $bucket->get_key_filename( "$key", 'GET', "$dest" ) ) { die $s3->err . ": " . $s3->errstr if $s3->err; die "Could not copy $key from bucket $args{bucket}"; } } sub ls { my $mode = shift || 'print'; my $bucket = _get_bucket(); my $ls_conf; $ls_conf->{prefix} = $args{prefix_or_key} if $args{prefix_or_key}; # list files in the bucket my $response = $bucket->list_all($ls_conf) or die $s3->err . ": " . $s3->errstr; return $response if $mode eq 'data'; foreach my $key ( @{ $response->{keys} } ) { my $key_last_modified = $key->{last_modified}; # 2008-07-14T22:31:10.000Z $key_last_modified =~ s/:\d{2}\.\d{3}Z$//; my $key_name = $key->{key}; my $key_size = $key->{size}; print "$key_size $key_last_modified $key_name\n"; } } sub rm { my $bucket = _get_bucket(); helper("Must have a <bucket>:<key>") unless $args{prefix_or_key}; my $res = "NO"; if ( $args{force} ) { $res = 'y'; } else { print "\nOnce deleted there is no way to retrieve this key again." . "\nAre you sure you want to delete $args{bucket}:$args{prefix_or_key}? y/N\n"; ( $res = <STDIN> ) =~ s/\n//; } if ( $res eq 'y' ) { # delete key in this bucket my $response = $bucket->delete_key( $args{prefix_or_key} ) or die $s3->err . ": " . $s3->errstr; } } sub mk_bucket { my $bucketname = $args{bucket}; my $bucket = $s3->add_bucket( { bucket => $bucketname, location_constraint => 'EU' } ) or die $s3->err . ": " . $s3->errstr; } sub buckets { my $response = $s3->buckets; my $num = scalar @{ $response->{buckets} || [] }; print "You have $num bucket"; print "s" if $num != 1; print ":\n"; foreach my $bucket ( @{ $response->{buckets} } ) { print '- ' . $bucket->bucket . "\n"; } } sub terminal { my $encoding = eval { require Term::Encoding; Term::Encoding::get_encoding(); } || "utf-8"; binmode STDOUT, ":encoding($encoding)"; } # TODO: Replace with AppConfig this is ick! 
sub get_options { my $help = 0; my $man = 0; my $force = 0; my $loc = "US"; my $bucket = ""; GetOptions( \%args, "bucket=s", "jurisdiction=s", "acl_short=s", "f|force" => \$force, "h|help|?" => \$help, "man" => \$man, ) or pod2usage(2); $args{force} = $force; foreach my $arg (@ARGV) { if ( $arg =~ /:/ ) { my ( $b, $rest ) = split( ":", $arg ); $args{bucket} = $b; $args{prefix_or_key} = $rest; } } # For cp / sync etc $args{dest_or_source} = $ARGV[2] if $ARGV[2]; pod2usage(1) if $help || @ARGV == 0; pod2usage( -verbose => 2 ) if $man; } sub _get_bucket { helper("No bucket supplied") unless $args{bucket}; my $bucket = $s3->bucket( $args{bucket} ); die $s3->err . ": " . $s3->errstr if $s3->err; helper("Could not get bucket $args{bucket}") unless $bucket; return $bucket; } sub helper { my $msg = shift; if ($msg) { pod2usage( -message => $msg, -exitval => 2 ); } exit; } =pod =head1 NAME s3cl - Command line for Amazon S3 cloud storage =head1 VERSION version 0.59 =head1 SYNOPSIS s3cl command [options] s3cl buckets s3cl mkbucket --bucket some_bucket_name --jurisdiction [EU|US] s3cl ls <bucket>:[prefix] s3cl cp <bucket>:<key> /path/[filename] s3cl sync <bucket>:[prefix] /path/ s3cl sync_up [--acl_short=public-read] /path/ <bucket>:[prefix] s3cl rm <bucket>:<key> Options: -help brief help message -man full documentation We take NO responsibility for the costs incurred through using this script. To run this script, you need to set a pair of environment variables: AWS_ACCESS_KEY_ID AWS_ACCESS_KEY_SECRET =head1 DESCRIPTION This program gives a command line interface to Amazon's S3 storage service. It does not limit the number of requests (which may cost you more money than if you did it a different way!) and each request costs money (although some costs from EC2 may be $0.00, check the latest on the Amazon costs page) - we take NO responsibility for your bill. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut __DATA__ =head1 COMMANDS =over 4 =item B<buckets> s3cl buckets List all buckets for this account. =item B<mkbucket> s3cl mkbucket --bucket somebucketname [--jurisdiction [EU|US]] Create a new bucket, optionally specifying what jurisdiction it should be created in. =item B<ls> s3cl ls <bucket>:[prefix] List contents of a bucket; the optional B<prefix> can be partial, in which case all keys matching this as the start of the key name will be returned. If no B<prefix> is supplied all keys of the bucket will be returned. =item B<cp> s3cl cp <bucket>:<key> target_file s3cl cp <bucket>:<key> target_directory Copy a single key from the bucket to the target file, or into the target_directory. =item B<sync> s3cl sync <bucket>:[prefix] target_dir Downloads all files matching the prefix into a directory structure replicating that of the prefix and all 'sub-directories'. It will download ALL files - even if already on your local disk: http://www.amazon.com/gp/browse.html?node=16427261 # Data transfer "in" and "out" refers to transfer into and out # of Amazon S3. Data transferred between Amazon EC2 and # Amazon S3, is free of charge (i.e., $0.00 per GB), except # data transferred between Amazon EC2 and Amazon S3-Europe, # which will be charged at regular rates. =item B<sync_up> s3cl sync_up [--acl_short=public-read] /path/ <bucket>:[prefix] Upload all the files below /path/ to S3, with an optional prefix at the start of the key name. 
The existing S3 files and meta data are fetched from S3 and the md5 (etag) is compared to what is on the local disk; files are not uploaded if the content has not changed. Use --acl_short to set access control, using the acl_short options described in L<Net::Amazon::S3::Bucket>; this is only applied when the file is uploaded. Each file's content-type is worked out using L<MIME::Types>; if this does not match, 'text/plain' is used for ASCII text files, otherwise a warning is issued and the file is NOT uploaded. Currently this does NOT remove old files from S3, and if there is any change to a file then the entire file will be reuploaded. A minimal programmatic sketch of this upload flow is given after the distribution metadata below. =item B<rm> s3cl rm <bucket>:<key> Remove a key (file) from the bucket; removing a non-existent file is not classed as an error. Once removed the key (file) can not be restored - so use with care! =back =head1 ABOUT This module contains code modified from Amazon that contains the following notice (which is also applicable to this code): # This software code is made available "AS IS" without # warranties of any kind. You may copy, display, modify and # redistribute the software code either by itself or as incorporated # into your code; provided that you do not remove any proprietary # notices. Your use of this software code is at your own risk and # you waive any claim against Amazon Digital Services, Inc. or its # affiliates with respect to your use of this software code. # (c) 2006 Amazon Digital Services, Inc. or its affiliates. =head1 AUTHOR Leo Lapworth - Part of the HinuHinu project =cut META.yml100644000765000024 244212121037303 14440 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59--- abstract: 'Use the Amazon S3 - Simple Storage Service' author: - 'Pedro Figueiredo ' build_requires: File::Find: 0 File::Temp: 0 LWP::Simple: 0 Test::Exception: 0 Test::More: 0 vars: 0 configure_requires: ExtUtils::MakeMaker: 6.30 dynamic_config: 0 generated_by: 'Dist::Zilla version 4.300023, CPAN::Meta::Converter version 2.120921' license: perl meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: 1.4 name: Net-Amazon-S3 requires: Carp: 0 Data::Stream::Bulk::Callback: 0 DateTime::Format::HTTP: 0 Digest::HMAC_SHA1: 0 Digest::MD5: 0 Digest::MD5::File: 0 File::Find::Rule: 0 File::stat: 0 Getopt::Long: 0 HTTP::Date: 0 HTTP::Status: 0 IO::File: 1.14 LWP::UserAgent::Determined: 0 MIME::Base64: 0 MIME::Types: 0 Moose: 0.85 Moose::Util::TypeConstraints: 0 MooseX::StrictConstructor: 0.16 MooseX::Types::DateTime::MoreCoercions: 0.07 Path::Class: 0 Pod::Usage: 0 Regexp::Common: 0 Term::Encoding: 0 Term::ProgressBar::Simple: 0 URI: 0 URI::Escape: 0 URI::QueryParam: 0 XML::LibXML: 0 XML::LibXML::XPathContext: 0 strict: 0 warnings: 0 resources: bugtracker: https://github.com/pfig/net-amazon-s3/issues repository: https://github.com/pfig/net-amazon-s3.git version: 0.59
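The sync_up flow documented above can also be driven directly from the Net::Amazon::S3 API. The following is a minimal, hypothetical sketch rather than code shipped in this distribution: the bucket name, source directory, and prefix are made up, the AWS_ACCESS_KEY_ID and AWS_ACCESS_KEY_SECRET environment variables are assumed to be set, and it skips the etag comparison, progress bar, and ACL handling that bin/s3cl performs.

    #!/usr/bin/env perl
    # Hypothetical sketch: upload a local directory to S3 much like
    # "s3cl sync_up", using only API calls documented in this distribution.
    use strict;
    use warnings;
    use File::Find::Rule;
    use MIME::Types qw(by_suffix);
    use Path::Class qw(file);
    use Net::Amazon::S3;

    my $s3 = Net::Amazon::S3->new(
        {   aws_access_key_id     => $ENV{AWS_ACCESS_KEY_ID},
            aws_secret_access_key => $ENV{AWS_ACCESS_KEY_SECRET},
            retry                 => 1,
        }
    );

    my $bucket_name = 'my-example-bucket';    # made-up name for this sketch
    my $source      = 'site';                 # local directory to upload
    my $prefix      = 'backup/';              # optional key prefix

    my $bucket = $s3->bucket($bucket_name);   # assumed to exist already

    for my $path ( File::Find::Rule->file()->in($source) ) {
        # Work out the content type from the file suffix, as s3cl does
        my ($mediatype) = by_suffix( file($path)->basename() );
        $mediatype ||= 'text/plain';          # crude fallback for this sketch

        ( my $key = $path ) =~ s{^\Q$source\E/?}{};    # strip the local directory
        $key = "$prefix$key";

        $bucket->add_key_filename( $key, $path, { content_type => $mediatype } )
            or die $s3->err . ": " . $s3->errstr;
        print "uploaded $key ($mediatype)\n";
    }

Unlike s3cl sync_up, this sketch unconditionally re-uploads every file; the md5/etag comparison in the real script is what makes repeated runs over a large tree cheap.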
lib/Net/Amazon/S3/Request/GetObject.pm lib/Net/Amazon/S3/Request/GetObjectAccessControl.pm lib/Net/Amazon/S3/Request/InitiateMultipartUpload.pm lib/Net/Amazon/S3/Request/ListAllMyBuckets.pm lib/Net/Amazon/S3/Request/ListBucket.pm lib/Net/Amazon/S3/Request/ListParts.pm lib/Net/Amazon/S3/Request/PutObject.pm lib/Net/Amazon/S3/Request/PutPart.pm lib/Net/Amazon/S3/Request/SetBucketAccessControl.pm lib/Net/Amazon/S3/Request/SetObjectAccessControl.pm perlcritic.rc t/00-compile.t t/00use.t t/01api.t t/02client.t t/03token.t t/author-critic.t t/author-test-eol.t t/release-cpan-changes.t t/release-dist-manifest.t t/release-distmeta.t t/release-meta-json.t t/release-minimum-version.t t/release-mojibake.t t/release-no-tabs.t t/release-pod-syntax.t t/release-portability.t t/release-synopsis.t t/release-test-version.t t/release-unused-vars.t cpanfile100644000765000024 255012121037303 14673 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59requires "Carp" => "0"; requires "Data::Stream::Bulk::Callback" => "0"; requires "DateTime::Format::HTTP" => "0"; requires "Digest::HMAC_SHA1" => "0"; requires "Digest::MD5" => "0"; requires "Digest::MD5::File" => "0"; requires "File::Find::Rule" => "0"; requires "File::stat" => "0"; requires "Getopt::Long" => "0"; requires "HTTP::Date" => "0"; requires "HTTP::Status" => "0"; requires "IO::File" => "1.14"; requires "LWP::UserAgent::Determined" => "0"; requires "MIME::Base64" => "0"; requires "MIME::Types" => "0"; requires "Moose" => "0.85"; requires "Moose::Util::TypeConstraints" => "0"; requires "MooseX::StrictConstructor" => "0.16"; requires "MooseX::Types::DateTime::MoreCoercions" => "0.07"; requires "Path::Class" => "0"; requires "Pod::Usage" => "0"; requires "Regexp::Common" => "0"; requires "Term::Encoding" => "0"; requires "Term::ProgressBar::Simple" => "0"; requires "URI" => "0"; requires "URI::Escape" => "0"; requires "URI::QueryParam" => "0"; requires "XML::LibXML" => "0"; requires "XML::LibXML::XPathContext" => "0"; requires "strict" => "0"; requires "warnings" => "0"; on 'test' => sub { requires "File::Find" => "0"; requires "File::Temp" => "0"; requires "LWP::Simple" => "0"; requires "Test::Exception" => "0"; requires "Test::More" => "0"; requires "vars" => "0"; }; on 'configure' => sub { requires "ExtUtils::MakeMaker" => "6.30"; }; README.md100644000765000024 432712121037303 14452 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59# DESCRIPTION This module provides a Perlish interface to Amazon S3. From the developer blurb: "Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers". To find out more about S3, please visit: http://s3.amazonaws.com/ To use this module you will need to sign up to Amazon Web Services and provide an "Access Key ID" and " Secret Access Key". If you use this module, you will incurr costs as specified by Amazon. Please check the costs. If you use this module with your Access Key ID and Secret Access Key you must be responsible for these costs. I highly recommend reading all about S3, but in a nutshell data is stored in values. 
Values are referenced by keys, and keys are stored in buckets. Bucket names are global. Note: This is the legacy interface, please check out Net::Amazon::S3::Client instead. Development of this code happens here: http://github.com/pfig/net-amazon-s3/ Homepage for the project (just started) is at http://pfig.github.com/net-amazon-s3/ # LICENSE This module contains code modified from Amazon that contains the following notice: > This software code is made available "AS IS" without warranties of any > kind. You may copy, display, modify and redistribute the software > code either by itself or as incorporated into your code; provided that > you do not remove any proprietary notices. Your use of this software > code is at your own risk and you waive any claim against Amazon > Digital Services, Inc. or its affiliates with respect to your use of > this software code. (c) 2006 Amazon Digital Services, Inc. or its > affiliates. # AUTHOR * Leon Brocard and unknown Amazon Digital Services programmers. * Brad Fitzpatrick - return values, Bucket object. * Pedro Figueiredo - since 0.54. t000755000765000024 012121037303 13270 5ustar00pfigstaff000000000000Net-Amazon-S3-0.5900use.t100644000765000024 35212121037303 14531 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl use warnings; use strict; use lib 'lib'; use Test::More tests => 4; use_ok( 'Net::Amazon::S3' ); use_ok( 'Net::Amazon::S3::Client' ); use_ok( 'Net::Amazon::S3::Client::Bucket' ); use_ok( 'Net::Amazon::S3::Client::Object' ); 01api.t100644000765000024 2775612121037303 14570 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl use warnings; use strict; use lib 'lib'; use Digest::MD5::File qw(file_md5_hex); use Test::More; unless ( $ENV{'AMAZON_S3_EXPENSIVE_TESTS'} ) { plan skip_all => 'Testing this module for real costs money.'; } else { plan tests => 71 * 2 + 4; } use_ok('Net::Amazon::S3'); use vars qw/$OWNER_ID $OWNER_DISPLAYNAME/; my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, } ); # list all buckets that i own my $response = $s3->buckets; $OWNER_ID = $response->{owner_id}; $OWNER_DISPLAYNAME = $response->{owner_displayname}; TODO: { local $TODO = "These tests only work if you're pedro"; like( $response->{owner_id}, qr/^c7483d612ac7f0c0/ ); is( $response->{owner_displayname}, 'pedro_figueiredo' ); is( scalar @{ $response->{buckets} }, 6 ); } for my $location ( undef, 'EU' ) { # create a bucket # make sure it's a valid hostname for EU testing # we use the same bucket name for both in order to force one or the other to # have stale DNS my $bucketname = 'net-amazon-s3-test-' . lc $aws_access_key_id; # for testing # my $bucket = $s3->bucket($bucketname); $bucket->delete_bucket; exit; my $bucket_obj = $s3->add_bucket( { bucket => $bucketname, acl_short => 'public-read', location_constraint => $location } ) or die $s3->err . ": " . $s3->errstr; is( ref $bucket_obj, "Net::Amazon::S3::Bucket" ); is( $bucket_obj->get_location_constraint, $location ); like_acl_allusers_read($bucket_obj); ok( $bucket_obj->set_acl( { acl_short => 'private' } ) ); unlike_acl_allusers_read($bucket_obj); # another way to get a bucket object (does no network I/O, # assumes it already exists). Read Net::Amazon::S3::Bucket. 
$bucket_obj = $s3->bucket($bucketname); is( ref $bucket_obj, "Net::Amazon::S3::Bucket" ); # fetch contents of the bucket # note prefix, marker, max_keys options can be passed in $response = $bucket_obj->list or die $s3->err . ": " . $s3->errstr; is( $response->{bucket}, $bucketname ); is( $response->{prefix}, '' ); is( $response->{marker}, '' ); is( $response->{max_keys}, 1_000 ); is( $response->{is_truncated}, 0 ); is_deeply( $response->{keys}, [] ); is( undef, $bucket_obj->get_key("non-existing-key") ); my $keyname = 'testing.txt'; { # Create a publicly readable key, then turn it private with a short acl. # This key will persist past the end of the block. my $value = 'T'; $bucket_obj->add_key( $keyname, $value, { content_type => 'text/plain', 'x-amz-meta-colour' => 'orange', acl_short => 'public-read', } ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 200, "can access the publicly readable key" ); like_acl_allusers_read( $bucket_obj, $keyname ); ok( $bucket_obj->set_acl( { key => $keyname, acl_short => 'private' } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 403, "cannot access the private key" ); unlike_acl_allusers_read( $bucket_obj, $keyname ); ok( $bucket_obj->set_acl( { key => $keyname, acl_xml => acl_xml_from_acl_short('public-read') } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 200, "can access the publicly readable key after acl_xml set" ); like_acl_allusers_read( $bucket_obj, $keyname ); ok( $bucket_obj->set_acl( { key => $keyname, acl_xml => acl_xml_from_acl_short('private') } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 403, "cannot access the private key after acl_xml set" ); unlike_acl_allusers_read( $bucket_obj, $keyname ); } { # Create a private key, then make it publicly readable with a short # acl. Delete it at the end so we're back to having a single key in # the bucket. 
my $keyname2 = 'testing2.txt'; my $value = 'T2'; $bucket_obj->add_key( $keyname2, $value, { content_type => 'text/plain', 'x-amz-meta-colour' => 'blue', acl_short => 'private', } ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 403, "cannot access the private key" ); unlike_acl_allusers_read( $bucket_obj, $keyname2 ); ok( $bucket_obj->set_acl( { key => $keyname2, acl_short => 'public-read' } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 200, "can access the publicly readable key" ); like_acl_allusers_read( $bucket_obj, $keyname2 ); $bucket_obj->delete_key($keyname2); } { # Copy a key, keeping metadata my $keyname2 = 'testing2.txt'; $bucket_obj->copy_key( $keyname2, "/$bucketname/$keyname" ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 403, "cannot access the private key" ); # Overwrite, making publically readable $bucket_obj->copy_key( $keyname2, "/$bucketname/$keyname", { acl_short => 'public-read' } ); sleep 1; is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 200, "can access the publicly readable key" ); # Now copy it over itself, making it private $bucket_obj->edit_metadata( $keyname2, { short_acl => 'private' } ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 403, "cannot access the private key" ); # Get rid of it, bringing us back to only one key $bucket_obj->delete_key($keyname2); # Expect a nonexistent key copy to fail ok( !$bucket_obj->copy_key( "newkey", "/$bucketname/$keyname2" ), "Copying a nonexistent key fails" ); } # list keys in the bucket $response = $bucket_obj->list or die $s3->err . ": " . $s3->errstr; is( $response->{bucket}, $bucketname ); is( $response->{prefix}, '' ); is( $response->{marker}, '' ); is( $response->{max_keys}, 1_000 ); is( $response->{is_truncated}, 0 ); my @keys = @{ $response->{keys} }; is( @keys, 1 ); my $key = $keys[0]; is( $key->{key}, $keyname ); # the etag is the MD5 of the value is( $key->{etag}, 'b9ece18c950afbfa6b0fdbfa4ff731d3' ); is( $key->{size}, 1 ); is( $key->{owner_id}, $OWNER_ID ); is( $key->{owner_displayname}, $OWNER_DISPLAYNAME ); # You can't delete a bucket with things in it ok( !$bucket_obj->delete_bucket() ); $bucket_obj->delete_key($keyname); # now play with the file methods my $readme_md5 = file_md5_hex('README'); my $readme_size = -s 'README'; $keyname .= "2"; $bucket_obj->add_key_filename( $keyname, 'README', { content_type => 'text/plain', 'x-amz-meta-colour' => 'orangy', } ); $response = $bucket_obj->get_key($keyname); is( $response->{content_type}, 'text/plain' ); like( $response->{value}, qr/Amazon Digital Services/ ); is( $response->{etag}, $readme_md5 ); is( $response->{'x-amz-meta-colour'}, 'orangy' ); is( $response->{content_length}, $readme_size ); unlink('t/README'); $response = $bucket_obj->get_key_filename( $keyname, undef, 't/README' ); is( $response->{content_type}, 'text/plain' ); is( $response->{value}, '' ); is( $response->{etag}, $readme_md5 ); is( file_md5_hex('t/README'), $readme_md5 ); is( $response->{'x-amz-meta-colour'}, 'orangy' ); is( $response->{content_length}, $readme_size ); $bucket_obj->delete_key($keyname); # try empty files $keyname .= "3"; $bucket_obj->add_key( $keyname, '' ); $response = $bucket_obj->get_key($keyname); is( $response->{value}, '' ); is( $response->{etag}, 'd41d8cd98f00b204e9800998ecf8427e' ); is( $response->{content_type}, 'binary/octet-stream' ); is( $response->{content_length}, 0 ); $bucket_obj->delete_key($keyname); # how 
about using add_key_filename? $keyname .= '4'; open FILE, ">", "t/empty" or die "Can't open t/empty for write: $!"; close FILE; $bucket_obj->add_key_filename( $keyname, 't/empty' ); $response = $bucket_obj->get_key($keyname); is( $response->{value}, '' ); is( $response->{etag}, 'd41d8cd98f00b204e9800998ecf8427e' ); is( $response->{content_type}, 'binary/octet-stream' ); is( $response->{content_length}, 0 ); $bucket_obj->delete_key($keyname); unlink 't/empty'; # fetch contents of the bucket # note prefix, marker, max_keys options can be passed in $response = $bucket_obj->list or die $s3->err . ": " . $s3->errstr; is( $response->{bucket}, $bucketname ); is( $response->{prefix}, '' ); is( $response->{marker}, '' ); is( $response->{max_keys}, 1_000 ); is( $response->{is_truncated}, 0 ); is_deeply( $response->{keys}, [] ); ok( $bucket_obj->delete_bucket() ); } # see more docs in Net::Amazon::S3::Bucket # local test methods sub is_request_response_code { my ( $url, $code, $message ) = @_; my $request = HTTP::Request->new( 'GET', $url ); #warn $request->as_string(); my $response = $s3->ua->request($request); is( $response->code, $code, $message ); } sub like_acl_allusers_read { my ( $bucketobj, $keyname ) = @_; my $message = acl_allusers_read_message( 'like', @_ ); like( $bucketobj->get_acl($keyname), qr(AllUsers.+READ), $message ); } sub unlike_acl_allusers_read { my ( $bucketobj, $keyname ) = @_; my $message = acl_allusers_read_message( 'unlike', @_ ); unlike( $bucketobj->get_acl($keyname), qr(AllUsers.+READ), $message ); } sub acl_allusers_read_message { my ( $like_or_unlike, $bucketobj, $keyname ) = @_; my $message = $like_or_unlike . "_acl_allusers_read: " . $bucketobj->bucket; $message .= " - $keyname" if $keyname; return $message; } sub acl_xml_from_acl_short { my $acl_short = shift || 'private'; my $public_read = ''; if ( $acl_short eq 'public-read' ) { $public_read = qq~ http://acs.amazonaws.com/groups/global/AllUsers READ ~; } return qq~ $OWNER_ID $OWNER_DISPLAYNAME $OWNER_ID $OWNER_DISPLAYNAME FULL_CONTROL $public_read ~; } META.json100644000765000024 452612121037303 14615 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59{ "abstract" : "Use the Amazon S3 - Simple Storage Service", "author" : [ "Pedro Figueiredo " ], "dynamic_config" : 0, "generated_by" : "Dist::Zilla version 4.300023, CPAN::Meta::Converter version 2.120921", "license" : [ "perl_5" ], "meta-spec" : { "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec", "version" : "2" }, "name" : "Net-Amazon-S3", "prereqs" : { "configure" : { "requires" : { "ExtUtils::MakeMaker" : "6.30" } }, "runtime" : { "requires" : { "Carp" : "0", "Data::Stream::Bulk::Callback" : "0", "DateTime::Format::HTTP" : "0", "Digest::HMAC_SHA1" : "0", "Digest::MD5" : "0", "Digest::MD5::File" : "0", "File::Find::Rule" : "0", "File::stat" : "0", "Getopt::Long" : "0", "HTTP::Date" : "0", "HTTP::Status" : "0", "IO::File" : "1.14", "LWP::UserAgent::Determined" : "0", "MIME::Base64" : "0", "MIME::Types" : "0", "Moose" : "0.85", "Moose::Util::TypeConstraints" : "0", "MooseX::StrictConstructor" : "0.16", "MooseX::Types::DateTime::MoreCoercions" : "0.07", "Path::Class" : "0", "Pod::Usage" : "0", "Regexp::Common" : "0", "Term::Encoding" : "0", "Term::ProgressBar::Simple" : "0", "URI" : "0", "URI::Escape" : "0", "URI::QueryParam" : "0", "XML::LibXML" : "0", "XML::LibXML::XPathContext" : "0", "strict" : "0", "warnings" : "0" } }, "test" : { "requires" : { "File::Find" : "0", "File::Temp" : "0", "LWP::Simple" : "0", "Test::Exception" : "0", "Test::More" : "0", "vars" 
: "0" } } }, "release_status" : "stable", "resources" : { "bugtracker" : { "web" : "https://github.com/pfig/net-amazon-s3/issues" }, "repository" : { "type" : "git", "url" : "https://github.com/pfig/net-amazon-s3.git", "web" : "https://github.com/pfig/net-amazon-s3" } }, "version" : "0.59" } 03token.t100644000765000024 163112121037303 15101 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl use warnings; use strict; use lib 'lib'; use Test::More; use Test::Exception; unless ( $ENV{'AWS_ACCESS_KEY_ID'} and $ENV{'AWS_ACCESS_KEY_SECRET'} and $ENV{'AWS_ACCESS_TOKEN'} ) { plan skip_all => 'Need these vars in ENV: AWS_ACCESS_KEY_ID, ' . 'AWS_ACCESS_KEY_SECRET, AWS_ACCESS_TOKEN'; } else { plan tests => 1 + 1; } use_ok('Net::Amazon::S3'); my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; my $aws_session_token = $ENV{'AWS_ACCESS_TOKEN'}; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, aws_session_token => $aws_session_token, retry => 1, } ); # list all buckets that i own my $response = $s3->buckets; ok($response, "Authentication with token succeded"); Makefile.PL100644000765000024 375712121037303 15153 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59 use strict; use warnings; use ExtUtils::MakeMaker 6.30; my %WriteMakefileArgs = ( "ABSTRACT" => "Use the Amazon S3 - Simple Storage Service", "AUTHOR" => "Pedro Figueiredo ", "BUILD_REQUIRES" => { "File::Find" => 0, "File::Temp" => 0, "LWP::Simple" => 0, "Test::Exception" => 0, "Test::More" => 0, "vars" => 0 }, "CONFIGURE_REQUIRES" => { "ExtUtils::MakeMaker" => "6.30" }, "DISTNAME" => "Net-Amazon-S3", "EXE_FILES" => [ "bin/s3cl" ], "LICENSE" => "perl", "NAME" => "Net::Amazon::S3", "PREREQ_PM" => { "Carp" => 0, "Data::Stream::Bulk::Callback" => 0, "DateTime::Format::HTTP" => 0, "Digest::HMAC_SHA1" => 0, "Digest::MD5" => 0, "Digest::MD5::File" => 0, "File::Find::Rule" => 0, "File::stat" => 0, "Getopt::Long" => 0, "HTTP::Date" => 0, "HTTP::Status" => 0, "IO::File" => "1.14", "LWP::UserAgent::Determined" => 0, "MIME::Base64" => 0, "MIME::Types" => 0, "Moose" => "0.85", "Moose::Util::TypeConstraints" => 0, "MooseX::StrictConstructor" => "0.16", "MooseX::Types::DateTime::MoreCoercions" => "0.07", "Path::Class" => 0, "Pod::Usage" => 0, "Regexp::Common" => 0, "Term::Encoding" => 0, "Term::ProgressBar::Simple" => 0, "URI" => 0, "URI::Escape" => 0, "URI::QueryParam" => 0, "XML::LibXML" => 0, "XML::LibXML::XPathContext" => 0, "strict" => 0, "warnings" => 0 }, "VERSION" => "0.59", "test" => { "TESTS" => "t/*.t" } ); unless ( eval { ExtUtils::MakeMaker->VERSION(6.56) } ) { my $br = delete $WriteMakefileArgs{BUILD_REQUIRES}; my $pp = $WriteMakefileArgs{PREREQ_PM}; for my $mod ( keys %$br ) { if ( exists $pp->{$mod} ) { $pp->{$mod} = $br->{$mod} if $br->{$mod} > $pp->{$mod}; } else { $pp->{$mod} = $br->{$mod}; } } } delete $WriteMakefileArgs{CONFIGURE_REQUIRES} unless eval { ExtUtils::MakeMaker->VERSION(6.52) }; WriteMakefile(%WriteMakefileArgs); README.mkdn100644000765000024 3113512121037303 15020 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59# NAME Net::Amazon::S3 - Use the Amazon S3 - Simple Storage Service # VERSION version 0.59 # SYNOPSIS use Net::Amazon::S3; my $aws_access_key_id = 'fill me in'; my $aws_secret_access_key = 'fill me in too'; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, } ); # a bucket is a globally-unique directory # 
list all buckets that i own my $response = $s3->buckets; foreach my $bucket ( @{ $response->{buckets} } ) { print "You have a bucket: " . $bucket->bucket . "\n"; } # create a new bucket my $bucketname = 'acmes_photo_backups'; my $bucket = $s3->add_bucket( { bucket => $bucketname } ) or die $s3->err . ": " . $s3->errstr; # or use an existing bucket $bucket = $s3->bucket($bucketname); # store a file in the bucket $bucket->add_key_filename( '1.JPG', 'DSC06256.JPG', { content_type => 'image/jpeg', }, ) or die $s3->err . ": " . $s3->errstr; # store a value in the bucket $bucket->add_key( 'reminder.txt', 'this is where my photos are backed up' ) or die $s3->err . ": " . $s3->errstr; # list files in the bucket $response = $bucket->list_all or die $s3->err . ": " . $s3->errstr; foreach my $key ( @{ $response->{keys} } ) { my $key_name = $key->{key}; my $key_size = $key->{size}; print "Bucket contains key '$key_name' of size $key_size\n"; } # fetch file from the bucket $response = $bucket->get_key_filename( '1.JPG', 'GET', 'backup.jpg' ) or die $s3->err . ": " . $s3->errstr; # fetch value from the bucket $response = $bucket->get_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; print "reminder.txt:\n"; print " content length: " . $response->{content_length} . "\n"; print " content type: " . $response->{content_type} . "\n"; print " etag: " . $response->{content_type} . "\n"; print " content: " . $response->{value} . "\n"; # delete keys $bucket->delete_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; $bucket->delete_key('1.JPG') or die $s3->err . ": " . $s3->errstr; # and finally delete the bucket $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr; # DESCRIPTION This module provides a Perlish interface to Amazon S3. From the developer blurb: "Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers". To find out more about S3, please visit: http://s3.amazonaws.com/ To use this module you will need to sign up to Amazon Web Services and provide an "Access Key ID" and " Secret Access Key". If you use this module, you will incurr costs as specified by Amazon. Please check the costs. If you use this module with your Access Key ID and Secret Access Key you must be responsible for these costs. I highly recommend reading all about S3, but in a nutshell data is stored in values. Values are referenced by keys, and keys are stored in buckets. Bucket names are global. Note: This is the legacy interface, please check out [Net::Amazon::S3::Client](http://search.cpan.org/perldoc?Net::Amazon::S3::Client) instead. Development of this code happens here: http://github.com/pfig/net-amazon-s3/ Homepage for the project (just started) is at http://pfig.github.com/net-amazon-s3/ # METHODS ## new Create a new S3 client object. Takes some arguments: - aws\_access\_key\_id Use your Access Key ID as the value of the AWSAccessKeyId parameter in requests you send to Amazon Web Services (when required). Your Access Key ID identifies you as the party responsible for the request. 
- aws\_secret\_access\_key Since your Access Key ID is not encrypted in requests to AWS, it could be discovered and used by anyone. Services that are not free require you to provide additional information, a request signature, to verify that a request containing your unique Access Key ID could only have come from you. DO NOT INCLUDE THIS IN SCRIPTS OR APPLICATIONS YOU DISTRIBUTE. YOU'LL BE SORRY - aws\_session\_token If you are using temporary credentials provided by the AWS Security Token Service, set the token here, and it will be added to the request in order to authenticate it. - secure Set this to `1` if you want to use SSL-encrypted connections when talking to S3. Defaults to `0`. - timeout How many seconds should your script wait before bailing on a request to S3? Defaults to 30. - retry If this library should retry upon errors. This option is recommended. This uses exponential backoff with retries after 1, 2, 4, 8, 16, 32 seconds, as recommended by Amazon. Defaults to off. ## buckets Returns undef on error, else hashref of results ## add\_bucket Takes a hashref: - bucket The name of the bucket you want to add - acl\_short (optional) See the set\_acl subroutine for documenation on the acl\_short options - location\_constraint (option) Sets the location constraint of the new bucket. If left unspecified, the default S3 datacenter location will be used. Otherwise, you can set it to 'EU' for a European data center - note that costs are different. Returns 0 on failure, Net::Amazon::S3::Bucket object on success ## bucket BUCKET Takes a scalar argument, the name of the bucket you're creating Returns an (unverified) bucket object from an account. Does no network access. ## delete\_bucket Takes either a [Net::Amazon::S3::Bucket](http://search.cpan.org/perldoc?Net::Amazon::S3::Bucket) object or a hashref containing - bucket The name of the bucket to remove Returns false (and fails) if the bucket isn't empty. Returns true if the bucket is successfully deleted. ## list\_bucket List all keys in this bucket. Takes a hashref of arguments: MANDATORY - bucket The name of the bucket you want to list keys on OPTIONAL - prefix Restricts the response to only contain results that begin with the specified prefix. If you omit this optional argument, the value of prefix for your query will be the empty string. In other words, the results will be not be restricted by prefix. - delimiter If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. For example, with prefix="USA/" and delimiter="/", the matching keys "USA/Oregon/Salem" and "USA/Oregon/Portland" would be summarized in the response as a single "USA/Oregon" element in the CommonPrefixes collection. If an otherwise matching key does not contain the delimiter after the prefix, it appears in the Contents collection. Each element in the CommonPrefixes collection counts as one against the MaxKeys limit. The rolled-up keys represented by each CommonPrefixes element do not. If the Delimiter parameter is not present in your request, keys in the result set will not be rolled-up and neither the CommonPrefixes collection nor the NextMarker element will be present in the response. - max-keys This optional argument limits the number of results returned in response to your query. 
Amazon S3 will return no more than this number of results, but possibly less. Even if max-keys is not specified, Amazon S3 will limit the number of results in the response. Check the IsTruncated flag to see if your results are incomplete. If so, use the Marker parameter to request the next page of results. For the purpose of counting max-keys, a 'result' is either a key in the 'Contents' collection, or a delimited prefix in the 'CommonPrefixes' collection. So for delimiter requests, max-keys limits the total number of list results, not just the number of keys. - marker This optional parameter enables pagination of large result sets. `marker` specifies where in the result set to resume listing. It restricts the response to only contain results that occur alphabetically after the value of marker. To retrieve the next page of results, use the last key from the current page of results as the marker in your next request. See also `next_marker`, below. If `marker` is omitted,the first page of results is returned. Returns undef on error and a hashref of data on success: The hashref looks like this: { bucket => $bucket_name, prefix => $bucket_prefix, common_prefixes => [$prefix1,$prefix2,...] marker => $bucket_marker, next_marker => $bucket_next_available_marker, max_keys => $bucket_max_keys, is_truncated => $bucket_is_truncated_boolean keys => [$key1,$key2,...] } Explanation of bits of that: - common\_prefixes If list\_bucket was requested with a delimiter, common\_prefixes will contain a list of prefixes matching that delimiter. Drill down into these prefixes by making another request with the prefix parameter. - is\_truncated B flag that indicates whether or not all results of your query were returned in this response. If your results were truncated, you can make a follow-up paginated request using the Marker parameter to retrieve the rest of the results. - next\_marker A convenience element, useful when paginating with delimiters. The value of `next_marker`, if present, is the largest (alphabetically) of all key names and all CommonPrefixes prefixes in the response. If the `is_truncated` flag is set, request the next page of results by setting `marker` to the value of `next_marker`. This element is only present in the response if the `delimiter` parameter was sent with the request. Each key is a hashref that looks like this: { key => $key, last_modified => $last_mod_date, etag => $etag, # An MD5 sum of the stored content. size => $size, # Bytes storage_class => $storage_class # Doc? owner_id => $owner_id, owner_displayname => $owner_name } ## list\_bucket\_all List all keys in this bucket without having to worry about 'marker'. This is a convenience method, but may make multiple requests to S3 under the hood. Takes the same arguments as list\_bucket. ## add\_key DEPRECATED. DO NOT USE ## get\_key DEPRECATED. DO NOT USE ## head\_key DEPRECATED. DO NOT USE ## delete\_key DEPRECATED. DO NOT USE # LICENSE This module contains code modified from Amazon that contains the following notice: # This software code is made available "AS IS" without warranties of any # kind. You may copy, display, modify and redistribute the software # code either by itself or as incorporated into your code; provided that # you do not remove any proprietary notices. Your use of this software # code is at your own risk and you waive any claim against Amazon # Digital Services, Inc. or its affiliates with respect to your use of # this software code. (c) 2006 Amazon Digital Services, Inc. or its # affiliates. 
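As a concrete illustration of the marker/is_truncated pagination documented under `list_bucket` above, here is a minimal sketch of driving that pagination by hand. The bucket name `my-bucket` and the page size are placeholders, not part of this distribution; `list_bucket_all` performs the equivalent loop for you.

    # walk a bucket one page at a time, collecting every key
    my @all_keys;
    my $marker = '';
    while (1) {
        my $page = $s3->list_bucket(
            {   bucket   => 'my-bucket',   # placeholder bucket name
                max_keys => 100,
                marker   => $marker,
            }
        ) or die $s3->err . ": " . $s3->errstr;

        push @all_keys, map { $_->{key} } @{ $page->{keys} };

        # stop when S3 says there is nothing left
        last unless $page->{is_truncated};

        # next_marker is only present for delimiter requests,
        # so fall back to the last key of the current page
        $marker = $page->{next_marker} || $page->{keys}->[-1]->{key};
    }

In practice `list_bucket_all` is usually preferable: it runs the same loop internally and strips the `is_truncated` and `next_marker` bookkeeping from the merged result.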
# TESTING Testing S3 is a tricky thing. Amazon wants to charge you a bit of money each time you use their service. And yes, testing counts as using. Because of this, the application's test suite skips anything approaching a real test unless you set these three environment variables: - AMAZON\_S3\_EXPENSIVE\_TESTS Doesn't matter what you set it to. Just has to be set - AWS\_ACCESS\_KEY\_ID Your AWS access key - AWS\_ACCESS\_KEY\_SECRET Your AWS sekkr1t passkey. Be forewarned that setting this environment variable on a shared system might leak that information to another user. Be careful. # AUTHOR Leon Brocard and unknown Amazon Digital Services programmers. Brad Fitzpatrick - return values, Bucket object Pedro Figueiredo - since 0.54 # SEE ALSO [Net::Amazon::S3::Bucket](http://search.cpan.org/perldoc?Net::Amazon::S3::Bucket) # AUTHOR Pedro Figueiredo # COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 02client.t100755000765000024 2304312121037303 15262 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl use warnings; use strict; use lib 'lib'; use Digest::MD5::File qw(file_md5_hex); use LWP::Simple; use File::stat; use Test::More; use Test::Exception; use File::Temp qw/ :seekable /; unless ( $ENV{'AMAZON_S3_EXPENSIVE_TESTS'} ) { plan skip_all => 'Testing this module for real costs money.'; } else { plan tests => 48; } use_ok('Net::Amazon::S3'); my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; my $s3 = Net::Amazon::S3->new( aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, ); my $readme_size = stat('README')->size; my $readme_md5hex = file_md5_hex('README'); my $client = Net::Amazon::S3::Client->new( s3 => $s3 ); my @buckets = $client->buckets; TODO: { local $TODO = "These tests only work if you're pedro"; my $first_bucket = $buckets[0]; like( $first_bucket->owner_id, qr/^c7483d612ac7f0c0/, 'have owner id' ); is( $first_bucket->owner_display_name, 'pedro_figueiredo', 'have display name' ); is( scalar @buckets, 6, 'have a bunch of buckets' ); } my $bucket_name = 'net-amazon-s3-test-' . 
lc $aws_access_key_id; my $bucket = $client->create_bucket( name => $bucket_name, acl_short => 'public-read', location_constraint => 'EU', ); is( $bucket->name, $bucket_name, 'newly created bucket has correct name' ); like( $bucket->acl, qr{[a-z0-9]{64}.+?[a-z0-9]{64}.+?FULL_CONTROLhttp://acs.amazonaws.com/groups/global/AllUsersREAD}, 'newly created bucket is public-readable' ); is( $bucket->location_constraint, 'EU', 'newly created bucket is in the EU' ); my $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { $object->delete; } } my $count = 0; $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { $count++; } } is( $count, 0, 'newly created bucket has no objects' ); my $object = $bucket->object( key => 'this is the key' ); ok( !$object->exists, 'object does not exist yet' ); $object->put('this is the value'); ok( $object->exists, 'object now exists yet' ); my @objects; @objects = (); $stream = $bucket->list( { prefix => 'this is the key' } ); until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'bucket list with prefix finds key' ); @objects = (); $stream = $bucket->list( { prefix => 'this is not the key' } ); until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 0, 'bucket list with different prefix does not find key' ); @objects = (); $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'bucket list finds newly created key' ); is( $objects[0]->key, 'this is the key', 'newly created object has the right key' ); is( $objects[0]->etag, '94325a12f8db22ffb6934cc5f22f6698', 'newly created object has the right etag' ); is( $objects[0]->size, '17', 'newly created object has the right size' ); is( $object->get, 'this is the value', 'newly created object has the right value' ); is( $bucket->object( key => 'this is the key' )->get, 'this is the value', 'newly created object fetched by name has the right value' ); is( get( $object->uri ), undef, 'newly created object cannot be fetched by uri' ); $object->expires('2037-01-01'); is( get( $object->query_string_authentication_uri() ), 'this is the value', 'newly created object can be fetch by authentication uri' ); $object->delete; # upload a public object $object = $bucket->object( key => 'this is the public key', acl_short => 'public-read', content_type => 'text/plain', content_encoding => 'identity', expires => '2001-02-03', ); $object->put('this is the public value'); is( get( $object->uri ), 'this is the public value', 'newly created public object is publically accessible' ); is( ( head( $object->uri ) )[0], 'text/plain', 'newly created public object has the right content type' ); is( ( head( $object->uri ) )[3], $object->expires->epoch, 'newly created public object has the right expires' ); $object->delete; # delete a non-existant object $object = $bucket->object( key => 'not here' ); throws_ok { $object->get } qr/NoSuchKey/, 'getting non-existant object throws exception'; # upload a file with put_filename $object = $bucket->object( key => 'the readme' ); $object->put_filename('README'); @objects = (); $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'have newly uploaded object' ); is( $objects[0]->key, 'the readme', 'newly uploaded object has the right key' ); is( 
$objects[0]->etag, $readme_md5hex, 'newly uploaded object has the right etag' ); is( $objects[0]->size, $readme_size, 'newly created object has the right size' ); ok( $objects[0]->last_modified, 'newly created object has a last modified' ); $object->delete; # upload a public object with put_filename $object = $bucket->object( key => 'the public readme', acl_short => 'public-read' ); $object->put_filename('README'); is( length( get( $object->uri ) ), $readme_size, 'newly uploaded public object has the right size' ); $object->delete; # upload a file with put_filename with known md5hex and size $object = $bucket->object( key => 'the new readme', etag => $readme_md5hex, size => $readme_size ); $object->put_filename('README'); @objects = (); $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'have newly uploaded object' ); is( $objects[0]->key, 'the new readme', 'newly uploaded object has the right key' ); is( $objects[0]->etag, $readme_md5hex, 'newly uploaded object has the right etag' ); is( $objects[0]->size, $readme_size, 'newly created object has the right size' ); ok( $objects[0]->last_modified, 'newly created object has a last modified' ); # download an object with get_filename my $tmp_fh = File::Temp->new(); $object->get_filename($tmp_fh->filename); is( stat($tmp_fh->filename)->size, $readme_size, 'download has right size' ); is( file_md5_hex($tmp_fh->filename), $readme_md5hex, 'download has right etag' ); $object->delete; # upload a public object with put_filename with known md5hex and size $object = $bucket->object( key => 'the new public readme', etag => $readme_md5hex, size => $readme_size, acl_short => 'public-read' ); $object->put_filename( 'README', $readme_md5hex, $readme_size ); is( length( get( $object->uri ) ), $readme_size, 'newly uploaded public object has the right size' ); $object->delete; # upload an object using multipart upload $object = $bucket->object( key => 'new multipart file', acl_short => 'public-read' ); my $upload_id; ok($upload_id = $object->initiate_multipart_upload, "can initiate a new multipart upload"); #put part my $put_part_response; ok( $put_part_response = $object->put_part(part_number => 1, upload_id => $upload_id, value => 'x' x (5 * 1024 * 1024)), 'Got a successful response for PUT part' ); my @etags; push @etags, $put_part_response->header('ETag'); ok( $put_part_response = $object->put_part(part_number => 2, upload_id => $upload_id, value => 'z' x (1024 * 1024)), 'Got a successful response for 2nd PUT part' ); push @etags, $put_part_response->header('ETag'); # TODO list part? - We've got this, but how to expose it nicely? 
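# Note on the multipart flow exercised above: Amazon's documented minimum
# part size (5 MB) applies to every part except the last, which is why
# part 1 is 5 MiB of 'x' while part 2 can be a smaller 1 MiB of 'z'.
# The ETag header returned by each put_part call is collected in @etags so
# it can be passed, together with the matching part numbers, to
# complete_multipart_upload below.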
#complete multipart upload my $complete_upload_response; ok( $complete_upload_response = $object->complete_multipart_upload( upload_id => $upload_id, part_numbers => [1,2], etags => \@etags), "successful response for complete multipart upload" ); #get the file and check that it looks like we expect ok($object->exists, "object has now been created"); $tmp_fh = File::Temp->new(); $object->get_filename($tmp_fh->filename); is( stat($tmp_fh->filename)->size, 6 * 1024 * 1024, "downloaded file has a size equivalent to the sum of it's parts"); $tmp_fh->seek((5 * 1024 * 1024) - 1, SEEK_SET);#jump to 5MB position my $test_bytes; read($tmp_fh, $test_bytes, 2); is($test_bytes, "xz", "The second chunk of the file begins in the correct place"); $object->delete; #test multi-object delete #make 3 identical objects @objects =(); for my $i(1..3){ my $bulk_object = $bucket->object( key => "bulk-readme-$i", etag => $readme_md5hex, size => $readme_size ); $bulk_object->put_filename('README'); push @objects, $bulk_object; } #now delete 2 of those objects ok($bucket->delete_multi_object(@objects[0..1]), "executed multi delete operation"); ok( !grep($_->exists, @objects[0..1]), "target objects no longer exist"); ok( $objects[2]->exists, "object not included in multi-object delete still exists" ); $objects[2]->delete; $bucket->delete; perlcritic.rc100644000765000024 46312121037303 15636 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59severity = 5 only = 1 force = 0 verbose = 4 top = 50 theme = (pbp || security) && bugs include = NamingConventions ClassHierarchies exclude = Variables Modules::RequirePackage RequireUseStrict RequireUseWarnings criticism-fatal = 1 color = 1 allow-unsafe = 1 pager = less 00-compile.t100644000765000024 312512121037303 15463 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl use strict; use warnings; use Test::More; use File::Find; use File::Temp qw{ tempdir }; my @modules; find( sub { return if $File::Find::name !~ /\.pm\z/; my $found = $File::Find::name; $found =~ s{^lib/}{}; $found =~ s{[/\\]}{::}g; $found =~ s/\.pm$//; # nothing to skip push @modules, $found; }, 'lib', ); sub _find_scripts { my $dir = shift @_; my @found_scripts = (); find( sub { return unless -f; my $found = $File::Find::name; # nothing to skip open my $FH, '<', $_ or do { note( "Unable to open $found in ( $! ), skipping" ); return; }; my $shebang = <$FH>; return unless $shebang =~ /^#!.*?\bperl\b\s*$/; push @found_scripts, $found; }, $dir, ); return @found_scripts; } my @scripts; do { push @scripts, _find_scripts($_) if -d $_ } for qw{ bin script scripts }; my $plan = scalar(@modules) + scalar(@scripts); $plan ? 
(plan tests => $plan) : (plan skip_all => "no tests to run"); { # fake home for cpan-testers # no fake requested ## local $ENV{HOME} = tempdir( CLEANUP => 1 ); like( qx{ $^X -Ilib -e "require $_; print '$_ ok'" }, qr/^\s*$_ ok/s, "$_ loaded ok" ) for sort @modules; SKIP: { eval "use Test::Script 1.05; 1;"; skip "Test::Script needed to test script compilation", scalar(@scripts) if $@; foreach my $file ( @scripts ) { my $script = $file; $script =~ s!.*/!!; script_compiles( $file, "$script script compiles" ); } } } author-critic.t100644000765000024 66612121037303 16362 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{AUTHOR_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for testing by the author'); } } use strict; use warnings; use Test::More; use English qw(-no_match_vars); eval "use Test::Perl::Critic"; plan skip_all => 'Test::Perl::Critic required to criticise code' if $@; Test::Perl::Critic->import( -profile => "perlcritic.rc" ) if -e "perlcritic.rc"; all_critic_ok(); release-no-tabs.t100644000765000024 45012121037303 16555 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use strict; use warnings; use Test::More; eval 'use Test::NoTabs'; plan skip_all => 'Test::NoTabs required' if $@; all_perl_files_ok(); author-test-eol.t100644000765000024 61512121037303 16633 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t BEGIN { unless ($ENV{AUTHOR_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for testing by the author'); } } use strict; use warnings; use Test::More; # generated by Dist::Zilla::Plugin::Test::EOL 0.07 eval "use Test::EOL; 1;" or die $@; # ^^ hack to get around prereqscanner detection, remove someday all_perl_files_ok({ trailing_whitespace => 1 }); Amazon000755000765000024 012121037303 15546 5ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/NetS3.pm100755000765000024 6251612121037303 16566 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazonpackage Net::Amazon::S3; { $Net::Amazon::S3::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; # ABSTRACT: Use the Amazon S3 - Simple Storage Service use Carp; use Digest::HMAC_SHA1; use Net::Amazon::S3::Bucket; use Net::Amazon::S3::Client; use Net::Amazon::S3::Client::Bucket; use Net::Amazon::S3::Client::Object; use Net::Amazon::S3::HTTPRequest; use Net::Amazon::S3::Request; use Net::Amazon::S3::Request::CompleteMultipartUpload; use Net::Amazon::S3::Request::CreateBucket; use Net::Amazon::S3::Request::DeleteBucket; use Net::Amazon::S3::Request::DeleteMultiObject; use Net::Amazon::S3::Request::DeleteObject; use Net::Amazon::S3::Request::GetBucketAccessControl; use Net::Amazon::S3::Request::GetBucketLocationConstraint; use Net::Amazon::S3::Request::GetObject; use Net::Amazon::S3::Request::GetObjectAccessControl; use Net::Amazon::S3::Request::InitiateMultipartUpload; use Net::Amazon::S3::Request::ListAllMyBuckets; use Net::Amazon::S3::Request::ListBucket; use Net::Amazon::S3::Request::ListParts; use Net::Amazon::S3::Request::PutObject; use Net::Amazon::S3::Request::PutPart; use Net::Amazon::S3::Request::SetBucketAccessControl; use Net::Amazon::S3::Request::SetObjectAccessControl; use LWP::UserAgent::Determined; use URI::Escape qw(uri_escape_utf8); use XML::LibXML; use XML::LibXML::XPathContext; has 'aws_access_key_id' => ( is => 'ro', isa => 'Str', required => 1 ); has 
'aws_secret_access_key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'secure' => ( is => 'ro', isa => 'Bool', required => 0, default => 0 ); has 'timeout' => ( is => 'ro', isa => 'Num', required => 0, default => 30 ); has 'retry' => ( is => 'ro', isa => 'Bool', required => 0, default => 0 ); has 'libxml' => ( is => 'rw', isa => 'XML::LibXML', required => 0 ); has 'ua' => ( is => 'rw', isa => 'LWP::UserAgent', required => 0 ); has 'err' => ( is => 'rw', isa => 'Maybe[Str]', required => 0 ); has 'errstr' => ( is => 'rw', isa => 'Maybe[Str]', required => 0 ); has 'aws_session_token' => ( is => 'ro', isa => 'Str', required => 0 ); __PACKAGE__->meta->make_immutable; my $KEEP_ALIVE_CACHESIZE = 10; sub BUILD { my $self = shift; my $ua; if ( $self->retry ) { $ua = LWP::UserAgent::Determined->new( keep_alive => $KEEP_ALIVE_CACHESIZE, requests_redirectable => [qw(GET HEAD DELETE PUT POST)], ); $ua->timing('1,2,4,8,16,32'); } else { $ua = LWP::UserAgent->new( keep_alive => $KEEP_ALIVE_CACHESIZE, requests_redirectable => [qw(GET HEAD DELETE PUT POST)], ); } $ua->timeout( $self->timeout ); $ua->env_proxy; $self->ua($ua); $self->libxml( XML::LibXML->new ); } sub buckets { my $self = shift; my $http_request = Net::Amazon::S3::Request::ListAllMyBuckets->new( s3 => $self ) ->http_request; # die $request->http_request->as_string; my $xpc = $self->_send_request($http_request); return undef unless $xpc && !$self->_remember_errors($xpc); my $owner_id = $xpc->findvalue("//s3:Owner/s3:ID"); my $owner_displayname = $xpc->findvalue("//s3:Owner/s3:DisplayName"); my @buckets; foreach my $node ( $xpc->findnodes(".//s3:Bucket") ) { push @buckets, Net::Amazon::S3::Bucket->new( { bucket => $xpc->findvalue( ".//s3:Name", $node ), creation_date => $xpc->findvalue( ".//s3:CreationDate", $node ), account => $self, } ); } return { owner_id => $owner_id, owner_displayname => $owner_displayname, buckets => \@buckets, }; } sub add_bucket { my ( $self, $conf ) = @_; my $http_request = Net::Amazon::S3::Request::CreateBucket->new( s3 => $self, bucket => $conf->{bucket}, acl_short => $conf->{acl_short}, location_constraint => $conf->{location_constraint}, )->http_request; return 0 unless $self->_send_request_expect_nothing($http_request); return $self->bucket( $conf->{bucket} ); } sub bucket { my ( $self, $bucketname ) = @_; return Net::Amazon::S3::Bucket->new( { bucket => $bucketname, account => $self } ); } sub delete_bucket { my ( $self, $conf ) = @_; my $bucket; if ( eval { $conf->isa("Net::S3::Amazon::Bucket"); } ) { $bucket = $conf->bucket; } else { $bucket = $conf->{bucket}; } croak 'must specify bucket' unless $bucket; my $http_request = Net::Amazon::S3::Request::DeleteBucket->new( s3 => $self, bucket => $bucket, )->http_request; return $self->_send_request_expect_nothing($http_request); } sub list_bucket { my ( $self, $conf ) = @_; my $http_request = Net::Amazon::S3::Request::ListBucket->new( s3 => $self, bucket => $conf->{bucket}, delimiter => $conf->{delimiter}, max_keys => $conf->{max_keys}, marker => $conf->{marker}, prefix => $conf->{prefix}, )->http_request; my $xpc = $self->_send_request($http_request); return undef unless $xpc && !$self->_remember_errors($xpc); my $return = { bucket => $xpc->findvalue("//s3:ListBucketResult/s3:Name"), prefix => $xpc->findvalue("//s3:ListBucketResult/s3:Prefix"), marker => $xpc->findvalue("//s3:ListBucketResult/s3:Marker"), next_marker => $xpc->findvalue("//s3:ListBucketResult/s3:NextMarker"), max_keys => $xpc->findvalue("//s3:ListBucketResult/s3:MaxKeys"), is_truncated => ( 
scalar $xpc->findvalue("//s3:ListBucketResult/s3:IsTruncated") eq 'true' ? 1 : 0 ), }; my @keys; foreach my $node ( $xpc->findnodes(".//s3:Contents") ) { my $etag = $xpc->findvalue( ".//s3:ETag", $node ); $etag =~ s/^"//; $etag =~ s/"$//; push @keys, { key => $xpc->findvalue( ".//s3:Key", $node ), last_modified => $xpc->findvalue( ".//s3:LastModified", $node ), etag => $etag, size => $xpc->findvalue( ".//s3:Size", $node ), storage_class => $xpc->findvalue( ".//s3:StorageClass", $node ), owner_id => $xpc->findvalue( ".//s3:ID", $node ), owner_displayname => $xpc->findvalue( ".//s3:DisplayName", $node ), }; } $return->{keys} = \@keys; if ( $conf->{delimiter} ) { my @common_prefixes; my $strip_delim = qr/$conf->{delimiter}$/; foreach my $node ( $xpc->findnodes(".//s3:CommonPrefixes") ) { my $prefix = $xpc->findvalue( ".//s3:Prefix", $node ); # strip delimiter from end of prefix $prefix =~ s/$strip_delim//; push @common_prefixes, $prefix; } $return->{common_prefixes} = \@common_prefixes; } return $return; } sub list_bucket_all { my ( $self, $conf ) = @_; $conf ||= {}; my $bucket = $conf->{bucket}; croak 'must specify bucket' unless $bucket; my $response = $self->list_bucket($conf); return $response unless $response->{is_truncated}; my $all = $response; while (1) { my $next_marker = $response->{next_marker} || $response->{keys}->[-1]->{key}; $conf->{marker} = $next_marker; $conf->{bucket} = $bucket; $response = $self->list_bucket($conf); push @{ $all->{keys} }, @{ $response->{keys} }; last unless $response->{is_truncated}; } delete $all->{is_truncated}; delete $all->{next_marker}; return $all; } sub _compat_bucket { my ( $self, $conf ) = @_; return Net::Amazon::S3::Bucket->new( { account => $self, bucket => delete $conf->{bucket} } ); } # compat wrapper; deprecated as of 2005-03-23 sub add_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); my $key = delete $conf->{key}; my $value = delete $conf->{value}; return $bucket->add_key( $key, $value, $conf ); } # compat wrapper; deprecated as of 2005-03-23 sub get_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); return $bucket->get_key( $conf->{key} ); } # compat wrapper; deprecated as of 2005-03-23 sub head_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); return $bucket->head_key( $conf->{key} ); } # compat wrapper; deprecated as of 2005-03-23 sub delete_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); return $bucket->delete_key( $conf->{key} ); } sub _validate_acl_short { my ( $self, $policy_name ) = @_; if (!grep( { $policy_name eq $_ } qw(private public-read public-read-write authenticated-read) ) ) { croak "$policy_name is not a supported canned access policy"; } } # $self->_send_request($HTTP::Request) # $self->_send_request(@params_to_make_request) sub _send_request { my ( $self, $http_request ) = @_; # warn $http_request->as_string; my $response = $self->_do_http($http_request); my $content = $response->content; return $content unless $response->content_type eq 'application/xml'; return unless $content; return $self->_xpc_of_content($content); } # centralize all HTTP work, for debugging sub _do_http { my ( $self, $http_request, $filename ) = @_; confess 'Need HTTP::Request object' if ( ref($http_request) ne 'HTTP::Request' ); # convenient time to reset any error conditions $self->err(undef); $self->errstr(undef); return $self->ua->request( $http_request, $filename ); } sub _send_request_expect_nothing { my ( $self, $http_request ) = @_; # warn 
$http_request->as_string; my $response = $self->_do_http($http_request); return 1 if $response->code =~ /^2\d\d$/; # anything else is a failure, and we save the parsed result $self->_remember_errors( $response->content ); return 0; } # Send a HEAD request first, to find out if we'll be hit with a 307 redirect. # Since currently LWP does not have true support for 100 Continue, it simply # slams the PUT body into the socket without waiting for any possible redirect. # Thus when we're reading from a filehandle, when LWP goes to reissue the request # having followed the redirect, the filehandle's already been closed from the # first time we used it. Thus, we need to probe first to find out what's going on, # before we start sending any actual data. sub _send_request_expect_nothing_probed { my ( $self, $http_request ) = @_; my $head = Net::Amazon::S3::HTTPRequest->new( s3 => $self, method => 'HEAD', path => $http_request->uri->path, )->http_request; #my $head_request = $self->_make_request( $head ); my $override_uri = undef; my $old_redirectable = $self->ua->requests_redirectable; $self->ua->requests_redirectable( [] ); my $response = $self->_do_http($head); if ( $response->code =~ /^3/ && defined $response->header('Location') ) { $override_uri = $response->header('Location'); } $http_request->uri($override_uri) if defined $override_uri; $response = $self->_do_http($http_request); $self->ua->requests_redirectable($old_redirectable); return 1 if $response->code =~ /^2\d\d$/; # anything else is a failure, and we save the parsed result $self->_remember_errors( $response->content ); return 0; } sub _croak_if_response_error { my ( $self, $response ) = @_; unless ( $response->code =~ /^2\d\d$/ ) { $self->err("network_error"); $self->errstr( $response->status_line ); croak "Net::Amazon::S3: Amazon responded with " . $response->status_line . "\n"; } } sub _xpc_of_content { my ( $self, $content ) = @_; my $doc = $self->libxml->parse_string($content); # warn $doc->toString(1); my $xpc = XML::LibXML::XPathContext->new($doc); $xpc->registerNs( 's3', 'http://s3.amazonaws.com/doc/2006-03-01/' ); return $xpc; } # returns 1 if errors were found sub _remember_errors { my ( $self, $src ) = @_; # Do not try to parse non-xml unless ( ref $src || $src =~ m/^[[:space:]]*err($code); $self->errstr($src); return 1; } my $xpc = ref $src ? $src : $self->_xpc_of_content($src); if ( $xpc->findnodes("//Error") ) { $self->err( $xpc->findvalue("//Error/Code") ); $self->errstr( $xpc->findvalue("//Error/Message") ); return 1; } return 0; } sub _urlencode { my ( $self, $unencoded ) = @_; return uri_escape_utf8( $unencoded, '^A-Za-z0-9_\-\.' ); } 1; __END__ =pod =head1 NAME Net::Amazon::S3 - Use the Amazon S3 - Simple Storage Service =head1 VERSION version 0.59 =head1 SYNOPSIS use Net::Amazon::S3; my $aws_access_key_id = 'fill me in'; my $aws_secret_access_key = 'fill me in too'; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, } ); # a bucket is a globally-unique directory # list all buckets that i own my $response = $s3->buckets; foreach my $bucket ( @{ $response->{buckets} } ) { print "You have a bucket: " . $bucket->bucket . "\n"; } # create a new bucket my $bucketname = 'acmes_photo_backups'; my $bucket = $s3->add_bucket( { bucket => $bucketname } ) or die $s3->err . ": " . 
$s3->errstr; # or use an existing bucket $bucket = $s3->bucket($bucketname); # store a file in the bucket $bucket->add_key_filename( '1.JPG', 'DSC06256.JPG', { content_type => 'image/jpeg', }, ) or die $s3->err . ": " . $s3->errstr; # store a value in the bucket $bucket->add_key( 'reminder.txt', 'this is where my photos are backed up' ) or die $s3->err . ": " . $s3->errstr; # list files in the bucket $response = $bucket->list_all or die $s3->err . ": " . $s3->errstr; foreach my $key ( @{ $response->{keys} } ) { my $key_name = $key->{key}; my $key_size = $key->{size}; print "Bucket contains key '$key_name' of size $key_size\n"; } # fetch file from the bucket $response = $bucket->get_key_filename( '1.JPG', 'GET', 'backup.jpg' ) or die $s3->err . ": " . $s3->errstr; # fetch value from the bucket $response = $bucket->get_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; print "reminder.txt:\n"; print " content length: " . $response->{content_length} . "\n"; print " content type: " . $response->{content_type} . "\n"; print " etag: " . $response->{content_type} . "\n"; print " content: " . $response->{value} . "\n"; # delete keys $bucket->delete_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; $bucket->delete_key('1.JPG') or die $s3->err . ": " . $s3->errstr; # and finally delete the bucket $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr; =head1 DESCRIPTION This module provides a Perlish interface to Amazon S3. From the developer blurb: "Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers". To find out more about S3, please visit: http://s3.amazonaws.com/ To use this module you will need to sign up to Amazon Web Services and provide an "Access Key ID" and " Secret Access Key". If you use this module, you will incurr costs as specified by Amazon. Please check the costs. If you use this module with your Access Key ID and Secret Access Key you must be responsible for these costs. I highly recommend reading all about S3, but in a nutshell data is stored in values. Values are referenced by keys, and keys are stored in buckets. Bucket names are global. Note: This is the legacy interface, please check out L instead. Development of this code happens here: http://github.com/pfig/net-amazon-s3/ Homepage for the project (just started) is at http://pfig.github.com/net-amazon-s3/ =head1 METHODS =head2 new Create a new S3 client object. Takes some arguments: =over =item aws_access_key_id Use your Access Key ID as the value of the AWSAccessKeyId parameter in requests you send to Amazon Web Services (when required). Your Access Key ID identifies you as the party responsible for the request. =item aws_secret_access_key Since your Access Key ID is not encrypted in requests to AWS, it could be discovered and used by anyone. Services that are not free require you to provide additional information, a request signature, to verify that a request containing your unique Access Key ID could only have come from you. DO NOT INCLUDE THIS IN SCRIPTS OR APPLICATIONS YOU DISTRIBUTE. 
YOU'LL BE SORRY =item aws_session_token If you are using temporary credentials provided by the AWS Security Token Service, set the token here, and it will be added to the request in order to authenticate it. =item secure Set this to C<1> if you want to use SSL-encrypted connections when talking to S3. Defaults to C<0>. =item timeout How many seconds should your script wait before bailing on a request to S3? Defaults to 30. =item retry If this library should retry upon errors. This option is recommended. This uses exponential backoff with retries after 1, 2, 4, 8, 16, 32 seconds, as recommended by Amazon. Defaults to off. =back =head2 buckets Returns undef on error, else hashref of results =head2 add_bucket Takes a hashref: =over =item bucket The name of the bucket you want to add =item acl_short (optional) See the set_acl subroutine for documenation on the acl_short options =item location_constraint (option) Sets the location constraint of the new bucket. If left unspecified, the default S3 datacenter location will be used. Otherwise, you can set it to 'EU' for a European data center - note that costs are different. =back Returns 0 on failure, Net::Amazon::S3::Bucket object on success =head2 bucket BUCKET Takes a scalar argument, the name of the bucket you're creating Returns an (unverified) bucket object from an account. Does no network access. =head2 delete_bucket Takes either a L object or a hashref containing =over =item bucket The name of the bucket to remove =back Returns false (and fails) if the bucket isn't empty. Returns true if the bucket is successfully deleted. =head2 list_bucket List all keys in this bucket. Takes a hashref of arguments: MANDATORY =over =item bucket The name of the bucket you want to list keys on =back OPTIONAL =over =item prefix Restricts the response to only contain results that begin with the specified prefix. If you omit this optional argument, the value of prefix for your query will be the empty string. In other words, the results will be not be restricted by prefix. =item delimiter If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. For example, with prefix="USA/" and delimiter="/", the matching keys "USA/Oregon/Salem" and "USA/Oregon/Portland" would be summarized in the response as a single "USA/Oregon" element in the CommonPrefixes collection. If an otherwise matching key does not contain the delimiter after the prefix, it appears in the Contents collection. Each element in the CommonPrefixes collection counts as one against the MaxKeys limit. The rolled-up keys represented by each CommonPrefixes element do not. If the Delimiter parameter is not present in your request, keys in the result set will not be rolled-up and neither the CommonPrefixes collection nor the NextMarker element will be present in the response. =item max-keys This optional argument limits the number of results returned in response to your query. Amazon S3 will return no more than this number of results, but possibly less. Even if max-keys is not specified, Amazon S3 will limit the number of results in the response. Check the IsTruncated flag to see if your results are incomplete. If so, use the Marker parameter to request the next page of results. 
For the purpose of counting max-keys, a 'result' is either a key in the 'Contents' collection, or a delimited prefix in the 'CommonPrefixes' collection. So for delimiter requests, max-keys limits the total number of list results, not just the number of keys. =item marker This optional parameter enables pagination of large result sets. C specifies where in the result set to resume listing. It restricts the response to only contain results that occur alphabetically after the value of marker. To retrieve the next page of results, use the last key from the current page of results as the marker in your next request. See also C, below. If C is omitted,the first page of results is returned. =back Returns undef on error and a hashref of data on success: The hashref looks like this: { bucket => $bucket_name, prefix => $bucket_prefix, common_prefixes => [$prefix1,$prefix2,...] marker => $bucket_marker, next_marker => $bucket_next_available_marker, max_keys => $bucket_max_keys, is_truncated => $bucket_is_truncated_boolean keys => [$key1,$key2,...] } Explanation of bits of that: =over =item common_prefixes If list_bucket was requested with a delimiter, common_prefixes will contain a list of prefixes matching that delimiter. Drill down into these prefixes by making another request with the prefix parameter. =item is_truncated B flag that indicates whether or not all results of your query were returned in this response. If your results were truncated, you can make a follow-up paginated request using the Marker parameter to retrieve the rest of the results. =item next_marker A convenience element, useful when paginating with delimiters. The value of C, if present, is the largest (alphabetically) of all key names and all CommonPrefixes prefixes in the response. If the C flag is set, request the next page of results by setting C to the value of C. This element is only present in the response if the C parameter was sent with the request. =back Each key is a hashref that looks like this: { key => $key, last_modified => $last_mod_date, etag => $etag, # An MD5 sum of the stored content. size => $size, # Bytes storage_class => $storage_class # Doc? owner_id => $owner_id, owner_displayname => $owner_name } =head2 list_bucket_all List all keys in this bucket without having to worry about 'marker'. This is a convenience method, but may make multiple requests to S3 under the hood. Takes the same arguments as list_bucket. =head2 add_key DEPRECATED. DO NOT USE =head2 get_key DEPRECATED. DO NOT USE =head2 head_key DEPRECATED. DO NOT USE =head2 delete_key DEPRECATED. DO NOT USE =head1 LICENSE This module contains code modified from Amazon that contains the following notice: # This software code is made available "AS IS" without warranties of any # kind. You may copy, display, modify and redistribute the software # code either by itself or as incorporated into your code; provided that # you do not remove any proprietary notices. Your use of this software # code is at your own risk and you waive any claim against Amazon # Digital Services, Inc. or its affiliates with respect to your use of # this software code. (c) 2006 Amazon Digital Services, Inc. or its # affiliates. =head1 TESTING Testing S3 is a tricky thing. Amazon wants to charge you a bit of money each time you use their service. And yes, testing counts as using. 
Because of this, the application's test suite skips anything approaching a real test unless you set these three environment variables: =over =item AMAZON_S3_EXPENSIVE_TESTS Doesn't matter what you set it to. Just has to be set =item AWS_ACCESS_KEY_ID Your AWS access key =item AWS_ACCESS_KEY_SECRET Your AWS sekkr1t passkey. Be forewarned that setting this environment variable on a shared system might leak that information to another user. Be careful. =back =head1 AUTHOR Leon Brocard and unknown Amazon Digital Services programmers. Brad Fitzpatrick - return values, Bucket object Pedro Figueiredo - since 0.54 =head1 SEE ALSO L =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut release-synopsis.t100644000765000024 46312121037303 17105 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval "use Test::Synopsis"; plan skip_all => "Test::Synopsis required for testing synopses" if $@; all_synopsis_ok('lib'); release-distmeta.t100644000765000024 45512121037303 17031 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval "use Test::CPAN::Meta"; plan skip_all => "Test::CPAN::Meta required for testing META.yml" if $@; meta_yaml_ok(); release-mojibake.t100644000765000024 47612121037303 17003 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval 'use Test::Mojibake'; plan skip_all => 'Test::Mojibake required for source encoding testing' if $@; all_files_encoding_ok(); release-meta-json.t100644000765000024 47112121037303 17112 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval 'use Test::CPAN::Meta::JSON'; plan skip_all => 'Test::CPAN::Meta::JSON required for testing META.json' if $@; meta_json_ok(); release-pod-syntax.t100644000765000024 45012121037303 17320 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval "use Test::Pod 1.41"; plan skip_all => "Test::Pod 1.41 required for testing POD" if $@; all_pod_files_ok(); examples000755000765000024 012121037303 14643 5ustar00pfigstaff000000000000Net-Amazon-S3-0.59backup_cpan.pl100755000765000024 763312121037303 17622 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/examples#!/home/acme/bin/perl use strict; use warnings; use lib 'lib'; use Data::Stream::Bulk::Path::Class; use Net::Amazon::S3; use Perl6::Say; use Path::Class; use Set::Object; use Term::ProgressBar::Simple; use List::Util qw(sum); use Digest::MD5::File qw(file_md5_hex); use BerkeleyDB::Manager; use Cwd; use Config; my $m = BerkeleyDB::Manager->new( home => Path::Class::Dir->new(cwd), db_class => 'BerkeleyDB::Hash', 
create => 1, ); my $db = $m->open_db( file => 'md5_cache' ); my $s3 = Net::Amazon::S3->new( aws_access_key_id => 'XXX', aws_secret_access_key => 'XXX', retry => 1, ); my $client = Net::Amazon::S3::Client->new( s3 => $s3 ); my $bucket = $client->bucket( name => 'minicpan' ); my $root = '/home/acme/Public/minicpan/'; my $file_stream = Data::Stream::Bulk::Path::Class->new( dir => Path::Class::Dir->new($root), only_files => 1, ); my %files; my $file_set = Set::Object->new(); until ( $file_stream->is_done ) { foreach my $filename ( $file_stream->items ) { my $key = $filename->relative($root)->stringify; #[rootname]path/to/file.txt:,,, my $stat = $filename->stat; my $ctime = $stat->ctime; my $mtime = $stat->mtime; my $size = $stat->size; my $inodenum = $stat->ino; my $cachekey = "$key:$ctime,$mtime,$size,$inodenum"; $db->db_get( $cachekey, my $md5_hex ); if ($md5_hex) { #say "hit $cachekey $md5hex"; } else { $md5_hex = file_md5_hex($filename) || die "Failed to find MD5 for $filename"; $m->txn_do( sub { $db->db_put( $cachekey, $md5_hex ); } ); #say "miss $cachekey $md5_hex"; } $files{$key} = { filename => $filename, key => $key, md5_hex => $md5_hex, size => -s $filename, }; $file_set->insert($key); } } my %objects; my $s3_set = Set::Object->new(); my $object_stream = $bucket->list; until ( $object_stream->is_done ) { foreach my $object ( $object_stream->items ) { my $key = $object->key; $objects{$key} = { filename => file( $root, $key )->stringify, key => $key, md5_hex => $object->etag, size => $object->size, }; # say $object->key . ' ' . $object->size . ' ' . $object->etag; $s3_set->insert( $object->key ); } } my @to_add; my @to_delete; foreach my $key ( sort keys %files ) { my $file = $files{$key}; my $object = $objects{$key}; if ($object) { if ( $file->{md5_hex} eq $object->{md5_hex} ) { # say "$key same"; } else { # say "$key different"; push @to_add, $file; } } else { #say "$key missing"; push @to_add, $file; } } foreach my $key ( sort keys %objects ) { my $object = $objects{$key}; my $file = $files{$key}; if ($file) { } else { # say "$key to delete"; push @to_delete, $object; } } my $total_size = sum map { file( $_->{filename} )->stat->size } @to_add; $total_size += scalar(@to_delete); my $progress = Term::ProgressBar::Simple->new($total_size); foreach my $file (@to_add) { my $key = $file->{key}; my $filename = $file->{filename}; my $md5_hex = $file->{md5_hex}; my $size = $file->{size}; # say "put $key"; $progress += $size; my $object = $bucket->object( key => $key, etag => $md5_hex, size => $size ); $object->put_filename($filename); } foreach my $object (@to_delete) { my $key = $object->{key}; my $filename = $object->{filename}; my $object = $bucket->object(key => $key); # say "delete $key"; $object->delete; $progress++; } release-unused-vars.t100644000765000024 44512121037303 17472 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval "use Test::Vars"; plan skip_all => "Test::Vars required for testing unused vars" if $@; all_vars_ok(); release-portability.t100644000765000024 50112121037303 17551 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval 'use Test::Portability::Files'; plan skip_all => 'Test::Portability::Files required for testing portability' if 
$@; run_tests(); release-test-version.t100644000765000024 110112121037303 17666 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use strict; use warnings; use Test::More; # generated by Dist::Zilla::Plugin::Test::Version 0.002004 BEGIN { eval "use Test::Version; 1;" or die $@; } my @imports = ( 'version_all_ok' ); my $params = { is_strict => 0, has_version => 1, }; push @imports, $params if version->parse( $Test::Version::VERSION ) >= version->parse('1.002'); Test::Version->import(@imports); version_all_ok; done_testing; release-cpan-changes.t100644000765000024 47112121037303 17544 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval 'use Test::CPAN::Changes'; plan skip_all => 'Test::CPAN::Changes required for this test' if $@; changes_ok(); done_testing(); release-dist-manifest.t100644000765000024 46612121037303 17770 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval "use Test::DistManifest"; plan skip_all => "Test::DistManifest required for testing the manifest" if $@; manifest_ok(); S3000755000765000024 012121037303 16033 5ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/AmazonBucket.pm100644000765000024 3624712121037303 20002 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3package Net::Amazon::S3::Bucket; { $Net::Amazon::S3::Bucket::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; use Carp; use File::stat; use IO::File 1.14; has 'account' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); has 'bucket' => ( is => 'ro', isa => 'Str', required => 1 ); has 'creation_date' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; # ABSTRACT: convenience object for working with Amazon S3 buckets sub _uri { my ( $self, $key ) = @_; return ($key) ? $self->bucket . "/" . $self->account->_urlencode($key) : $self->bucket . 
"/"; } sub _conf_to_headers { my ( $self, $conf ) = @_; $conf = {} unless defined $conf; $conf = {%$conf}; # clone it so as not to clobber the caller's copy if ( $conf->{acl_short} ) { $self->account->_validate_acl_short( $conf->{acl_short} ); $conf->{'x-amz-acl'} = $conf->{acl_short}; delete $conf->{acl_short}; } return $conf; } # returns bool sub add_key { my ( $self, $key, $value, $conf ) = @_; if ( ref($value) eq 'SCALAR' ) { $conf->{'Content-Length'} ||= -s $$value; $value = _content_sub($$value); } else { $conf->{'Content-Length'} ||= length $value; } my $acl_short; if ( $conf->{acl_short} ) { $acl_short = $conf->{acl_short}; delete $conf->{acl_short}; } my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->account, bucket => $self->bucket, key => $key, value => $value, acl_short => $acl_short, headers => $conf, )->http_request; # If we're pushing to a bucket that's under DNS flux, we might get a 307 # Since LWP doesn't support actually waiting for a 100 Continue response, # we'll just send a HEAD first to see what's going on if ( ref($value) ) { return $self->account->_send_request_expect_nothing_probed($http_request); } else { return $self->account->_send_request_expect_nothing($http_request); } } sub add_key_filename { my ( $self, $key, $value, $conf ) = @_; return $self->add_key( $key, \$value, $conf ); } sub copy_key { my ( $self, $key, $source, $conf ) = @_; my $acl_short; if ( defined $conf ) { if ( $conf->{acl_short} ) { $acl_short = $conf->{acl_short}; delete $conf->{acl_short}; } $conf->{'x-amz-metadata-directive'} = 'REPLACE'; } else { $conf = {}; } $conf->{'x-amz-copy-source'} = $source; my $acct = $self->account; my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->account, bucket => $self->bucket, key => $key, value => '', acl_short => $acl_short, headers => $conf, )->http_request; my $response = $acct->_do_http( $http_request ); my $xpc = $acct->_xpc_of_content( $response->content ); if ( !$response->is_success || !$xpc || $xpc->findnodes("//Error") ) { $acct->_remember_errors( $response->content ); return 0; } return 1; } sub edit_metadata { my ( $self, $key, $conf ) = @_; croak "Need configuration hash" unless defined $conf; return $self->copy_key( $key, "/" . $self->bucket . "/" . 
$key, $conf ); } sub head_key { my ( $self, $key ) = @_; return $self->get_key( $key, "HEAD" ); } sub get_key { my ( $self, $key, $method, $filename ) = @_; $filename = $$filename if ref $filename; my $acct = $self->account; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $acct, bucket => $self->bucket, key => $key, method => $method || 'GET', )->http_request; my $response = $acct->_do_http( $http_request, $filename ); if ( $response->code == 404 ) { return undef; } $acct->_croak_if_response_error($response); my $etag = $response->header('ETag'); if ($etag) { $etag =~ s/^"//; $etag =~ s/"$//; } my $return; foreach my $header ( $response->headers->header_field_names ) { $return->{ lc $header } = $response->header($header); } $return->{content_length} = $response->content_length || 0; $return->{content_type} = $response->content_type; $return->{etag} = $etag; $return->{value} = $response->content; return $return; } sub get_key_filename { my ( $self, $key, $method, $filename ) = @_; return $self->get_key( $key, $method, \$filename ); } # returns bool sub delete_key { my ( $self, $key ) = @_; croak 'must specify key' unless defined $key && length $key; my $http_request = Net::Amazon::S3::Request::DeleteObject->new( s3 => $self->account, bucket => $self->bucket, key => $key, )->http_request; return $self->account->_send_request_expect_nothing($http_request); } sub delete_bucket { my $self = shift; croak "Unexpected arguments" if @_; return $self->account->delete_bucket($self); } sub list { my $self = shift; my $conf = shift || {}; $conf->{bucket} = $self->bucket; return $self->account->list_bucket($conf); } sub list_all { my $self = shift; my $conf = shift || {}; $conf->{bucket} = $self->bucket; return $self->account->list_bucket_all($conf); } sub get_acl { my ( $self, $key ) = @_; my $account = $self->account; my $http_request; if ($key) { $http_request = Net::Amazon::S3::Request::GetObjectAccessControl->new( s3 => $account, bucket => $self->bucket, key => $key, )->http_request; } else { $http_request = Net::Amazon::S3::Request::GetBucketAccessControl->new( s3 => $account, bucket => $self->bucket, )->http_request; } my $response = $account->_do_http($http_request); if ( $response->code == 404 ) { return undef; } $account->_croak_if_response_error($response); return $response->content; } sub set_acl { my ( $self, $conf ) = @_; $conf ||= {}; my $key = $conf->{key}; my $http_request; if ($key) { $http_request = Net::Amazon::S3::Request::SetObjectAccessControl->new( s3 => $self->account, bucket => $self->bucket, key => $key, acl_short => $conf->{acl_short}, acl_xml => $conf->{acl_xml}, )->http_request; } else { $http_request = Net::Amazon::S3::Request::SetBucketAccessControl->new( s3 => $self->account, bucket => $self->bucket, acl_short => $conf->{acl_short}, acl_xml => $conf->{acl_xml}, )->http_request; } return $self->account->_send_request_expect_nothing($http_request); } sub get_location_constraint { my ($self) = @_; my $http_request = Net::Amazon::S3::Request::GetBucketLocationConstraint->new( s3 => $self->account, bucket => $self->bucket, )->http_request; my $xpc = $self->account->_send_request($http_request); return undef unless $xpc && !$self->account->_remember_errors($xpc); my $lc = $xpc->findvalue("//s3:LocationConstraint"); if ( defined $lc && $lc eq '' ) { $lc = undef; } return $lc; } # proxy up the err requests sub err { $_[0]->account->err } sub errstr { $_[0]->account->errstr } sub _content_sub { my $filename = shift; my $stat = stat($filename); my $remaining = 
$stat->size; my $blksize = $stat->blksize || 4096; croak "$filename not a readable file with fixed size" unless -r $filename and ( -f _ || $remaining ); my $fh = IO::File->new( $filename, 'r' ) or croak "Could not open $filename: $!"; $fh->binmode; return sub { my $buffer; # upon retries the file is closed and we must reopen it unless ( $fh->opened ) { $fh = IO::File->new( $filename, 'r' ) or croak "Could not open $filename: $!"; $fh->binmode; $remaining = $stat->size; } # warn "read remaining $remaining"; unless ( my $read = $fh->read( $buffer, $blksize ) ) { # warn "read $read buffer $buffer remaining $remaining"; croak "Error while reading upload content $filename ($remaining remaining) $!" if $! and $remaining; # otherwise, we found EOF $fh->close or croak "close of upload content $filename failed: $!"; $buffer ||= '' ; # LWP expects an emptry string on finish, read returns 0 } $remaining -= length($buffer); return $buffer; }; } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Bucket - convenience object for working with Amazon S3 buckets =head1 VERSION version 0.59 =head1 SYNOPSIS use Net::Amazon::S3; my $bucket = $s3->bucket("foo"); ok($bucket->add_key("key", "data")); ok($bucket->add_key("key", "data", { content_type => "text/html", 'x-amz-meta-colour' => 'orange', })); # the err and errstr methods just proxy up to the Net::Amazon::S3's # objects err/errstr methods. $bucket->add_key("bar", "baz") or die $bucket->err . $bucket->errstr; # fetch a key $val = $bucket->get_key("key"); is( $val->{value}, 'data' ); is( $val->{content_type}, 'text/html' ); is( $val->{etag}, 'b9ece18c950afbfa6b0fdbfa4ff731d3' ); is( $val->{'x-amz-meta-colour'}, 'orange' ); # returns undef on missing or on error (check $bucket->err) is(undef, $bucket->get_key("non-existing-key")); die $bucket->errstr if $bucket->err; # fetch a key's metadata $val = $bucket->head_key("key"); is( $val->{value}, '' ); is( $val->{content_type}, 'text/html' ); is( $val->{etag}, 'b9ece18c950afbfa6b0fdbfa4ff731d3' ); is( $val->{'x-amz-meta-colour'}, 'orange' ); # delete a key ok($bucket->delete_key($key_name)); ok(! $bucket->delete_key("non-exist-key")); # delete the entire bucket (Amazon requires it first be empty) $bucket->delete_bucket; =head1 DESCRIPTION This module represents an S3 bucket. You get a bucket object from the Net::Amazon::S3 object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 new Create a new bucket object. Expects a hash containing these two arguments: =over =item bucket =item account =back =head2 add_key Takes three positional parameters: =over =item key =item value =item configuration A hash of configuration data for this key. (See synopsis); =back Returns a boolean. =head2 add_key_filename Use this to upload a large file to S3. Takes three positional parameters: =over =item key =item filename =item configuration A hash of configuration data for this key. (See synopsis); =back Returns a boolean. =head2 copy_key Creates (or replaces) a key, copying its contents from another key elsewhere in S3. Takes the following parameters: =over =item key The key to (over)write =item source Where to copy the key from. Should be in the form C/I>/. =item conf Optional configuration hash. If present and defined, the configuration (ACL and headers) there will be used for the new key; otherwise it will be copied from the source key. =back =head2 edit_metadata Changes the metadata associated with an existing key. 
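For example, an illustrative sketch mirroring the synopsis conventions (the key and header values here are hypothetical, not taken from the distribution):

    $bucket->edit_metadata("images/my_hat.jpg", {
        content_type        => 'image/jpeg',
        'x-amz-meta-colour' => 'orange',
    });
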
Arguments: =over =item key The key to edit =item conf The new configuration hash to use =back =head2 head_key KEY Takes the name of a key in this bucket and returns its configuration hash =head2 get_key $key_name [$method] Takes a key name and an optional HTTP method (which defaults to C. Fetches the key from AWS. On failure: Returns undef on missing content, throws an exception (dies) on server errors. On success: Returns a hashref of { content_type, etag, value, @meta } on success. Other values from the server are there too, with the key being lowercased. =head2 get_key_filename $key_name $method $filename Use this to download large files from S3. Takes a key name and an optional HTTP method (which defaults to C. Fetches the key from AWS and writes it to the filename. THe value returned will be empty. On failure: Returns undef on missing content, throws an exception (dies) on server errors. On success: Returns a hashref of { content_type, etag, value, @meta } on success =head2 delete_key $key_name Removes C<$key> from the bucket. Forever. It's gone after this. Returns true on success and false on failure =head2 delete_bucket Delete the current bucket object from the server. Takes no arguments. Fails if the bucket has anything in it. This is an alias for C<< $s3->delete_bucket($bucket) >> =head2 list List all keys in this bucket. see L for documentation of this method. =head2 list_all List all keys in this bucket without having to worry about 'marker'. This may make multiple requests to S3 under the hood. see L for documentation of this method. =head2 get_acl Takes one optional positional parameter =over =item key (optional) If no key is specified, it returns the acl for the bucket. =back Returns an acl in XML format. =head2 set_acl Takes a configuration hash_ref containing: =over =item acl_xml (cannot be used in conjunction with acl_short) An XML string which contains access control information which matches Amazon's published schema. There is an example of one of these XML strings in the tests for this module. =item acl_short (cannot be used in conjunction with acl_xml) You can use the shorthand notation instead of specifying XML for certain 'canned' types of acls. (from the Amazon API documentation) private: Owner gets FULL_CONTROL. No one else has any access rights. This is the default. public-read:Owner gets FULL_CONTROL and the anonymous principal is granted READ access. If this policy is used on an object, it can be read from a browser with no authentication. public-read-write:Owner gets FULL_CONTROL, the anonymous principal is granted READ and WRITE access. This is a useful policy to apply to a bucket, if you intend for any anonymous user to PUT objects into the bucket. authenticated-read:Owner gets FULL_CONTROL, and any principal authenticated as a registered Amazon S3 user is granted READ access. =item key (optional) If the key is not set, it will apply the acl to the bucket. =back Returns a boolean. =head2 get_location_constraint Retrieves the location constraint set when the bucket was created. Returns a string (eg, 'EU'), or undef if no location constraint was set. =head2 err The S3 error code for the last error the object ran into =head2 errstr A human readable error string for the last error the object ran into =head1 SEE ALSO L =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. 
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut Client.pm100644000765000024 1412312121037303 17770 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3package Net::Amazon::S3::Client; { $Net::Amazon::S3::Client::VERSION = '0.59'; } use Moose 0.85; use HTTP::Status qw(is_error status_message); use MooseX::StrictConstructor 0.16; use Moose::Util::TypeConstraints; # ABSTRACT: An easy-to-use Amazon S3 client type 'Etag' => where { $_ =~ /^[a-z0-9]{32}$/ }; type 'OwnerId' => where { $_ =~ /^[a-z0-9]{64}$/ }; has 's3' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); __PACKAGE__->meta->make_immutable; sub buckets { my $self = shift; my $s3 = $self->s3; my $http_request = Net::Amazon::S3::Request::ListAllMyBuckets->new( s3 => $s3 ) ->http_request; my $xpc = $self->_send_request_xpc($http_request); my $owner_id = $xpc->findvalue('/s3:ListAllMyBucketsResult/s3:Owner/s3:ID'); my $owner_display_name = $xpc->findvalue( '/s3:ListAllMyBucketsResult/s3:Owner/s3:DisplayName'); my @buckets; foreach my $node ( $xpc->findnodes('/s3:ListAllMyBucketsResult/s3:Buckets/s3:Bucket') ) { push @buckets, Net::Amazon::S3::Client::Bucket->new( { client => $self, name => $xpc->findvalue( './s3:Name', $node ), creation_date => $xpc->findvalue( './s3:CreationDate', $node ), owner_id => $owner_id, owner_display_name => $owner_display_name, } ); } return @buckets; } sub create_bucket { my ( $self, %conf ) = @_; my $bucket = Net::Amazon::S3::Client::Bucket->new( client => $self, name => $conf{name}, ); $bucket->_create( acl_short => $conf{acl_short}, location_constraint => $conf{location_constraint}, ); return $bucket; } sub bucket { my ( $self, %conf ) = @_; return Net::Amazon::S3::Client::Bucket->new( client => $self, %conf, ); } sub _send_request_raw { my ( $self, $http_request, $filename ) = @_; return $self->s3->ua->request( $http_request, $filename ); } sub _send_request { my ( $self, $http_request, $filename ) = @_; my $http_response = $self->_send_request_raw( $http_request, $filename ); my $content = $http_response->content; my $content_type = $http_response->content_type; my $code = $http_response->code; if ( is_error($code) ) { if ( $content_type eq 'application/xml' ) { my $doc = $self->s3->libxml->parse_string($content); my $xpc = XML::LibXML::XPathContext->new($doc); $xpc->registerNs( 's3', 'http://s3.amazonaws.com/doc/2006-03-01/' ); if ( $xpc->findnodes('/Error') ) { my $code = $xpc->findvalue('/Error/Code'); my $message = $xpc->findvalue('/Error/Message'); confess("$code: $message"); } else { confess status_message($code); } } else { confess status_message($code); } } return $http_response; } sub _send_request_content { my ( $self, $http_request, $filename ) = @_; my $http_response = $self->_send_request( $http_request, $filename ); return $http_response->content; } sub _send_request_xpc { my ( $self, $http_request, $filename ) = @_; my $http_response = $self->_send_request( $http_request, $filename ); my $doc = $self->s3->libxml->parse_string( $http_response->content ); my $xpc = XML::LibXML::XPathContext->new($doc); $xpc->registerNs( 's3', 'http://s3.amazonaws.com/doc/2006-03-01/' ); return $xpc; } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Client - An easy-to-use Amazon S3 client =head1 VERSION version 0.59 =head1 SYNOPSIS my $s3 = Net::Amazon::S3->new( aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, ); my $client = 
Net::Amazon::S3::Client->new( s3 => $s3 ); # list all my buckets # returns a list of L objects my @buckets = $client->buckets; foreach my $bucket (@buckets) { print $bucket->name . "\n"; } # create a new bucket # returns a L object my $bucket = $client->create_bucket( name => $bucket_name, acl_short => 'private', location_constraint => 'US', ); # or use an existing bucket # returns a L object my $bucket = $client->bucket( name => $bucket_name ); =head1 DESCRIPTION The L module was written when the Amazon S3 service had just come out and it is a light wrapper around the APIs. Some bad API decisions were also made. The L, L and L classes are designed after years of usage to be easy to use for common tasks. These classes throw an exception when a fatal error occurs. It also is very careful to pass an MD5 of the content when uploaded to S3 and check the resultant ETag. WARNING: This is an early release of the Client classes, the APIs may change. =for test_synopsis no strict 'vars' =head1 METHODS =head2 buckets # list all my buckets # returns a list of L objects my @buckets = $client->buckets; foreach my $bucket (@buckets) { print $bucket->name . "\n"; } =head2 create_bucket # create a new bucket # returns a L object my $bucket = $client->create_bucket( name => $bucket_name, acl_short => 'private', location_constraint => 'US', ); =head2 bucket # or use an existing bucket # returns a L object my $bucket = $client->bucket( name => $bucket_name ); =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut release-minimum-version.t100644000765000024 52612121037303 20354 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/t#!perl BEGIN { unless ($ENV{RELEASE_TESTING}) { require Test::More; Test::More::plan(skip_all => 'these tests are for release candidate testing'); } } use Test::More; eval "use Test::MinimumVersion"; plan skip_all => "Test::MinimumVersion required for testing minimum versions" if $@; all_minimum_version_from_metayml_ok(); Request.pm100644000765000024 434112121037303 20163 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3package Net::Amazon::S3::Request; { $Net::Amazon::S3::Request::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; use Moose::Util::TypeConstraints; use Regexp::Common qw /net/; # ABSTRACT: Base class for request objects enum 'AclShort' => qw(private public-read public-read-write authenticated-read); enum 'LocationConstraint' => ( 'US', 'EU' ); # To comply with Amazon S3 requirements, bucket names must: # Contain lowercase letters, numbers, periods (.), underscores (_), and dashes (-) # Start with a number or letter # Be between 3 and 255 characters long # Not be in an IP address style (e.g., "192.168.5.4") subtype 'BucketName1' => as 'Str' => where { $_ =~ /^[a-zA-Z0-9._-]+$/; } => message { "Bucket name ($_) must contain lowercase letters, numbers, periods (.), underscores (_), and dashes (-)"; }; subtype 'BucketName2' => as 'BucketName1' => where { $_ =~ /^[a-zA-Z0-9]/; } => message { "Bucket name ($_) must start with a number or letter"; }; subtype 'BucketName3' => as 'BucketName2' => where { length($_) >= 3 && length($_) <= 255; } => message { "Bucket name ($_) must be between 3 and 255 characters long"; }; subtype 'BucketName' => as 'BucketName3' => where { $_ !~ 
/^$RE{net}{IPv4}$/; } => message { "Bucket name ($_) must not be in an IP address style (e.g., '192.168.5.4')"; }; has 's3' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); __PACKAGE__->meta->make_immutable; sub _uri { my ( $self, $key ) = @_; return ($key) ? $self->bucket . "/" . (join '/', map {$self->s3->_urlencode($_)} split /\//, $key) : $self->bucket . "/"; } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Request - Base class for request objects =head1 VERSION version 0.59 =head1 SYNOPSIS # do not instantiate directly =head1 DESCRIPTION This module is a base class for all the Net::Amazon::S3::Request::* classes. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut HTTPRequest.pm100755000765000024 2110312121037303 20701 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3package Net::Amazon::S3::HTTPRequest; { $Net::Amazon::S3::HTTPRequest::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; use HTTP::Date; use MIME::Base64 qw( encode_base64 ); use Moose::Util::TypeConstraints; use URI::Escape qw( uri_escape_utf8 ); use URI::QueryParam; use URI; # ABSTRACT: Create a signed HTTP::Request my $METADATA_PREFIX = 'x-amz-meta-'; my $AMAZON_HEADER_PREFIX = 'x-amz-'; enum 'HTTPMethod' => qw(DELETE GET HEAD PUT POST); has 's3' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); has 'method' => ( is => 'ro', isa => 'HTTPMethod', required => 1 ); has 'path' => ( is => 'ro', isa => 'Str', required => 1 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); has 'content' => ( is => 'ro', isa => 'Str|CodeRef', required => 0, default => '' ); has 'metadata' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; # make the HTTP::Request object sub http_request { my $self = shift; my $method = $self->method; my $path = $self->path; my $headers = $self->headers; my $content = $self->content; my $metadata = $self->metadata; my $http_headers = $self->_merge_meta( $headers, $metadata ); $self->_add_auth_header( $http_headers, $method, $path ) unless exists $headers->{Authorization}; my $protocol = $self->s3->secure ? 'https' : 'http'; my $uri = "$protocol://s3.amazonaws.com/$path"; if ( $path =~ m{^([^/?]+)(.*)} && _is_dns_bucket($1) ) { $uri = "$protocol://$1.s3.amazonaws.com$2"; } my $request = HTTP::Request->new( $method, $uri, $http_headers, $content ); # my $req_as = $request->as_string; # $req_as =~ s/[^\n\r\x20-\x7f]/?/g; # $req_as = substr( $req_as, 0, 1024 ) . "\n\n"; # warn $req_as; return $request; } sub query_string_authentication_uri { my ( $self, $expires ) = @_; my $method = $self->method; my $path = $self->path; my $headers = $self->headers; my $aws_access_key_id = $self->s3->aws_access_key_id; my $aws_secret_access_key = $self->s3->aws_secret_access_key; my $canonical_string = $self->_canonical_string( $method, $path, $headers, $expires ); my $encoded_canonical = $self->_encode( $aws_secret_access_key, $canonical_string ); my $protocol = $self->s3->secure ? 
'https' : 'http'; my $uri = "$protocol://s3.amazonaws.com/$path"; if ( $path =~ m{^([^/?]+)(.*)} && _is_dns_bucket($1) ) { $uri = "$protocol://$1.s3.amazonaws.com$2"; } $uri = URI->new($uri); $uri->query_param( AWSAccessKeyId => $aws_access_key_id ); $uri->query_param( Expires => $expires ); $uri->query_param( Signature => $encoded_canonical ); return $uri; } sub _add_auth_header { my ( $self, $headers, $method, $path ) = @_; my $aws_access_key_id = $self->s3->aws_access_key_id; my $aws_secret_access_key = $self->s3->aws_secret_access_key; my $aws_session_token = $self->s3->aws_session_token; if ( not $headers->header('Date') ) { $headers->header( Date => time2str(time) ); } if ( not $headers->header('x-amz-security-token') and defined $aws_session_token ) { $headers->header( 'x-amz-security-token' => $aws_session_token ); } my $canonical_string = $self->_canonical_string( $method, $path, $headers ); my $encoded_canonical = $self->_encode( $aws_secret_access_key, $canonical_string ); $headers->header( Authorization => "AWS $aws_access_key_id:$encoded_canonical" ); } # generate a canonical string for the given parameters. expires is optional and is # only used by query string authentication. sub _canonical_string { my ( $self, $method, $path, $headers, $expires ) = @_; my %interesting_headers = (); while ( my ( $key, $value ) = each %$headers ) { my $lk = lc $key; if ( $lk eq 'content-md5' or $lk eq 'content-type' or $lk eq 'date' or $lk =~ /^$AMAZON_HEADER_PREFIX/ ) { $interesting_headers{$lk} = $self->_trim($value); } } # these keys get empty strings if they don't exist $interesting_headers{'content-type'} ||= ''; $interesting_headers{'content-md5'} ||= ''; # just in case someone used this. it's not necessary in this lib. $interesting_headers{'date'} = '' if $interesting_headers{'x-amz-date'}; # if you're using expires for query string auth, then it trumps date # (and x-amz-date) $interesting_headers{'date'} = $expires if $expires; my $buf = "$method\n"; foreach my $key ( sort keys %interesting_headers ) { if ( $key =~ /^$AMAZON_HEADER_PREFIX/ ) { $buf .= "$key:$interesting_headers{$key}\n"; } else { $buf .= "$interesting_headers{$key}\n"; } } # don't include anything after the first ? in the resource... $path =~ /^([^?]*)/; $buf .= "/$1"; # ...unless there any parameters we're interested in... if ( $path =~ /[&?](acl|torrent|location|uploads|delete)($|=|&)/ ) { $buf .= "?$1"; } elsif ( my %query_params = URI->new($path)->query_form ){ #see if the remaining parsed query string provides us with any query string or upload id if($query_params{partNumber} && $query_params{uploadId}){ #re-evaluate query string, the order of the params is important for request signing, so we can't depend on URI to do the right thing $buf .= sprintf("?partNumber=%s&uploadId=%s", $query_params{partNumber}, $query_params{uploadId}); } elsif($query_params{uploadId}){ $buf .= sprintf("?uploadId=%s",$query_params{uploadId}); } } return $buf; } # finds the hmac-sha1 hash of the canonical string and the aws secret access key and then # base64 encodes the result (optionally urlencoding after that). sub _encode { my ( $self, $aws_secret_access_key, $str, $urlencode ) = @_; my $hmac = Digest::HMAC_SHA1->new($aws_secret_access_key); $hmac->add($str); my $b64 = encode_base64( $hmac->digest, '' ); if ($urlencode) { return $self->_urlencode($b64); } else { return $b64; } } # EU buckets must be accessed via their DNS name. This routine figures out if # a given bucket name can be safely used as a DNS name. 
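# Illustrative examples of how this check behaves (derived from the rules below):
#   _is_dns_bucket('my-logs.2013') returns 1 (lowercase letters, digits, dots and dashes only)
#   _is_dns_bucket('My_Bucket')    returns 0 (uppercase letters and underscores are rejected)
#   _is_dns_bucket('logs.-2013')   returns 0 (a dot-separated label may not start or end with a dash)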
sub _is_dns_bucket { my $bucketname = $_[0]; if ( length $bucketname > 63 ) { return 0; } if ( length $bucketname < 3 ) { return; } return 0 unless $bucketname =~ m{^[a-z0-9][a-z0-9.-]+$}; my @components = split /\./, $bucketname; for my $c (@components) { return 0 if $c =~ m{^-}; return 0 if $c =~ m{-$}; return 0 if $c eq ''; } return 1; } # generates an HTTP::Headers objects given one hash that represents http # headers to set and another hash that represents an object's metadata. sub _merge_meta { my ( $self, $headers, $metadata ) = @_; $headers ||= {}; $metadata ||= {}; my $http_header = HTTP::Headers->new; while ( my ( $k, $v ) = each %$headers ) { $http_header->header( $k => $v ); } while ( my ( $k, $v ) = each %$metadata ) { $http_header->header( "$METADATA_PREFIX$k" => $v ); } return $http_header; } sub _trim { my ( $self, $value ) = @_; $value =~ s/^\s+//; $value =~ s/\s+$//; return $value; } sub _urlencode { my ( $self, $unencoded ) = @_; return uri_escape_utf8( $unencoded, '^A-Za-z0-9_-' ); } 1; __END__ =pod =head1 NAME Net::Amazon::S3::HTTPRequest - Create a signed HTTP::Request =head1 VERSION version 0.59 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->bucket . '/', headers => $headers, content => $content, )->http_request; =head1 DESCRIPTION This module creates an HTTP::Request object that is signed appropriately for Amazon S3. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method creates, signs and returns a HTTP::Request object. =head2 query_string_authentication_uri This method creates, signs and returns a query string authentication URI. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Client000755000765000024 012121037303 17251 5ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3Bucket.pm100755000765000024 1622312121037303 21213 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Clientpackage Net::Amazon::S3::Client::Bucket; { $Net::Amazon::S3::Client::Bucket::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; use Data::Stream::Bulk::Callback; use MooseX::Types::DateTime::MoreCoercions 0.07 qw( DateTime ); # ABSTRACT: An easy-to-use Amazon S3 client bucket has 'client' => ( is => 'ro', isa => 'Net::Amazon::S3::Client', required => 1 ); has 'name' => ( is => 'ro', isa => 'Str', required => 1 ); has 'creation_date' => ( is => 'ro', isa => DateTime, coerce => 1, required => 0 ); has 'owner_id' => ( is => 'ro', isa => 'OwnerId', required => 0 ); has 'owner_display_name' => ( is => 'ro', isa => 'Str', required => 0 ); __PACKAGE__->meta->make_immutable; sub _create { my ( $self, %conf ) = @_; my $http_request = Net::Amazon::S3::Request::CreateBucket->new( s3 => $self->client->s3, bucket => $self->name, acl_short => $conf{acl_short}, location_constraint => $conf{location_constraint}, )->http_request; $self->client->_send_request($http_request); } sub delete { my $self = shift; my $http_request = Net::Amazon::S3::Request::DeleteBucket->new( s3 => $self->client->s3, bucket => $self->name, )->http_request; $self->client->_send_request($http_request); } sub acl { my $self = shift; my $http_request = Net::Amazon::S3::Request::GetBucketAccessControl->new( s3 => $self->client->s3, bucket => $self->name, )->http_request; return $self->client->_send_request_content($http_request); } sub location_constraint { my $self = shift; my $http_request = Net::Amazon::S3::Request::GetBucketLocationConstraint->new( s3 => $self->client->s3, bucket => $self->name, )->http_request; my $xpc = $self->client->_send_request_xpc($http_request); my $lc = $xpc->findvalue('/s3:LocationConstraint'); if ( defined $lc && $lc eq '' ) { $lc = 'US'; } return $lc; } sub list { my ( $self, $conf ) = @_; $conf ||= {}; my $prefix = $conf->{prefix}; my $marker = undef; my $end = 0; return Data::Stream::Bulk::Callback->new( callback => sub { return undef if $end; my $http_request = Net::Amazon::S3::Request::ListBucket->new( s3 => $self->client->s3, bucket => $self->name, marker => $marker, prefix => $prefix, )->http_request; my $xpc = $self->client->_send_request_xpc($http_request); my @objects; foreach my $node ( $xpc->findnodes('/s3:ListBucketResult/s3:Contents') ) { my $etag = $xpc->findvalue( "./s3:ETag", $node ); $etag =~ s/^"//; $etag =~ s/"$//; # storage_class => $xpc->findvalue( ".//s3:StorageClass", $node ), # owner_id => $xpc->findvalue( ".//s3:ID", $node ), # owner_displayname => # $xpc->findvalue( ".//s3:DisplayName", $node ), push @objects, Net::Amazon::S3::Client::Object->new( client => $self->client, bucket => $self, key => $xpc->findvalue( './s3:Key', $node ), last_modified => $xpc->findvalue( './s3:LastModified', $node ), etag => $etag, size => $xpc->findvalue( './s3:Size', $node ), ); } return undef unless @objects; my $is_truncated = scalar $xpc->findvalue( '/s3:ListBucketResult/s3:IsTruncated') eq 'true' ? 
1 : 0; $end = 1 unless $is_truncated; $marker = $xpc->findvalue('/s3:ListBucketResult/s3:NextMarker') || $objects[-1]->key; return \@objects; } ); } sub delete_multi_object { my $self = shift; my @objects = @_; return unless( scalar(@objects) ); my $http_request = Net::Amazon::S3::Request::DeleteMultiObject->new( s3 => $self->client->s3, bucket => $self->name, keys => [ map($_->key, @objects) ], )->http_request; return $self->client->_send_request($http_request); } sub object { my ( $self, %conf ) = @_; return Net::Amazon::S3::Client::Object->new( client => $self->client, bucket => $self, %conf, ); } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Client::Bucket - An easy-to-use Amazon S3 client bucket =head1 VERSION version 0.59 =head1 SYNOPSIS # return the bucket name print $bucket->name . "\n"; # return the bucket location constraint print "Bucket is in the " . $bucket->location_constraint . "\n"; # return the ACL XML my $acl = $bucket->acl; # list objects in the bucket # this returns a L object which returns a # stream of L objects, as it may # have to issue multiple API requests my $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { ... } } # or list by a prefix my $prefix_stream = $bucket->list( { prefix => 'logs/' } ); # returns a L, which can then # be used to get or put my $object = $bucket->object( key => 'this is the key' ); # delete the bucket (it must be empty) $bucket->delete; =head1 DESCRIPTION This module represents buckets. =for test_synopsis no strict 'vars' =head1 METHODS =head2 acl # return the ACL XML my $acl = $bucket->acl; =head2 delete # delete the bucket (it must be empty) $bucket->delete; =head2 list # list objects in the bucket # this returns a L object which returns a # stream of L objects, as it may # have to issue multiple API requests my $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { ... } } # or list by a prefix my $prefix_stream = $bucket->list( { prefix => 'logs/' } ); =head2 location_constraint # return the bucket location constraint print "Bucket is in the " . $bucket->location_constraint . "\n"; =head2 name # return the bucket name print $bucket->name . "\n"; =head2 object # returns a L, which can then # be used to get or put my $object = $bucket->object( key => 'this is the key' ); =head2 delete_multi_object # delete multiple objects using a multi object delete operation # Accepts a list of L objects. # Limited to a maximum of 1000 objects in one operation $bucket->delete_multi_object($object1, $object2) =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Object.pm100755000765000024 3604012121037303 21203 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Clientpackage Net::Amazon::S3::Client::Object; { $Net::Amazon::S3::Client::Object::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; use DateTime::Format::HTTP; use Digest::MD5 qw(md5 md5_hex); use Digest::MD5::File qw(file_md5 file_md5_hex); use File::stat; use MIME::Base64; use Moose::Util::TypeConstraints; use MooseX::Types::DateTime::MoreCoercions 0.07 qw( DateTime ); use IO::File 1.14; # ABSTRACT: An easy-to-use Amazon S3 client object enum 'AclShort' => qw(private public-read public-read-write authenticated-read); has 'client' => ( is => 'ro', isa => 'Net::Amazon::S3::Client', required => 1 ); has 'bucket' => ( is => 'ro', isa => 'Net::Amazon::S3::Client::Bucket', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'etag' => ( is => 'ro', isa => 'Etag', required => 0 ); has 'size' => ( is => 'ro', isa => 'Int', required => 0 ); has 'last_modified' => ( is => 'ro', isa => DateTime, coerce => 1, required => 0 ); has 'expires' => ( is => 'rw', isa => DateTime, coerce => 1, required => 0 ); has 'acl_short' => ( is => 'ro', isa => 'AclShort', required => 0, default => 'private' ); has 'content_type' => ( is => 'ro', isa => 'Str', required => 0, default => 'binary/octet-stream' ); has 'content_disposition' => ( is => 'ro', isa => 'Str', required => 0, ); has 'content_encoding' => ( is => 'ro', isa => 'Str', required => 0, ); __PACKAGE__->meta->make_immutable; sub exists { my $self = shift; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'HEAD', )->http_request; my $http_response = $self->client->_send_request_raw($http_request); return $http_response->code == 200 ? 
1 : 0; } sub get { my $self = shift; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request; my $http_response = $self->client->_send_request($http_request); my $content = $http_response->content; my $md5_hex = md5_hex($content); my $etag = $self->etag || $self->_etag($http_response); confess 'Corrupted download' if( !$self->_is_multipart_etag($etag) && $etag ne $md5_hex); return $content; } sub get_filename { my ($self, $filename) = @_; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request; my $http_response = $self->client->_send_request($http_request, $filename); my $md5_hex = file_md5_hex($filename); my $etag = $self->etag || $self->_etag($http_response); confess 'Corrupted download' if(!$self->_is_multipart_etag($etag) && $etag ne $md5_hex); } sub put { my ( $self, $value ) = @_; my $md5 = md5($value); my $md5_hex = unpack( 'H*', $md5 ); my $md5_base64 = encode_base64($md5); chomp $md5_base64; my $conf = { 'Content-MD5' => $md5_base64, 'Content-Length' => length $value, 'Content-Type' => $self->content_type, }; if ( $self->expires ) { $conf->{Expires} = DateTime::Format::HTTP->format_datetime( $self->expires ); } if ( $self->content_encoding ) { $conf->{'Content-Encoding'} = $self->content_encoding; } if ( $self->content_disposition ) { $conf->{'Content-Disposition'} = $self->content_disposition; } my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, value => $value, headers => $conf, acl_short => $self->acl_short, )->http_request; my $http_response = $self->client->_send_request($http_request); confess 'Error uploading' if $http_response->code != 200; my $etag = $self->_etag($http_response); confess 'Corrupted upload' if $etag ne $md5_hex; } sub put_filename { my ( $self, $filename ) = @_; my $md5_hex = $self->etag || file_md5_hex($filename); my $size = $self->size; unless ($size) { my $stat = stat($filename) || confess("No $filename: $!"); $size = $stat->size; } my $md5 = pack( 'H*', $md5_hex ); my $md5_base64 = encode_base64($md5); chomp $md5_base64; my $conf = { 'Content-MD5' => $md5_base64, 'Content-Length' => $size, 'Content-Type' => $self->content_type, }; if ( $self->expires ) { $conf->{Expires} = DateTime::Format::HTTP->format_datetime( $self->expires ); } if ( $self->content_encoding ) { $conf->{'Content-Encoding'} = $self->content_encoding; } if ( $self->content_disposition ) { $conf->{'Content-Disposition'} = $self->content_disposition; } my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, value => $self->_content_sub($filename), headers => $conf, acl_short => $self->acl_short, )->http_request; my $http_response = $self->client->_send_request($http_request); confess 'Error uploading' . 
$http_response->as_string if $http_response->code != 200; confess 'Corrupted upload' if $self->_etag($http_response) ne $md5_hex; } sub delete { my $self = shift; my $http_request = Net::Amazon::S3::Request::DeleteObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, )->http_request; $self->client->_send_request($http_request); } sub initiate_multipart_upload { my $self = shift; my $http_request = Net::Amazon::S3::Request::InitiateMultipartUpload->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, )->http_request; my $xpc = $self->client->_send_request_xpc($http_request); my $upload_id = $xpc->findvalue('//s3:UploadId'); confess "Couldn't get upload id from initiate_multipart_upload response XML" unless $upload_id; return $upload_id; } sub complete_multipart_upload { my $self = shift; my %args = ref($_[0]) ? %{$_[0]} : @_; #set default args $args{s3} = $self->client->s3; $args{key} = $self->key; $args{bucket} = $self->bucket->name; my $http_request = Net::Amazon::S3::Request::CompleteMultipartUpload->new(%args)->http_request; return $self->client->_send_request($http_request); } sub put_part { my $self = shift; my %args = ref($_[0]) ? %{$_[0]} : @_; #set default args $args{s3} = $self->client->s3; $args{key} = $self->key; $args{bucket} = $self->bucket->name; #work out content length header $args{headers}->{'Content-Length'} = length $args{value} if(defined $args{value}); my $http_request = Net::Amazon::S3::Request::PutPart->new(%args)->http_request; return $self->client->_send_request($http_request); } sub list_parts { confess "Not implemented"; # TODO - Net::Amazon::S3::Request:ListParts is implemented, but need to # define better interface at this level. Currently returns raw XML. } sub uri { my $self = shift; return Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request->uri; } sub query_string_authentication_uri { my $self = shift; return Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->query_string_authentication_uri( $self->expires->epoch ); } sub _content_sub { my $self = shift; my $filename = shift; my $stat = stat($filename); my $remaining = $stat->size; my $blksize = $stat->blksize || 4096; confess "$filename not a readable file with fixed size" unless -r $filename and ( -f _ || $remaining ); my $fh = IO::File->new( $filename, 'r' ) or confess "Could not open $filename: $!"; $fh->binmode; return sub { my $buffer; # upon retries the file is closed and we must reopen it unless ( $fh->opened ) { $fh = IO::File->new( $filename, 'r' ) or confess "Could not open $filename: $!"; $fh->binmode; $remaining = $stat->size; } # warn "read remaining $remaining"; unless ( my $read = $fh->read( $buffer, $blksize ) ) { # warn "read $read buffer $buffer remaining $remaining"; confess "Error while reading upload content $filename ($remaining remaining) $!" if $! 
and $remaining; # otherwise, we found EOF $fh->close or confess "close of upload content $filename failed: $!"; $buffer ||= '' ; # LWP expects an emptry string on finish, read returns 0 } $remaining -= length($buffer); return $buffer; }; } sub _etag { my ( $self, $http_response ) = @_; my $etag = $http_response->header('ETag'); if ($etag) { $etag =~ s/^"//; $etag =~ s/"$//; } return $etag; } sub _is_multipart_etag { my ( $self, $etag ) = @_; return 1 if($etag =~ /\-\d+$/); } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Client::Object - An easy-to-use Amazon S3 client object =head1 VERSION version 0.59 =head1 SYNOPSIS # show the key print $object->key . "\n"; # show the etag of an existing object (if fetched by listing # a bucket) print $object->etag . "\n"; # show the size of an existing object (if fetched by listing # a bucket) print $object->size . "\n"; # to create a new object my $object = $bucket->object( key => 'this is the key' ); $object->put('this is the value'); # to get the vaue of an object my $value = $object->get; # to see if an object exists if ($object->exists) { ... } # to delete an object $object->delete; # to create a new object which is publically-accessible with a # content-type of text/plain which expires on 2010-01-02 my $object = $bucket->object( key => 'this is the public key', acl_short => 'public-read', content_type => 'text/plain', expires => '2010-01-02', ); $object->put('this is the public value'); # return the URI of a publically-accessible object my $uri = $object->uri; # upload a file my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', ); $object->put_filename('hat.jpg'); # upload a file if you already know its md5_hex and size my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', etag => $md5_hex, size => $size, ); $object->put_filename('hat.jpg'); # download the value of the object into a file my $object = $bucket->object( key => 'images/my_hat.jpg' ); $object->get_filename('hat_backup.jpg'); # use query string authentication my $object = $bucket->object( key => 'images/my_hat.jpg', expires => '2009-03-01', ); my $uri = $object->query_string_authentication_uri(); =head1 DESCRIPTION This module represents objects in buckets. =for test_synopsis no strict 'vars' =head1 METHODS =head2 etag # show the etag of an existing object (if fetched by listing # a bucket) print $object->etag . "\n"; =head2 delete # to delete an object $object->delete; =head2 exists # to see if an object exists if ($object->exists) { ... } =head2 get # to get the vaue of an object my $value = $object->get; =head2 get_filename # download the value of the object into a file my $object = $bucket->object( key => 'images/my_hat.jpg' ); $object->get_filename('hat_backup.jpg'); =head2 key # show the key print $object->key . "\n"; =head2 put # to create a new object my $object = $bucket->object( key => 'this is the key' ); $object->put('this is the value'); # to create a new object which is publically-accessible with a # content-type of text/plain my $object = $bucket->object( key => 'this is the public key', acl_short => 'public-read', content_type => 'text/plain', ); $object->put('this is the public value'); You may also set Content-Encoding using content_encoding, and Content-Disposition using content_disposition. 
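For example, a minimal sketch (the key, header values and C<$gzipped_value> are illustrative):

    # upload a pre-compressed value and control the filename used on download
    my $object = $bucket->object(
        key                 => 'docs/report.txt.gz',
        content_type        => 'text/plain',
        content_encoding    => 'gzip',
        content_disposition => 'attachment; filename="report.txt"',
    );
    $object->put($gzipped_value);
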
=head2 put_filename # upload a file my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', ); $object->put_filename('hat.jpg'); # upload a file if you already know its md5_hex and size my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', etag => $md5_hex, size => $size, ); $object->put_filename('hat.jpg'); You may also set Content-Encoding using content_encoding, and Content-Disposition using content_disposition. =head2 query_string_authentication_uri # use query string authentication my $object = $bucket->object( key => 'images/my_hat.jpg', expires => '2009-03-01', ); my $uri = $object->query_string_authentication_uri(); =head2 size # show the size of an existing object (if fetched by listing # a bucket) print $object->size . "\n"; =head2 uri # return the URI of a publically-accessible object my $uri = $object->uri; =head2 initiate_multipart_upload #initiate a new multipart upload for this object my $object = $bucket->object( key => 'massive_video.avi' ); my $upload_id = $object->initiate_multipart_upload; =head2 put_part #add a part to a multipart upload my $put_part_response = $object->put_part( upload_id => $upload_id, part_number => 1, value => $chunk_content, ); my $part_etag = $put_part_response->header('ETag') Returns an L object. It is necessary to keep the ETags for each part, as these are required to complete the upload. =head2 complete_multipart_upload #complete a multipart upload $object->complete_multipart_upload( upload_id => $upload_id, etags => [$etag_1, $etag_2], part_numbers => [$part_number_1, $part_number2], ); The etag and part_numbers parameters are ordered lists specifying the part numbers and ETags for each individual part of the multipart upload. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Request000755000765000024 012121037303 17463 5ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3PutPart.pm100755000765000024 530712121037303 21570 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::PutPart; { $Net::Amazon::S3::Request::PutPart::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'value' => ( is => 'ro', isa => 'Str|CodeRef', required => 0 ); has 'upload_id' => ( is => 'ro', isa => 'Str', required => 1 ); has 'part_number' => ( is => 'ro', isa => 'Int', required => 1 ); has 'copy_source_bucket' => ( is => 'ro', isa => 'Str', required => 0 ); has 'copy_source_key' => ( is => 'ro', isa => 'Str', required => 0 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = $self->headers; if ( $self->acl_short ) { $headers->{'x-amz-acl'} = $self->acl_short; } if(defined $self->copy_source_bucket && defined $self->copy_source_key){ $headers->{'x-amz-copy-source'} = $self->copy_source_bucket.'/'.$self->copy_source_key; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->_uri($self->key) . '?partNumber=' . $self->part_number . '&uploadId=' . $self->upload_id, headers => $headers, content => $self->value // '', )->http_request; } 1; =pod =head1 NAME Net::Amazon::S3::Request::PutPart - An internal class to put part of a multipart upload =head1 VERSION version 0.59 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::PutPart->new( s3 => $s3, bucket => $bucket, key => $key, value => $value, acl_short => $acl_short, headers => $conf, part_number => $part_number, upload_id => $upload_id )->http_request; =head1 DESCRIPTION This module puts an object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut __END__ # ABSTRACT: An internal class to put part of a multipart upload GetObject.pm100644000765000024 356312121037303 22036 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::GetObject; { $Net::Amazon::S3::Request::GetObject::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'method' => ( is => 'ro', isa => 'HTTPMethod', required => 1 ); # ABSTRACT: An internal class to get an object __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => $self->method, path => $self->_uri( $self->key ), )->http_request; } sub query_string_authentication_uri { my ( $self, $expires ) = @_; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => $self->method, path => $self->_uri( $self->key ), )->query_string_authentication_uri($expires); } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Request::GetObject - An internal class to get an object =head1 VERSION version 0.59 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $s3, bucket => $bucket, key => $key, method => 'GET', )->http_request; =head1 DESCRIPTION This module gets an object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head2 query_string_authentication_uri This method returns query string authentication URI. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut ListParts.pm100755000765000024 310212121037303 22105 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::ListParts; { $Net::Amazon::S3::Request::ListParts::VERSION = '0.59'; } # ABSTRACT: List the parts in a multipart upload. use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'upload_id' => ( is => 'ro', isa => 'Str', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = $self->headers; if ( $self->acl_short ) { $headers->{'x-amz-acl'} = $self->acl_short; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $self->_uri( $self->key ).'?uploadId='.$self->upload_id, headers => $self->headers, )->http_request; } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Request::ListParts - List the parts in a multipart upload. =head1 VERSION version 0.59 =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut PutObject.pm100644000765000024 367712121037303 22075 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::PutObject; { $Net::Amazon::S3::Request::PutObject::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to put an object has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'value' => ( is => 'ro', isa => 'Str|CodeRef', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = $self->headers; if ( $self->acl_short ) { $headers->{'x-amz-acl'} = $self->acl_short; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->_uri( $self->key ), headers => $self->headers, content => $self->value, )->http_request; } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Request::PutObject - An internal class to put an object =head1 VERSION version 0.59 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $s3, bucket => $bucket, key => $key, value => $value, acl_short => $acl_short, headers => $conf, )->http_request; =head1 DESCRIPTION This module puts an object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut ListBucket.pm100644000765000024 433612121037303 22240 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::ListBucket; { $Net::Amazon::S3::Request::ListBucket::VERSION = '0.59'; } use Moose 0.85; use MooseX::StrictConstructor 0.16; use URI::Escape qw(uri_escape_utf8); extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to list a bucket has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'prefix' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); has 'delimiter' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); has 'max_keys' => ( is => 'ro', isa => 'Maybe[Int]', required => 0, default => 1000 ); has 'marker' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $path = $self->bucket . "/"; my @post; foreach my $method ( qw(prefix delimiter max_keys marker) ) { my $value = $self->$method; next unless $value; my $key = $method; $key = 'max-keys' if $method eq 'max_keys'; push @post, $key . "=" . $self->_urlencode($value); } if (@post) { $path .= '?' . 
join( '&', @post ); } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $path, )->http_request; } sub _urlencode { my ( $self, $unencoded ) = @_; return uri_escape_utf8( $unencoded, '^A-Za-z0-9_-' ); } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Request::ListBucket - An internal class to list a bucket =head1 VERSION version 0.59 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::ListBucket->new( s3 => $s3, bucket => $bucket, delimiter => $delimiter, max_keys => $max_keys, marker => $marker, )->http_request; =head1 DESCRIPTION This module lists a bucket. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut CreateBucket.pm100644000765000024 407512121037303 22530 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::CreateBucket; { $Net::Amazon::S3::Request::CreateBucket::VERSION = '0.59'; } use Moose 0.85; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to create a bucket has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'location_constraint' => ( is => 'ro', isa => 'Maybe[LocationConstraint]', required => 0 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = ( $self->acl_short ) ? { 'x-amz-acl' => $self->acl_short } : {}; my $content = ''; if ( defined $self->location_constraint && $self->location_constraint eq 'EU' ) { $content = "" . $self->location_constraint . ""; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->bucket . '/', headers => $headers, content => $content, )->http_request; } 1; __END__ =pod =head1 NAME Net::Amazon::S3::Request::CreateBucket - An internal class to create a bucket =head1 VERSION version 0.59 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::CreateBucket->new( s3 => $s3, bucket => $bucket, acl_short => $acl_short, location_constraint => $location_constraint, )->http_request; =head1 DESCRIPTION This module creates a bucket. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Pedro Figueiredo =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut DeleteBucket.pm100644000765000024 246112121037303 22524 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Requestpackage Net::Amazon::S3::Request::DeleteBucket; { $Net::Amazon::S3::Request::DeleteBucket::VERSION = '0.59'; } use Moose 0.85; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to delete a bucket has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'DELETE', path => $self->bucket . 
CreateBucket.pm100644000765000024 407512121037303 22530 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::CreateBucket;
{
  $Net::Amazon::S3::Request::CreateBucket::VERSION = '0.59';
}
use Moose 0.85;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to create a bucket

has 'bucket'    => ( is => 'ro', isa => 'BucketName',      required => 1 );
has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 );
has 'location_constraint' =>
    ( is => 'ro', isa => 'Maybe[LocationConstraint]', required => 0 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    my $headers
        = ( $self->acl_short ) ? { 'x-amz-acl' => $self->acl_short } : {};

    # Only a non-default region needs an explicit constraint document.
    my $content = '';
    if ( defined $self->location_constraint
        && $self->location_constraint eq 'EU' )
    {
        $content
            = "<CreateBucketConfiguration><LocationConstraint>"
            . $self->location_constraint
            . "</LocationConstraint></CreateBucketConfiguration>";
    }

    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'PUT',
        path    => $self->bucket . '/',
        headers => $headers,
        content => $content,
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::CreateBucket - An internal class to create a bucket

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::CreateBucket->new(
    s3                  => $s3,
    bucket              => $bucket,
    acl_short           => $acl_short,
    location_constraint => $location_constraint,
  )->http_request;

=head1 DESCRIPTION

This module creates a bucket.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

DeleteBucket.pm100644000765000024 246112121037303 22524 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::DeleteBucket;
{
  $Net::Amazon::S3::Request::DeleteBucket::VERSION = '0.59';
}
use Moose 0.85;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to delete a bucket

has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'DELETE',
        path   => $self->bucket . '/',
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::DeleteBucket - An internal class to delete a bucket

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::DeleteBucket->new(
    s3     => $s3,
    bucket => $bucket,
  )->http_request;

=head1 DESCRIPTION

This module deletes a bucket.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

DeleteObject.pm100644000765000024 266412121037303 22522 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::DeleteObject;
{
  $Net::Amazon::S3::Request::DeleteObject::VERSION = '0.59';
}
use Moose 0.85;
use Moose::Util::TypeConstraints;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to delete an object

has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 );
has 'key'    => ( is => 'ro', isa => 'Str',        required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'DELETE',
        path   => $self->_uri( $self->key ),
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::DeleteObject - An internal class to delete an object

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::DeleteObject->new(
    s3     => $s3,
    bucket => $bucket,
    key    => $key,
  )->http_request;

=head1 DESCRIPTION

This module deletes an object.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut
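For a bucket created outside the classic US region, CreateBucket attaches the location-constraint document built in the code above; for the default region the body stays empty. A short illustrative sketch, with placeholder names and assuming $s3 is configured as in the first sketch:

  my $http_request = Net::Amazon::S3::Request::CreateBucket->new(
      s3                  => $s3,
      bucket              => 'my-eu-bucket',
      acl_short           => 'private',
      location_constraint => 'EU',
  )->http_request;

  # $http_request->content should now hold the
  # <CreateBucketConfiguration>...<LocationConstraint>EU</LocationConstraint>...
  # document; for a US Standard bucket it would be the empty string.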
=cut

ListAllMyBuckets.pm100644000765000024 237012121037303 23356 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::ListAllMyBuckets;
{
  $Net::Amazon::S3::Request::ListAllMyBuckets::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to list all buckets

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'GET',
        path   => '',
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::ListAllMyBuckets - An internal class to list all buckets

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request
      = Net::Amazon::S3::Request::ListAllMyBuckets->new( s3 => $s3 )
      ->http_request;

=head1 DESCRIPTION

This module lists all buckets.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

DeleteMultiObject.pm100755000765000024 511012121037303 23525 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::DeleteMultiObject;
{
  $Net::Amazon::S3::Request::DeleteMultiObject::VERSION = '0.59';
}
use Moose 0.85;
use Digest::MD5 qw/md5 md5_hex/;
use MIME::Base64;
use Carp qw/croak/;
use XML::LibXML;    # XML::LibXML::Document is used below
extends 'Net::Amazon::S3::Request';

has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 );
has 'keys'   => ( is => 'ro', isa => 'ArrayRef',   required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    # croak if we get a request for over 1000 objects
    croak "The maximum number of keys is 1000"
        if ( scalar( @{ $self->keys } ) > 1000 );

    # build XML doc
    my $xml_doc      = XML::LibXML::Document->new( '1.0', 'UTF-8' );
    my $root_element = $xml_doc->createElement('Delete');
    $xml_doc->addChild($root_element);
    $root_element->appendTextChild( 'Quiet' => 'true' );

    # add content
    foreach my $key ( @{ $self->keys } ) {
        my $obj_element = $xml_doc->createElement('Object');
        $obj_element->appendTextChild( 'Key' => $key );
        $root_element->addChild($obj_element);
    }

    my $content    = $xml_doc->toString;
    my $md5        = md5($content);
    my $md5_base64 = encode_base64($md5);
    chomp $md5_base64;

    my $header_spec = {
        'Content-MD5'    => $md5_base64,
        'Content-Length' => length $content,
        'Content-Type'   => 'application/xml'
    };

    # build signed request
    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'POST',
        path    => $self->bucket . '/?delete',
        content => $content,
        headers => $header_spec,
    )->http_request;
}

1;

=pod

=head1 NAME

Net::Amazon::S3::Request::DeleteMultiObject - An internal class to delete multiple objects from a bucket

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::DeleteMultiObject->new(
    s3     => $s3,
    bucket => $bucket,
    keys   => [ $key1, $key2 ],
  )->http_request;

=head1 DESCRIPTION

This module deletes multiple objects from a bucket.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

__END__

# ABSTRACT: An internal class to delete multiple objects from a bucket
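Because DeleteMultiObject accepts at most 1000 keys per request, a larger key list has to be sent in batches. A sketch of that loop, assuming $s3 is configured and @keys_to_delete already holds the key names (both are placeholders):

  use LWP::UserAgent;

  while ( my @batch = splice @keys_to_delete, 0, 1000 ) {
      my $http_request = Net::Amazon::S3::Request::DeleteMultiObject->new(
          s3     => $s3,
          bucket => 'my-bucket',
          keys   => \@batch,
      )->http_request;

      # Each request carries its own Content-MD5 header, computed above.
      my $response = LWP::UserAgent->new->request($http_request);
      die $response->status_line unless $response->is_success;
  }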
GetBucketAccessControl.pm100644000765000024 265112121037303 24525 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::GetBucketAccessControl;
{
  $Net::Amazon::S3::Request::GetBucketAccessControl::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to get a bucket's access control

has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'GET',
        path   => $self->_uri('') . '?acl',
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::GetBucketAccessControl - An internal class to get a bucket's access control

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::GetBucketAccessControl->new(
    s3     => $s3,
    bucket => $bucket,
  )->http_request;

=head1 DESCRIPTION

This module gets a bucket's access control.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

GetObjectAccessControl.pm100644000765000024 301612121037303 24512 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::GetObjectAccessControl;
{
  $Net::Amazon::S3::Request::GetObjectAccessControl::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to get an object's access control

has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 );
has 'key'    => ( is => 'ro', isa => 'Str',        required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'GET',
        path   => $self->_uri( $self->key ) . '?acl',
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::GetObjectAccessControl - An internal class to get an object's access control

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::GetObjectAccessControl->new(
    s3     => $s3,
    bucket => $bucket,
    key    => $key,
  )->http_request;

=head1 DESCRIPTION

This module gets an object's access control.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut
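Both ACL getters are plain GET requests on the ?acl subresource; the response body is the AccessControlPolicy XML document. A small sketch of fetching an object's ACL directly, assuming $s3 is configured and using placeholder bucket and key names:

  use LWP::UserAgent;

  my $http_request = Net::Amazon::S3::Request::GetObjectAccessControl->new(
      s3     => $s3,
      bucket => 'my-bucket',
      key    => 'hello.txt',
  )->http_request;

  my $response = LWP::UserAgent->new->request($http_request);
  print $response->content if $response->is_success;   # the ACL XML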
SetBucketAccessControl.pm100644000765000024 406412121037303 24541 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::SetBucketAccessControl;
{
  $Net::Amazon::S3::Request::SetBucketAccessControl::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to set a bucket's access control

has 'bucket'    => ( is => 'ro', isa => 'BucketName',      required => 1 );
has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 );
has 'acl_xml'   => ( is => 'ro', isa => 'Maybe[Str]',      required => 0 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    unless ( $self->acl_xml || $self->acl_short ) {
        confess "need either acl_xml or acl_short";
    }

    if ( $self->acl_xml && $self->acl_short ) {
        confess "can not provide both acl_xml and acl_short";
    }

    my $headers
        = ( $self->acl_short ) ? { 'x-amz-acl' => $self->acl_short } : {};

    my $xml = $self->acl_xml || '';

    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'PUT',
        path    => $self->_uri('') . '?acl',
        headers => $headers,
        content => $xml,
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::SetBucketAccessControl - An internal class to set a bucket's access control

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::SetBucketAccessControl->new(
    s3        => $s3,
    bucket    => $bucket,
    acl_short => $acl_short,
    acl_xml   => $acl_xml,
  )->http_request;

=head1 DESCRIPTION

This module sets a bucket's access control.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

SetObjectAccessControl.pm100644000765000024 424412121037303 24532 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::SetObjectAccessControl;
{
  $Net::Amazon::S3::Request::SetObjectAccessControl::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to set an object's access control

has 'bucket'    => ( is => 'ro', isa => 'BucketName',      required => 1 );
has 'key'       => ( is => 'ro', isa => 'Str',             required => 1 );
has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 );
has 'acl_xml'   => ( is => 'ro', isa => 'Maybe[Str]',      required => 0 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    unless ( $self->acl_xml || $self->acl_short ) {
        confess "need either acl_xml or acl_short";
    }

    if ( $self->acl_xml && $self->acl_short ) {
        confess "can not provide both acl_xml and acl_short";
    }

    my $headers
        = ( $self->acl_short ) ? { 'x-amz-acl' => $self->acl_short } : {};

    my $xml = $self->acl_xml || '';

    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'PUT',
        path    => $self->_uri( $self->key ) . '?acl',
        headers => $headers,
        content => $xml,
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::SetObjectAccessControl - An internal class to set an object's access control

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::SetObjectAccessControl->new(
    s3        => $s3,
    bucket    => $bucket,
    key       => $key,
    acl_short => $acl_short,
    acl_xml   => $acl_xml,
  )->http_request;

=head1 DESCRIPTION

This module sets an object's access control.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut
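Both setters require exactly one of acl_short (a canned ACL, sent as the x-amz-acl header) or acl_xml (a full AccessControlPolicy document, sent as the request body); supplying neither or both raises an error, as the code above shows. The canned form is the common case; an illustrative sketch with placeholder names, assuming $s3 is configured:

  my $http_request = Net::Amazon::S3::Request::SetObjectAccessControl->new(
      s3        => $s3,
      bucket    => 'my-bucket',
      key       => 'hello.txt',
      acl_short => 'public-read',
  )->http_request;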
=cut

CompleteMultipartUpload.pm100755000765000024 570612121037303 25013 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::CompleteMultipartUpload;
{
  $Net::Amazon::S3::Request::CompleteMultipartUpload::VERSION = '0.59';
}
use Moose 0.85;
use Digest::MD5 qw/md5 md5_hex/;
use MIME::Base64;
use Carp qw/croak/;
use XML::LibXML;
extends 'Net::Amazon::S3::Request';

has 'bucket'       => ( is => 'ro', isa => 'BucketName', required => 1 );
has 'etags'        => ( is => 'ro', isa => 'ArrayRef',   required => 1 );
has 'key'          => ( is => 'ro', isa => 'Str',        required => 1 );
has 'part_numbers' => ( is => 'ro', isa => 'ArrayRef',   required => 1 );
has 'upload_id'    => ( is => 'ro', isa => 'Str',        required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    croak "must have an equally sized list of etags and part numbers"
        unless scalar( @{ $self->part_numbers } )
        == scalar( @{ $self->etags } );

    # build XML doc
    my $xml_doc      = XML::LibXML::Document->new( '1.0', 'UTF-8' );
    my $root_element = $xml_doc->createElement('CompleteMultipartUpload');
    $xml_doc->addChild($root_element);

    # add content
    for ( my $i = 0; $i < scalar( @{ $self->part_numbers } ); $i++ ) {
        my $part = $xml_doc->createElement('Part');
        $part->appendTextChild( 'PartNumber' => $self->part_numbers->[$i] );
        $part->appendTextChild( 'ETag'       => $self->etags->[$i] );
        $root_element->addChild($part);
    }

    my $content    = $xml_doc->toString;
    my $md5        = md5($content);
    my $md5_base64 = encode_base64($md5);
    chomp $md5_base64;

    my $header_spec = {
        'Content-MD5'    => $md5_base64,
        'Content-Length' => length $content,
        'Content-Type'   => 'application/xml'
    };

    # build signed request
    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'POST',
        path    => $self->_uri( $self->key ) . '?uploadId=' . $self->upload_id,
        content => $content,
        headers => $header_spec,
    )->http_request;
}

1;

=pod

=head1 NAME

Net::Amazon::S3::Request::CompleteMultipartUpload - An internal class to complete a multipart upload

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::CompleteMultipartUpload->new(
    s3           => $s3,
    bucket       => $bucket,
    key          => $key,
    upload_id    => $upload_id,
    etags        => \@etags,
    part_numbers => \@part_numbers,
  )->http_request;

=head1 DESCRIPTION

This module completes a multipart upload.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

__END__

# ABSTRACT: An internal class to complete a multipart upload
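The part numbers and ETags are paired positionally into the CompleteMultipartUpload body, so the two arrays must line up with the parts as they were uploaded. A sketch of the final call, assuming $s3 is configured and that $upload_id and the ETags were collected when the upload was initiated and the parts were sent (all placeholders):

  my $http_request = Net::Amazon::S3::Request::CompleteMultipartUpload->new(
      s3           => $s3,
      bucket       => 'my-bucket',
      key          => 'large-file.bin',
      upload_id    => $upload_id,
      part_numbers => [ 1, 2, 3 ],
      etags        => [ $etag1, $etag2, $etag3 ],
  )->http_request;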
InitiateMultipartUpload.pm100755000765000024 360712121037303 25007 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::InitiateMultipartUpload;
{
  $Net::Amazon::S3::Request::InitiateMultipartUpload::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

has 'bucket'    => ( is => 'ro', isa => 'BucketName',      required => 1 );
has 'key'       => ( is => 'ro', isa => 'Str',             required => 1 );
has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 );
has 'headers' =>
    ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self    = shift;
    my $headers = $self->headers;

    if ( $self->acl_short ) {
        $headers->{'x-amz-acl'} = $self->acl_short;
    }

    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'POST',
        path    => $self->_uri( $self->key ) . '?uploads',
        headers => $self->headers,
    )->http_request;
}

1;

=pod

=head1 NAME

Net::Amazon::S3::Request::InitiateMultipartUpload - An internal class to begin a multipart upload

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::InitiateMultipartUpload->new(
    s3     => $s3,
    bucket => $bucket,
    key    => $key,
  )->http_request;

=head1 DESCRIPTION

This module begins a multipart upload.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

__END__

#ABSTRACT: An internal class to begin a multipart upload

GetBucketLocationConstraint.pm100644000765000024 272112121037303 25576 0ustar00pfigstaff000000000000Net-Amazon-S3-0.59/lib/Net/Amazon/S3/Request
package Net::Amazon::S3::Request::GetBucketLocationConstraint;
{
  $Net::Amazon::S3::Request::GetBucketLocationConstraint::VERSION = '0.59';
}
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to get a bucket's location constraint

has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'GET',
        path   => $self->_uri('') . '?location',
    )->http_request;
}

1;

__END__

=pod

=head1 NAME

Net::Amazon::S3::Request::GetBucketLocationConstraint - An internal class to get a bucket's location constraint

=head1 VERSION

version 0.59

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::GetBucketLocationConstraint->new(
    s3     => $s3,
    bucket => $bucket,
  )->http_request;

=head1 DESCRIPTION

This module gets a bucket's location constraint.

=for test_synopsis
no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Pedro Figueiredo

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2013 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut
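GetBucketLocationConstraint issues a GET on the ?location subresource; the response body is a small XML document naming the bucket's region (empty for the classic US region). A final sketch, assuming $s3 is configured as in the earlier examples and using a placeholder bucket name:

  use LWP::UserAgent;

  my $http_request = Net::Amazon::S3::Request::GetBucketLocationConstraint->new(
      s3     => $s3,
      bucket => 'my-bucket',
  )->http_request;

  my $response = LWP::UserAgent->new->request($http_request);
  print $response->content, "\n" if $response->is_success;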