==================================================
Net-Amazon-S3-0.80/CHANGES
==================================================

==================================================
Changes from 2014-04-12 00:00:00 +0000 to present.
==================================================

-----------------------------------------
version 0.80 at 2015-04-12 04:02:44 +0000
-----------------------------------------

Change: 23ebe246719a53fdd3677ae6585ab0ea4e070c47 Author: Rusty Conover Date: 2015-04-11 23:58:28 +0000
  Add gitignore

Change: 515a34059eefd3ad0eb4cf29aced7e5b07f4ed0a Author: Rusty Conover Date: 2015-04-11 23:55:09 +0000
  Bump to version 0.80 for new CPAN release; change repo references. Fix a few line-ending and indentation problems. Change to generating the changelog from git.

Change: 9301ac886269c1bbe3f9e3dba1ad6ce583212dc6 Author: Ali Anari Date: 2015-04-11 23:44:46 +0000
  Fixed a bug in ETag validation

Change: acc316c18401651dbced6297918b0d13f8f0ca46 Author: Rusty Conover Date: 2014-11-04 00:14:12 +0000
  Add get_callback and make copy_key respect x-amz-metadata-directive. get_callback retrieves a key to a callback; pass the function reference as the second parameter. If you would rather have key metadata copied than replaced, define x-amz-metadata-directive when calling copy_key.

Change: b2966b9742872328b70dcf1357e15a10fc748849 Author: Rusty Conover Date: 2014-10-16 17:26:30 +0000
  Fix typo

Change: eea3098513cc03a6d6389be7c7c33f48933966e4 Author: Rusty Conover Date: 2014-10-15 00:11:26 +0000
  Merge branch 'iam_roles' into best-merge

Change: 17f337c8648c61cc53022029f4601002a5c399fb Author: Rusty Conover Date: 2014-10-15 00:09:37 +0000
  Add support for IAM roles. Add a new option, use_iam_role, to use IAM-role-based temporary credentials for all requests. The access_key and secret_key parameters are no longer required.

Change: edc14f9dfedb3ad2ce61d15e9706488cc5ffa4f1 Author: Rusty Conover Date: 2014-09-28 23:21:00 +0000
  Remove broken bucket-as-hostname detection code. The code that detects whether a bucket can be used as a hostname is non-optimal for SSL. All buckets can be accessed via an endpoint plus a bucket name, or via a hostname. When using SSL to access buckets whose names contain periods, the wildcard certificate validation does not match, causing SSL errors. Accessing buckets via hostname also defeats connection caching, since every S3 bucket gets its own connection to a different hostname. This change does require the correct S3 endpoint to be specified for a bucket if it is not in the US standard region. Endpoints are listed here: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

Change: 75e65d9c595fc6b2eca62bf09f2ed54e079067ad Author: Rusty Conover Date: 2014-06-16 14:37:40 +0000
  Add support for retrieving keys named "0". Previously such keys would fail during a bucket list.

Change: 6772a9f636c100b3b382dd99e725974004a03852 Author: Rusty Conover Date: 2014-06-04 23:27:32 +0000
  Add support for delete_multi_object to handle more than 1000 keys. Also allow it to handle plain string keys in the bucket along with Net::S3::Client::Objects.

Change: 0017a86e924788822522a81d1e7f8a4f880201c9 Author: Rusty Conover Date: 2014-06-04 23:26:30 +0000
  Revert "fix synopsis for s3cl sync_up" (wrong changelog entry). This reverts commit 36fa3b1d658c88f6dfbe4485886b262bf95b5a31.
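The IAM-role and multi-object-delete entries above can be sketched together as follows. This is a minimal illustration, not code from the distribution: the bucket and key names are hypothetical, and running it requires an EC2 instance with an attached IAM role.

```perl
use strict;
use warnings;
use Net::Amazon::S3;

# With use_iam_role (new in 0.80), temporary credentials come from the
# instance's IAM role, so aws_access_key_id / aws_secret_access_key are
# no longer required.
my $s3 = Net::Amazon::S3->new(
    use_iam_role => 1,
    retry        => 1,
);

# 'my-example-bucket' is a hypothetical bucket name.
my $bucket = $s3->bucket('my-example-bucket');

# delete_multi_object accepts plain string keys, and per the entry above
# it splits lists of more than 1000 keys (the S3 per-call limit) into
# multiple requests.
$bucket->delete_multi_object('logs/2014-06-01.gz', 'logs/2014-06-02.gz');
```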
Change: 36fa3b1d658c88f6dfbe4485886b262bf95b5a31 Author: perlpong Date: 2014-06-04 23:20:13 +0000
  fix synopsis for s3cl sync_up

Change: 172e28e0b26ac8173acf8629481a8648ea68c342 Author: Rusty Conover Date: 2014-06-04 22:33:36 +0000
  Merge change to fix compatibility with perl 5.8

Change: f8a2a4b45e9427f6ce9fd6b12f0cf96cc18b5362 Author: Panu Ervamaa Date: 2014-03-24 14:48:59 +0000
  Merge branch 'master' of github.com:gabiruh/net-amazon-s3. No actual changes, just marking it as merged.

Change: ae2d9aa9cfdfe3e979c0bab4c19497d42f4a21a3 Author: Panu Ervamaa Date: 2014-03-24 14:38:25 +0000
  correct plan in t/02client.t

Change: 7835760d9b981a6c599c8ae9d34da281b3a009ac Author: Panu Ervamaa Date: 2014-03-24 14:23:09 +0000
  fix bucket creation in t/02client.t: $client->bucket() creates just the local object, not on AWS. make lc() precedence explicit and consistent with t/01api.t

Change: 14afb92eb4219d3d120125bcba8239e5de505845 Author: Panu Ervamaa Date: 2014-03-24 14:23:05 +0000
  use LWP >= 6.03 to support proper 100-continue with content ref body. Obsolete and broken implementation of _send_request_expect_nothing_probed() removed. The request path for the HEAD request was incorrect: it didn't include the bucketname, that has been moved to hostname prefix at earlier stage.

Change: 01a0cd1ea2ff2d0b3e5b5a4b54d2f128494e5409 Author: Panu Ervamaa Date: 2014-03-24 14:23:00 +0000
  avoid 409 Conflict when running 01api.t. Using the same bucket name in another region after removing it from another fails with "409 Conflict" status. Quote from the AWS official support: "I strongly recommend against deleting a bucket that you want to keep. There's never any guarantee that you will be able to create a new bucket with the same bucket name."
Change: 686263d640a34e098506825edcca2c8eae2e3786 Author: Panu Ervamaa Date: 2014-03-24 14:22:56 +0000
  use README.md for tests

Change: a35b0a28cc510ae2f76cb98c5a3b6eb86d9c99e5 Author: Panu Ervamaa Date: 2014-03-24 14:19:46 +0000
  add blank line after each package statement to satisfy Dist::Zilla::Plugin::PkgVersion

Change: 8636491890c3a889f7ce7aa18af6f334f4ed0e0b Author: Panu Ervamaa Date: 2014-03-24 14:19:36 +0000
  Merge pull request #28 from github.com:toritori0318/net-amazon-s3 (feature/cache-control2)

Change: aa912cfefd8f55d86c26023f607f3022443209bd Author: Panu Ervamaa Date: 2014-03-24 14:15:58 +0000
  Merge pull request #24 from github.com:yuryu/net-amazon-s3 (feature/server-side-encryption)

Change: 92415ff572c5087991bcf9169d8d53bfead60011 Author: Panu Ervamaa Date: 2014-03-24 14:05:18 +0000
  Merge pull requests #23, #31 and #33 from github.com:rustyconover/net-amazon-s3

Change: 8c00ff869614e83f526d4ca31a72528ec77bf7f6 Author: Panu Ervamaa Date: 2014-03-24 14:05:18 +0000
  Merge pull request #22 from github.com:NewsNow/net-amazon-s3

Change: a8c699e78ce80d84770ca2404870989b6f4808e7 Author: Panu Ervamaa Date: 2014-03-24 14:05:17 +0000
  Merge pull request #21 from github.com:perlpong/net-amazon-s3

Change: 848a9cefd80dc8401225fb8517b7eb3104511aa6 Author: Panu Ervamaa Date: 2014-03-24 14:05:03 +0000
  Merge pull request #6 from github.com:arc/net-amazon-s3

Change: d5d51488987e61b744f29c2de3233f48be0293da Author: Panu Ervamaa Date: 2014-03-23 12:50:17 +0000
  Merge pull request #3 from github.com:dylanwh/net-amazon-s3

Change: 4dfc79db33fa11616f6afe2e64c126b36380c908 Author: Panu Ervamaa Date: 2014-03-23 12:42:59 +0000
  Merge pull request #1 from github.com:berler/net-amazon-s3

Change: 4623411176cacb11dbf639fc3293cd05a93ce8e8 Author: Gabriel Andrade Date: 2014-02-17 09:56:55 +0000
  Merge branch 'patch-3' of https://github.com/renatoaware/net-amazon-s3

Change: 20c292dfdccb1b3453e9d2a39c67632e9edda502 Author: Gabriel Andrade Date: 2014-02-17 09:56:37 +0000
  Merge branch 'patch-1' of https://github.com/renatoaware/net-amazon-s3

Change: d9440e362cef9b30caf868efc8be6d78146bb391 Author: Renato Santos Date: 2014-02-17 09:14:31 +0000
  Update Request.pm: the non-arrayref forms of enum and duck_type have been deprecated

Change: 6b725babb202ca567ba85dee9fa01762a555f7fe Author: Renato Santos Date: 2014-02-17 09:14:05 +0000
  Update HTTPRequest.pm: the non-arrayref forms of enum and duck_type have been deprecated

Change: 0fc0354a42f932e9d32fc4fa732ea2587bae6f43 Author: Renato Santos Date: 2014-02-17 09:13:14 +0000
  Update Object.pm: the non-arrayref forms of enum and duck_type have been deprecated

Change: 75486967f6e3efea9327f36425e19bcc15bfa980 Author: Gabriel Andrade Date: 2014-02-13 16:20:30 +0000
  Merge branch 'abort_multipart_upload'

Change: 2bbd0864910fea57da9e0e4a65ed542ded571dff Author: Rusty Conover Date: 2014-02-08 00:05:02 +0000
  Fix some warnings about line endings from perl critic, add a missing abstract to DeleteMultipleObjects.

Change: 98d2aae91a1b6d69fc3ca84fda2311def5630eec Author: Rusty Conover Date: 2014-02-07 23:42:28 +0000
  Merge remote-tracking branch 'origin/pr/31': fix for calculation of etags.

Change: 229dc38935549a6ceeb7140bb5fa7087720c2410 Author: Rusty Conover Date: 2014-02-07 23:40:46 +0000
  Merge remote-tracking branch 'origin/pr/23': multipart upload aborting change.

Change: 81bcd0b07c97fb6e11b9fddacd54b4719da540ae Author: Rusty Conover Date: 2014-02-07 23:33:22 +0000
  Merge remote-tracking branch 'origin/pr/33': fixes for enums in recent versions of Moose.

Change: 0cfa6b71817f36d5005dca1b95b54fe24329710e Author: Rusty Conover Date: 2014-02-07 23:24:50 +0000
  Merge remote-tracking branch 'upstream/master'

Change: 150d398f061272115e967bca0b568a9597a61024 Author: Tom Hukins Date: 2014-01-15 20:51:54 +0000
  Moose enum fixes: Moose 2.1100 deprecates the non-arrayref form of enum.
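Several of the commits above fix the same Moose deprecation. As a minimal illustration (assumed usage shown for context, not code from this distribution), Moose 2.1100 and later expect the arrayref forms of enum and duck_type:

```perl
use strict;
use warnings;
use Moose::Util::TypeConstraints;

# Deprecated non-arrayref form:  enum 'Status' => qw(pending done);
# Current arrayref form:
enum 'Status' => [qw(pending done)];

# duck_type likewise now takes an arrayref of required method names:
duck_type 'CanStringify' => ['stringify'];
```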
Change: b0e8df3b81ba87aba3d98a8d0cd2c2029e79f8d2 Author: Robert Ward Date: 2013-12-06 11:15:11 +0000
  Fixed the bug with listing multipart uploaded files

Change: 6df67be5f4fcafb7d15d023c44d0dce4153f9d78 Author: Robert Ward Date: 2013-12-06 11:13:07 +0000
  Added a test case demonstrating how listing a bucket fails when trying to list a multipart-uploaded file

Change: 27eb6d704321b4758f2aa49f707503ba0f76702c Author: Robert Ward Date: 2013-12-06 11:08:32 +0000
  Updated the client test to use README.md instead of README for a test file

Change: acc526fae9ada1042f52fc9ca676ad7a1ed652fa Author: toritori0318 Date: 2013-10-02 17:04:50 +0000
  Added support for cache-control header

Change: 00956c12865027e77b81501e6e35272ccda091d8 Author: Haruka Iwao Date: 2013-08-27 19:47:33 +0000
  Add documentation for server-side encryption

Change: 0148c98300926e0806b4e76f3ddd61bc4b88fc06 Author: Haruka Iwao Date: 2013-08-27 15:13:34 +0000
  Indent changes

Change: 804d88e5c32e9873916ebd38f34d7501ba4b87d7 Author: Haruka Iwao Date: 2013-08-27 15:13:20 +0000
  Modify a test to use AES256 encryption

Change: 7eba87add514a8f56c1247c4684e044ad9b1b2c6 Author: Haruka Iwao Date: 2013-08-27 12:16:21 +0000
  Support server side encryption

Change: 41488fe57b0d5a3b97b678300e258bccd99aef17 Author: Gabriel Andrade Date: 2013-06-25 15:33:30 +0000
  adding support for multipart upload aborting
  Signed-off-by: Gabriel Andrade

Change: 59d2deb250670acf70e00338087a29451b70d768 Author: Struan Bartlett Date: 2013-06-18 23:03:38 +0000
  Allow 'value' property to be ScalarRef

Change: e73c19a34bf480932186c6a074ed358056db7d6a Author: perlpong Date: 2013-04-04 13:09:36 +0000
  fix synopsis for s3cl sync_up

Change: 698ef8ff7a7c2d04dd30f7878917b3ac608af7e0 Author: Pedro Figueiredo Date: 2013-03-16 09:42:14 +0000
  Merge branch 'master' of github.com:pfig/net-amazon-s3

Change: f0614f470b0a7040a1ec93c1a227c66271735292 Author: Pedro Figueiredo Date: 2013-03-16 09:41:42 +0000
  Release 0.59

Change: 9b0dc37b3cc7dbe298c0eec570fac8578798cfe0 Author: Pedro Figueiredo Date: 2013-03-16 09:38:10 +0000
  Release 0.59

Change: 7aa69a3d46330501a912f848703d19aae478b3b6 Author: Pedro Figueiredo Date: 2013-03-16 09:37:46 +0000
  Updated change log

Change: 3c246c5793d7e89043bc7e8c5867c231f040869a Author: Pedro Figueiredo Date: 2013-03-16 02:33:42 +0000
  Merge pull request #19 from pfig/feature/auth-token (Feature/auth token)

Change: 9cfa6e3460bb3b45ecba835c599422dce35994b7 Author: Pedro Figueiredo Date: 2013-03-16 09:31:07 +0000
  Cosmetic, code style

Change: 5ec314c0e67806f75315fbe066e813f985bf003e Author: Pedro Figueiredo Date: 2013-03-13 12:13:46 +0000
  Merging pull request #17 into feature/auth-token

Change: 17404fbe4ec470118c28ad276cb69df55ba36b75 Author: Pedro Figueiredo Date: 2013-03-13 11:27:29 +0000
  Merge branch 'develop' of github.com:pfig/net-amazon-s3 into develop

Change: e6a922ecd2c8e8038297ccc495f543be75e01e7a Author: Miquel Ruiz Date: 2013-01-17 16:28:17 +0000
  Support auth via AWS STS (Security Token)

Change: 909f6c2b8d069ce72a4f9555eee6488248f29340 Author: Pedro Figueiredo Date: 2012-11-28 16:46:29 +0000
  Removed the Git bundle for now

Change: 34d6d071ac6ad73b417f91d62f221d6dab9a8f00 Author: Pedro Figueiredo Date: 2012-11-28 16:27:46 +0000
  Merge tag '0.58' into develop (Release 0.58)

Change: 9c188bb57fe1c5a83085a9c5c2ad3df47a26c43d Author: Pedro Figueiredo Date: 2012-11-28 16:27:28 +0000
  Merge branch 'release/0.58'

Change: 37c59a5f24b5e902b39f01038959be3736832665 Author: Pedro Figueiredo Date: 2012-11-28 16:27:08 +0000
  Fixed synopsis code and Critic directives

Change: d41bec133f6f8f19d35a7e686423eb39b4cca4c4 Author: Pedro Figueiredo Date: 2012-11-28 16:26:38 +0000
  Version bump

Change: 802aa4f9f40d1ecc7caa8d2abc7b15723859a060 Author: Pedro Figueiredo Date: 2012-11-28 15:48:24 +0000
  Cosmetic (indentation)

Change: 57f694c83a65b5cdac57e39dc87ada46eeff2a9f Author: Pedro Figueiredo Date: 2012-11-28 07:44:52 +0000
  Merge pull request #14 from pfig/feature/multipart-upload (Multi-part upload and multi-object delete.)

Change: c8d743bcfe0ff13c58893c815c4877e85e9dc31c Author: Pedro Figueiredo Date: 2012-11-28 15:42:00 +0000
  Making Perl::Critic happy

Change: 853062f9458686fa6b909b8271aa6c482b42777a Author: Pedro Figueiredo Date: 2012-11-28 15:26:27 +0000
  Merge multipart upload patch (https://github.com/pfig/net-amazon-s3/pull/13)

Change: 241b19fe34b93b74a363be4864baa219d4b3ddb0 Author: Pedro Figueiredo Date: 2012-11-28 15:12:16 +0000
  Fixed bungled changelog

Change: afc7f33c71c793d74f8564679242148f611dd52e Author: robert clarke Date: 2012-09-21 14:04:08 +0000
  Move put_part content length calculation to Net::Amazon::S3::ClientObject, tidy up handling of intitiate_multipart_upload request

Change: 602daa8d1bf984777288ab1fd99cf56919a86a81 Author: robert clarke Date: 2012-09-21 13:58:06 +0000
  tidy up minor scruffiness

Change: 13909a3113bf89fff186386014e7f73e67f4fb4e Author: robert clarke Date: 2012-09-21 13:36:32 +0000
  sort out messy comments

Change: 72447f8514fb0bba27db485d2b4f13064dc1c9fa Author: robert clarke Date: 2012-09-21 13:34:36 +0000
  change tests to use File::Temp, rather than modify distribution files in place

Change: 868709fddf4537175c2244ce887fb8636de8ebca Author: robert clarke Date: 2012-09-20 12:11:01 +0000
  Implementation of multipart upload and multiobject delete

Change: c1a8e50c5ee582ff9faf593ec33ab575072eb263 Author: Pedro Figueiredo Date: 2012-09-16 22:33:26 +0000
  Merge tag '0.57' into develop (Release 0.57)

Change: 1b686746b5e3dfe059535685227d3c1d83dedfad Author: Pedro Figueiredo Date: 2012-09-16 22:33:09 +0000
  Merge branch 'release/0.57'

Change: c5adb6c4a0507a4b492924f48a25157848699e82 Author: Pedro Figueiredo Date: 2012-09-16 22:30:34 +0000
  Make Test::Synopsis happy

Change: ebbea88aa78ca6384be93335e28cff0ab83ceb8a Author: Pedro Figueiredo Date: 2012-09-16 22:30:16 +0000
  Removing link checks and POD coverage tests for the moment

Change: 7c018f12e63be162fc5897f51ea0c1f9873941d4 Author: Pedro Figueiredo Date: 2012-09-16 20:56:44 +0000
  Updated dzil's configuration

Change: 2897d9c437868f799c939f7c2f560c9bfa3371bf Author: Pedro Figueiredo Date: 2012-09-16 20:56:14 +0000
  Make Perl::Critic happy

Change: 9a30f5090a12858062a9844e3ee42fcda7a3aaaa Author: Pedro Figueiredo Date: 2012-09-16 20:13:31 +0000
  Updated docs to include Content-Disposition

Change: 7ffae2c5cac6fd508fb88667c2ce0d5bf083e444 Author: Pedro Figueiredo Date: 2012-09-16 11:01:49 +0000
  Added Content-Disposition to put_filename()

Change: 2bc8c48be031edbe96dbcdf9790855ce9578f506 Author: Pedro Figueiredo Date: 2012-09-16 11:01:08 +0000
  Changed client tests to work with Pedro's account

Change: d792114a4740fbfaea0ba08700a0862afbaa1421 Author: Pedro Figueiredo Date: 2012-07-18 02:17:48 +0000
  Merge pull request #10 from ranguard/master (Added sync_up to s3cl)

Change: b59f3be8a82105e38c69d8eaf6370116fe57e177 Author: Leo Lapworth Date: 2012-06-04 11:37:38 +0000
  Add sync_up option to s3cl

Change: d88638866f5cb29f8b68c0e5451259abdd764ecd Author: Rusty Conover Date: 2012-05-26 13:04:15 +0000
  Add support for multiple delete call at S3. API allows deletion of 1,000 keys per call, which is very fast compared to deleting one key per call.

Change: 7950fbcd5123e609442a8286e3ecf3d3f11cd9fe Author: Aaron Crane Date: 2012-05-25 00:58:24 +0000
  Support decoding Content-Encoding on GET

Change: 7d0809661234c50038363ff6b8a8d7f70348b0c4 Author: Aaron Crane Date: 2012-05-19 17:05:30 +0000
  Support uploading and downloading user metadata

Change: aa057978b6048c69222aaf169b85329b933f642d Author: Aaron Crane Date: 2012-05-19 17:05:30 +0000
  Support reduced-redundancy storage

Change: 354731025c227304dca65e26a3a3bf62b4132460 Author: Aaron Crane Date: 2012-05-19 17:05:29 +0000
  Factor out shared guts of put and put_filename. This should prevent future bugs like that affecting the content_disposition option.
Change: 934835557aa877a5f027bc02e7e9e5dc5d6179b6 Author: Aaron Crane Date: 2012-05-19 17:05:29 +0000
  Consistent error reporting for put and put_filename

Change: d7f5a378e83798e6e67cebdb6fdb995423671088 Author: Aaron Crane Date: 2012-05-19 17:05:29 +0000
  Document content_disposition attribute

Change: c4123f8a9711e3051a977d95cda0a6be88ee283c Author: Aaron Crane Date: 2012-05-19 14:31:10 +0000
  Support Content-Disposition in put_filename

Change: bef170b154db03b87af2a574898bfbd238a1ae20 Author: Pedro Figueiredo Date: 2012-04-28 08:17:24 +0000
  Merge pull request #5 from lordarthas/master (Add support for Content-Disposition header)

Change: 1b9bbd79684ca605c6ca99b6bb543051653ef2b9 Author: Pedro Figueiredo Date: 2012-04-28 16:11:11 +0000
  Moved script to bin, in order for the script to be installed

Change: bc243b7a75e3a8fdccd3051030d4953ce8fd8cee Author: Michele Beltrame Date: 2012-02-11 10:11:50 +0000
  Support for Content-Disposition header

Change: 104bd19aa30fd993a02a0080940d11b9d639ea31 Author: Pedro Figueiredo Date: 2011-12-25 17:53:44 +0000
  Adding a README to please GitHub

Change: 91a2d2be13b19d0025730589b5b4e0f6646e2456 Author: Pedro Figueiredo Date: 2011-12-24 15:16:17 +0000
  Moved build system to Dist::Zilla

Change: 0cdc34cbc58e8a3a461ba1b443f53040efeabda3 Author: Pedro Figueiredo Date: 2011-12-18 22:35:25 +0000
  Extra file in MANIFEST *grumble*

Change: db7f8d75a3e86768a1a44be45859e3ebe2a4c0cf Author: Pedro Figueiredo Date: 2011-12-18 22:28:51 +0000
  Release 0.56

Change: 500975a1d24f7c13a3920113a3f88c74ad245b5f Author: Pedro Figueiredo Date: 2011-12-18 11:46:06 +0000
  Fixed spelling error (RT #69817, patch from Fabrizio Regalli)

Change: e0a7eedf777449d1a41fc48dd3c46f6542487c02 Author: Pedro Figueiredo Date: 2011-12-16 21:15:33 +0000
  Added use_ok tests (RT #72856)

Change: 3fdd016c074dd54e72954b7e625ce2d3a07813a7 Author: Pedro Figueiredo Date: 2011-12-16 21:09:16 +0000
  use URI::Escape (RT #72857)

Change: 782897e7e3e2054280fa6e4d9cfa594d60e3e221 Author: Pedro Figueiredo Date: 2011-12-16 21:01:56 +0000
  Bumped dependency on MooseX::StrictConstructor to 0.16 (RT #73229)

Change: 14634ddd14cbb5b722b4aae8b9ddf1482d272ff5 Author: Pedro Figueiredo Date: 2011-12-12 21:08:13 +0000
  RT #73193: enforce dependency versions

Change: 1c51c9f3950aaa87531a61dd039e3c491b64fbcb Author: Pedro Figueiredo Date: 2011-12-10 02:09:13 +0000
  Release 0.55

Change: 8e4270235a945c4876483ef3735b1f22b840bae5 Author: Pedro Figueiredo Date: 2011-12-10 02:08:03 +0000
  Replaced dependency on MooseX::Types::DateTimeX with MooseX::Types::DateTime::MoreCoercions

Change: 4713f68ae1194c65a7ea61f16062060b7193a9c5 Author: Pedro Figueiredo Date: 2011-12-10 02:06:38 +0000
  These tests were passing for Léon but not for me :)

Change: 59a03896365cd64eb03ba87bc441b9f5f6312481 Author: Pedro Figueiredo Date: 2011-12-10 02:04:24 +0000
  META.yml is generated, shouldn't be under VC

Change: 5398c5905434995e54208b9ba644b1a7236bb591 Author: Dylan William Hardison Date: 2011-10-14 22:04:16 +0000
  allow for more easy subclassing of ::Client

Change: 970d7fa50bff429827641fe08a158e3e9652fddc Author: Steven Berler Date: 2011-09-08 17:43:51 +0000
  optional 'host' parameter to use any S3-compatible host. Allows Net::Amazon::S3 to be used with any S3-compatible host by setting the 'host' parameter when creating the Net::Amazon::S3 object. This allows you to override the default of 's3.amazonaws.com'. Example:

    my $s3 = Net::Amazon::S3->new({
        aws_access_key_id     => $aws_access_key_id,
        aws_secret_access_key => $aws_secret_access_key,
        retry                 => 1,
        host                  => 'my.s3.compatible.host.com',
    });

Change: c9f2e29efbeb2565b1f19d409593587b64e11a5e Author: Pedro Figueiredo Date: 2011-07-10 08:43:27 +0000
  Newer MakeMaker doesn't accept array refs for AUTHOR anymore

Change: 349177c96bdc814820b5da79368c8935d6975ea1 Author: Pedro Figueiredo Date: 2011-07-10 08:29:51 +0000
  Release 0.54

Change: fe22ac0818cd66cc7d9b542dfa2a019a7c73f3c8 Author: Pedro Figueiredo Date: 2011-06-19 17:42:50 +0000
  Patch by David Wheeler to eliminate a naked qw() warning

Change: 2a18e7f5dfe061a824c7706c00f3343f082451c5 Author: Pedro Figueiredo Date: 2011-06-19 17:40:06 +0000
  Removed Class::Accessor::Fast from dependency list

Change: 93aaa2917088feecb9178b1471facf78ad23bf3a Author: Pavel A. Karoukin Date: 2011-05-25 14:17:54 +0000
  Fix path issues

Change: ec8ebc26a57e05d043e0c930d3162fedd940e82a Author: Leon Brocard Date: 2010-03-30 15:26:55 +0000
  And release 0.53

Change: 1383af2ec58fc7853a9451125a6a64c94f4581fd Author: Leon Brocard Date: 2010-03-30 14:45:34 +0000
  update s3cl: You need to use the module before you use it, added the mkbucket command, now you can run the help without your AWS secret key, add docs about the env variables you need to run s3cl (patches by Jesse Vincent)

Change: 95b94e455f872eaff8de8214b3633dedce979d9c Author: Leon Brocard Date: 2010-03-30 11:24:19 +0000
  add content_encoding to Net::Amazon::S3::Object (suggested by Egor Korablev)

Change: a51659b012a6ad37775db97378a3f52f865a7dab Author: Leon Brocard Date: 2010-03-30 11:13:01 +0000
  fix max_keys when listing buckets (spotted by Andrew Bryan)

Change: 44786d002a750631996b479f703f8dbf26d769dc Author: Leon Brocard Date: 2010-03-30 10:49:00 +0000
  add an exists method to Net::Amazon::S3::Client (suggested by David Golden)

Change: 11e9f3d07072c968feefdf1230fbc61c633a6a86 Author: Leon Brocard Date: 2010-03-30 10:31:15 +0000
  tiny POD fix (patch by Frank Wiegand)

Change: 6c0d303bc567c0b37a2a53dd95554d0eeac8014f Author: Leon Brocard Date: 2010-03-30 10:18:35 +0000
  fix authenticated urls to work with EU buckets (patch by Edmund von der Burg)

Change: 35600941ac7cc4d144aa905fc3ca0fddc71a4350 Author: Leon Brocard Date: 2009-07-02 09:17:48 +0000
  and release 0.52

Change: 12c6da242e871ebe362ca8ed6320358ff0cc88d8 Author: Leon Brocard Date: 2009-07-02 09:17:08 +0000
  increase version prerequisites for some modules so that they are known to work

Change: f9faa3759d6f22488beb0519243258cd224c7819 Author: Leon Brocard Date: 2009-05-19 08:33:45 +0000
  and release 0.51

Change: 9289fc541d6ff4794919cfa9198fd687e6ef395a Author: Leon Brocard Date: 2009-05-19 08:31:44 +0000
  use MooseX::Types::DateTimeX so that we work with latest Moose (noticed by Ted Zlatanov)

Change: dc38f262e76d156bc82bd07b8935e46f6d034cdc Author: Leon Brocard Date: 2009-01-21 10:43:33 +0000
  and release 0.50

Change: 868f1d0687ec508a3dc1bf37104eb5716eccfd01 Author: Leon Brocard Date: 2009-01-21 10:41:49 +0000
  add query_string_authentication_uri() to Net::Amazon::S3::Client::Object, suggested by Meng Wong

Change: b3ab3e022e58d06c95778fb47a4eb6052d7b4892 Author: Leon Brocard Date: 2009-01-20 15:12:13 +0000
  make all the classes immutable

Change: 4a839e882c67bf8e48d1bcb09071c923b467a524 Author: Leon Brocard Date: 2009-01-20 14:47:30 +0000
  add support for an expires header when putting an object to Net::Amazon::S3::Client::Object

Change: 48a2105e9dfe99a4d030f9fcaa81d6e7da77f4e7 Author: Leon Brocard Date: 2009-01-13 09:09:04 +0000
  And release 0.49

Change: 259c7c551e0f357507820051f482dfe6b2206bbd Author: Leon Brocard Date: 2009-01-12 10:15:58 +0000
  minor typo

Change: e751d9af3f669269cb01a996e024b3171d41e2d7 Author: Leon Brocard Date: 2009-01-12 10:03:05 +0000
  add support for listing a bucket with a prefix to Net::Amazon::S3::Client::Bucket

Change: 86abf322a7bd8530b0e28d0ceaa57d5c35932653 Author: Leon Brocard Date: 2008-12-04 09:24:50 +0000
  and release 0.48

Change: b9ac4cb3f1dcc6609f47ed69dbf663b25dbe47f0 Author: Leon Brocard Date: 2008-12-04 09:23:02 +0000
  be slightly less strict about bucket names: they can contain uppercase letters, Amazon just doesn't recommend it (noticed by Simon Elliott, fixes Brackup)

Change: ba254028be1d92f1d6a7ac280069d4073edd7d12 Author: Leon Brocard Date: 2008-12-02 08:03:52 +0000
  and release

Change: 73e0563badce5fa3a43711e74b253ba220be271a Author: Leon Brocard Date: 2008-12-02 08:03:38 +0000
  fix listing with a prefix (spotted by Nobuo Danjou)

Change: af4f2958c92f1c580b5bda4a63fb2c2edf8b66eb Author: Leon Brocard Date: 2008-11-24 08:55:05 +0000
  and release

Change: ff4d8480878988f823989c42bc7472e9712220d1 Author: Leon Brocard Date: 2008-11-24 08:39:27 +0000
  import 0.46-tobe from svn

================
End of releases.
================

==================================================
Net-Amazon-S3-0.80/cpanfile
==================================================

requires "Carp" => "0";
requires "Data::Stream::Bulk::Callback" => "0";
requires "DateTime::Format::HTTP" => "0";
requires "Digest::HMAC_SHA1" => "0";
requires "Digest::MD5" => "0";
requires "Digest::MD5::File" => "0";
requires "File::Find::Rule" => "0";
requires "File::stat" => "0";
requires "Getopt::Long" => "0";
requires "HTTP::Date" => "0";
requires "HTTP::Status" => "0";
requires "IO::File" => "1.14";
requires "LWP" => "6.03";
requires "LWP::UserAgent::Determined" => "0";
requires "MIME::Base64" => "0";
requires "MIME::Types" => "0";
requires "Moose" => "0.85";
requires "Moose::Util::TypeConstraints" => "0";
requires "MooseX::StrictConstructor" => "0.16";
requires "MooseX::Types::DateTime::MoreCoercions" => "0.07";
requires "Path::Class" => "0";
requires "Pod::Usage" => "0";
requires "Regexp::Common" => "0";
requires "Term::Encoding" => "0";
requires "Term::ProgressBar::Simple" => "0";
requires "URI" => "0";
requires "URI::Escape" => "0";
requires "URI::QueryParam" => "0";
requires "VM::EC2::Security::CredentialCache" => "0";
requires "XML::LibXML" => "0";
requires "XML::LibXML::XPathContext" => "0";
requires "strict" => "0";
requires "warnings" => "0";

on 'test' => sub {
  requires "File::Temp" => "0";
  requires "LWP::Simple" => "0";
  requires "Test::Exception" => "0";
  requires "Test::More" => "0";
  requires "lib" => "0";
  requires "vars" => "0";
};

on 'configure' => sub {
  requires "ExtUtils::MakeMaker" => "0";
};

==================================================
Net-Amazon-S3-0.80/dist.ini
==================================================

name = Net-Amazon-S3
author = Rusty Conover
license = Perl_5
copyright_holder = Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover
version = 0.80

[@Filter]
-bundle = @Basic
option = for_basic

[MetaJSON]

[@GitHub]
repo = rustyconover/net-amazon-s3

[Prereqs]
LWP = 6.03

[AutoPrereqs]
[CPANFile]
[ChangelogFromGit]
[Git::Contributors]
[PodWeaver]
[ReadmeMarkdownFromPod]
[PkgVersion]

==================================================
Net-Amazon-S3-0.80/LICENSE
==================================================

This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

Terms of the Perl programming language system itself:

a) the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version, or
b) the "Artistic License"

--- The GNU General Public License, Version 1, February 1989 ---

This software is Copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover.
This is free software, licensed under:

  The GNU General Public License, Version 1, February 1989

GNU GENERAL PUBLIC LICENSE
Version 1, February 1989

Copyright (C) 1989 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The license agreements of most software companies try to keep users at the mercy of those companies. By contrast, our General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. The General Public License applies to the Free Software Foundation's software and to any other program whose authors commit to using it. You can use it for your programs, too.

When we speak of free software, we are referring to freedom, not price. Specifically, the General Public License is designed to make sure that you have the freedom to give away or sell copies of free software, that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

For example, if you distribute copies of a such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must tell them their rights.

We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

The precise terms and conditions for copying, distribution and modification follow.

GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. This License Agreement applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any work containing the Program or a portion of it, either verbatim or with modifications. Each licensee is addressed as "you".

1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this General Public License and to the absence of any warranty; and give any other recipients of the Program a copy of this General Public License along with the Program. You may charge a fee for the physical act of transferring a copy.

2.
You may modify your copy or copies of the Program or any portion of it, and copy and distribute such modifications under the terms of Paragraph 1 above, provided that you also do the following:

a) cause the modified files to carry prominent notices stating that you changed the files and the date of any change; and

b) cause the whole of any work that you distribute or publish, that in whole or in part contains the Program or any part thereof, either with or without modifications, to be licensed at no charge to all third parties under the terms of this General Public License (except that you may choose to grant warranty protection to some or all third parties, at your option).

c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the simplest and most usual way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this General Public License.

d) You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

Mere aggregation of another independent work with the Program (or its derivative) on a volume of a storage or distribution medium does not bring the other work under the scope of these terms.

3.
You may copy and distribute the Program (or a portion or derivative of it, under Paragraph 2) in object code or executable form under the terms of Paragraphs 1 and 2 above provided that you also do one of the following: a) accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Paragraphs 1 and 2 above; or, b) accompany it with a written offer, valid for at least three years, to give any third party free (except for a nominal charge for the cost of distribution) a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Paragraphs 1 and 2 above; or, c) accompany it with the information you received as to where the corresponding source code may be obtained. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form alone.) Source code for a work means the preferred form of the work for making modifications to it. For an executable file, complete source code means all the source code for all modules it contains; but, as a special exception, it need not include source code for modules which are standard libraries that accompany the operating system on which the executable file runs, or for standard header files or definitions files that accompany that operating system. 4. You may not copy, modify, sublicense, distribute or transfer the Program except as expressly provided under this General Public License. Any attempt otherwise to copy, modify, sublicense, distribute or transfer the Program is void, and will automatically terminate your rights to use the Program under this License. However, parties who have received copies, or rights to use copies, from you under this General Public License will not have their licenses terminated so long as such parties remain in full compliance. 5. 
By copying, distributing or modifying the Program (or any work based on the Program) you indicate your acceptance of this license to do so, and all its terms and conditions. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. 7. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of the license which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the license, you may choose any version ever published by the Free Software Foundation. 8. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 10. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS Appendix: How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to humanity, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 1, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19xx name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (a program to direct compilers to make passes at assemblers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice That's all there is to it! --- The Artistic License 1.0 --- This software is Copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. 
This is free software, licensed under: The Artistic License 1.0 The Artistic License Preamble The intent of this document is to state the conditions under which a Package may be copied, such that the Copyright Holder maintains some semblance of artistic control over the development of the package, while giving the users of the package the right to use and distribute the Package in a more-or-less customary fashion, plus the right to make reasonable modifications. Definitions: - "Package" refers to the collection of files distributed by the Copyright Holder, and derivatives of that collection of files created through textual modification. - "Standard Version" refers to such a Package if it has not been modified, or has been modified in accordance with the wishes of the Copyright Holder. - "Copyright Holder" is whoever is named in the copyright or copyrights for the package. - "You" is you, if you're thinking about copying or distributing this Package. - "Reasonable copying fee" is whatever you can justify on the basis of media cost, duplication charges, time of people involved, and so on. (You will not be required to justify it to the Copyright Holder, but only to the computing community at large as a market that must bear the fee.) - "Freely Available" means that no fee is charged for the item itself, though there may be fees involved in handling the item. It also means that recipients of the item may redistribute it under the same conditions they received it. 1. You may make and give away verbatim copies of the source form of the Standard Version of this Package without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2. You may apply bug fixes, portability fixes and other modifications derived from the Public Domain or from the Copyright Holder. A Package modified in such a way shall still be considered the Standard Version. 3. 
You may otherwise modify your copy of this Package in any way, provided that you insert a prominent notice in each changed file stating how and when you changed that file, and provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or placing the modifications on a major archive site such as ftp.uu.net, or by allowing the Copyright Holder to include your modifications in the Standard Version of the Package. b) use the modified Package only within your corporation or organization. c) rename any non-standard executables so the names do not conflict with standard executables, which must also be provided, and provide a separate manual page for each non-standard executable that clearly documents how it differs from the Standard Version. d) make other distribution arrangements with the Copyright Holder. 4. You may distribute the programs of this Package in object code or executable form, provided that you do at least ONE of the following: a) distribute a Standard Version of the executables and library files, together with instructions (in the manual page or equivalent) on where to get the Standard Version. b) accompany the distribution with the machine-readable source of the Package with your modifications. c) accompany any non-standard executables with their corresponding Standard Version executables, giving the non-standard executables non-standard names, and clearly documenting the differences in manual pages (or equivalent), together with instructions on where to get the Standard Version. d) make other distribution arrangements with the Copyright Holder. 5. You may charge a reasonable copying fee for any distribution of this Package. You may charge any fee you choose for support of this Package. You may not charge a fee for this Package itself. 
However, you may distribute this Package in aggregate with other (possibly commercial) programs as part of a larger (possibly commercial) software distribution provided that you do not advertise this Package as a product of your own. 6. The scripts and library files supplied as input to or produced as output from the programs of this Package do not automatically fall under the copyright of this Package, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this Package. 7. C or perl subroutines supplied by you and linked into this Package shall not be considered part of this Package. 8. The name of the Copyright Holder may not be used to endorse or promote products derived from this software without specific prior written permission. 9. THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE. The End Net-Amazon-S3-0.80/Makefile.PL000644 000765 000024 00000006025 12512367152 016656 0ustar00rconoverstaff000000 000000 # This file was automatically generated by Dist::Zilla::Plugin::MakeMaker v5.031. 
use strict; use warnings; use ExtUtils::MakeMaker; my %WriteMakefileArgs = ( "ABSTRACT" => "Use the Amazon S3 - Simple Storage Service", "AUTHOR" => "Rusty Conover ", "CONFIGURE_REQUIRES" => { "ExtUtils::MakeMaker" => 0 }, "DISTNAME" => "Net-Amazon-S3", "EXE_FILES" => [ "bin/s3cl" ], "LICENSE" => "perl", "NAME" => "Net::Amazon::S3", "PREREQ_PM" => { "Carp" => 0, "Data::Stream::Bulk::Callback" => 0, "DateTime::Format::HTTP" => 0, "Digest::HMAC_SHA1" => 0, "Digest::MD5" => 0, "Digest::MD5::File" => 0, "File::Find::Rule" => 0, "File::stat" => 0, "Getopt::Long" => 0, "HTTP::Date" => 0, "HTTP::Status" => 0, "IO::File" => "1.14", "LWP" => "6.03", "LWP::UserAgent::Determined" => 0, "MIME::Base64" => 0, "MIME::Types" => 0, "Moose" => "0.85", "Moose::Util::TypeConstraints" => 0, "MooseX::StrictConstructor" => "0.16", "MooseX::Types::DateTime::MoreCoercions" => "0.07", "Path::Class" => 0, "Pod::Usage" => 0, "Regexp::Common" => 0, "Term::Encoding" => 0, "Term::ProgressBar::Simple" => 0, "URI" => 0, "URI::Escape" => 0, "URI::QueryParam" => 0, "VM::EC2::Security::CredentialCache" => 0, "XML::LibXML" => 0, "XML::LibXML::XPathContext" => 0, "strict" => 0, "warnings" => 0 }, "TEST_REQUIRES" => { "File::Temp" => 0, "LWP::Simple" => 0, "Test::Exception" => 0, "Test::More" => 0, "lib" => 0, "vars" => 0 }, "VERSION" => "0.80", "test" => { "TESTS" => "t/*.t" } ); my %FallbackPrereqs = ( "Carp" => 0, "Data::Stream::Bulk::Callback" => 0, "DateTime::Format::HTTP" => 0, "Digest::HMAC_SHA1" => 0, "Digest::MD5" => 0, "Digest::MD5::File" => 0, "ExtUtils::MakeMaker" => 0, "File::Find::Rule" => 0, "File::Temp" => 0, "File::stat" => 0, "Getopt::Long" => 0, "HTTP::Date" => 0, "HTTP::Status" => 0, "IO::File" => "1.14", "LWP" => "6.03", "LWP::Simple" => 0, "LWP::UserAgent::Determined" => 0, "MIME::Base64" => 0, "MIME::Types" => 0, "Moose" => "0.85", "Moose::Util::TypeConstraints" => 0, "MooseX::StrictConstructor" => "0.16", "MooseX::Types::DateTime::MoreCoercions" => "0.07", "Path::Class" => 0, 
"Pod::Usage" => 0, "Regexp::Common" => 0, "Term::Encoding" => 0, "Term::ProgressBar::Simple" => 0, "Test::Exception" => 0, "Test::More" => 0, "URI" => 0, "URI::Escape" => 0, "URI::QueryParam" => 0, "VM::EC2::Security::CredentialCache" => 0, "XML::LibXML" => 0, "XML::LibXML::XPathContext" => 0, "lib" => 0, "strict" => 0, "vars" => 0, "warnings" => 0 ); unless ( eval { ExtUtils::MakeMaker->VERSION(6.63_03) } ) { delete $WriteMakefileArgs{TEST_REQUIRES}; delete $WriteMakefileArgs{BUILD_REQUIRES}; $WriteMakefileArgs{PREREQ_PM} = \%FallbackPrereqs; } delete $WriteMakefileArgs{CONFIGURE_REQUIRES} unless eval { ExtUtils::MakeMaker->VERSION(6.52) }; WriteMakefile(%WriteMakefileArgs); Net-Amazon-S3-0.80/MANIFEST000644 000765 000024 00000002532 12512367152 016034 0ustar00rconoverstaff000000 000000 # This file was automatically generated by Dist::Zilla::Plugin::Manifest v5.031. CHANGES LICENSE MANIFEST META.json META.yml Makefile.PL README README.md README.mkdn bin/s3cl cpanfile dist.ini examples/backup_cpan.pl lib/Net/Amazon/S3.pm lib/Net/Amazon/S3/Bucket.pm lib/Net/Amazon/S3/Client.pm lib/Net/Amazon/S3/Client/Bucket.pm lib/Net/Amazon/S3/Client/Object.pm lib/Net/Amazon/S3/HTTPRequest.pm lib/Net/Amazon/S3/Request.pm lib/Net/Amazon/S3/Request/AbortMultipartUpload.pm lib/Net/Amazon/S3/Request/CompleteMultipartUpload.pm lib/Net/Amazon/S3/Request/CreateBucket.pm lib/Net/Amazon/S3/Request/DeleteBucket.pm lib/Net/Amazon/S3/Request/DeleteMultiObject.pm lib/Net/Amazon/S3/Request/DeleteMultipleObjects.pm lib/Net/Amazon/S3/Request/DeleteObject.pm lib/Net/Amazon/S3/Request/GetBucketAccessControl.pm lib/Net/Amazon/S3/Request/GetBucketLocationConstraint.pm lib/Net/Amazon/S3/Request/GetObject.pm lib/Net/Amazon/S3/Request/GetObjectAccessControl.pm lib/Net/Amazon/S3/Request/InitiateMultipartUpload.pm lib/Net/Amazon/S3/Request/ListAllMyBuckets.pm lib/Net/Amazon/S3/Request/ListBucket.pm lib/Net/Amazon/S3/Request/ListParts.pm lib/Net/Amazon/S3/Request/PutObject.pm 
lib/Net/Amazon/S3/Request/PutPart.pm lib/Net/Amazon/S3/Request/SetBucketAccessControl.pm lib/Net/Amazon/S3/Request/SetObjectAccessControl.pm perlcritic.rc t/00use.t t/01api.t t/02client.t t/03token.t Net-Amazon-S3-0.80/META.json000644 000765 000024 00000007024 12512367152 016325 0ustar00rconoverstaff000000 000000 { "abstract" : "Use the Amazon S3 - Simple Storage Service", "author" : [ "Rusty Conover " ], "dynamic_config" : 0, "generated_by" : "Dist::Zilla version 5.031, CPAN::Meta::Converter version 2.142690", "license" : [ "perl_5" ], "meta-spec" : { "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec", "version" : "2" }, "name" : "Net-Amazon-S3", "prereqs" : { "configure" : { "requires" : { "ExtUtils::MakeMaker" : "0" } }, "runtime" : { "requires" : { "Carp" : "0", "Data::Stream::Bulk::Callback" : "0", "DateTime::Format::HTTP" : "0", "Digest::HMAC_SHA1" : "0", "Digest::MD5" : "0", "Digest::MD5::File" : "0", "File::Find::Rule" : "0", "File::stat" : "0", "Getopt::Long" : "0", "HTTP::Date" : "0", "HTTP::Status" : "0", "IO::File" : "1.14", "LWP" : "6.03", "LWP::UserAgent::Determined" : "0", "MIME::Base64" : "0", "MIME::Types" : "0", "Moose" : "0.85", "Moose::Util::TypeConstraints" : "0", "MooseX::StrictConstructor" : "0.16", "MooseX::Types::DateTime::MoreCoercions" : "0.07", "Path::Class" : "0", "Pod::Usage" : "0", "Regexp::Common" : "0", "Term::Encoding" : "0", "Term::ProgressBar::Simple" : "0", "URI" : "0", "URI::Escape" : "0", "URI::QueryParam" : "0", "VM::EC2::Security::CredentialCache" : "0", "XML::LibXML" : "0", "XML::LibXML::XPathContext" : "0", "strict" : "0", "warnings" : "0" } }, "test" : { "requires" : { "File::Temp" : "0", "LWP::Simple" : "0", "Test::Exception" : "0", "Test::More" : "0", "lib" : "0", "vars" : "0" } } }, "release_status" : "stable", "resources" : { "bugtracker" : { "web" : "https://github.com/pfig/net-amazon-s3/issues" }, "homepage" : "http://search.cpan.org/dist/Net-Amazon-S3/", "repository" : { "type" : "git", "url" : 
"git://github.com/pfig/net-amazon-s3.git", "web" : "https://github.com/pfig/net-amazon-s3" } }, "version" : "0.80", "x_contributors" : [ "Aaron Crane ", "Ali Anari ", "Dylan William Hardison ", "Gabriel Andrade ", "Haruka Iwao ", "Leo Lapworth ", "Leon Brocard ", "Michele Beltrame ", "Miquel Ruiz ", "Panu Ervamaa ", "Pavel A. Karoukin ", "Pedro Figueiredo ", "Pedro Figueiredo ", "Pedro Figueiredo ", "perlpong ", "robert clarke ", "Robert Ward ", "Rusty Conover ", "Rusty Conover ", "Steven Berler ", "Struan Bartlett ", "Tom Hukins ", "toritori0318 " ] } Net-Amazon-S3-0.80/META.yml000644 000765 000024 00000004670 12512367152 016161 0ustar00rconoverstaff000000 000000 --- abstract: 'Use the Amazon S3 - Simple Storage Service' author: - 'Rusty Conover ' build_requires: File::Temp: '0' LWP::Simple: '0' Test::Exception: '0' Test::More: '0' lib: '0' vars: '0' configure_requires: ExtUtils::MakeMaker: '0' dynamic_config: 0 generated_by: 'Dist::Zilla version 5.031, CPAN::Meta::Converter version 2.142690' license: perl meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: '1.4' name: Net-Amazon-S3 requires: Carp: '0' Data::Stream::Bulk::Callback: '0' DateTime::Format::HTTP: '0' Digest::HMAC_SHA1: '0' Digest::MD5: '0' Digest::MD5::File: '0' File::Find::Rule: '0' File::stat: '0' Getopt::Long: '0' HTTP::Date: '0' HTTP::Status: '0' IO::File: '1.14' LWP: '6.03' LWP::UserAgent::Determined: '0' MIME::Base64: '0' MIME::Types: '0' Moose: '0.85' Moose::Util::TypeConstraints: '0' MooseX::StrictConstructor: '0.16' MooseX::Types::DateTime::MoreCoercions: '0.07' Path::Class: '0' Pod::Usage: '0' Regexp::Common: '0' Term::Encoding: '0' Term::ProgressBar::Simple: '0' URI: '0' URI::Escape: '0' URI::QueryParam: '0' VM::EC2::Security::CredentialCache: '0' XML::LibXML: '0' XML::LibXML::XPathContext: '0' strict: '0' warnings: '0' resources: bugtracker: https://github.com/pfig/net-amazon-s3/issues homepage: http://search.cpan.org/dist/Net-Amazon-S3/ repository: 
git://github.com/pfig/net-amazon-s3.git version: '0.80' x_contributors: - 'Aaron Crane ' - 'Ali Anari ' - 'Dylan William Hardison ' - 'Gabriel Andrade ' - 'Haruka Iwao ' - 'Leo Lapworth ' - 'Leon Brocard ' - 'Michele Beltrame ' - 'Miquel Ruiz ' - 'Panu Ervamaa ' - 'Pavel A. Karoukin ' - 'Pedro Figueiredo ' - 'Pedro Figueiredo ' - 'Pedro Figueiredo ' - 'perlpong ' - 'robert clarke ' - 'Robert Ward ' - 'Rusty Conover ' - 'Rusty Conover ' - 'Steven Berler ' - 'Struan Bartlett ' - 'Tom Hukins ' - 'toritori0318 ' Net-Amazon-S3-0.80/perlcritic.rc000644 000765 000024 00000000463 12512367152 017372 0ustar00rconoverstaff000000 000000 severity = 5 only = 1 force = 0 verbose = 4 top = 50 theme = (pbp || security) && bugs include = NamingConventions ClassHierarchies exclude = Variables Modules::RequirePackage RequireUseStrict RequireUseWarnings criticism-fatal = 1 color = 1 allow-unsafe = 1 pager = less Net-Amazon-S3-0.80/README000644 000765 000024 00000000707 12512367152 015565 0ustar00rconoverstaff000000 000000 This archive contains the distribution Net-Amazon-S3, version 0.80: Use the Amazon S3 - Simple Storage Service This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. This README file was generated by Dist::Zilla::Plugin::Readme v5.031. Net-Amazon-S3-0.80/README.md000644 000765 000024 00000004420 12512367152 016160 0ustar00rconoverstaff000000 000000 # DESCRIPTION This module provides a Perlish interface to Amazon S3. From the developer blurb: "Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. 
It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers". To find out more about S3, please visit: http://s3.amazonaws.com/ To use this module you will need to sign up to Amazon Web Services and provide an "Access Key ID" and a "Secret Access Key". If you use this module, you will incur costs as specified by Amazon. Please check the costs. If you use this module with your Access Key ID and Secret Access Key you are responsible for these costs. I highly recommend reading all about S3, but in a nutshell data is stored in values. Values are referenced by keys, and keys are stored in buckets. Bucket names are global. Note: This is the legacy interface, please check out Net::Amazon::S3::Client instead. Development of this code happens here: http://github.com/pfig/net-amazon-s3/ Homepage for the project (just started) is at http://pfig.github.com/net-amazon-s3/ # LICENSE This module contains code modified from Amazon that contains the following notice: > This software code is made available "AS IS" without warranties of any > kind. You may copy, display, modify and redistribute the software > code either by itself or as incorporated into your code; provided that > you do not remove any proprietary notices. Your use of this software > code is at your own risk and you waive any claim against Amazon > Digital Services, Inc. or its affiliates with respect to your use of > this software code. (c) 2006 Amazon Digital Services, Inc. or its > affiliates. # AUTHOR * Leon Brocard and unknown Amazon Digital Services programmers. * Brad Fitzpatrick - return values, Bucket object. * Pedro Figueiredo - since 0.54. * Rusty Conover - since 0.80. 
Net-Amazon-S3-0.80/README.mkdn000644 000765 000024 00000032356 12512367152 016522 0ustar00rconoverstaff000000 000000 # NAME Net::Amazon::S3 - Use the Amazon S3 - Simple Storage Service # VERSION version 0.80 # SYNOPSIS use Net::Amazon::S3; my $aws_access_key_id = 'fill me in'; my $aws_secret_access_key = 'fill me in too'; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, # or use an IAM role. use_iam_role => 1, retry => 1, } ); # a bucket is a globally-unique directory # list all buckets that i own my $response = $s3->buckets; foreach my $bucket ( @{ $response->{buckets} } ) { print "You have a bucket: " . $bucket->bucket . "\n"; } # create a new bucket my $bucketname = 'acmes_photo_backups'; my $bucket = $s3->add_bucket( { bucket => $bucketname } ) or die $s3->err . ": " . $s3->errstr; # or use an existing bucket $bucket = $s3->bucket($bucketname); # store a file in the bucket $bucket->add_key_filename( '1.JPG', 'DSC06256.JPG', { content_type => 'image/jpeg', }, ) or die $s3->err . ": " . $s3->errstr; # store a value in the bucket $bucket->add_key( 'reminder.txt', 'this is where my photos are backed up' ) or die $s3->err . ": " . $s3->errstr; # list files in the bucket $response = $bucket->list_all or die $s3->err . ": " . $s3->errstr; foreach my $key ( @{ $response->{keys} } ) { my $key_name = $key->{key}; my $key_size = $key->{size}; print "Bucket contains key '$key_name' of size $key_size\n"; } # fetch file from the bucket $response = $bucket->get_key_filename( '1.JPG', 'GET', 'backup.jpg' ) or die $s3->err . ": " . $s3->errstr; # fetch value from the bucket $response = $bucket->get_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; print "reminder.txt:\n"; print " content length: " . $response->{content_length} . "\n"; print " content type: " . $response->{content_type} . "\n"; print " etag: " . $response->{etag} . "\n"; print " content: " . $response->{value} . 
"\n"; # delete keys $bucket->delete_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; $bucket->delete_key('1.JPG') or die $s3->err . ": " . $s3->errstr; # and finally delete the bucket $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr; # DESCRIPTION This module provides a Perlish interface to Amazon S3. From the developer blurb: "Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers". To find out more about S3, please visit: http://s3.amazonaws.com/ To use this module you will need to sign up to Amazon Web Services and provide an "Access Key ID" and a "Secret Access Key". If you use this module, you will incur costs as specified by Amazon. Please check the costs. If you use this module with your Access Key ID and Secret Access Key you are responsible for these costs. I highly recommend reading all about S3, but in a nutshell data is stored in values. Values are referenced by keys, and keys are stored in buckets. Bucket names are global. Note: This is the legacy interface, please check out [Net::Amazon::S3::Client](https://metacpan.org/pod/Net::Amazon::S3::Client) instead. Development of this code happens here: http://github.com/pfig/net-amazon-s3/ Homepage for the project (just started) is at http://pfig.github.com/net-amazon-s3/ # METHODS ## new Create a new S3 client object. Takes some arguments: - aws\_access\_key\_id Use your Access Key ID as the value of the AWSAccessKeyId parameter in requests you send to Amazon Web Services (when required). 
Your Access Key ID identifies you as the party responsible for the request. - aws\_secret\_access\_key Since your Access Key ID is not encrypted in requests to AWS, it could be discovered and used by anyone. Services that are not free require you to provide additional information, a request signature, to verify that a request containing your unique Access Key ID could only have come from you. DO NOT INCLUDE THIS IN SCRIPTS OR APPLICATIONS YOU DISTRIBUTE. YOU'LL BE SORRY - aws\_session\_token If you are using temporary credentials provided by the AWS Security Token Service, set the token here, and it will be added to the request in order to authenticate it. - use\_iam\_role If you'd like to use IAM-provided temporary credentials, pass this option with a true value. - secure Set this to `1` if you want to use SSL-encrypted connections when talking to S3. Defaults to `0`. - timeout How many seconds should your script wait before bailing on a request to S3? Defaults to 30. - retry Whether this library should retry upon errors. This option is recommended. This uses exponential backoff with retries after 1, 2, 4, 8, 16, 32 seconds, as recommended by Amazon. Defaults to off. - host The S3 host endpoint to use. Defaults to 's3.amazonaws.com'. This allows you to connect to any S3-compatible host. ## buckets Returns undef on error, else hashref of results ## add\_bucket Takes a hashref: - bucket The name of the bucket you want to add - acl\_short (optional) See the set\_acl subroutine for documentation on the acl\_short options - location\_constraint (optional) Sets the location constraint of the new bucket. If left unspecified, the default S3 datacenter location will be used. Otherwise, you can set it to 'EU' for a European data center - note that costs are different. Returns 0 on failure, Net::Amazon::S3::Bucket object on success ## bucket BUCKET Takes a scalar argument, the name of the bucket you're creating Returns an (unverified) bucket object from an account. 
Does no network access. ## delete\_bucket Takes either a [Net::Amazon::S3::Bucket](https://metacpan.org/pod/Net::Amazon::S3::Bucket) object or a hashref containing - bucket The name of the bucket to remove Returns false (and fails) if the bucket isn't empty. Returns true if the bucket is successfully deleted. ## list\_bucket List all keys in this bucket. Takes a hashref of arguments: MANDATORY - bucket The name of the bucket you want to list keys on OPTIONAL - prefix Restricts the response to only contain results that begin with the specified prefix. If you omit this optional argument, the value of prefix for your query will be the empty string. In other words, the results will not be restricted by prefix. - delimiter If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. For example, with prefix="USA/" and delimiter="/", the matching keys "USA/Oregon/Salem" and "USA/Oregon/Portland" would be summarized in the response as a single "USA/Oregon" element in the CommonPrefixes collection. If an otherwise matching key does not contain the delimiter after the prefix, it appears in the Contents collection. Each element in the CommonPrefixes collection counts as one against the MaxKeys limit. The rolled-up keys represented by each CommonPrefixes element do not. If the Delimiter parameter is not present in your request, keys in the result set will not be rolled-up and neither the CommonPrefixes collection nor the NextMarker element will be present in the response. - max-keys This optional argument limits the number of results returned in response to your query. Amazon S3 will return no more than this number of results, but possibly less. 
Even if max-keys is not specified, Amazon S3 will limit the number of results in the response. Check the IsTruncated flag to see if your results are incomplete. If so, use the Marker parameter to request the next page of results. For the purpose of counting max-keys, a 'result' is either a key in the 'Contents' collection, or a delimited prefix in the 'CommonPrefixes' collection. So for delimiter requests, max-keys limits the total number of list results, not just the number of keys. - marker This optional parameter enables pagination of large result sets. `marker` specifies where in the result set to resume listing. It restricts the response to only contain results that occur alphabetically after the value of marker. To retrieve the next page of results, use the last key from the current page of results as the marker in your next request. See also `next_marker`, below. If `marker` is omitted, the first page of results is returned. Returns undef on error and a hashref of data on success: The hashref looks like this: { bucket => $bucket_name, prefix => $bucket_prefix, common_prefixes => [$prefix1,$prefix2,...], marker => $bucket_marker, next_marker => $bucket_next_available_marker, max_keys => $bucket_max_keys, is_truncated => $bucket_is_truncated_boolean, keys => [$key1,$key2,...] } Explanation of bits of that: - common\_prefixes If list\_bucket was requested with a delimiter, common\_prefixes will contain a list of prefixes matching that delimiter. Drill down into these prefixes by making another request with the prefix parameter. - is\_truncated Boolean flag that indicates whether or not all results of your query were returned in this response. If your results were truncated, you can make a follow-up paginated request using the Marker parameter to retrieve the rest of the results. - next\_marker A convenience element, useful when paginating with delimiters. 
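The marker/is\_truncated pagination described above can be sketched as a loop (a sketch only: the bucket name is hypothetical, and it assumes valid AWS credentials in the environment; in practice `list_bucket_all` performs this loop for you):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::Amazon::S3;

my $s3 = Net::Amazon::S3->new(
    {   aws_access_key_id     => $ENV{AWS_ACCESS_KEY_ID},
        aws_secret_access_key => $ENV{AWS_ACCESS_KEY_SECRET},
        retry                 => 1,
    }
);

my $marker = '';
while (1) {
    my $response = $s3->list_bucket(
        {   bucket     => 'my-example-bucket',    # hypothetical bucket name
            'max-keys' => 100,
            # pass marker only on follow-up requests
            ( $marker ? ( marker => $marker ) : () ),
        }
    ) or die $s3->err . ": " . $s3->errstr;

    print $_->{key}, "\n" for @{ $response->{keys} };

    last unless $response->{is_truncated};

    # Resume after the last key seen; with a delimiter request,
    # use next_marker instead.
    $marker = $response->{keys}[-1]{key};
}
```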
The value of `next_marker`, if present, is the largest (alphabetically) of all key names and all CommonPrefixes prefixes in the response. If the `is_truncated` flag is set, request the next page of results by setting `marker` to the value of `next_marker`. This element is only present in the response if the `delimiter` parameter was sent with the request. Each key is a hashref that looks like this: { key => $key, last_modified => $last_mod_date, etag => $etag, # An MD5 sum of the stored content. size => $size, # Bytes storage_class => $storage_class # Doc? owner_id => $owner_id, owner_displayname => $owner_name } ## list\_bucket\_all List all keys in this bucket without having to worry about 'marker'. This is a convenience method, but may make multiple requests to S3 under the hood. Takes the same arguments as list\_bucket. ## add\_key DEPRECATED. DO NOT USE ## get\_key DEPRECATED. DO NOT USE ## head\_key DEPRECATED. DO NOT USE ## delete\_key DEPRECATED. DO NOT USE # LICENSE This module contains code modified from Amazon that contains the following notice: # This software code is made available "AS IS" without warranties of any # kind. You may copy, display, modify and redistribute the software # code either by itself or as incorporated into your code; provided that # you do not remove any proprietary notices. Your use of this software # code is at your own risk and you waive any claim against Amazon # Digital Services, Inc. or its affiliates with respect to your use of # this software code. (c) 2006 Amazon Digital Services, Inc. or its # affiliates. # TESTING Testing S3 is a tricky thing. Amazon wants to charge you a bit of money each time you use their service. And yes, testing counts as using. Because of this, the application's test suite skips anything approaching a real test unless you set these three environment variables: - AMAZON\_S3\_EXPENSIVE\_TESTS Doesn't matter what you set it to. 
Just has to be set - AWS\_ACCESS\_KEY\_ID Your AWS access key - AWS\_ACCESS\_KEY\_SECRET Your AWS sekkr1t passkey. Be forewarned that setting this environment variable on a shared system might leak that information to another user. Be careful. # AUTHOR Leon Brocard and unknown Amazon Digital Services programmers. Brad Fitzpatrick - return values, Bucket object Pedro Figueiredo - since 0.54 # SEE ALSO [Net::Amazon::S3::Bucket](https://metacpan.org/pod/Net::Amazon::S3::Bucket) # AUTHOR Rusty Conover # COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. Net-Amazon-S3-0.80/t/000755 000765 000024 00000000000 12512367152 015144 5ustar00rconoverstaff000000 000000 Net-Amazon-S3-0.80/t/00use.t000644 000765 000024 00000000352 12512367152 016265 0ustar00rconoverstaff000000 000000 #!perl use warnings; use strict; use lib 'lib'; use Test::More tests => 4; use_ok( 'Net::Amazon::S3' ); use_ok( 'Net::Amazon::S3::Client' ); use_ok( 'Net::Amazon::S3::Client::Bucket' ); use_ok( 'Net::Amazon::S3::Client::Object' ); Net-Amazon-S3-0.80/t/01api.t000644 000765 000024 00000027651 12512367152 016256 0ustar00rconoverstaff000000 000000 #!perl use warnings; use strict; use lib 'lib'; use Digest::MD5::File qw(file_md5_hex); use Test::More; unless ( $ENV{'AMAZON_S3_EXPENSIVE_TESTS'} ) { plan skip_all => 'Testing this module for real costs money.'; } else { plan tests => 71 * 2 + 4; } use_ok('Net::Amazon::S3'); use vars qw/$OWNER_ID $OWNER_DISPLAYNAME/; my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, } ); # list all buckets that i own my $response = $s3->buckets; $OWNER_ID 
= $response->{owner_id}; $OWNER_DISPLAYNAME = $response->{owner_displayname}; TODO: { local $TODO = "These tests only work if you're pedro"; like( $response->{owner_id}, qr/^c7483d612ac7f0c0/ ); is( $response->{owner_displayname}, 'pedro_figueiredo' ); is( scalar @{ $response->{buckets} }, 6 ); } for my $location ( undef, 'EU' ) { # create a bucket # make sure it's a valid hostname for EU testing my $bucketname = 'net-amazon-s3-test-' . lc($aws_access_key_id) . '-' . time; # for testing # my $bucket = $s3->bucket($bucketname); $bucket->delete_bucket; exit; my $bucket_obj = $s3->add_bucket( { bucket => $bucketname, acl_short => 'public-read', location_constraint => $location } ) or die $s3->err . ": " . $s3->errstr; is( ref $bucket_obj, "Net::Amazon::S3::Bucket" ); is( $bucket_obj->get_location_constraint, $location ); like_acl_allusers_read($bucket_obj); ok( $bucket_obj->set_acl( { acl_short => 'private' } ) ); unlike_acl_allusers_read($bucket_obj); # another way to get a bucket object (does no network I/O, # assumes it already exists). Read Net::Amazon::S3::Bucket. $bucket_obj = $s3->bucket($bucketname); is( ref $bucket_obj, "Net::Amazon::S3::Bucket" ); # fetch contents of the bucket # note prefix, marker, max_keys options can be passed in $response = $bucket_obj->list or die $s3->err . ": " . $s3->errstr; is( $response->{bucket}, $bucketname ); is( $response->{prefix}, '' ); is( $response->{marker}, '' ); is( $response->{max_keys}, 1_000 ); is( $response->{is_truncated}, 0 ); is_deeply( $response->{keys}, [] ); is( undef, $bucket_obj->get_key("non-existing-key") ); my $keyname = 'testing.txt'; { # Create a publicly readable key, then turn it private with a short acl. # This key will persist past the end of the block. 
my $value = 'T'; $bucket_obj->add_key( $keyname, $value, { content_type => 'text/plain', 'x-amz-meta-colour' => 'orange', acl_short => 'public-read', } ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 200, "can access the publicly readable key" ); like_acl_allusers_read( $bucket_obj, $keyname ); ok( $bucket_obj->set_acl( { key => $keyname, acl_short => 'private' } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 403, "cannot access the private key" ); unlike_acl_allusers_read( $bucket_obj, $keyname ); ok( $bucket_obj->set_acl( { key => $keyname, acl_xml => acl_xml_from_acl_short('public-read') } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 200, "can access the publicly readable key after acl_xml set" ); like_acl_allusers_read( $bucket_obj, $keyname ); ok( $bucket_obj->set_acl( { key => $keyname, acl_xml => acl_xml_from_acl_short('private') } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname", 403, "cannot access the private key after acl_xml set" ); unlike_acl_allusers_read( $bucket_obj, $keyname ); } { # Create a private key, then make it publicly readable with a short # acl. Delete it at the end so we're back to having a single key in # the bucket. 
my $keyname2 = 'testing2.txt'; my $value = 'T2'; $bucket_obj->add_key( $keyname2, $value, { content_type => 'text/plain', 'x-amz-meta-colour' => 'blue', acl_short => 'private', } ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 403, "cannot access the private key" ); unlike_acl_allusers_read( $bucket_obj, $keyname2 ); ok( $bucket_obj->set_acl( { key => $keyname2, acl_short => 'public-read' } ) ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 200, "can access the publicly readable key" ); like_acl_allusers_read( $bucket_obj, $keyname2 ); $bucket_obj->delete_key($keyname2); } { # Copy a key, keeping metadata my $keyname2 = 'testing2.txt'; $bucket_obj->copy_key( $keyname2, "/$bucketname/$keyname" ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 403, "cannot access the private key" ); # Overwrite, making publically readable $bucket_obj->copy_key( $keyname2, "/$bucketname/$keyname", { acl_short => 'public-read' } ); sleep 1; is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 200, "can access the publicly readable key" ); # Now copy it over itself, making it private $bucket_obj->edit_metadata( $keyname2, { short_acl => 'private' } ); is_request_response_code( "http://$bucketname.s3.amazonaws.com/$keyname2", 403, "cannot access the private key" ); # Get rid of it, bringing us back to only one key $bucket_obj->delete_key($keyname2); # Expect a nonexistent key copy to fail ok( !$bucket_obj->copy_key( "newkey", "/$bucketname/$keyname2" ), "Copying a nonexistent key fails" ); } # list keys in the bucket $response = $bucket_obj->list or die $s3->err . ": " . 
$s3->errstr; is( $response->{bucket}, $bucketname ); is( $response->{prefix}, '' ); is( $response->{marker}, '' ); is( $response->{max_keys}, 1_000 ); is( $response->{is_truncated}, 0 ); my @keys = @{ $response->{keys} }; is( @keys, 1 ); my $key = $keys[0]; is( $key->{key}, $keyname ); # the etag is the MD5 of the value is( $key->{etag}, 'b9ece18c950afbfa6b0fdbfa4ff731d3' ); is( $key->{size}, 1 ); is( $key->{owner_id}, $OWNER_ID ); is( $key->{owner_displayname}, $OWNER_DISPLAYNAME ); # You can't delete a bucket with things in it ok( !$bucket_obj->delete_bucket() ); $bucket_obj->delete_key($keyname); # now play with the file methods my $readme_md5 = file_md5_hex('README.md'); my $readme_size = -s 'README.md'; $keyname .= "2"; $bucket_obj->add_key_filename( $keyname, 'README.md', { content_type => 'text/plain', 'x-amz-meta-colour' => 'orangy', } ); $response = $bucket_obj->get_key($keyname); is( $response->{content_type}, 'text/plain' ); like( $response->{value}, qr/Amazon Digital Services/ ); is( $response->{etag}, $readme_md5 ); is( $response->{'x-amz-meta-colour'}, 'orangy' ); is( $response->{content_length}, $readme_size ); unlink('t/README.md'); $response = $bucket_obj->get_key_filename( $keyname, undef, 't/README.md' ); is( $response->{content_type}, 'text/plain' ); is( $response->{value}, '' ); is( $response->{etag}, $readme_md5 ); is( file_md5_hex('t/README.md'), $readme_md5 ); is( $response->{'x-amz-meta-colour'}, 'orangy' ); is( $response->{content_length}, $readme_size ); $bucket_obj->delete_key($keyname); # try empty files $keyname .= "3"; $bucket_obj->add_key( $keyname, '' ); $response = $bucket_obj->get_key($keyname); is( $response->{value}, '' ); is( $response->{etag}, 'd41d8cd98f00b204e9800998ecf8427e' ); is( $response->{content_type}, 'binary/octet-stream' ); is( $response->{content_length}, 0 ); $bucket_obj->delete_key($keyname); # how about using add_key_filename? 
$keyname .= '4'; open FILE, ">", "t/empty" or die "Can't open t/empty for write: $!"; close FILE; $bucket_obj->add_key_filename( $keyname, 't/empty' ); $response = $bucket_obj->get_key($keyname); is( $response->{value}, '' ); is( $response->{etag}, 'd41d8cd98f00b204e9800998ecf8427e' ); is( $response->{content_type}, 'binary/octet-stream' ); is( $response->{content_length}, 0 ); $bucket_obj->delete_key($keyname); unlink 't/empty'; # fetch contents of the bucket # note prefix, marker, max_keys options can be passed in $response = $bucket_obj->list or die $s3->err . ": " . $s3->errstr; is( $response->{bucket}, $bucketname ); is( $response->{prefix}, '' ); is( $response->{marker}, '' ); is( $response->{max_keys}, 1_000 ); is( $response->{is_truncated}, 0 ); is_deeply( $response->{keys}, [] ); ok( $bucket_obj->delete_bucket() ); } # see more docs in Net::Amazon::S3::Bucket # local test methods sub is_request_response_code { my ( $url, $code, $message ) = @_; my $request = HTTP::Request->new( 'GET', $url ); #warn $request->as_string(); my $response = $s3->ua->request($request); is( $response->code, $code, $message ); } sub like_acl_allusers_read { my ( $bucketobj, $keyname ) = @_; my $message = acl_allusers_read_message( 'like', @_ ); like( $bucketobj->get_acl($keyname), qr(AllUsers.+READ), $message ); } sub unlike_acl_allusers_read { my ( $bucketobj, $keyname ) = @_; my $message = acl_allusers_read_message( 'unlike', @_ ); unlike( $bucketobj->get_acl($keyname), qr(AllUsers.+READ), $message ); } sub acl_allusers_read_message { my ( $like_or_unlike, $bucketobj, $keyname ) = @_; my $message = $like_or_unlike . "_acl_allusers_read: " . 
$bucketobj->bucket; $message .= " - $keyname" if $keyname; return $message; } sub acl_xml_from_acl_short { my $acl_short = shift || 'private'; my $public_read = ''; if ( $acl_short eq 'public-read' ) { $public_read = qq~
<Grant>
  <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
    <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
  </Grantee>
  <Permission>READ</Permission>
</Grant>
~; } return qq~
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>$OWNER_ID</ID>
    <DisplayName>$OWNER_DISPLAYNAME</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>$OWNER_ID</ID>
        <DisplayName>$OWNER_DISPLAYNAME</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
    $public_read
  </AccessControlList>
</AccessControlPolicy>
~; } Net-Amazon-S3-0.80/t/02client.t000755 000765 000024 00000025226 12512367152 016763 0ustar00rconoverstaff000000 000000 #!perl use warnings; use strict; use lib 'lib'; use Digest::MD5::File qw(file_md5_hex); use LWP::Simple; use File::stat; use Test::More; use Test::Exception; use File::Temp qw/ :seekable /; unless ( $ENV{'AMAZON_S3_EXPENSIVE_TESTS'} ) { plan skip_all => 'Testing this module for real costs money.'; } else { plan tests => 54; } use_ok('Net::Amazon::S3'); my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; my $s3 = Net::Amazon::S3->new( aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, ); my $readme_size = stat('README.md')->size; my $readme_md5hex = file_md5_hex('README.md'); my $client = Net::Amazon::S3::Client->new( s3 => $s3 ); my @buckets = $client->buckets; TODO: { local $TODO = "These tests only work if you're pedro"; my $first_bucket = $buckets[0]; like( $first_bucket->owner_id, qr/^c7483d612ac7f0c0/, 'have owner id' ); is( $first_bucket->owner_display_name, 'pedro_figueiredo', 'have display name' ); is( scalar @buckets, 6, 'have a bunch of buckets' ); } my $bucket_name = 'net-amazon-s3-test-' . lc($aws_access_key_id) . '-'.
time; my $bucket = $client->create_bucket( name => $bucket_name, acl_short => 'public-read', location_constraint => 'EU', ); is( $bucket->name, $bucket_name, 'newly created bucket has correct name' ); like( $bucket->acl, qr{[a-z0-9]{64}.+?[a-z0-9]{64}.+?FULL_CONTROLhttp://acs.amazonaws.com/groups/global/AllUsersREAD}, 'newly created bucket is public-readable' ); is( $bucket->location_constraint, 'EU', 'newly created bucket is in the EU' ); my $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { $object->delete; } } my $count = 0; $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { $count++; } } is( $count, 0, 'newly created bucket has no objects' ); my $object = $bucket->object( key => 'this is the key' ); ok( !$object->exists, 'object does not exist yet' ); $object->put('this is the value'); ok( $object->exists, 'object now exists yet' ); my @objects; @objects = (); $stream = $bucket->list( { prefix => 'this is the key' } ); until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'bucket list with prefix finds key' ); @objects = (); $stream = $bucket->list( { prefix => 'this is not the key' } ); until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 0, 'bucket list with different prefix does not find key' ); @objects = (); $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'bucket list finds newly created key' ); is( $objects[0]->key, 'this is the key', 'newly created object has the right key' ); is( $objects[0]->etag, '94325a12f8db22ffb6934cc5f22f6698', 'newly created object has the right etag' ); is( $objects[0]->size, '17', 'newly created object has the right size' ); is( $object->get, 'this is the value', 'newly created object has the right value' ); is( $bucket->object( key => 
'this is the key' )->get, 'this is the value', 'newly created object fetched by name has the right value' ); is( get( $object->uri ), undef, 'newly created object cannot be fetched by uri' ); $object->expires('2037-01-01'); is( get( $object->query_string_authentication_uri() ), 'this is the value', 'newly created object can be fetched by authentication uri' ); $object->delete; # upload a public object $object = $bucket->object( key => 'this is the public key', acl_short => 'public-read', content_type => 'text/plain', content_encoding => 'identity', expires => '2001-02-03', ); $object->put('this is the public value'); is( get( $object->uri ), 'this is the public value', 'newly created public object is publicly accessible' ); is( ( head( $object->uri ) )[0], 'text/plain', 'newly created public object has the right content type' ); is( ( head( $object->uri ) )[3], $object->expires->epoch, 'newly created public object has the right expires' ); $object->delete; # delete a non-existent object $object = $bucket->object( key => 'not here' ); throws_ok { $object->get } qr/NoSuchKey/, 'getting non-existent object throws exception'; # upload a file with put_filename $object = $bucket->object( key => 'the readme' ); $object->put_filename('README.md'); @objects = (); $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'have newly uploaded object' ); is( $objects[0]->key, 'the readme', 'newly uploaded object has the right key' ); is( $objects[0]->etag, $readme_md5hex, 'newly uploaded object has the right etag' ); is( $objects[0]->size, $readme_size, 'newly created object has the right size' ); ok( $objects[0]->last_modified, 'newly created object has a last modified' ); $object->delete; # upload a public object with put_filename $object = $bucket->object( key => 'the public readme', acl_short => 'public-read' ); $object->put_filename('README.md'); is( length( get( $object->uri ) ),
$readme_size, 'newly uploaded public object has the right size' ); $object->delete; # upload a file with put_filename with known md5hex size and AES256 encryption $object = $bucket->object( key => 'the new readme', etag => $readme_md5hex, size => $readme_size, encryption => 'AES256' ); $object->put_filename('README.md'); @objects = (); $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { push @objects, $object; } } is( @objects, 1, 'have newly uploaded object' ); is( $objects[0]->key, 'the new readme', 'newly uploaded object has the right key' ); is( $objects[0]->etag, $readme_md5hex, 'newly uploaded object has the right etag' ); is( $objects[0]->size, $readme_size, 'newly created object has the right size' ); ok( $objects[0]->last_modified, 'newly created object has a last modified' ); # download an object with get_filename my $tmp_fh = File::Temp->new(); $object->get_filename($tmp_fh->filename); is( stat($tmp_fh->filename)->size, $readme_size, 'download has right size' ); is( file_md5_hex($tmp_fh->filename), $readme_md5hex, 'download has right etag' ); $object->delete; # upload a public object with put_filename with known md5hex and size $object = $bucket->object( key => 'the new public readme', etag => $readme_md5hex, size => $readme_size, acl_short => 'public-read' ); $object->put_filename( 'README.md', $readme_md5hex, $readme_size ); is( length( get( $object->uri ) ), $readme_size, 'newly uploaded public object has the right size' ); $object->delete; { # upload an object using multipart upload and then abort it $object = $bucket->object( key => 'new multipart file soon to be aborted', acl_short => 'public-read' ); my $upload_id; ok( $upload_id = $object->initiate_multipart_upload, "can initiate a new multipart upload -- $upload_id" ); #put part my $put_part_response; ok( $put_part_response = $object->put_part( part_number => 1, upload_id => $upload_id, value => 'x' x ( 5 * 1024 * 1024 ) ), 'Got a successful response for 
PUT part' ); ok( $put_part_response->header('ETag'), 'etag ok' ); ok( my $abort_response = $object->abort_multipart_upload( upload_id => $upload_id ), 'Got a successful response for DELETE multipart upload' ); ok( !$object->exists, "object has now been deleted" ); } # upload an object using multipart upload $object = $bucket->object( key => 'new multipart file', acl_short => 'public-read' ); my $upload_id; ok($upload_id = $object->initiate_multipart_upload, "can initiate a new multipart upload"); #put part my $put_part_response; ok( $put_part_response = $object->put_part(part_number => 1, upload_id => $upload_id, value => 'x' x (5 * 1024 * 1024)), 'Got a successful response for PUT part' ); my @etags; push @etags, $put_part_response->header('ETag'); ok( $put_part_response = $object->put_part(part_number => 2, upload_id => $upload_id, value => 'z' x (1024 * 1024)), 'Got a successful response for 2nd PUT part' ); push @etags, $put_part_response->header('ETag'); # TODO list part? - We've got this, but how to expose it nicely? 
#complete multipart upload my $complete_upload_response; ok( $complete_upload_response = $object->complete_multipart_upload( upload_id => $upload_id, part_numbers => [1,2], etags => \@etags), "successful response for complete multipart upload" ); #get the file and check that it looks like we expect ok($object->exists, "object has now been created"); $tmp_fh = File::Temp->new(); $object->get_filename($tmp_fh->filename); is( stat($tmp_fh->filename)->size, 6 * 1024 * 1024, "downloaded file has a size equivalent to the sum of it's parts"); $tmp_fh->seek((5 * 1024 * 1024) - 1, SEEK_SET);#jump to 5MB position my $test_bytes; read($tmp_fh, $test_bytes, 2); is($test_bytes, "xz", "The second chunk of the file begins in the correct place"); #test listing a multipart object $stream = $bucket->list({prefix => 'new multipart file'}); lives_ok {my @items = $stream->items} 'Listing a multipart file does not throw an exeption'; $object->delete; #test multi-object delete #make 3 identical objects @objects =(); for my $i(1..3){ my $bulk_object = $bucket->object( key => "bulk-readme-$i", etag => $readme_md5hex, size => $readme_size ); $bulk_object->put_filename('README.md'); push @objects, $bulk_object; } #now delete 2 of those objects ok($bucket->delete_multi_object(@objects[0..1]), "executed multi delete operation"); ok( !grep($_->exists, @objects[0..1]), "target objects no longer exist"); ok( $objects[2]->exists, "object not included in multi-object delete still exists" ); $objects[2]->delete; $bucket->delete; Net-Amazon-S3-0.80/t/03token.t000644 000765 000024 00000001631 12512367152 016615 0ustar00rconoverstaff000000 000000 #!perl use warnings; use strict; use lib 'lib'; use Test::More; use Test::Exception; unless ( $ENV{'AWS_ACCESS_KEY_ID'} and $ENV{'AWS_ACCESS_KEY_SECRET'} and $ENV{'AWS_ACCESS_TOKEN'} ) { plan skip_all => 'Need these vars in ENV: AWS_ACCESS_KEY_ID, ' . 
'AWS_ACCESS_KEY_SECRET, AWS_ACCESS_TOKEN'; } else { plan tests => 1 + 1; } use_ok('Net::Amazon::S3'); my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; my $aws_session_token = $ENV{'AWS_ACCESS_TOKEN'}; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, aws_session_token => $aws_session_token, retry => 1, } ); # list all buckets that i own my $response = $s3->buckets; ok($response, "Authentication with token succeded"); Net-Amazon-S3-0.80/lib/Net/000755 000765 000024 00000000000 12512367152 016175 5ustar00rconoverstaff000000 000000 Net-Amazon-S3-0.80/lib/Net/Amazon/000755 000765 000024 00000000000 12512367152 017422 5ustar00rconoverstaff000000 000000 Net-Amazon-S3-0.80/lib/Net/Amazon/S3/000755 000765 000024 00000000000 12512367152 017707 5ustar00rconoverstaff000000 000000 Net-Amazon-S3-0.80/lib/Net/Amazon/S3.pm000755 000765 000024 00000061205 12512367152 020254 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3; $Net::Amazon::S3::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; # ABSTRACT: Use the Amazon S3 - Simple Storage Service use Carp; use Digest::HMAC_SHA1; use Net::Amazon::S3::Bucket; use Net::Amazon::S3::Client; use Net::Amazon::S3::Client::Bucket; use Net::Amazon::S3::Client::Object; use Net::Amazon::S3::HTTPRequest; use Net::Amazon::S3::Request; use Net::Amazon::S3::Request::AbortMultipartUpload; use Net::Amazon::S3::Request::CompleteMultipartUpload; use Net::Amazon::S3::Request::CreateBucket; use Net::Amazon::S3::Request::DeleteBucket; use Net::Amazon::S3::Request::DeleteMultiObject; use Net::Amazon::S3::Request::DeleteObject; use Net::Amazon::S3::Request::GetBucketAccessControl; use Net::Amazon::S3::Request::GetBucketLocationConstraint; use Net::Amazon::S3::Request::GetObject; use Net::Amazon::S3::Request::GetObjectAccessControl; use Net::Amazon::S3::Request::InitiateMultipartUpload; use 
Net::Amazon::S3::Request::ListAllMyBuckets; use Net::Amazon::S3::Request::ListBucket; use Net::Amazon::S3::Request::ListParts; use Net::Amazon::S3::Request::PutObject; use Net::Amazon::S3::Request::PutPart; use Net::Amazon::S3::Request::SetBucketAccessControl; use Net::Amazon::S3::Request::SetObjectAccessControl; use LWP::UserAgent::Determined; use URI::Escape qw(uri_escape_utf8); use XML::LibXML; use XML::LibXML::XPathContext; has 'use_iam_role' => ( is => 'ro', isa => 'Bool', required => 0, default => 0); has 'aws_access_key_id' => ( is => 'rw', isa => 'Str', required => 0 ); has 'aws_secret_access_key' => ( is => 'rw', isa => 'Str', required => 0 ); has 'secure' => ( is => 'ro', isa => 'Bool', required => 0, default => 0 ); has 'timeout' => ( is => 'ro', isa => 'Num', required => 0, default => 30 ); has 'retry' => ( is => 'ro', isa => 'Bool', required => 0, default => 0 ); has 'host' => ( is => 'ro', isa => 'Str', required => 0, default => 's3.amazonaws.com' ); has 'libxml' => ( is => 'rw', isa => 'XML::LibXML', required => 0 ); has 'ua' => ( is => 'rw', isa => 'LWP::UserAgent', required => 0 ); has 'err' => ( is => 'rw', isa => 'Maybe[Str]', required => 0 ); has 'errstr' => ( is => 'rw', isa => 'Maybe[Str]', required => 0 ); has 'aws_session_token' => ( is => 'rw', isa => 'Str', required => 0 ); __PACKAGE__->meta->make_immutable; my $KEEP_ALIVE_CACHESIZE = 10; sub BUILD { my $self = shift; if (!$self->use_iam_role) { if (!defined($self->aws_secret_access_key) || !defined($self->aws_access_key_id)) { die("Must specify aws_secret_access_key and aws_access_key_id"); } } my $ua; if ( $self->retry ) { $ua = LWP::UserAgent::Determined->new( keep_alive => $KEEP_ALIVE_CACHESIZE, requests_redirectable => [qw(GET HEAD DELETE PUT POST)], ); $ua->timing('1,2,4,8,16,32'); } else { $ua = LWP::UserAgent->new( keep_alive => $KEEP_ALIVE_CACHESIZE, requests_redirectable => [qw(GET HEAD DELETE PUT POST)], ); } $ua->timeout( $self->timeout ); $ua->env_proxy; $self->ua($ua); 
$self->libxml( XML::LibXML->new ); } sub buckets { my $self = shift; my $http_request = Net::Amazon::S3::Request::ListAllMyBuckets->new( s3 => $self ) ->http_request; # die $request->http_request->as_string; my $xpc = $self->_send_request($http_request); return undef unless $xpc && !$self->_remember_errors($xpc); my $owner_id = $xpc->findvalue("//s3:Owner/s3:ID"); my $owner_displayname = $xpc->findvalue("//s3:Owner/s3:DisplayName"); my @buckets; foreach my $node ( $xpc->findnodes(".//s3:Bucket") ) { push @buckets, Net::Amazon::S3::Bucket->new( { bucket => $xpc->findvalue( ".//s3:Name", $node ), creation_date => $xpc->findvalue( ".//s3:CreationDate", $node ), account => $self, } ); } return { owner_id => $owner_id, owner_displayname => $owner_displayname, buckets => \@buckets, }; } sub add_bucket { my ( $self, $conf ) = @_; my $http_request = Net::Amazon::S3::Request::CreateBucket->new( s3 => $self, bucket => $conf->{bucket}, acl_short => $conf->{acl_short}, location_constraint => $conf->{location_constraint}, )->http_request; return 0 unless $self->_send_request_expect_nothing($http_request); return $self->bucket( $conf->{bucket} ); } sub bucket { my ( $self, $bucketname ) = @_; return Net::Amazon::S3::Bucket->new( { bucket => $bucketname, account => $self } ); } sub delete_bucket { my ( $self, $conf ) = @_; my $bucket; if ( eval { $conf->isa("Net::Amazon::S3::Bucket"); } ) { $bucket = $conf->bucket; } else { $bucket = $conf->{bucket}; } croak 'must specify bucket' unless $bucket; my $http_request = Net::Amazon::S3::Request::DeleteBucket->new( s3 => $self, bucket => $bucket, )->http_request; return $self->_send_request_expect_nothing($http_request); } sub list_bucket { my ( $self, $conf ) = @_; my $http_request = Net::Amazon::S3::Request::ListBucket->new( s3 => $self, bucket => $conf->{bucket}, delimiter => $conf->{delimiter}, max_keys => $conf->{max_keys}, marker => $conf->{marker}, prefix => $conf->{prefix}, )->http_request; my $xpc =
$self->_send_request($http_request); return undef unless $xpc && !$self->_remember_errors($xpc); my $return = { bucket => $xpc->findvalue("//s3:ListBucketResult/s3:Name"), prefix => $xpc->findvalue("//s3:ListBucketResult/s3:Prefix"), marker => $xpc->findvalue("//s3:ListBucketResult/s3:Marker"), next_marker => $xpc->findvalue("//s3:ListBucketResult/s3:NextMarker"), max_keys => $xpc->findvalue("//s3:ListBucketResult/s3:MaxKeys"), is_truncated => ( scalar $xpc->findvalue("//s3:ListBucketResult/s3:IsTruncated") eq 'true' ? 1 : 0 ), }; my @keys; foreach my $node ( $xpc->findnodes(".//s3:Contents") ) { my $etag = $xpc->findvalue( ".//s3:ETag", $node ); $etag =~ s/^"//; $etag =~ s/"$//; push @keys, { key => $xpc->findvalue( ".//s3:Key", $node ), last_modified => $xpc->findvalue( ".//s3:LastModified", $node ), etag => $etag, size => $xpc->findvalue( ".//s3:Size", $node ), storage_class => $xpc->findvalue( ".//s3:StorageClass", $node ), owner_id => $xpc->findvalue( ".//s3:ID", $node ), owner_displayname => $xpc->findvalue( ".//s3:DisplayName", $node ), }; } $return->{keys} = \@keys; if ( $conf->{delimiter} ) { my @common_prefixes; my $strip_delim = qr/$conf->{delimiter}$/; foreach my $node ( $xpc->findnodes(".//s3:CommonPrefixes") ) { my $prefix = $xpc->findvalue( ".//s3:Prefix", $node ); # strip delimiter from end of prefix $prefix =~ s/$strip_delim//; push @common_prefixes, $prefix; } $return->{common_prefixes} = \@common_prefixes; } return $return; } sub list_bucket_all { my ( $self, $conf ) = @_; $conf ||= {}; my $bucket = $conf->{bucket}; croak 'must specify bucket' unless $bucket; my $response = $self->list_bucket($conf); return $response unless $response->{is_truncated}; my $all = $response; while (1) { my $next_marker = $response->{next_marker} || $response->{keys}->[-1]->{key}; $conf->{marker} = $next_marker; $conf->{bucket} = $bucket; $response = $self->list_bucket($conf); push @{ $all->{keys} }, @{ $response->{keys} }; last unless $response->{is_truncated}; } 
delete $all->{is_truncated}; delete $all->{next_marker}; return $all; } sub _compat_bucket { my ( $self, $conf ) = @_; return Net::Amazon::S3::Bucket->new( { account => $self, bucket => delete $conf->{bucket} } ); } # compat wrapper; deprecated as of 2005-03-23 sub add_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); my $key = delete $conf->{key}; my $value = delete $conf->{value}; return $bucket->add_key( $key, $value, $conf ); } # compat wrapper; deprecated as of 2005-03-23 sub get_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); return $bucket->get_key( $conf->{key} ); } # compat wrapper; deprecated as of 2005-03-23 sub head_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); return $bucket->head_key( $conf->{key} ); } # compat wrapper; deprecated as of 2005-03-23 sub delete_key { my ( $self, $conf ) = @_; my $bucket = $self->_compat_bucket($conf); return $bucket->delete_key( $conf->{key} ); } sub _validate_acl_short { my ( $self, $policy_name ) = @_; if (!grep( { $policy_name eq $_ } qw(private public-read public-read-write authenticated-read) ) ) { croak "$policy_name is not a supported canned access policy"; } } # $self->_send_request($HTTP::Request) # $self->_send_request(@params_to_make_request) sub _send_request { my ( $self, $http_request ) = @_; # warn $http_request->as_string; my $response = $self->_do_http($http_request); my $content = $response->content; return $content unless $response->content_type eq 'application/xml'; return unless $content; return $self->_xpc_of_content($content); } # centralize all HTTP work, for debugging sub _do_http { my ( $self, $http_request, $filename ) = @_; confess 'Need HTTP::Request object' if ( ref($http_request) ne 'HTTP::Request' ); # convenient time to reset any error conditions $self->err(undef); $self->errstr(undef); return $self->ua->request( $http_request, $filename ); } sub _send_request_expect_nothing { my ( $self, $http_request ) = 
@_; # warn $http_request->as_string; my $response = $self->_do_http($http_request); return 1 if $response->code =~ /^2\d\d$/; # anything else is a failure, and we save the parsed result $self->_remember_errors( $response->content ); return 0; } sub _croak_if_response_error { my ( $self, $response ) = @_; unless ( $response->code =~ /^2\d\d$/ ) { $self->err("network_error"); $self->errstr( $response->status_line ); croak "Net::Amazon::S3: Amazon responded with " . $response->status_line . "\n"; } } sub _xpc_of_content { my ( $self, $content ) = @_; my $doc = $self->libxml->parse_string($content); # warn $doc->toString(1); my $xpc = XML::LibXML::XPathContext->new($doc); $xpc->registerNs( 's3', 'http://s3.amazonaws.com/doc/2006-03-01/' ); return $xpc; } # returns 1 if errors were found sub _remember_errors { my ( $self, $src ) = @_; # Do not try to parse non-xml unless ( ref $src || $src =~ m/^[[:space:]]*</ ) { ( my $code = $src ) =~ s/^[[:space:]]*\(([0-9]*)\).*$/$1/s; $self->err($code); $self->errstr($src); return 1; } my $xpc = ref $src ? $src : $self->_xpc_of_content($src); if ( $xpc->findnodes("//Error") ) { $self->err( $xpc->findvalue("//Error/Code") ); $self->errstr( $xpc->findvalue("//Error/Message") ); return 1; } return 0; } sub _urlencode { my ( $self, $unencoded ) = @_; return uri_escape_utf8( $unencoded, '^A-Za-z0-9_\-\.' ); } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3 - Use the Amazon S3 - Simple Storage Service =head1 VERSION version 0.80 =head1 SYNOPSIS use Net::Amazon::S3; my $aws_access_key_id = 'fill me in'; my $aws_secret_access_key = 'fill me in too'; my $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, # or use an IAM role. use_iam_role => 1, retry => 1, } ); # a bucket is a globally-unique directory # list all buckets that i own my $response = $s3->buckets; foreach my $bucket ( @{ $response->{buckets} } ) { print "You have a bucket: " . $bucket->bucket . 
"\n"; } # create a new bucket my $bucketname = 'acmes_photo_backups'; my $bucket = $s3->add_bucket( { bucket => $bucketname } ) or die $s3->err . ": " . $s3->errstr; # or use an existing bucket $bucket = $s3->bucket($bucketname); # store a file in the bucket $bucket->add_key_filename( '1.JPG', 'DSC06256.JPG', { content_type => 'image/jpeg', }, ) or die $s3->err . ": " . $s3->errstr; # store a value in the bucket $bucket->add_key( 'reminder.txt', 'this is where my photos are backed up' ) or die $s3->err . ": " . $s3->errstr; # list files in the bucket $response = $bucket->list_all or die $s3->err . ": " . $s3->errstr; foreach my $key ( @{ $response->{keys} } ) { my $key_name = $key->{key}; my $key_size = $key->{size}; print "Bucket contains key '$key_name' of size $key_size\n"; } # fetch file from the bucket $response = $bucket->get_key_filename( '1.JPG', 'GET', 'backup.jpg' ) or die $s3->err . ": " . $s3->errstr; # fetch value from the bucket $response = $bucket->get_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; print "reminder.txt:\n"; print " content length: " . $response->{content_length} . "\n"; print " content type: " . $response->{content_type} . "\n"; print " etag: " . $response->{etag} . "\n"; print " content: " . $response->{value} . "\n"; # delete keys $bucket->delete_key('reminder.txt') or die $s3->err . ": " . $s3->errstr; $bucket->delete_key('1.JPG') or die $s3->err . ": " . $s3->errstr; # and finally delete the bucket $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr; =head1 DESCRIPTION This module provides a Perlish interface to Amazon S3. From the developer blurb: "Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. 
It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The service aims to maximize benefits of scale and to pass those benefits on to developers". To find out more about S3, please visit: http://s3.amazonaws.com/ To use this module you will need to sign up to Amazon Web Services and provide an "Access Key ID" and "Secret Access Key". If you use this module, you will incur costs as specified by Amazon. Please check the costs. If you use this module with your Access Key ID and Secret Access Key you are responsible for these costs. I highly recommend reading all about S3, but in a nutshell data is stored in values. Values are referenced by keys, and keys are stored in buckets. Bucket names are global. Note: This is the legacy interface, please check out L<Net::Amazon::S3::Client> instead. Development of this code happens here: http://github.com/pfig/net-amazon-s3/ Homepage for the project (just started) is at http://pfig.github.com/net-amazon-s3/ =head1 METHODS =head2 new Create a new S3 client object. Takes some arguments: =over =item aws_access_key_id Use your Access Key ID as the value of the AWSAccessKeyId parameter in requests you send to Amazon Web Services (when required). Your Access Key ID identifies you as the party responsible for the request. =item aws_secret_access_key Since your Access Key ID is not encrypted in requests to AWS, it could be discovered and used by anyone. Services that are not free require you to provide additional information, a request signature, to verify that a request containing your unique Access Key ID could only have come from you. DO NOT INCLUDE THIS IN SCRIPTS OR APPLICATIONS YOU DISTRIBUTE. YOU'LL BE SORRY =item aws_session_token If you are using temporary credentials provided by the AWS Security Token Service, set the token here, and it will be added to the request in order to authenticate it. 
=item use_iam_role If you'd like to use IAM provided temporary credentials, pass this option with a true value. =item secure Set this to C<1> if you want to use SSL-encrypted connections when talking to S3. Defaults to C<0>. =item timeout How many seconds should your script wait before bailing on a request to S3? Defaults to 30. =item retry Whether this library should retry upon errors. This option is recommended. This uses exponential backoff with retries after 1, 2, 4, 8, 16, 32 seconds, as recommended by Amazon. Defaults to off. =item host The S3 host endpoint to use. Defaults to 's3.amazonaws.com'. This allows you to connect to any S3-compatible host. =back =head2 buckets Returns undef on error, else hashref of results =head2 add_bucket Takes a hashref: =over =item bucket The name of the bucket you want to add =item acl_short (optional) See the set_acl subroutine for documentation on the acl_short options =item location_constraint (optional) Sets the location constraint of the new bucket. If left unspecified, the default S3 datacenter location will be used. Otherwise, you can set it to 'EU' for a European data center - note that costs are different. =back Returns 0 on failure, Net::Amazon::S3::Bucket object on success =head2 bucket BUCKET Takes a scalar argument, the name of the bucket you're creating Returns an (unverified) bucket object from an account. Does no network access. =head2 delete_bucket Takes either a L<Net::Amazon::S3::Bucket> object or a hashref containing =over =item bucket The name of the bucket to remove =back Returns false (and fails) if the bucket isn't empty. Returns true if the bucket is successfully deleted. =head2 list_bucket List all keys in this bucket. Takes a hashref of arguments: MANDATORY =over =item bucket The name of the bucket you want to list keys on =back OPTIONAL =over =item prefix Restricts the response to only contain results that begin with the specified prefix. 
If you omit this optional argument, the value of prefix for your query will be the empty string. In other words, the results will not be restricted by prefix. =item delimiter If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response. For example, with prefix="USA/" and delimiter="/", the matching keys "USA/Oregon/Salem" and "USA/Oregon/Portland" would be summarized in the response as a single "USA/Oregon" element in the CommonPrefixes collection. If an otherwise matching key does not contain the delimiter after the prefix, it appears in the Contents collection. Each element in the CommonPrefixes collection counts as one against the MaxKeys limit. The rolled-up keys represented by each CommonPrefixes element do not. If the Delimiter parameter is not present in your request, keys in the result set will not be rolled-up and neither the CommonPrefixes collection nor the NextMarker element will be present in the response. =item max-keys This optional argument limits the number of results returned in response to your query. Amazon S3 will return no more than this number of results, but possibly less. Even if max-keys is not specified, Amazon S3 will limit the number of results in the response. Check the IsTruncated flag to see if your results are incomplete. If so, use the Marker parameter to request the next page of results. For the purpose of counting max-keys, a 'result' is either a key in the 'Contents' collection, or a delimited prefix in the 'CommonPrefixes' collection. So for delimiter requests, max-keys limits the total number of list results, not just the number of keys. =item marker This optional parameter enables pagination of large result sets. C<marker> specifies where in the result set to resume listing. 
It restricts the response to only contain results that occur alphabetically after the value of marker. To retrieve the next page of results, use the last key from the current page of results as the marker in your next request. See also C<next_marker>, below. If C<marker> is omitted, the first page of results is returned. =back Returns undef on error and a hashref of data on success: The hashref looks like this: { bucket => $bucket_name, prefix => $bucket_prefix, common_prefixes => [$prefix1,$prefix2,...], marker => $bucket_marker, next_marker => $bucket_next_available_marker, max_keys => $bucket_max_keys, is_truncated => $bucket_is_truncated_boolean, keys => [$key1,$key2,...] } Explanation of bits of that: =over =item common_prefixes If list_bucket was requested with a delimiter, common_prefixes will contain a list of prefixes matching that delimiter. Drill down into these prefixes by making another request with the prefix parameter. =item is_truncated B<Boolean> flag that indicates whether or not all results of your query were returned in this response. If your results were truncated, you can make a follow-up paginated request using the Marker parameter to retrieve the rest of the results. =item next_marker A convenience element, useful when paginating with delimiters. The value of C<next_marker>, if present, is the largest (alphabetically) of all key names and all CommonPrefixes prefixes in the response. If the C<is_truncated> flag is set, request the next page of results by setting C<marker> to the value of C<next_marker>. This element is only present in the response if the C<delimiter> parameter was sent with the request. =back Each key is a hashref that looks like this: { key => $key, last_modified => $last_mod_date, etag => $etag, # An MD5 sum of the stored content. size => $size, # Bytes storage_class => $storage_class, # Doc? owner_id => $owner_id, owner_displayname => $owner_name } =head2 list_bucket_all List all keys in this bucket without having to worry about 'marker'. 
This is a convenience method, but may make multiple requests to S3 under the hood. Takes the same arguments as list_bucket. =head2 add_key DEPRECATED. DO NOT USE =head2 get_key DEPRECATED. DO NOT USE =head2 head_key DEPRECATED. DO NOT USE =head2 delete_key DEPRECATED. DO NOT USE =head1 LICENSE This module contains code modified from Amazon that contains the following notice: # This software code is made available "AS IS" without warranties of any # kind. You may copy, display, modify and redistribute the software # code either by itself or as incorporated into your code; provided that # you do not remove any proprietary notices. Your use of this software # code is at your own risk and you waive any claim against Amazon # Digital Services, Inc. or its affiliates with respect to your use of # this software code. (c) 2006 Amazon Digital Services, Inc. or its # affiliates. =head1 TESTING Testing S3 is a tricky thing. Amazon wants to charge you a bit of money each time you use their service. And yes, testing counts as using. Because of this, the application's test suite skips anything approaching a real test unless you set these three environment variables: =over =item AMAZON_S3_EXPENSIVE_TESTS Doesn't matter what you set it to. Just has to be set =item AWS_ACCESS_KEY_ID Your AWS access key =item AWS_ACCESS_KEY_SECRET Your AWS sekkr1t passkey. Be forewarned that setting this environment variable on a shared system might leak that information to another user. Be careful. =back =head1 AUTHOR Leon Brocard and unknown Amazon Digital Services programmers. Brad Fitzpatrick - return values, Bucket object Pedro Figueiredo - since 0.54 =head1 SEE ALSO L<Net::Amazon::S3::Bucket> =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
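The marker-driven pagination that C<list_bucket_all> performs can be sketched in plain Perl. The helper below is an illustrative stand-in, not part of the module: the C<$fetch> callback plays the role of C<list_bucket> and must return a hashref shaped like the one documented above (C<keys>, C<is_truncated>, C<next_marker>).

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Sketch of the list_bucket_all pagination loop, written against a
# pluggable $fetch callback so it can be exercised without talking to S3.
sub list_all_keys {
    my ( $fetch, %conf ) = @_;
    my @keys;
    while (1) {
        my $page = $fetch->(%conf);
        push @keys, @{ $page->{keys} };
        last unless $page->{is_truncated};
        # Resume after the last key seen, preferring next_marker when
        # the server provided one (delimiter requests).
        $conf{marker} = $page->{next_marker} || $page->{keys}[-1]{key};
    }
    return \@keys;
}

# Fake two-page listing for illustration.
my @pages = (
    { keys => [ { key => 'a' }, { key => 'b' } ], is_truncated => 1 },
    { keys => [ { key => 'c' } ],                 is_truncated => 0 },
);
my $all = list_all_keys( sub { shift @pages }, bucket => 'example-bucket' );
print join( ',', map { $_->{key} } @$all ), "\n";    # a,b,c
```

The real method additionally deletes C<is_truncated> and C<next_marker> from the merged result, as shown in the source above.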
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Bucket.pm000644 000765 000024 00000036121 12512367152 021465 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Bucket; $Net::Amazon::S3::Bucket::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; use Carp; use File::stat; use IO::File 1.14; has 'account' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); has 'bucket' => ( is => 'ro', isa => 'Str', required => 1 ); has 'creation_date' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; # ABSTRACT: convenience object for working with Amazon S3 buckets sub _uri { my ( $self, $key ) = @_; return ($key) ? $self->bucket . "/" . $self->account->_urlencode($key) : $self->bucket . "/"; } sub _conf_to_headers { my ( $self, $conf ) = @_; $conf = {} unless defined $conf; $conf = {%$conf}; # clone it so as not to clobber the caller's copy if ( $conf->{acl_short} ) { $self->account->_validate_acl_short( $conf->{acl_short} ); $conf->{'x-amz-acl'} = $conf->{acl_short}; delete $conf->{acl_short}; } return $conf; } # returns bool sub add_key { my ( $self, $key, $value, $conf ) = @_; if ( ref($value) eq 'SCALAR' ) { $conf->{'Content-Length'} ||= -s $$value; $value = _content_sub($$value); } else { $conf->{'Content-Length'} ||= length $value; } my $acl_short; if ( $conf->{acl_short} ) { $acl_short = $conf->{acl_short}; delete $conf->{acl_short}; } my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->account, bucket => $self->bucket, key => $key, value => $value, acl_short => $acl_short, headers => $conf, )->http_request; if ( ref($value) ) { # we may get a 307 redirect; ask server to signal 100 Continue # before reading the content from CODE reference (_content_sub) $http_request->header('Expect' => '100-continue'); } return $self->account->_send_request_expect_nothing($http_request); } sub add_key_filename { my ( $self, $key, $value, $conf ) = @_; return $self->add_key( $key, \$value, $conf ); } 
sub copy_key { my ( $self, $key, $source, $conf ) = @_; my $acl_short; if ( defined $conf ) { if ( $conf->{acl_short} ) { $acl_short = $conf->{acl_short}; delete $conf->{acl_short}; } $conf->{'x-amz-metadata-directive'} ||= 'REPLACE'; } else { $conf = {}; } $conf->{'x-amz-copy-source'} = $source; my $acct = $self->account; my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->account, bucket => $self->bucket, key => $key, value => '', acl_short => $acl_short, headers => $conf, )->http_request; my $response = $acct->_do_http( $http_request ); my $xpc = $acct->_xpc_of_content( $response->content ); if ( !$response->is_success || !$xpc || $xpc->findnodes("//Error") ) { $acct->_remember_errors( $response->content ); return 0; } return 1; } sub edit_metadata { my ( $self, $key, $conf ) = @_; croak "Need configuration hash" unless defined $conf; return $self->copy_key( $key, "/" . $self->bucket . "/" . $key, $conf ); } sub head_key { my ( $self, $key ) = @_; return $self->get_key( $key, "HEAD" ); } sub get_key { my ( $self, $key, $method, $filename ) = @_; $filename = $$filename if ref $filename; my $acct = $self->account; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $acct, bucket => $self->bucket, key => $key, method => $method || 'GET', )->http_request; my $response = $acct->_do_http( $http_request, $filename ); if ( $response->code == 404 ) { return undef; } $acct->_croak_if_response_error($response); my $etag = $response->header('ETag'); if ($etag) { $etag =~ s/^"//; $etag =~ s/"$//; } my $return; foreach my $header ( $response->headers->header_field_names ) { $return->{ lc $header } = $response->header($header); } $return->{content_length} = $response->content_length || 0; $return->{content_type} = $response->content_type; $return->{etag} = $etag; $return->{value} = $response->content; return $return; } sub get_key_filename { my ( $self, $key, $method, $filename ) = @_; return $self->get_key( $key, $method, \$filename ); } # 
returns bool sub delete_key { my ( $self, $key ) = @_; croak 'must specify key' unless defined $key && length $key; my $http_request = Net::Amazon::S3::Request::DeleteObject->new( s3 => $self->account, bucket => $self->bucket, key => $key, )->http_request; return $self->account->_send_request_expect_nothing($http_request); } sub delete_bucket { my $self = shift; croak "Unexpected arguments" if @_; return $self->account->delete_bucket($self); } sub list { my $self = shift; my $conf = shift || {}; $conf->{bucket} = $self->bucket; return $self->account->list_bucket($conf); } sub list_all { my $self = shift; my $conf = shift || {}; $conf->{bucket} = $self->bucket; return $self->account->list_bucket_all($conf); } sub get_acl { my ( $self, $key ) = @_; my $account = $self->account; my $http_request; if ($key) { $http_request = Net::Amazon::S3::Request::GetObjectAccessControl->new( s3 => $account, bucket => $self->bucket, key => $key, )->http_request; } else { $http_request = Net::Amazon::S3::Request::GetBucketAccessControl->new( s3 => $account, bucket => $self->bucket, )->http_request; } my $response = $account->_do_http($http_request); if ( $response->code == 404 ) { return undef; } $account->_croak_if_response_error($response); return $response->content; } sub set_acl { my ( $self, $conf ) = @_; $conf ||= {}; my $key = $conf->{key}; my $http_request; if ($key) { $http_request = Net::Amazon::S3::Request::SetObjectAccessControl->new( s3 => $self->account, bucket => $self->bucket, key => $key, acl_short => $conf->{acl_short}, acl_xml => $conf->{acl_xml}, )->http_request; } else { $http_request = Net::Amazon::S3::Request::SetBucketAccessControl->new( s3 => $self->account, bucket => $self->bucket, acl_short => $conf->{acl_short}, acl_xml => $conf->{acl_xml}, )->http_request; } return $self->account->_send_request_expect_nothing($http_request); } sub get_location_constraint { my ($self) = @_; my $http_request = Net::Amazon::S3::Request::GetBucketLocationConstraint->new( s3 
=> $self->account, bucket => $self->bucket, )->http_request; my $xpc = $self->account->_send_request($http_request); return undef unless $xpc && !$self->account->_remember_errors($xpc); my $lc = $xpc->findvalue("//s3:LocationConstraint"); if ( defined $lc && $lc eq '' ) { $lc = undef; } return $lc; } # proxy up the err requests sub err { $_[0]->account->err } sub errstr { $_[0]->account->errstr } sub _content_sub { my $filename = shift; my $stat = stat($filename); my $remaining = $stat->size; my $blksize = $stat->blksize || 4096; croak "$filename not a readable file with fixed size" unless -r $filename and ( -f _ || $remaining ); my $fh = IO::File->new( $filename, 'r' ) or croak "Could not open $filename: $!"; $fh->binmode; return sub { my $buffer; # upon retries the file is closed and we must reopen it unless ( $fh->opened ) { $fh = IO::File->new( $filename, 'r' ) or croak "Could not open $filename: $!"; $fh->binmode; $remaining = $stat->size; } # warn "read remaining $remaining"; unless ( my $read = $fh->read( $buffer, $blksize ) ) { # warn "read $read buffer $buffer remaining $remaining"; croak "Error while reading upload content $filename ($remaining remaining) $!" if $! and $remaining; # otherwise, we found EOF $fh->close or croak "close of upload content $filename failed: $!"; $buffer ||= ''; # LWP expects an empty string on finish, read returns 0 } $remaining -= length($buffer); return $buffer; }; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Bucket - convenience object for working with Amazon S3 buckets =head1 VERSION version 0.80 =head1 SYNOPSIS use Net::Amazon::S3; my $bucket = $s3->bucket("foo"); ok($bucket->add_key("key", "data")); ok($bucket->add_key("key", "data", { content_type => "text/html", 'x-amz-meta-colour' => 'orange', })); # the err and errstr methods just proxy up to the Net::Amazon::S3's # objects err/errstr methods. $bucket->add_key("bar", "baz") or die $bucket->err . 
$bucket->errstr; # fetch a key $val = $bucket->get_key("key"); is( $val->{value}, 'data' ); is( $val->{content_type}, 'text/html' ); is( $val->{etag}, 'b9ece18c950afbfa6b0fdbfa4ff731d3' ); is( $val->{'x-amz-meta-colour'}, 'orange' ); # returns undef on missing or on error (check $bucket->err) is(undef, $bucket->get_key("non-existing-key")); die $bucket->errstr if $bucket->err; # fetch a key's metadata $val = $bucket->head_key("key"); is( $val->{value}, '' ); is( $val->{content_type}, 'text/html' ); is( $val->{etag}, 'b9ece18c950afbfa6b0fdbfa4ff731d3' ); is( $val->{'x-amz-meta-colour'}, 'orange' ); # delete a key ok($bucket->delete_key($key_name)); ok(! $bucket->delete_key("non-exist-key")); # delete the entire bucket (Amazon requires it first be empty) $bucket->delete_bucket; =head1 DESCRIPTION This module represents an S3 bucket. You get a bucket object from the Net::Amazon::S3 object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 new Create a new bucket object. Expects a hash containing these two arguments: =over =item bucket =item account =back =head2 add_key Takes three positional parameters: =over =item key =item value =item configuration A hash of configuration data for this key. (See synopsis.) =back Returns a boolean. =head2 add_key_filename Use this to upload a large file to S3. Takes three positional parameters: =over =item key =item filename =item configuration A hash of configuration data for this key. (See synopsis.) =back Returns a boolean. =head2 copy_key Creates (or replaces) a key, copying its contents from another key elsewhere in S3. Takes the following parameters: =over =item key The key to (over)write =item source Where to copy the key from. Should be in the form C<< /I<bucketname>/I<keyname> >>. =item conf Optional configuration hash. If present and defined, the configuration (ACL and headers) there will be used for the new key; otherwise it will be copied from the source key. =back =head2 edit_metadata Changes the metadata associated with an existing key. 
Arguments: =over =item key The key to edit =item conf The new configuration hash to use =back =head2 head_key KEY Takes the name of a key in this bucket and returns its configuration hash =head2 get_key $key_name [$method] Takes a key name and an optional HTTP method (which defaults to C<GET>). Fetches the key from AWS. On failure: Returns undef on missing content, throws an exception (dies) on server errors. On success: Returns a hashref of { content_type, etag, value, @meta } on success. Other values from the server are there too, with the key being lowercased. =head2 get_key_filename $key_name $method $filename Use this to download large files from S3. Takes a key name and an optional HTTP method (which defaults to C<GET>). Fetches the key from AWS and writes it to the filename. The value returned will be empty. On failure: Returns undef on missing content, throws an exception (dies) on server errors. On success: Returns a hashref of { content_type, etag, value, @meta } on success =head2 delete_key $key_name Removes C<$key> from the bucket. Forever. It's gone after this. Returns true on success and false on failure =head2 delete_bucket Delete the current bucket object from the server. Takes no arguments. Fails if the bucket has anything in it. This is an alias for C<< $s3->delete_bucket($bucket) >> =head2 list List all keys in this bucket. See L<Net::Amazon::S3/list_bucket> for documentation of this method. =head2 list_all List all keys in this bucket without having to worry about 'marker'. This may make multiple requests to S3 under the hood. See L<Net::Amazon::S3/list_bucket_all> for documentation of this method. =head2 get_acl Takes one optional positional parameter =over =item key (optional) If no key is specified, it returns the acl for the bucket. =back Returns an acl in XML format. =head2 set_acl Takes a configuration hash_ref containing: =over =item acl_xml (cannot be used in conjunction with acl_short) An XML string which contains access control information which matches Amazon's published schema. 
There is an example of one of these XML strings in the tests for this module. =item acl_short (cannot be used in conjunction with acl_xml) You can use the shorthand notation instead of specifying XML for certain 'canned' types of acls. (from the Amazon API documentation) private: Owner gets FULL_CONTROL. No one else has any access rights. This is the default. public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access. If this policy is used on an object, it can be read from a browser with no authentication. public-read-write: Owner gets FULL_CONTROL, the anonymous principal is granted READ and WRITE access. This is a useful policy to apply to a bucket, if you intend for any anonymous user to PUT objects into the bucket. authenticated-read: Owner gets FULL_CONTROL, and any principal authenticated as a registered Amazon S3 user is granted READ access. =item key (optional) If the key is not set, it will apply the acl to the bucket. =back Returns a boolean. =head2 get_location_constraint Retrieves the location constraint set when the bucket was created. Returns a string (eg, 'EU'), or undef if no location constraint was set. =head2 err The S3 error code for the last error the object ran into =head2 errstr A human readable error string for the last error the object ran into =head1 SEE ALSO L<Net::Amazon::S3> =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
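The way C<acl_short> is validated against the canned-policy list and folded into an C<x-amz-acl> header can be sketched in plain Perl. The C<conf_to_headers> helper below is an illustrative stand-in mirroring what C<_conf_to_headers> and C<_validate_acl_short> do in the source above; it is not part of the module's API:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Carp qw(croak);

# Canned access policies from the set_acl documentation.
my @canned = qw(private public-read public-read-write authenticated-read);

# Copy the caller's conf hashref, validate acl_short, and rewrite it as
# the x-amz-acl request header (everything else passes through as-is).
sub conf_to_headers {
    my ($conf) = @_;
    my %headers = %{ $conf || {} };    # clone; don't clobber the caller
    if ( my $acl = delete $headers{acl_short} ) {
        croak "$acl is not a supported canned access policy"
            unless grep { $acl eq $_ } @canned;
        $headers{'x-amz-acl'} = $acl;
    }
    return \%headers;
}

my $h = conf_to_headers(
    { acl_short => 'public-read', content_type => 'image/jpeg' } );
print $h->{'x-amz-acl'}, "\n";    # public-read
```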
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Client/000755 000765 000024 00000000000 12512367152 021125 5ustar00rconoverstaff000000 000000 Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Client.pm000644 000765 000024 00000014451 12512367152 021470 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Client; $Net::Amazon::S3::Client::VERSION = '0.80'; use Moose 0.85; use HTTP::Status qw(is_error status_message); use MooseX::StrictConstructor 0.16; use Moose::Util::TypeConstraints; # ABSTRACT: An easy-to-use Amazon S3 client type 'Etag' => where { $_ =~ /^[a-z0-9]{32}(?:-\d+)?$/ }; type 'OwnerId' => where { $_ =~ /^[a-z0-9]{64}$/ }; has 's3' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); __PACKAGE__->meta->make_immutable; sub bucket_class { 'Net::Amazon::S3::Client::Bucket' } sub buckets { my $self = shift; my $s3 = $self->s3; my $http_request = Net::Amazon::S3::Request::ListAllMyBuckets->new( s3 => $s3 ) ->http_request; my $xpc = $self->_send_request_xpc($http_request); my $owner_id = $xpc->findvalue('/s3:ListAllMyBucketsResult/s3:Owner/s3:ID'); my $owner_display_name = $xpc->findvalue( '/s3:ListAllMyBucketsResult/s3:Owner/s3:DisplayName'); my @buckets; foreach my $node ( $xpc->findnodes('/s3:ListAllMyBucketsResult/s3:Buckets/s3:Bucket') ) { push @buckets, $self->bucket_class->new( { client => $self, name => $xpc->findvalue( './s3:Name', $node ), creation_date => $xpc->findvalue( './s3:CreationDate', $node ), owner_id => $owner_id, owner_display_name => $owner_display_name, } ); } return @buckets; } sub create_bucket { my ( $self, %conf ) = @_; my $bucket = $self->bucket_class->new( client => $self, name => $conf{name}, ); $bucket->_create( acl_short => $conf{acl_short}, location_constraint => $conf{location_constraint}, ); return $bucket; } sub bucket { my ( $self, %conf ) = @_; return $self->bucket_class->new( client => $self, %conf, ); } sub _send_request_raw { my ( $self, $http_request, $filename ) = @_; return $self->s3->ua->request( $http_request, 
$filename ); } sub _send_request { my ( $self, $http_request, $filename ) = @_; my $http_response = $self->_send_request_raw( $http_request, $filename ); my $content = $http_response->content; my $content_type = $http_response->content_type; my $code = $http_response->code; if ( is_error($code) ) { if ( $content_type eq 'application/xml' ) { my $doc = $self->s3->libxml->parse_string($content); my $xpc = XML::LibXML::XPathContext->new($doc); $xpc->registerNs( 's3', 'http://s3.amazonaws.com/doc/2006-03-01/' ); if ( $xpc->findnodes('/Error') ) { my $code = $xpc->findvalue('/Error/Code'); my $message = $xpc->findvalue('/Error/Message'); confess("$code: $message"); } else { confess status_message($code); } } else { confess status_message($code); } } return $http_response; } sub _send_request_content { my ( $self, $http_request, $filename ) = @_; my $http_response = $self->_send_request( $http_request, $filename ); return $http_response->content; } sub _send_request_xpc { my ( $self, $http_request, $filename ) = @_; my $http_response = $self->_send_request( $http_request, $filename ); my $doc = $self->s3->libxml->parse_string( $http_response->content ); my $xpc = XML::LibXML::XPathContext->new($doc); $xpc->registerNs( 's3', 'http://s3.amazonaws.com/doc/2006-03-01/' ); return $xpc; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Client - An easy-to-use Amazon S3 client =head1 VERSION version 0.80 =head1 SYNOPSIS my $s3 = Net::Amazon::S3->new( aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, ); my $client = Net::Amazon::S3::Client->new( s3 => $s3 ); # list all my buckets # returns a list of L objects my @buckets = $client->buckets; foreach my $bucket (@buckets) { print $bucket->name . 
"\n"; } # create a new bucket # returns a L<Net::Amazon::S3::Client::Bucket> object my $bucket = $client->create_bucket( name => $bucket_name, acl_short => 'private', location_constraint => 'US', ); # or use an existing bucket # returns a L<Net::Amazon::S3::Client::Bucket> object my $bucket = $client->bucket( name => $bucket_name ); =head1 DESCRIPTION The L<Net::Amazon::S3> module was written when the Amazon S3 service had just come out and it is a light wrapper around the APIs. Some bad API decisions were also made. The L<Net::Amazon::S3::Client>, L<Net::Amazon::S3::Client::Bucket> and L<Net::Amazon::S3::Client::Object> classes are designed after years of usage to be easy to use for common tasks. These classes throw an exception when a fatal error occurs. It is also very careful to pass an MD5 of the content when uploaded to S3 and check the resultant ETag. WARNING: This is an early release of the Client classes, the APIs may change. =for test_synopsis no strict 'vars' =head1 METHODS =head2 buckets # list all my buckets # returns a list of L<Net::Amazon::S3::Client::Bucket> objects my @buckets = $client->buckets; foreach my $bucket (@buckets) { print $bucket->name . "\n"; } =head2 create_bucket # create a new bucket # returns a L<Net::Amazon::S3::Client::Bucket> object my $bucket = $client->create_bucket( name => $bucket_name, acl_short => 'private', location_constraint => 'US', ); =head2 bucket # or use an existing bucket # returns a L<Net::Amazon::S3::Client::Bucket> object my $bucket = $client->bucket( name => $bucket_name ); =head2 bucket_class # returns string "Net::Amazon::S3::Client::Bucket" # subclasses will want to override this. my $bucket_class = $client->bucket_class; =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
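The DESCRIPTION above says the Client classes pass an MD5 of uploaded content and check the resultant ETag. The core of that check can be sketched with core Perl only; C<etag_matches> below is an illustrative helper, not the module's actual implementation, and assumes a single-part upload (multipart ETags carry a C<-E<lt>partsE<gt>> suffix and are not plain MD5s):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Compare local content against the ETag header S3 returned. S3 wraps
# the ETag in double quotes; for single-part uploads it is the hex MD5
# of the body.
sub etag_matches {
    my ( $content, $etag ) = @_;
    $etag =~ s/^"//;
    $etag =~ s/"$//;                  # strip surrounding quotes
    return 0 if $etag =~ /-\d+$/;     # multipart ETag: not a plain MD5
    return md5_hex($content) eq $etag;
}

# Simulate the header S3 would return for this body.
my $etag = '"' . md5_hex('hello world') . '"';
print etag_matches( 'hello world', $etag ) ? "etag ok\n" : "etag mismatch\n";
```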
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/HTTPRequest.pm000755 000765 000024 00000020417 12512367152 022404 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::HTTPRequest; $Net::Amazon::S3::HTTPRequest::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; use HTTP::Date; use MIME::Base64 qw( encode_base64 ); use Moose::Util::TypeConstraints; use URI::Escape qw( uri_escape_utf8 ); use URI::QueryParam; use URI; use VM::EC2::Security::CredentialCache; # ABSTRACT: Create a signed HTTP::Request my $METADATA_PREFIX = 'x-amz-meta-'; my $AMAZON_HEADER_PREFIX = 'x-amz-'; enum 'HTTPMethod' => [ qw(DELETE GET HEAD PUT POST) ]; has 's3' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); has 'method' => ( is => 'ro', isa => 'HTTPMethod', required => 1 ); has 'path' => ( is => 'ro', isa => 'Str', required => 1 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); has 'content' => ( is => 'ro', isa => 'Str|CodeRef|ScalarRef', required => 0, default => '' ); has 'metadata' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; # make the HTTP::Request object sub http_request { my $self = shift; my $method = $self->method; my $path = $self->path; my $headers = $self->headers; my $content = $self->content; my $metadata = $self->metadata; my $http_headers = $self->_merge_meta( $headers, $metadata ); $self->_add_auth_header( $http_headers, $method, $path ) unless exists $headers->{Authorization}; my $protocol = $self->s3->secure ? 'https' : 'http'; my $host = $self->s3->host; my $uri = "$protocol://$host/$path"; my $request = HTTP::Request->new( $method, $uri, $http_headers, $content ); # my $req_as = $request->as_string; # $req_as =~ s/[^\n\r\x20-\x7f]/?/g; # $req_as = substr( $req_as, 0, 1024 ) . 
"\n\n"; # warn $req_as; return $request; } sub query_string_authentication_uri { my ( $self, $expires ) = @_; my $method = $self->method; my $path = $self->path; my $headers = $self->headers; my $aws_access_key_id = $self->s3->aws_access_key_id; my $aws_secret_access_key = $self->s3->aws_secret_access_key; my $canonical_string = $self->_canonical_string( $method, $path, $headers, $expires ); my $encoded_canonical = $self->_encode( $aws_secret_access_key, $canonical_string ); my $protocol = $self->s3->secure ? 'https' : 'http'; my $host = $self->s3->host; my $uri = "$protocol://$host/$path"; $uri = URI->new($uri); $uri->query_param( AWSAccessKeyId => $aws_access_key_id ); $uri->query_param( Expires => $expires ); $uri->query_param( Signature => $encoded_canonical ); return $uri; } sub _add_auth_header { my ( $self, $headers, $method, $path ) = @_; if ($self->s3->use_iam_role) { my $creds = VM::EC2::Security::CredentialCache->get(); defined($creds) || die("Unable to retrieve IAM role credentials"); $self->s3->aws_access_key_id($creds->accessKeyId); $self->s3->aws_secret_access_key($creds->secretAccessKey); $self->s3->aws_session_token($creds->sessionToken); } my $aws_access_key_id = $self->s3->aws_access_key_id; my $aws_secret_access_key = $self->s3->aws_secret_access_key; my $aws_session_token = $self->s3->aws_session_token; if ( not $headers->header('Date') ) { $headers->header( Date => time2str(time) ); } if ( not $headers->header('x-amz-security-token') and defined $aws_session_token ) { $headers->header( 'x-amz-security-token' => $aws_session_token ); } my $canonical_string = $self->_canonical_string( $method, $path, $headers ); my $encoded_canonical = $self->_encode( $aws_secret_access_key, $canonical_string ); $headers->header( Authorization => "AWS $aws_access_key_id:$encoded_canonical" ); } # generate a canonical string for the given parameters. expires is optional and is # only used by query string authentication. 
sub _canonical_string { my ( $self, $method, $path, $headers, $expires ) = @_; my %interesting_headers = (); while ( my ( $key, $value ) = each %$headers ) { my $lk = lc $key; if ( $lk eq 'content-md5' or $lk eq 'content-type' or $lk eq 'date' or $lk =~ /^$AMAZON_HEADER_PREFIX/ ) { $interesting_headers{$lk} = $self->_trim($value); } } # these keys get empty strings if they don't exist $interesting_headers{'content-type'} ||= ''; $interesting_headers{'content-md5'} ||= ''; # just in case someone used this. it's not necessary in this lib. $interesting_headers{'date'} = '' if $interesting_headers{'x-amz-date'}; # if you're using expires for query string auth, then it trumps date # (and x-amz-date) $interesting_headers{'date'} = $expires if $expires; my $buf = "$method\n"; foreach my $key ( sort keys %interesting_headers ) { if ( $key =~ /^$AMAZON_HEADER_PREFIX/ ) { $buf .= "$key:$interesting_headers{$key}\n"; } else { $buf .= "$interesting_headers{$key}\n"; } } # don't include anything after the first ? in the resource... $path =~ /^([^?]*)/; $buf .= "/$1"; # ...unless there any parameters we're interested in... if ( $path =~ /[&?](acl|torrent|location|uploads|delete)($|=|&)/ ) { $buf .= "?$1"; } elsif ( my %query_params = URI->new($path)->query_form ){ #see if the remaining parsed query string provides us with any query string or upload id if($query_params{partNumber} && $query_params{uploadId}){ #re-evaluate query string, the order of the params is important for request signing, so we can't depend on URI to do the right thing $buf .= sprintf("?partNumber=%s&uploadId=%s", $query_params{partNumber}, $query_params{uploadId}); } elsif($query_params{uploadId}){ $buf .= sprintf("?uploadId=%s",$query_params{uploadId}); } } return $buf; } # finds the hmac-sha1 hash of the canonical string and the aws secret access key and then # base64 encodes the result (optionally urlencoding after that). 
sub _encode { my ( $self, $aws_secret_access_key, $str, $urlencode ) = @_; my $hmac = Digest::HMAC_SHA1->new($aws_secret_access_key); $hmac->add($str); my $b64 = encode_base64( $hmac->digest, '' ); if ($urlencode) { return $self->_urlencode($b64); } else { return $b64; } } # generates an HTTP::Headers objects given one hash that represents http # headers to set and another hash that represents an object's metadata. sub _merge_meta { my ( $self, $headers, $metadata ) = @_; $headers ||= {}; $metadata ||= {}; my $http_header = HTTP::Headers->new; while ( my ( $k, $v ) = each %$headers ) { $http_header->header( $k => $v ); } while ( my ( $k, $v ) = each %$metadata ) { $http_header->header( "$METADATA_PREFIX$k" => $v ); } return $http_header; } sub _trim { my ( $self, $value ) = @_; $value =~ s/^\s+//; $value =~ s/\s+$//; return $value; } sub _urlencode { my ( $self, $unencoded ) = @_; return uri_escape_utf8( $unencoded, '^A-Za-z0-9_-' ); } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::HTTPRequest - Create a signed HTTP::Request =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->bucket . '/', headers => $headers, content => $content, )->http_request; =head1 DESCRIPTION This module creates an HTTP::Request object that is signed appropriately for Amazon S3. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method creates, signs and returns a HTTP::Request object. =head2 query_string_authentication_uri This method creates, signs and returns a query string authentication URI. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
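Together, C<_canonical_string> and C<_encode> implement AWS signature version 2: the method, a few standard headers, any C<x-amz-*> headers, and the resource are joined into a string-to-sign, which is HMAC-SHA1'd with the secret key and base64-encoded. A simplified sketch in Python (header handling trimmed to the essentials; the key and date are made-up values, and the resource handling for query parameters is omitted):

```python
import base64
import hashlib
import hmac

def canonical_string(method, resource, headers, expires=None):
    # Keep only the headers the signature covers, lower-cased and trimmed,
    # as _canonical_string does.
    h = {k.lower(): v.strip() for k, v in headers.items()
         if k.lower() in ("content-md5", "content-type", "date")
         or k.lower().startswith("x-amz-")}
    h.setdefault("content-md5", "")
    h.setdefault("content-type", "")
    if expires is not None:   # query-string auth: Expires replaces Date
        h["date"] = str(expires)
    buf = method + "\n"
    for key in sorted(h):
        if key.startswith("x-amz-"):
            buf += f"{key}:{h[key]}\n"
        else:
            buf += h[key] + "\n"
    return buf + "/" + resource

def sign(secret_key, string_to_sign):
    # base64(hmac-sha1(secret, string)), as _encode does.
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

s = canonical_string("GET", "mybucket/key.txt",
                     {"Date": "Tue, 27 Mar 2007 19:36:42 +0000"})
print(sign("not-a-real-secret", s))
```

The resulting signature goes into the C<Authorization: AWS $access_key_id:$signature> header, or into the C<Signature> query parameter for presigned URLs.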
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/000755 000765 000024 00000000000 12512367152 021337 5ustar00rconoverstaff000000 000000 Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request.pm000644 000765 000024 00000004406 12512367152 021701 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request; $Net::Amazon::S3::Request::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; use Moose::Util::TypeConstraints; use Regexp::Common qw /net/; # ABSTRACT: Base class for request objects enum 'AclShort' => [ qw(private public-read public-read-write authenticated-read) ]; enum 'LocationConstraint' => [ 'US', 'EU' ]; # To comply with Amazon S3 requirements, bucket names must: # Contain lowercase letters, numbers, periods (.), underscores (_), and dashes (-) # Start with a number or letter # Be between 3 and 255 characters long # Not be in an IP address style (e.g., "192.168.5.4") subtype 'BucketName1' => as 'Str' => where { $_ =~ /^[a-zA-Z0-9._-]+$/; } => message { "Bucket name ($_) must contain lowercase letters, numbers, periods (.), underscores (_), and dashes (-)"; }; subtype 'BucketName2' => as 'BucketName1' => where { $_ =~ /^[a-zA-Z0-9]/; } => message { "Bucket name ($_) must start with a number or letter"; }; subtype 'BucketName3' => as 'BucketName2' => where { length($_) >= 3 && length($_) <= 255; } => message { "Bucket name ($_) must be between 3 and 255 characters long"; }; subtype 'BucketName' => as 'BucketName3' => where { $_ !~ /^$RE{net}{IPv4}$/; } => message { "Bucket name ($_) must not be in an IP address style (e.g., '192.168.5.4')"; }; has 's3' => ( is => 'ro', isa => 'Net::Amazon::S3', required => 1 ); __PACKAGE__->meta->make_immutable; sub _uri { my ( $self, $key ) = @_; return (defined($key)) ? $self->bucket . "/" . (join '/', map {$self->s3->_urlencode($_)} split /\//, $key) : $self->bucket . 
"/"; }

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

Net::Amazon::S3::Request - Base class for request objects

=head1 VERSION

version 0.80

=head1 SYNOPSIS

  # do not instantiate directly

=head1 DESCRIPTION

This module is a base class for all the Net::Amazon::S3::Request::* classes.

=head1 AUTHOR

Rusty Conover

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/AbortMultipartUpload.pm000644 000765 000024 00000004371 12512367152 026020 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::AbortMultipartUpload;
$Net::Amazon::S3::Request::AbortMultipartUpload::VERSION = '0.80';
use Moose 0.85;
use Digest::MD5 qw/md5 md5_hex/;
use MIME::Base64;
use Carp qw/croak/;
use XML::LibXML;

extends 'Net::Amazon::S3::Request';

has 'bucket'    => ( is => 'ro', isa => 'BucketName', required => 1 );
has 'key'       => ( is => 'ro', isa => 'Str',        required => 1 );
has 'upload_id' => ( is => 'ro', isa => 'Str',        required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;

    # build signed request
    return Net::Amazon::S3::HTTPRequest->new(
        s3     => $self->s3,
        method => 'DELETE',
        path   => $self->_uri( $self->key ) . '?uploadId=' . $self->upload_id,
    )->http_request;
}

1;

=pod

=encoding UTF-8

=head1 NAME

Net::Amazon::S3::Request::AbortMultipartUpload - An internal class to abort a multipart upload

=head1 VERSION

version 0.80

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::AbortMultipartUpload->new(
    s3        => $s3,
    bucket    => $bucket,
    key       => $key,
    upload_id => $upload_id,
  )->http_request;

=head1 DESCRIPTION

This module aborts a multipart upload.
=for test_synopsis no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Rusty Conover

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

=cut

__END__

# ABSTRACT: An internal class to abort a multipart upload

Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/CompleteMultipartUpload.pm000755 000765 000024 00000005735 12512367152 026525 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::CompleteMultipartUpload;
$Net::Amazon::S3::Request::CompleteMultipartUpload::VERSION = '0.80';
use Moose 0.85;
use Digest::MD5 qw/md5 md5_hex/;
use MIME::Base64;
use Carp qw/croak/;
use XML::LibXML;

extends 'Net::Amazon::S3::Request';

has 'bucket'       => ( is => 'ro', isa => 'BucketName', required => 1 );
has 'etags'        => ( is => 'ro', isa => 'ArrayRef',   required => 1 );
has 'key'          => ( is => 'ro', isa => 'Str',        required => 1 );
has 'part_numbers' => ( is => 'ro', isa => 'ArrayRef',   required => 1 );
has 'upload_id'    => ( is => 'ro', isa => 'Str',        required => 1 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;
    croak "must have an equally sized list of etags and part numbers"
        unless scalar(@{$self->part_numbers}) == scalar(@{$self->etags});

    # build XML doc
    my $xml_doc = XML::LibXML::Document->new('1.0','UTF-8');
    my
$root_element = $xml_doc->createElement('CompleteMultipartUpload'); $xml_doc->addChild($root_element); #add content for(my $i = 0; $i < scalar(@{$self->part_numbers}); $i++ ){ my $part = $xml_doc->createElement('Part'); $part->appendTextChild('PartNumber' => $self->part_numbers->[$i]); $part->appendTextChild('ETag' => $self->etags->[$i]); $root_element->addChild($part); } my $content = $xml_doc->toString; my $md5 = md5($content); my $md5_base64 = encode_base64($md5); chomp $md5_base64; my $header_spec = { 'Content-MD5' => $md5_base64, 'Content-Length' => length $content, 'Content-Type' => 'application/xml' }; #build signed request return Net::Amazon::S3::HTTPRequest->new( #See patch below s3 => $self->s3, method => 'POST', path => $self->_uri( $self->key ). '?uploadId='.$self->upload_id, content => $content, headers => $header_spec, )->http_request; } 1; =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::CompleteMultipartUpload - An internal class to complete a multipart upload =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::CompleteMultipartUpload->new( s3 => $s3, bucket => $bucket, etags => \@etags, part_numbers => \@part_numbers, )->http_request; =head1 DESCRIPTION This module completes a multipart upload. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
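The POST body built above is a C<CompleteMultipartUpload> document pairing each C<PartNumber> with its C<ETag>, sent with a base64 C<Content-MD5> header. A sketch of the same payload in Python for illustration (function name, part numbers, and ETags are placeholders):

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

def complete_multipart_xml(part_numbers, etags):
    """Build the CompleteMultipartUpload body and the headers that
    accompany it, mirroring the Perl http_request above."""
    if len(part_numbers) != len(etags):
        raise ValueError(
            "must have an equally sized list of etags and part numbers")
    root = ET.Element("CompleteMultipartUpload")
    for number, etag in zip(part_numbers, etags):
        part = ET.SubElement(root, "Part")
        ET.SubElement(part, "PartNumber").text = str(number)
        ET.SubElement(part, "ETag").text = etag
    body = ET.tostring(root, encoding="UTF-8")
    headers = {
        "Content-MD5": base64.b64encode(hashlib.md5(body).digest()).decode(),
        "Content-Length": str(len(body)),
        "Content-Type": "application/xml",
    }
    return body, headers

body, headers = complete_multipart_xml([1, 2], ["etag-1", "etag-2"])
```

As in the Perl code, the MD5 is computed over the serialized document, so the body must not be modified after the header is built.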
=cut

__END__

# ABSTRACT: An internal class to complete a multipart upload

Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/CreateBucket.pm000644 000765 000024 00000004125 12512367152 024240 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::CreateBucket;
$Net::Amazon::S3::Request::CreateBucket::VERSION = '0.80';
use Moose 0.85;
extends 'Net::Amazon::S3::Request';

# ABSTRACT: An internal class to create a bucket

has 'bucket'    => ( is => 'ro', isa => 'BucketName',      required => 1 );
has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 );
has 'location_constraint' =>
    ( is => 'ro', isa => 'Maybe[LocationConstraint]', required => 0 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self = shift;
    my $headers
        = ( $self->acl_short ) ? { 'x-amz-acl' => $self->acl_short } : {};
    my $content = '';
    if ( defined $self->location_constraint
        && $self->location_constraint eq 'EU' )
    {
        $content
            = "<CreateBucketConfiguration><LocationConstraint>"
            . $self->location_constraint
            . "</LocationConstraint></CreateBucketConfiguration>";
    }

    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'PUT',
        path    => $self->bucket . '/',
        headers => $headers,
        content => $content,
    )->http_request;
}

1;

__END__

=pod

=encoding UTF-8

=head1 NAME

Net::Amazon::S3::Request::CreateBucket - An internal class to create a bucket

=head1 VERSION

version 0.80

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::CreateBucket->new(
    s3                  => $s3,
    bucket              => $bucket,
    acl_short           => $acl_short,
    location_constraint => $location_constraint,
  )->http_request;

=head1 DESCRIPTION

This module creates a bucket.

=for test_synopsis no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Rusty Conover

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
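For a non-default region, CreateBucket's PUT body wraps the constraint in a C<CreateBucketConfiguration> document; for US Standard the body is empty. A small sketch of that payload, in Python for illustration (the function name is hypothetical):

```python
def create_bucket_body(location_constraint=None):
    """Body for an S3 CreateBucket PUT; empty for the default (US) region."""
    if location_constraint in (None, "US"):
        return ""
    return ("<CreateBucketConfiguration>"
            f"<LocationConstraint>{location_constraint}</LocationConstraint>"
            "</CreateBucketConfiguration>")

print(create_bucket_body("EU"))
```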
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/DeleteBucket.pm000644 000765 000024 00000002511 12512367152 024234 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::DeleteBucket; $Net::Amazon::S3::Request::DeleteBucket::VERSION = '0.80'; use Moose 0.85; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to delete a bucket has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'DELETE', path => $self->bucket . '/', )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::DeleteBucket - An internal class to delete a bucket =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::DeleteBucket->new( s3 => $s3, bucket => $bucket, )->http_request; =head1 DESCRIPTION This module deletes a bucket. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/DeleteMultiObject.pm000755 000765 000024 00000005137 12512367152 025252 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::DeleteMultiObject; $Net::Amazon::S3::Request::DeleteMultiObject::VERSION = '0.80'; use Moose 0.85; use Digest::MD5 qw/md5 md5_hex/; use MIME::Base64; use Carp qw/croak/; extends 'Net::Amazon::S3::Request'; has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'keys' => ( is => 'ro', isa => 'ArrayRef', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; #croak if we get a request for over 1000 objects croak "The maximum number of keys is 1000" if (scalar(@{$self->keys}) > 1000); #build XML doc my $xml_doc = XML::LibXML::Document->new('1.0','UTF-8'); my $root_element = $xml_doc->createElement('Delete'); $xml_doc->addChild($root_element); $root_element->appendTextChild('Quiet'=>'true'); #add content foreach my $key (@{$self->keys}){ my $obj_element = $xml_doc->createElement('Object'); $obj_element->appendTextChild('Key' => $key); $root_element->addChild($obj_element); } my $content = $xml_doc->toString; my $md5 = md5($content); my $md5_base64 = encode_base64($md5); chomp $md5_base64; my $header_spec = { 'Content-MD5' => $md5_base64, 'Content-Length' => length $content, 'Content-Type' => 'application/xml' }; #build signed request return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'POST', path => $self->bucket . '/?delete', content => $content, headers => $header_spec, )->http_request; } 1; =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::DeleteMultiObject - An internal class to delete multiple objects from a bucket =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::DeleteMultiObject->new( s3 => $s3, bucket => $bucket, keys => [$key1, $key2], )->http_request; =head1 DESCRIPTION This module deletes multiple objects from a bucket. 
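The POST to C</?delete> carries a C<Delete> document listing up to 1000 keys, with C<Quiet> suppressing per-key results, and a base64 C<Content-MD5> header over the body. A sketch of the payload in Python for illustration (function name and keys are placeholders):

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

MAX_KEYS_PER_REQUEST = 1000  # S3's per-request limit, enforced above

def delete_multi_xml(keys, quiet=True):
    """Build the multi-object Delete body plus its Content-MD5 value,
    mirroring DeleteMultiObject's http_request."""
    if len(keys) > MAX_KEYS_PER_REQUEST:
        raise ValueError("The maximum number of keys is 1000")
    root = ET.Element("Delete")
    ET.SubElement(root, "Quiet").text = "true" if quiet else "false"
    for key in keys:
        obj = ET.SubElement(root, "Object")
        ET.SubElement(obj, "Key").text = key
    body = ET.tostring(root, encoding="UTF-8")
    md5_b64 = base64.b64encode(hashlib.md5(body).digest()).decode()
    return body, md5_b64

body, md5_b64 = delete_multi_xml(["a.txt", "b.txt"])
```

The 1000-key cap is why the higher-level C<delete_multi_object> support splits larger batches into multiple requests.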
=for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut __END__ # ABSTRACT: An internal class to delete multiple objects from a bucket Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/DeleteMultipleObjects.pm000644 000765 000024 00000005047 12512367152 026133 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::DeleteMultipleObjects; $Net::Amazon::S3::Request::DeleteMultipleObjects::VERSION = '0.80'; use Moose 0.85; use Moose::Util::TypeConstraints; use XML::LibXML; use Digest::MD5; use MIME::Base64; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to delete multiple objects has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'keys' => ( is => 'ro', isa => 'ArrayRef[Str]', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $doc = XML::LibXML::Document->new("1.0", 'utf-8'); my $docroot = $doc->createElement("Delete"); $doc->setDocumentElement($docroot); my $quiet_node = $doc->createElement("Quiet"); $quiet_node->appendChild($doc->createTextNode('false')); $docroot->appendChild($quiet_node); foreach my $key (@{$self->keys}) { my $n = $doc->createElement('Object'); my $k = $doc->createElement('Key'); $k->appendChild($doc->createTextNode($key)); $n->appendChild($k); $docroot->appendChild($n); } my $delete_content = $doc->toString(1); my $md5_hex = Digest::MD5::md5_hex($delete_content); my $md5 = pack( 'H*', $md5_hex ); my $md5_base64 = encode_base64($md5); chomp $md5_base64; my $conf = { 'Content-MD5' => $md5_base64, 'Content-Length' => length($delete_content), }; return 
Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'POST', path => $self->_uri() . "?delete", headers => $conf, content => $delete_content, )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::DeleteMultipleObjects - An internal class to delete multiple objects =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::DeleteMultipleObjects->new( s3 => $s3, bucket => $bucket, keys => $keys, )->http_request; =head1 DESCRIPTION This module deletes multiple objects. =head1 NAME Net::Amazon::S3::Request::DeleteMultipleObjects - An internal class to delete multiple objects =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/DeleteObject.pm000644 000765 000024 00000002714 12512367152 024232 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::DeleteObject; $Net::Amazon::S3::Request::DeleteObject::VERSION = '0.80'; use Moose 0.85; use Moose::Util::TypeConstraints; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to delete an object has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'DELETE', path => $self->_uri( $self->key ), )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::DeleteObject - An internal class to delete an object =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::DeleteObject->new( s3 => $s3, bucket => $bucket, key => $key, )->http_request; =head1 DESCRIPTION This module deletes an object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/GetBucketAccessControl.pm000644 000765 000024 00000002701 12512367152 026235 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::GetBucketAccessControl; $Net::Amazon::S3::Request::GetBucketAccessControl::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to get a bucket's access control has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $self->_uri('') . '?acl', )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::GetBucketAccessControl - An internal class to get a bucket's access control =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::GetBucketAccessControl->new( s3 => $s3, bucket => $bucket, )->http_request; =head1 DESCRIPTION This module gets a bucket's access control. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/GetBucketLocationConstraint.pm000644 000765 000024 00000002751 12512367152 027315 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::GetBucketLocationConstraint; $Net::Amazon::S3::Request::GetBucketLocationConstraint::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to get a bucket's location constraint has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $self->_uri('') . '?location', )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::GetBucketLocationConstraint - An internal class to get a bucket's location constraint =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::GetBucketLocationConstraint->new( s3 => $s3, bucket => $bucket, )->http_request; =head1 DESCRIPTION This module gets a bucket's location constraint. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/GetObject.pm000644 000765 000024 00000003613 12512367152 023546 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::GetObject; $Net::Amazon::S3::Request::GetObject::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'method' => ( is => 'ro', isa => 'HTTPMethod', required => 1 ); # ABSTRACT: An internal class to get an object __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => $self->method, path => $self->_uri( $self->key ), )->http_request; } sub query_string_authentication_uri { my ( $self, $expires ) = @_; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => $self->method, path => $self->_uri( $self->key ), )->query_string_authentication_uri($expires); } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::GetObject - An internal class to get an object =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $s3, bucket => $bucket, key => $key, method => 'GET', )->http_request; =head1 DESCRIPTION This module gets an object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head2 query_string_authentication_uri This method returns query string authentication URI. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
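C<query_string_authentication_uri> produces a V2 presigned URL: the canonical string is signed with the C<Expires> timestamp in place of the C<Date> header, and the access key, expiry, and signature ride along as query parameters. A sketch in Python for illustration (credentials are made up; a real URL requires valid keys, and extra headers are omitted):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode

def presigned_url(host, path, access_key, secret_key, expires, secure=True):
    # For a plain GET with no extra headers, the V2 string-to-sign is:
    #   GET\n<content-md5>\n<content-type>\n<Expires>\n/<path>
    string_to_sign = f"GET\n\n\n{expires}\n/{path}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    query = urlencode({"AWSAccessKeyId": access_key,
                       "Expires": expires,
                       "Signature": signature})
    scheme = "https" if secure else "http"
    return f"{scheme}://{host}/{path}?{query}"

url = presigned_url("s3.amazonaws.com", "mybucket/key.txt",
                    "AKIDEXAMPLE", "not-a-real-secret", 1720000000)
```

Anyone holding the URL can perform the GET until the expiry time passes, which is the point of query string authentication.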
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/GetObjectAccessControl.pm000644 000765 000024 00000003046 12512367152 026231 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::GetObjectAccessControl; $Net::Amazon::S3::Request::GetObjectAccessControl::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to get an object's access control has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $self->_uri($self->key) . '?acl', )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::GetObjectAccessControl - An internal class to get an object's access control =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::GetObjectAccessControl->new( s3 => $s3, bucket => $bucket, key => $key, )->http_request; =head1 DESCRIPTION This module gets an object's access control. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut

Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/InitiateMultipartUpload.pm000755 000765 000024 00000004143 12512367152 026517 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::InitiateMultipartUpload;
$Net::Amazon::S3::Request::InitiateMultipartUpload::VERSION = '0.80';
use Moose 0.85;
use MooseX::StrictConstructor 0.16;
extends 'Net::Amazon::S3::Request';

has 'bucket'    => ( is => 'ro', isa => 'BucketName',      required => 1 );
has 'key'       => ( is => 'ro', isa => 'Str',             required => 1 );
has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 );
has 'headers' =>
    ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } );
has 'encryption' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 );

__PACKAGE__->meta->make_immutable;

sub http_request {
    my $self    = shift;
    my $headers = $self->headers;

    if ( $self->acl_short ) {
        $headers->{'x-amz-acl'} = $self->acl_short;
    }

    if ( defined $self->encryption ) {
        $headers->{'x-amz-server-side-encryption'} = $self->encryption;
    }

    return Net::Amazon::S3::HTTPRequest->new(
        s3      => $self->s3,
        method  => 'POST',
        path    => $self->_uri( $self->key ) . '?uploads',
        headers => $headers,
    )->http_request;
}

1;

=pod

=encoding UTF-8

=head1 NAME

Net::Amazon::S3::Request::InitiateMultipartUpload - An internal class to begin a multipart upload

=head1 VERSION

version 0.80

=head1 SYNOPSIS

  my $http_request = Net::Amazon::S3::Request::InitiateMultipartUpload->new(
    s3     => $s3,
    bucket => $bucket,
    key    => $key,
  )->http_request;

=head1 DESCRIPTION

This module begins a multipart upload.

=for test_synopsis no strict 'vars'

=head1 METHODS

=head2 http_request

This method returns a HTTP::Request object.

=head1 AUTHOR

Rusty Conover

=head1 COPYRIGHT AND LICENSE

This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
=cut __END__ #ABSTRACT: An internal class to begin a multipart upload Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/ListAllMyBuckets.pm000644 000765 000024 00000002420 12512367152 025066 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::ListAllMyBuckets; $Net::Amazon::S3::Request::ListAllMyBuckets::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to list all buckets __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => '', )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::ListAllMyBuckets - An internal class to list all buckets =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::ListAllMyBuckets->new( s3 => $s3 ) ->http_request; =head1 DESCRIPTION This module lists all buckets. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/ListBucket.pm000644 000765 000024 00000004366 12512367152 023757 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::ListBucket; $Net::Amazon::S3::Request::ListBucket::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; use URI::Escape qw(uri_escape_utf8); extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to list a bucket has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'prefix' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); has 'delimiter' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); has 'max_keys' => ( is => 'ro', isa => 'Maybe[Int]', required => 0, default => 1000 ); has 'marker' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $path = $self->bucket . "/"; my @post; foreach my $method ( qw(prefix delimiter max_keys marker) ) { my $value = $self->$method; next unless $value; my $key = $method; $key = 'max-keys' if $method eq 'max_keys'; push @post, $key . "=" . $self->_urlencode($value); } if (@post) { $path .= '?' . join( '&', @post ); } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $path, )->http_request; } sub _urlencode { my ( $self, $unencoded ) = @_; return uri_escape_utf8( $unencoded, '^A-Za-z0-9_-' ); } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::ListBucket - An internal class to list a bucket =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::ListBucket->new( s3 => $s3, bucket => $bucket, delimiter => $delimiter, max_keys => $max_keys, marker => $marker, )->http_request; =head1 DESCRIPTION This module lists a bucket. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. 
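For reference, the query string that C<http_request> builds for a bucket listing can be isolated as a plain-Perl sketch. The C<url_encode> and C<list_bucket_path> helper names below are hypothetical; the real code uses C<uri_escape_utf8> from URI::Escape with the same safe-character set (C<'^A-Za-z0-9_-'>).

```perl
use strict;
use warnings;
use Encode qw(encode_utf8);

# Percent-encode every byte outside the unreserved set A-Za-z0-9_-,
# operating on UTF-8 bytes as uri_escape_utf8 does.
sub url_encode {
    my ($value) = @_;
    my $bytes = encode_utf8($value);
    $bytes =~ s/([^A-Za-z0-9_-])/sprintf('%%%02X', ord($1))/ge;
    return $bytes;
}

# Build the list-bucket path the same way http_request does: skip unset
# or false parameters and rename max_keys to its wire name "max-keys".
sub list_bucket_path {
    my ( $bucket, %param ) = @_;
    my @query;
    for my $name (qw(prefix delimiter max_keys marker)) {
        my $value = $param{$name};
        next unless $value;    # like the original, this also skips "0"
        my $key = $name eq 'max_keys' ? 'max-keys' : $name;
        push @query, $key . '=' . url_encode($value);
    }
    my $path = $bucket . '/';
    $path .= '?' . join( '&', @query ) if @query;
    return $path;
}

print list_bucket_path( 'my-bucket', prefix => 'logs/', max_keys => 1000 ), "\n";
# prints "my-bucket/?prefix=logs%2F&max-keys=1000"
```

Note that C<next unless $value> drops empty-string and C<"0"> values as well as undefined ones, which matches the behavior of the method above.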
=head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/ListParts.pm000755 000765 000024 00000003131 12512367152 023623 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::ListParts; $Net::Amazon::S3::Request::ListParts::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: List the parts in a multipart upload. has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'upload_id' => ( is => 'ro', isa => 'Str', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = $self->headers; if ( $self->acl_short ) { $headers->{'x-amz-acl'} = $self->acl_short; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'GET', path => $self->_uri( $self->key ).'?uploadId='.$self->upload_id, headers => $self->headers, )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::ListParts - List the parts in a multipart upload. =head1 VERSION version 0.80 =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/PutObject.pm000644 000765 000024 00000004243 12512367152 023577 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::PutObject; $Net::Amazon::S3::Request::PutObject::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to put an object has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'value' => ( is => 'ro', isa => 'Str|CodeRef|ScalarRef', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); has 'encryption' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = $self->headers; if ( $self->acl_short ) { $headers->{'x-amz-acl'} = $self->acl_short; } if ( defined $self->encryption ) { $headers->{'x-amz-server-side-encryption'} = $self->encryption; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->_uri( $self->key ), headers => $self->headers, content => $self->value, )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::PutObject - An internal class to put an object =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $s3, bucket => $bucket, key => $key, value => $value, acl_short => $acl_short, headers => $conf, )->http_request; =head1 DESCRIPTION This module puts an object. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. 
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/PutPart.pm000755 000765 000024 00000005414 12512367152 023303 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::PutPart; $Net::Amazon::S3::Request::PutPart::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'value' => ( is => 'ro', isa => 'Str|CodeRef|ScalarRef', required => 0 ); has 'upload_id' => ( is => 'ro', isa => 'Str', required => 1 ); has 'part_number' => ( is => 'ro', isa => 'Int', required => 1 ); has 'copy_source_bucket' => ( is => 'ro', isa => 'Str', required => 0 ); has 'copy_source_key' => ( is => 'ro', isa => 'Str', required => 0 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'headers' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} } ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; my $headers = $self->headers; if ( $self->acl_short ) { $headers->{'x-amz-acl'} = $self->acl_short; } if(defined $self->copy_source_bucket && defined $self->copy_source_key){ $headers->{'x-amz-copy-source'} = $self->copy_source_bucket.'/'.$self->copy_source_key; } return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->_uri($self->key) . '?partNumber=' . $self->part_number . '&uploadId=' . $self->upload_id, headers => $headers, content => scalar( defined( $self->value ) ? 
$self->value : '' ), )->http_request; } 1; =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::PutPart - An internal class to put part of a multipart upload =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::PutPart->new( s3 => $s3, bucket => $bucket, key => $key, value => $value, acl_short => $acl_short, headers => $conf, part_number => $part_number, upload_id => $upload_id, )->http_request; =head1 DESCRIPTION This module puts a single part of a multipart upload. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut __END__ # ABSTRACT: An internal class to put part of a multipart upload Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/SetBucketAccessControl.pm000644 000765 000024 00000004114 12512367152 026251 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::SetBucketAccessControl; $Net::Amazon::S3::Request::SetBucketAccessControl::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to set a bucket's access control has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'acl_xml' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; unless ( $self->acl_xml || $self->acl_short ) { confess "need either acl_xml or acl_short"; } if ( $self->acl_xml && $self->acl_short ) { confess "can not provide both acl_xml and acl_short"; } my $headers = ( $self->acl_short ) ? 
{ 'x-amz-acl' => $self->acl_short } : {}; my $xml = $self->acl_xml || ''; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->_uri('') . '?acl', headers => $headers, content => $xml, )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::SetBucketAccessControl - An internal class to set a bucket's access control =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::SetBucketAccessControl->new( s3 => $s3, bucket => $bucket, acl_short => $acl_short, acl_xml => $acl_xml, )->http_request; =head1 DESCRIPTION This module sets a bucket's access control. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Request/SetObjectAccessControl.pm000644 000765 000024 00000004274 12512367152 026251 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Request::SetObjectAccessControl; $Net::Amazon::S3::Request::SetObjectAccessControl::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; extends 'Net::Amazon::S3::Request'; # ABSTRACT: An internal class to set an object's access control has 'bucket' => ( is => 'ro', isa => 'BucketName', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'acl_short' => ( is => 'ro', isa => 'Maybe[AclShort]', required => 0 ); has 'acl_xml' => ( is => 'ro', isa => 'Maybe[Str]', required => 0 ); __PACKAGE__->meta->make_immutable; sub http_request { my $self = shift; unless ( $self->acl_xml || $self->acl_short ) { confess "need either acl_xml or acl_short"; } if ( $self->acl_xml && $self->acl_short ) { confess "can not provide both acl_xml and acl_short"; } my $headers = ( $self->acl_short ) ? { 'x-amz-acl' => $self->acl_short } : {}; my $xml = $self->acl_xml || ''; return Net::Amazon::S3::HTTPRequest->new( s3 => $self->s3, method => 'PUT', path => $self->_uri( $self->key ) . '?acl', headers => $headers, content => $xml, )->http_request; } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Request::SetObjectAccessControl - An internal class to set an object's access control =head1 VERSION version 0.80 =head1 SYNOPSIS my $http_request = Net::Amazon::S3::Request::SetObjectAccessControl->new( s3 => $s3, bucket => $bucket, key => $key, acl_short => $acl_short, acl_xml => $acl_xml, )->http_request; =head1 DESCRIPTION This module sets an object's access control. =for test_synopsis no strict 'vars' =head1 METHODS =head2 http_request This method returns a HTTP::Request object. 
=head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Client/Bucket.pm000755 000765 000024 00000017544 12512367152 022716 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Client::Bucket; $Net::Amazon::S3::Client::Bucket::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; use Data::Stream::Bulk::Callback; use MooseX::Types::DateTime::MoreCoercions 0.07 qw( DateTime ); # ABSTRACT: An easy-to-use Amazon S3 client bucket has 'client' => ( is => 'ro', isa => 'Net::Amazon::S3::Client', required => 1 ); has 'name' => ( is => 'ro', isa => 'Str', required => 1 ); has 'creation_date' => ( is => 'ro', isa => DateTime, coerce => 1, required => 0 ); has 'owner_id' => ( is => 'ro', isa => 'OwnerId', required => 0 ); has 'owner_display_name' => ( is => 'ro', isa => 'Str', required => 0 ); __PACKAGE__->meta->make_immutable; sub _create { my ( $self, %conf ) = @_; my $http_request = Net::Amazon::S3::Request::CreateBucket->new( s3 => $self->client->s3, bucket => $self->name, acl_short => $conf{acl_short}, location_constraint => $conf{location_constraint}, )->http_request; $self->client->_send_request($http_request); } sub delete { my $self = shift; my $http_request = Net::Amazon::S3::Request::DeleteBucket->new( s3 => $self->client->s3, bucket => $self->name, )->http_request; $self->client->_send_request($http_request); } sub acl { my $self = shift; my $http_request = Net::Amazon::S3::Request::GetBucketAccessControl->new( s3 => $self->client->s3, bucket => $self->name, )->http_request; return $self->client->_send_request_content($http_request); } sub location_constraint { my $self = shift; my $http_request = 
Net::Amazon::S3::Request::GetBucketLocationConstraint->new( s3 => $self->client->s3, bucket => $self->name, )->http_request; my $xpc = $self->client->_send_request_xpc($http_request); my $lc = $xpc->findvalue('/s3:LocationConstraint'); if ( defined $lc && $lc eq '' ) { $lc = 'US'; } return $lc; } sub object_class { 'Net::Amazon::S3::Client::Object' } sub list { my ( $self, $conf ) = @_; $conf ||= {}; my $prefix = $conf->{prefix}; my $marker = undef; my $end = 0; return Data::Stream::Bulk::Callback->new( callback => sub { return undef if $end; my $http_request = Net::Amazon::S3::Request::ListBucket->new( s3 => $self->client->s3, bucket => $self->name, marker => $marker, prefix => $prefix, )->http_request; my $xpc = $self->client->_send_request_xpc($http_request); my @objects; foreach my $node ( $xpc->findnodes('/s3:ListBucketResult/s3:Contents') ) { my $etag = $xpc->findvalue( "./s3:ETag", $node ); $etag =~ s/^"//; $etag =~ s/"$//; # storage_class => $xpc->findvalue( ".//s3:StorageClass", $node ), # owner_id => $xpc->findvalue( ".//s3:ID", $node ), # owner_displayname => # $xpc->findvalue( ".//s3:DisplayName", $node ), push @objects, $self->object_class->new( client => $self->client, bucket => $self, key => $xpc->findvalue( './s3:Key', $node ), last_modified => $xpc->findvalue( './s3:LastModified', $node ), etag => $etag, size => $xpc->findvalue( './s3:Size', $node ), ); } return undef unless @objects; my $is_truncated = scalar $xpc->findvalue( '/s3:ListBucketResult/s3:IsTruncated') eq 'true' ? 1 : 0; $end = 1 unless $is_truncated; $marker = $xpc->findvalue('/s3:ListBucketResult/s3:NextMarker') || $objects[-1]->key; return \@objects; } ); } sub delete_multi_object { my $self = shift; my @objects = @_; return unless( scalar(@objects) ); # Since delete can handle up to 1000 requests, be a little bit nicer # and slice up requests and also allow keys to be strings # rather than only objects. 
my $last_result; while (scalar(@objects) > 0) { my $http_request = Net::Amazon::S3::Request::DeleteMultiObject->new( s3 => $self->client->s3, bucket => $self->name, keys => [map { if (ref($_)) { $_->key } else { $_ } } splice @objects, 0, ((scalar(@objects) > 1000) ? 1000 : scalar(@objects))] )->http_request; $last_result = $self->client->_send_request($http_request); if (!$last_result->is_success()) { last; } } return $last_result; } sub object { my ( $self, %conf ) = @_; return $self->object_class->new( client => $self->client, bucket => $self, %conf, ); } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Client::Bucket - An easy-to-use Amazon S3 client bucket =head1 VERSION version 0.80 =head1 SYNOPSIS # return the bucket name print $bucket->name . "\n"; # return the bucket location constraint print "Bucket is in the " . $bucket->location_constraint . "\n"; # return the ACL XML my $acl = $bucket->acl; # list objects in the bucket # this returns a L<Data::Stream::Bulk> object which returns a # stream of L<Net::Amazon::S3::Client::Object> objects, as it may # have to issue multiple API requests my $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { ... } } # or list by a prefix my $prefix_stream = $bucket->list( { prefix => 'logs/' } ); # returns a L<Net::Amazon::S3::Client::Object>, which can then # be used to get or put my $object = $bucket->object( key => 'this is the key' ); # delete the bucket (it must be empty) $bucket->delete; =head1 DESCRIPTION This module represents buckets. =for test_synopsis no strict 'vars' =head1 METHODS =head2 acl # return the ACL XML my $acl = $bucket->acl; =head2 delete # delete the bucket (it must be empty) $bucket->delete; =head2 list # list objects in the bucket # this returns a L<Data::Stream::Bulk> object which returns a # stream of L<Net::Amazon::S3::Client::Object> objects, as it may # have to issue multiple API requests my $stream = $bucket->list; until ( $stream->is_done ) { foreach my $object ( $stream->items ) { ... 
} } # or list by a prefix my $prefix_stream = $bucket->list( { prefix => 'logs/' } ); =head2 location_constraint # return the bucket location constraint print "Bucket is in the " . $bucket->location_constraint . "\n"; =head2 name # return the bucket name print $bucket->name . "\n"; =head2 object # returns a L<Net::Amazon::S3::Client::Object>, which can then # be used to get or put my $object = $bucket->object( key => 'this is the key' ); =head2 delete_multi_object # delete multiple objects using a multi object delete operation # Accepts a list of L<Net::Amazon::S3::Client::Object> objects # or plain key strings. $bucket->delete_multi_object($object1, $object2); =head2 object_class # returns string "Net::Amazon::S3::Client::Object" # allowing subclasses to add behavior. my $object_class = $bucket->object_class; =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
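The batching behavior of C<delete_multi_object> — accepting either client objects or plain key strings and slicing them into groups of at most 1000 keys per DeleteMultiObject request — can be isolated as a plain-Perl sketch. The C<batch_keys> helper name is hypothetical; the real method sends one request per batch and stops early if a request fails.

```perl
use strict;
use warnings;

# Reduce a mixed list of key strings and objects (anything blessed with
# a ->key method) to batches of at most 1000 key strings, mirroring the
# splice-based loop in delete_multi_object above.
sub batch_keys {
    my @objects = @_;
    my @batches;
    while (@objects) {
        my $take = @objects > 1000 ? 1000 : scalar @objects;
        push @batches,
            [ map { ref($_) ? $_->key : $_ } splice @objects, 0, $take ];
    }
    return @batches;
}

my @batches = batch_keys( map { "key-$_" } 1 .. 2005 );
printf "%d batches; first has %d keys\n",
    scalar @batches, scalar @{ $batches[0] };
# prints "3 batches; first has 1000 keys"
```

Because each batch becomes its own request, only the result of the last (or first failing) request is returned to the caller, which is worth remembering when deleting more than 1000 keys at once.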
=cut Net-Amazon-S3-0.80/lib/Net/Amazon/S3/Client/Object.pm000755 000765 000024 00000043262 12512367152 022703 0ustar00rconoverstaff000000 000000 package Net::Amazon::S3::Client::Object; $Net::Amazon::S3::Client::Object::VERSION = '0.80'; use Moose 0.85; use MooseX::StrictConstructor 0.16; use DateTime::Format::HTTP; use Digest::MD5 qw(md5 md5_hex); use Digest::MD5::File qw(file_md5 file_md5_hex); use File::stat; use MIME::Base64; use Moose::Util::TypeConstraints; use MooseX::Types::DateTime::MoreCoercions 0.07 qw( DateTime ); use IO::File 1.14; # ABSTRACT: An easy-to-use Amazon S3 client object enum 'AclShort' => [ qw(private public-read public-read-write authenticated-read) ]; enum 'StorageClass' => [ qw(standard reduced_redundancy) ]; has 'client' => ( is => 'ro', isa => 'Net::Amazon::S3::Client', required => 1 ); has 'bucket' => ( is => 'ro', isa => 'Net::Amazon::S3::Client::Bucket', required => 1 ); has 'key' => ( is => 'ro', isa => 'Str', required => 1 ); has 'etag' => ( is => 'ro', isa => 'Etag', required => 0 ); has 'size' => ( is => 'ro', isa => 'Int', required => 0 ); has 'last_modified' => ( is => 'ro', isa => DateTime, coerce => 1, required => 0 ); has 'expires' => ( is => 'rw', isa => DateTime, coerce => 1, required => 0 ); has 'acl_short' => ( is => 'ro', isa => 'AclShort', required => 0, default => 'private' ); has 'content_type' => ( is => 'ro', isa => 'Str', required => 0, default => 'binary/octet-stream' ); has 'content_disposition' => ( is => 'ro', isa => 'Str', required => 0, ); has 'content_encoding' => ( is => 'ro', isa => 'Str', required => 0, ); has 'cache_control' => ( is => 'ro', isa => 'Str', required => 0, ); has 'storage_class' => ( is => 'ro', isa => 'StorageClass', required => 0, default => 'standard', ); has 'user_metadata' => ( is => 'ro', isa => 'HashRef', required => 0, default => sub { {} }, ); has 'encryption' => ( is => 'ro', isa => 'Maybe[Str]', required => 0, ); __PACKAGE__->meta->make_immutable; sub exists { my $self = shift; 
my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'HEAD', )->http_request; my $http_response = $self->client->_send_request_raw($http_request); return $http_response->code == 200 ? 1 : 0; } sub _get { my $self = shift; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request; my $http_response = $self->client->_send_request($http_request); my $content = $http_response->content; $self->_load_user_metadata($http_response); my $md5_hex = md5_hex($content); my $etag = $self->etag || $self->_etag($http_response); confess 'Corrupted download' if( !$self->_is_multipart_etag($etag) && $etag ne $md5_hex); return $http_response; } sub get { my $self = shift; return $self->_get->content; } sub get_decoded { my $self = shift; return $self->_get->decoded_content(@_); } sub get_callback { my ( $self, $callback ) = @_; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request; my $http_response = $self->client->_send_request( $http_request, $callback ); return $http_response; } sub get_filename { my ( $self, $filename ) = @_; my $http_request = Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request; my $http_response = $self->client->_send_request( $http_request, $filename ); $self->_load_user_metadata($http_response); my $md5_hex = file_md5_hex($filename); my $etag = $self->etag || $self->_etag($http_response); confess 'Corrupted download' if( !$self->_is_multipart_etag($etag) && $etag ne $md5_hex); } sub _load_user_metadata { my ( $self, $http_response ) = @_; my %user_metadata; for my $header_name ($http_response->header_field_names) { my ($metadata_name) = 
lc($header_name) =~ /\A x-amz-meta- (.*) \z/xms or next; $user_metadata{$metadata_name} = $http_response->header($header_name); } %{ $self->user_metadata } = %user_metadata; } sub put { my ( $self, $value ) = @_; $self->_put( $value, length $value, md5_hex($value) ); } sub _put { my ( $self, $value, $size, $md5_hex ) = @_; my $md5_base64 = encode_base64( pack( 'H*', $md5_hex ) ); chomp $md5_base64; my $conf = { 'Content-MD5' => $md5_base64, 'Content-Length' => $size, 'Content-Type' => $self->content_type, }; if ( $self->expires ) { $conf->{Expires} = DateTime::Format::HTTP->format_datetime( $self->expires ); } if ( $self->content_encoding ) { $conf->{'Content-Encoding'} = $self->content_encoding; } if ( $self->content_disposition ) { $conf->{'Content-Disposition'} = $self->content_disposition; } if ( $self->cache_control ) { $conf->{'Cache-Control'} = $self->cache_control; } if ( $self->storage_class && $self->storage_class ne 'standard' ) { $conf->{'x-amz-storage-class'} = uc $self->storage_class; } $conf->{"x-amz-meta-\L$_"} = $self->user_metadata->{$_} for keys %{ $self->user_metadata }; my $http_request = Net::Amazon::S3::Request::PutObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, value => $value, headers => $conf, acl_short => $self->acl_short, encryption => $self->encryption, )->http_request; my $http_response = $self->client->_send_request($http_request); confess 'Error uploading ' . 
$http_response->as_string if $http_response->code != 200; my $etag = $self->_etag($http_response); confess 'Corrupted upload' if $etag ne $md5_hex; } sub put_filename { my ( $self, $filename ) = @_; my $md5_hex = $self->etag || file_md5_hex($filename); my $size = $self->size; unless ($size) { my $stat = stat($filename) || confess("No $filename: $!"); $size = $stat->size; } $self->_put( $self->_content_sub($filename), $size, $md5_hex ); } sub delete { my $self = shift; my $http_request = Net::Amazon::S3::Request::DeleteObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, )->http_request; $self->client->_send_request($http_request); } sub initiate_multipart_upload { my $self = shift; my $http_request = Net::Amazon::S3::Request::InitiateMultipartUpload->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, encryption => $self->encryption, )->http_request; my $xpc = $self->client->_send_request_xpc($http_request); my $upload_id = $xpc->findvalue('//s3:UploadId'); confess "Couldn't get upload id from initiate_multipart_upload response XML" unless $upload_id; return $upload_id; } sub complete_multipart_upload { my $self = shift; my %args = ref($_[0]) ? %{$_[0]} : @_; #set default args $args{s3} = $self->client->s3; $args{key} = $self->key; $args{bucket} = $self->bucket->name; my $http_request = Net::Amazon::S3::Request::CompleteMultipartUpload->new(%args)->http_request; return $self->client->_send_request($http_request); } sub abort_multipart_upload { my $self = shift; my %args = ref($_[0]) ? %{$_[0]} : @_; #set default args $args{s3} = $self->client->s3; $args{key} = $self->key; $args{bucket} = $self->bucket->name; my $http_request = Net::Amazon::S3::Request::AbortMultipartUpload->new(%args)->http_request; return $self->client->_send_request($http_request); } sub put_part { my $self = shift; my %args = ref($_[0]) ? 
%{$_[0]} : @_; #set default args $args{s3} = $self->client->s3; $args{key} = $self->key; $args{bucket} = $self->bucket->name; #work out content length header $args{headers}->{'Content-Length'} = length $args{value} if(defined $args{value}); my $http_request = Net::Amazon::S3::Request::PutPart->new(%args)->http_request; return $self->client->_send_request($http_request); } sub list_parts { confess "Not implemented"; # TODO - Net::Amazon::S3::Request:ListParts is implemented, but need to # define better interface at this level. Currently returns raw XML. } sub uri { my $self = shift; return Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->http_request->uri; } sub query_string_authentication_uri { my $self = shift; return Net::Amazon::S3::Request::GetObject->new( s3 => $self->client->s3, bucket => $self->bucket->name, key => $self->key, method => 'GET', )->query_string_authentication_uri( $self->expires->epoch ); } sub _content_sub { my $self = shift; my $filename = shift; my $stat = stat($filename); my $remaining = $stat->size; my $blksize = $stat->blksize || 4096; confess "$filename not a readable file with fixed size" unless -r $filename and ( -f _ || $remaining ); my $fh = IO::File->new( $filename, 'r' ) or confess "Could not open $filename: $!"; $fh->binmode; return sub { my $buffer; # upon retries the file is closed and we must reopen it unless ( $fh->opened ) { $fh = IO::File->new( $filename, 'r' ) or confess "Could not open $filename: $!"; $fh->binmode; $remaining = $stat->size; } # warn "read remaining $remaining"; unless ( my $read = $fh->read( $buffer, $blksize ) ) { # warn "read $read buffer $buffer remaining $remaining"; confess "Error while reading upload content $filename ($remaining remaining) $!" if $! 
and $remaining; # otherwise, we found EOF $fh->close or confess "close of upload content $filename failed: $!"; $buffer ||= ''; # LWP expects an empty string on finish, read returns 0 } $remaining -= length($buffer); return $buffer; }; } sub _etag { my ( $self, $http_response ) = @_; my $etag = $http_response->header('ETag'); if ($etag) { $etag =~ s/^"//; $etag =~ s/"$//; } return $etag; } sub _is_multipart_etag { my ( $self, $etag ) = @_; return 1 if($etag =~ /\-\d+$/); } 1; __END__ =pod =encoding UTF-8 =head1 NAME Net::Amazon::S3::Client::Object - An easy-to-use Amazon S3 client object =head1 VERSION version 0.80 =head1 SYNOPSIS # show the key print $object->key . "\n"; # show the etag of an existing object (if fetched by listing # a bucket) print $object->etag . "\n"; # show the size of an existing object (if fetched by listing # a bucket) print $object->size . "\n"; # to create a new object my $object = $bucket->object( key => 'this is the key' ); $object->put('this is the value'); # to get the value of an object my $value = $object->get; # to see if an object exists if ($object->exists) { ... 
} # to delete an object $object->delete; # to create a new object which is publically-accessible with a # content-type of text/plain which expires on 2010-01-02 my $object = $bucket->object( key => 'this is the public key', acl_short => 'public-read', content_type => 'text/plain', expires => '2010-01-02', ); $object->put('this is the public value'); # return the URI of a publically-accessible object my $uri = $object->uri; # to store a new object with server-side encryption enabled my $object = $bucket->object( key => 'my secret', encryption => 'AES256', ); $object->put('this data will be stored using encryption.'); # upload a file my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', ); $object->put_filename('hat.jpg'); # upload a file if you already know its md5_hex and size my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', etag => $md5_hex, size => $size, ); $object->put_filename('hat.jpg'); # download the value of the object into a file my $object = $bucket->object( key => 'images/my_hat.jpg' ); $object->get_filename('hat_backup.jpg'); # use query string authentication my $object = $bucket->object( key => 'images/my_hat.jpg', expires => '2009-03-01', ); my $uri = $object->query_string_authentication_uri(); =head1 DESCRIPTION This module represents objects in buckets. =for test_synopsis no strict 'vars' =head1 METHODS =head2 etag # show the etag of an existing object (if fetched by listing # a bucket) print $object->etag . "\n"; =head2 delete # to delete an object $object->delete; =head2 exists # to see if an object exists if ($object->exists) { ... 
} =head2 get # to get the value of an object my $value = $object->get; =head2 get_decoded # get the value of an object, and decode any Content-Encoding and/or # charset; see decoded_content in HTTP::Response my $value = $object->get_decoded; =head2 get_filename # download the value of the object into a file my $object = $bucket->object( key => 'images/my_hat.jpg' ); $object->get_filename('hat_backup.jpg'); =head2 key # show the key print $object->key . "\n"; =head2 put # to create a new object my $object = $bucket->object( key => 'this is the key' ); $object->put('this is the value'); # to create a new object which is publicly-accessible with a # content-type of text/plain my $object = $bucket->object( key => 'this is the public key', acl_short => 'public-read', content_type => 'text/plain', ); $object->put('this is the public value'); You may also set Content-Encoding using C<content_encoding>, and Content-Disposition using C<content_disposition>. You may specify the S3 storage class by setting C<storage_class> to either C<standard> or C<reduced_redundancy>; the default is C<standard>. =head2 put_filename # upload a file my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', ); $object->put_filename('hat.jpg'); # upload a file if you already know its md5_hex and size my $object = $bucket->object( key => 'images/my_hat.jpg', content_type => 'image/jpeg', etag => $md5_hex, size => $size, ); $object->put_filename('hat.jpg'); You may also set Content-Encoding using C<content_encoding>, and Content-Disposition using C<content_disposition>. You may specify the S3 storage class by setting C<storage_class> to either C<standard> or C<reduced_redundancy>; the default is C<standard>. User metadata may be set by providing a non-empty hashref as C<user_metadata>. =head2 query_string_authentication_uri # use query string authentication my $object = $bucket->object( key => 'images/my_hat.jpg', expires => '2009-03-01', ); my $uri = $object->query_string_authentication_uri(); =head2 size # show the size of an existing object (if fetched by listing # a bucket) print $object->size .
"\n"; =head2 uri # return the URI of a publically-accessible object my $uri = $object->uri; =head2 initiate_multipart_upload #initiate a new multipart upload for this object my $object = $bucket->object( key => 'massive_video.avi' ); my $upload_id = $object->initiate_multipart_upload; =head2 put_part #add a part to a multipart upload my $put_part_response = $object->put_part( upload_id => $upload_id, part_number => 1, value => $chunk_content, ); my $part_etag = $put_part_response->header('ETag') Returns an L object. It is necessary to keep the ETags for each part, as these are required to complete the upload. =head2 complete_multipart_upload #complete a multipart upload $object->complete_multipart_upload( upload_id => $upload_id, etags => [$etag_1, $etag_2], part_numbers => [$part_number_1, $part_number2], ); The etag and part_numbers parameters are ordered lists specifying the part numbers and ETags for each individual part of the multipart upload. =head2 user_metadata my $object = $bucket->object(key => $key); my $content = $object->get; # or use $object->get_filename($filename) # return the user metadata downloaded, as a hashref my $user_metadata = $object->user_metadata; To upload an object with user metadata, set C at construction time to a hashref, with no C prefixes on the key names. When downloading an object, the C, C and C ethods set the contents of C to the same format. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut Net-Amazon-S3-0.80/examples/backup_cpan.pl000755 000765 000024 00000007633 12512367152 021336 0ustar00rconoverstaff000000 000000 #!/home/acme/bin/perl use strict; use warnings; use lib 'lib'; use Data::Stream::Bulk::Path::Class; use Net::Amazon::S3; use Perl6::Say; use Path::Class; use Set::Object; use Term::ProgressBar::Simple; use List::Util qw(sum); use Digest::MD5::File qw(file_md5_hex); use BerkeleyDB::Manager; use Cwd; use Config; my $m = BerkeleyDB::Manager->new( home => Path::Class::Dir->new(cwd), db_class => 'BerkeleyDB::Hash', create => 1, ); my $db = $m->open_db( file => 'md5_cache' ); my $s3 = Net::Amazon::S3->new( aws_access_key_id => 'XXX', aws_secret_access_key => 'XXX', retry => 1, ); my $client = Net::Amazon::S3::Client->new( s3 => $s3 ); my $bucket = $client->bucket( name => 'minicpan' ); my $root = '/home/acme/Public/minicpan/'; my $file_stream = Data::Stream::Bulk::Path::Class->new( dir => Path::Class::Dir->new($root), only_files => 1, ); my %files; my $file_set = Set::Object->new(); until ( $file_stream->is_done ) { foreach my $filename ( $file_stream->items ) { my $key = $filename->relative($root)->stringify; #[rootname]path/to/file.txt:,,, my $stat = $filename->stat; my $ctime = $stat->ctime; my $mtime = $stat->mtime; my $size = $stat->size; my $inodenum = $stat->ino; my $cachekey = "$key:$ctime,$mtime,$size,$inodenum"; $db->db_get( $cachekey, my $md5_hex ); if ($md5_hex) { #say "hit $cachekey $md5hex"; } else { $md5_hex = file_md5_hex($filename) || die "Failed to find MD5 for $filename"; $m->txn_do( sub { $db->db_put( $cachekey, $md5_hex ); } ); #say "miss $cachekey $md5_hex"; } $files{$key} = { filename => $filename, key => $key, md5_hex => $md5_hex, size => -s $filename, }; $file_set->insert($key); } } my %objects; my $s3_set = Set::Object->new(); my $object_stream = $bucket->list; until ( $object_stream->is_done ) { foreach my $object ( $object_stream->items ) { my $key = $object->key; $objects{$key} = { filename => file( $root, $key 
)->stringify, key => $key, md5_hex => $object->etag, size => $object->size, }; # say $object->key . ' ' . $object->size . ' ' . $object->etag; $s3_set->insert( $object->key ); } } my @to_add; my @to_delete; foreach my $key ( sort keys %files ) { my $file = $files{$key}; my $object = $objects{$key}; if ($object) { if ( $file->{md5_hex} eq $object->{md5_hex} ) { # say "$key same"; } else { # say "$key different"; push @to_add, $file; } } else { # say "$key missing"; push @to_add, $file; } } foreach my $key ( sort keys %objects ) { my $object = $objects{$key}; my $file = $files{$key}; if ($file) { } else { # say "$key to delete"; push @to_delete, $object; } } my $total_size = sum map { file( $_->{filename} )->stat->size } @to_add; $total_size += scalar(@to_delete); my $progress = Term::ProgressBar::Simple->new($total_size); foreach my $file (@to_add) { my $key = $file->{key}; my $filename = $file->{filename}; my $md5_hex = $file->{md5_hex}; my $size = $file->{size}; # say "put $key"; $progress += $size; my $object = $bucket->object( key => $key, etag => $md5_hex, size => $size ); $object->put_filename($filename); } foreach my $object (@to_delete) { my $key = $object->{key}; # say "delete $key"; my $s3_object = $bucket->object( key => $key ); $s3_object->delete; $progress++; } Net-Amazon-S3-0.80/bin/s3cl000755 000765 000024 00000030430 12512367152 016243 0ustar00rconoverstaff000000 000000 #!/usr/bin/env perl use strict; use warnings; use Getopt::Long; use Pod::Usage; use Path::Class; use File::Find::Rule; use Digest::MD5 qw(md5_hex); use Net::Amazon::S3; use MIME::Types qw(by_suffix); use Term::ProgressBar::Simple; # PODNAME: s3cl # ABSTRACT: Command line for Amazon s3 cloud storage my $s3; my %args; my %commands = ( mkbucket => \&mk_bucket, buckets => \&buckets, ls => \&ls, rm => \&rm, cp => \&cp, sync => \&sync, sync_up => \&sync_up, help => \&helper, ); main(); sub main { terminal(); get_options(); init_s3(); my $command = shift @ARGV ||
"help"; $commands{$command} or helper("Unknown command: $command"); $commands{$command}->(); } sub init_s3 { # TODO: read key_id and secret from config file? # use AppConfig; # TODO: probably nicer to put all of this in Net::Amazon::S3::CommandLine # and have simple call to that from here. my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'}; my $aws_secret_access_key = $ENV{'AWS_ACCESS_KEY_SECRET'}; $s3 = Net::Amazon::S3->new( { aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, } ); } sub sync { my $dest = $args{dest_or_source} || ''; helper("No destination supplied") if $dest eq ''; helper("Can not write to: $args{dest_or_source}") unless -w $dest; my $bucket = _get_bucket(); my $list = ls('data'); foreach my $key ( @{ $list->{keys} } ) { my $source = file( $key->{key} ); my $destination = file( $dest, $source ); $destination->dir->mkpath(); warn "$source -> $destination"; my $response = $bucket->get_key_filename( $source->stringify, 'GET', $destination->stringify ) or die $s3->err . ": " . $s3->errstr; } } sub sync_up { my $source = $args{dest_or_source} || ''; my $prefix = $args{prefix_or_key} || ''; my $acl_short = $args{acl_short}; helper("No source supplied") if $source eq ''; helper("Can not read directory: $args{dest_or_source}") unless -d $source; # Work out our local files my @files = File::Find::Rule->file()->in( ($source) ); my $progress = Term::ProgressBar::Simple->new( scalar(@files) ); my $bucket = _get_bucket(); # Get a list of all the remote files my $remote_file_list = $bucket->list_all( { prefix => $prefix } ) or die $s3->err . ": " . 
$s3->errstr; # Now hash, so we can look up a specific key to find the etag my %remote_files; foreach my $key_meta ( @{ $remote_file_list->{keys} } ) { my $key = $key_meta->{key}; $remote_files{$key} = $key_meta; } my $dir = dir($source); my $dir_string = $dir->stringify; my $mimetypes = MIME::Types->new; foreach my $f (@files) { my $file = file($f); my ( $mediatype, $encoding ) = by_suffix $file->basename(); # Assume plain text unless we can work it out unless ($mediatype) { if ( -T $file ) { $mediatype = 'text/plain'; } else { $progress++; $progress->message("$f - NOT uploading"); warn "Not uploading: $file"; warn "Unknown mime type, submit patch to MIME::Types"; next; } } my $content = $file->slurp(); my $md5 = md5_hex($content); my $key = $file->stringify; $key =~ s/$dir_string//; # remove our local path for the dir $key =~ s{^/}{}; # remove the leading slash $key = "$prefix$key"; # Add the prefix if there is one if ( my $remote = $remote_files{$key} ) { if ( $remote->{etag} eq $md5 ) { $progress->message("$key - $mediatype - not changed"); next; } } $bucket->add_key_filename( $key, $f, { content_type => $mediatype, }, ) or die $s3->err . ": " . $s3->errstr; if ($acl_short) { $bucket->set_acl( { key => $key, acl_short => $acl_short, } ) || die $s3->err . ": " . $s3->errstr; } $progress->message("$key - $mediatype - uploaded"); $progress++; } } sub cp { my $dest = $args{dest_or_source} || ''; helper("No destination supplied") if $dest eq ''; my $key = $args{prefix_or_key} || helper("No key supplied"); if ( -d $dest ) { # If we have a directory we need to add the file name $dest = file( $dest, file($key)->basename ); } my $bucket = _get_bucket(); unless ( $bucket->get_key_filename( "$key", 'GET', "$dest" ) ) { die $s3->err . ": " .
$s3->errstr if $s3->err; die "Could not copy $key from bucket $args{bucket}"; } } sub ls { my $mode = shift || 'print'; my $bucket = _get_bucket(); my $ls_conf; $ls_conf->{prefix} = $args{prefix_or_key} if $args{prefix_or_key}; # list files in the bucket my $response = $bucket->list_all($ls_conf) or die $s3->err . ": " . $s3->errstr; return $response if $mode eq 'data'; foreach my $key ( @{ $response->{keys} } ) { my $key_last_modified = $key->{last_modified}; # 2008-07-14T22:31:10.000Z $key_last_modified =~ s/:\d{2}\.\d{3}Z$//; my $key_name = $key->{key}; my $key_size = $key->{size}; print "$key_size $key_last_modified $key_name\n"; } } sub rm { my $bucket = _get_bucket(); helper("Must have a <bucket>:<key>") unless $args{prefix_or_key}; my $res = "NO"; if ( $args{force} ) { $res = 'y'; } else { print "\nOnce deleted there is no way to retrieve this key again." . "\nAre you sure you want to delete $args{bucket}:$args{prefix_or_key}? y/N\n"; ( $res = <STDIN> ) =~ s/\n//; } if ( $res eq 'y' ) { # delete key in this bucket my $response = $bucket->delete_key( $args{prefix_or_key} ) or die $s3->err . ": " . $s3->errstr; } } sub mk_bucket { my $bucketname = $args{bucket}; my $bucket = $s3->add_bucket( { bucket => $bucketname, location_constraint => 'EU' } ) or die $s3->err . ": " . $s3->errstr; } sub buckets { my $response = $s3->buckets; my $num = scalar @{ $response->{buckets} || [] }; print "You have $num bucket"; print "s" if $num != 1; print ":\n"; foreach my $bucket ( @{ $response->{buckets} } ) { print '- ' . $bucket->bucket . "\n"; } } sub terminal { my $encoding = eval { require Term::Encoding; Term::Encoding::get_encoding(); } || "utf-8"; binmode STDOUT, ":encoding($encoding)"; } # TODO: Replace with AppConfig this is ick! sub get_options { my $help = 0; my $man = 0; my $force = 0; my $loc = "US"; my $bucket = ""; GetOptions( \%args, "bucket=s", "jurisdiction=s", "acl_short=s", "f|force" => \$force, "h|help|?"
=> \$help, "man" => \$man, ) or pod2usage(2); $args{force} = $force; foreach my $arg (@ARGV) { if ( $arg =~ /:/ ) { my ( $b, $rest ) = split( ":", $arg ); $args{bucket} = $b; $args{prefix_or_key} = $rest; } } # For cp / sync etc $args{dest_or_source} = $ARGV[2] if $ARGV[2]; pod2usage(1) if $help || @ARGV == 0; pod2usage( -verbose => 2 ) if $man; } sub _get_bucket { helper("No bucket supplied") unless $args{bucket}; my $bucket = $s3->bucket( $args{bucket} ); die $s3->err . ": " . $s3->errstr if $s3->err; helper("Could not get bucket $args{bucket}") unless $bucket; return $bucket; } sub helper { my $msg = shift; if ($msg) { pod2usage( -message => $msg, -exitval => 2 ); } exit; } =pod =encoding UTF-8 =head1 NAME s3cl - Command line for Amazon s3 cloud storage =head1 VERSION version 0.80 =head1 SYNOPSIS s3cl command [options] s3cl buckets s3cl mkbucket --bucket some_bucket_name --jurisdiction [EU|US] s3cl ls :[prefix] s3cl cp : /path/[filename] s3cl sync :[prefix] /path/ s3cl sync_up [--acl_short=public-read] :[prefix] /path/ s3cl rm : Options: -help brief help message -man full documentation We take NO responsibility for the costs incured through using this script. To run this script, you need to set a pair of environment variables: AWS_ACCESS_KEY_ID AWS_ACCESS_KEY_SECRET =head1 DESCRIPTION This program gives a command line interface to Amazons s3 storage service. It does not limit the number of requests (which may cost you more money than if you did it a different way!) and each request costs Money (although some costs from EC2 may be $0.0, check latest from Amazon costs page) - we take NO reponsibility for your bill. =head1 AUTHOR Rusty Conover =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2015 by Amazon Digital Services, Leon Brocard, Brad Fitzpatrick, Pedro Figueiredo, Rusty Conover. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. 
=cut __DATA__ =head1 COMMANDS =over 4 =item B<buckets> s3cl buckets List all buckets for this account. =item B<mkbucket> s3cl mkbucket --bucket some_bucket_name [--jurisdiction [EU|US]] Create a new bucket, optionally specifying what jurisdiction it should be created in. =item B<ls> s3cl ls <bucket>:[prefix] List contents of a bucket; the optional B<prefix> can be partial, in which case all keys matching this as the start of the key name will be returned. If no B<prefix> is supplied all keys of the bucket will be returned. =item B<cp> s3cl cp <bucket>:<key> target_file s3cl cp <bucket>:<key> target_directory Copy a single key from the bucket to the target file, or into the target_directory. =item B<sync> s3cl sync <bucket>:[prefix] target_dir Downloads all files matching the prefix into a directory structure replicating that of the prefix and all 'sub-directories'. It will download ALL files - even if already on your local disk: http://www.amazon.com/gp/browse.html?node=16427261 # Data transfer "in" and "out" refers to transfer into and out # of Amazon S3. Data transferred between Amazon EC2 and # Amazon S3, is free of charge (i.e., $0.00 per GB), except # data transferred between Amazon EC2 and Amazon S3-Europe, # which will be charged at regular rates. =item B<sync_up> s3cl sync_up [--acl_short=public-read] <bucket>:[prefix] /path/ Upload all the files below /path/ to S3, with an optional prefix at the start of the key name. The existing S3 files and metadata are fetched from S3 and the md5 (etag) is compared to what is on the local disk; files are not uploaded if the content has not changed. Use --acl_short to set access control, options from L<Net::Amazon::S3>; this is only applied when the file is uploaded. Each file's content-type is worked out using L<MIME::Types>; if no type is found, 'text/plain' is used for ASCII text files, otherwise a warning is issued and the file is NOT uploaded. Currently this does NOT remove old files from S3, and if there is any change to a file then the entire file will be reuploaded.
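The change detection that sync_up performs can be sketched as follows. This is a minimal, self-contained illustration, not the script itself; the C<%remote_files> hash and the C<needs_upload> helper stand in for a real bucket listing and the inline comparison in sync_up:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hypothetical remote listing: key => etag, as built from list_all().
my %remote_files = ( 'docs/readme.txt' => md5_hex('old contents') );

# Upload only when the key is new, or when its remote etag differs
# from the MD5 of the local content.
sub needs_upload {
    my ( $key, $content, $remote ) = @_;
    return 1 unless exists $remote->{$key};        # new file: upload
    return $remote->{$key} ne md5_hex($content);   # changed: upload
}

print needs_upload( 'docs/readme.txt', 'old contents', \%remote_files )
    ? "upload\n" : "skip\n";    # prints "skip": etag matches local MD5
print needs_upload( 'docs/readme.txt', 'new contents', \%remote_files )
    ? "upload\n" : "skip\n";    # prints "upload": content changed
```

Note that this comparison only holds for objects uploaded in a single part; multipart uploads produce etags that are not plain MD5 digests (they end in a dash followed by the part count).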
=item B<rm> s3cl rm <bucket>:<key> Remove a key (file) from the bucket; removing a non-existent file is not classed as an error. Once removed the key (file) cannot be restored - so use with care! =back =head1 ABOUT This module contains code modified from Amazon that contains the following notice (which is also applicable to this code): # This software code is made available "AS IS" without # warranties of any kind. You may copy, display, modify and # redistribute the software code either by itself or as incorporated # into your code; provided that you do not remove any proprietary # notices. Your use of this software code is at your own risk and # you waive any claim against Amazon Digital Services, Inc. or its # affiliates with respect to your use of this software code. # (c) 2006 Amazon Digital Services, Inc. or its affiliates. =head1 AUTHOR Leo Lapworth - Part of the HinuHinu project =cut
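The C<bucket:key> arguments used throughout these commands are parsed by splitting on a colon. A minimal sketch of that parsing (the C<parse_bucket_arg> helper is illustrative, not part of s3cl):

```perl
use strict;
use warnings;

# Split a "bucket:key" argument into bucket and key. Limiting the
# split to two fields keeps any later colons inside the key itself.
sub parse_bucket_arg {
    my ($arg) = @_;
    return unless index( $arg, ':' ) >= 0;
    return split /:/, $arg, 2;
}

my ( $bucket, $key ) = parse_bucket_arg('mybucket:path/to/file.txt');
print "bucket=$bucket key=$key\n";    # bucket=mybucket key=path/to/file.txt
```

Note that s3cl's own get_options uses an unlimited split and takes only the first two fields, so a key containing a colon would be truncated there; the two-field limit here is a defensive variation, not the script's exact behaviour.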