use strict;
use warnings;

package HTTP::Async;

our $VERSION = '0.33';

use Carp;
use Data::Dumper;
use HTTP::Response;
use IO::Select;
use Net::HTTP::NB;
use Net::HTTP;
use URI;
use Time::HiRes qw( time sleep );

=head1 NAME

HTTP::Async - process multiple HTTP requests in parallel without blocking.

=head1 SYNOPSIS

Create an object and add some requests to it:

    use HTTP::Async;
    my $async = HTTP::Async->new;

    # create some requests and add them to the queue.
    $async->add( HTTP::Request->new( GET => 'http://www.perl.org/' ) );
    $async->add( HTTP::Request->new( GET => 'http://www.ecclestoad.co.uk/' ) );

and then EITHER process the responses as they come back:

    while ( my $response = $async->wait_for_next_response ) {
        # Do some processing with $response
    }

OR do something else if there is no response ready:

    while ( $async->not_empty ) {
        if ( my $response = $async->next_response ) {
            # deal with $response
        }
        else {
            # do something else
        }
    }

OR just use the async object to fetch stuff in the background and deal with
the responses at the end:

    # Do some long code...
    for ( 1 .. 100 ) {
        some_function();
        $async->poke;    # lets it check for incoming data.
    }

    while ( my $response = $async->wait_for_next_response ) {
        # Do some processing with $response
    }

=head1 DESCRIPTION

Although using the conventional C<LWP::UserAgent> is fast and easy, it does
have some drawbacks: code execution blocks until the request has been
completed, and it is only possible to process one request at a time.
C<HTTP::Async> attempts to address these limitations. It gives you an
'Async' object that you can add requests to, and then get the responses
back from as they finish.
The actual sending and receiving of the requests is abstracted. As soon as
you add a request it is transmitted; if there are too many requests in
progress at the moment, it is queued. There is no concept of starting or
stopping - it runs continuously. Whilst it is waiting to receive data it
returns control to the code that called it, meaning that you can carry out
processing whilst fetching data from the network. All without forking or
threading - it is actually done using C<select> lists.
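The queuing behaviour described above can be sketched as follows. This is a
minimal illustrative sketch, not part of the module's own examples: it
assumes the C<slots> constructor argument (the cap on simultaneous in-flight
requests) and uses placeholder URLs.

    use strict;
    use warnings;
    use HTTP::Async;
    use HTTP::Request;

    # Allow at most five requests on the wire at once; the rest wait in
    # the queue and are sent automatically as slots free up.
    my $async = HTTP::Async->new( slots => 5 );

    $async->add( HTTP::Request->new( GET => "http://example.com/page/$_" ) )
        for 1 .. 20;

    # Responses arrive in completion order, not submission order.
    while ( my $response = $async->wait_for_next_response ) {
        printf "%s -> %s\n", $response->request->uri, $response->code;
    }

Because sending happens as a side effect of adding and polling, no explicit
"start" call is needed; C<wait_for_next_response> simply blocks until the
next response is available.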