URI-Find-20140709000755000765000024 012357350402 13235 5ustar00schwernstaff000000000000URI-Find-20140709/Build.PL000444000765000024 154312357350402 14671 0ustar00schwernstaff000000000000#!/usr/bin/perl -w use Module::Build 0.30; require 5.006; my $build = Module::Build->new( module_name => 'URI::Find', configure_requires => { Module::Build => '0.30' }, build_requires => { Test::More => '0.88', Module::Build => '0.30', }, requires => { perl => '5.8.9', URI => '1.60', }, license => 'perl', dist_author => 'Michael G Schwern ', meta_merge => { resources => { homepage => 'http://search.cpan.org/dist/URI-Find', bugtracker => 'http://github.com/schwern/URI-Find/issues/', repository => 'http://github.com/schwern/URI-Find/', } }, recursive_test_files => 1, ); $build->create_build_script; URI-Find-20140709/Changes000444000765000024 1443712357350402 14716 0ustar00schwernstaff00000000000020140709 Wed Jul 9 16:28:37 PDT 2014 New Features * The "git" scheme is supported. (Schwern) * svn, ssh and svn+ssh schemes are supported. [rt.cpan.org 57490] (Schwern) * Added a --schemeless option to urifind. (Schwern) Bug Fixes * http:// is no longer matched [rt.cpan.org 63283] (Schwern) Backwards Incompatibilities * Previously, URIs stringified to their canonical version. Now they stringify as written. This results in less loss of information. For example. "Blah HTTP:://FOO.COM" previously would stringify as "http://foo.com/" and now it will stringify as "HTTP://FOO.COM". To restore the old behavior you can call $uri->canonical. (Schwern) Distribution Changes * No longer using URI::URL. (Schwern) * Now requires URI 1.60 for Unicode support. (Schwern) 20140702 Wed Jul 2 13:41:47 PDT 2014 New Features * IDNA (aka Unicode) domains are now supported. [github 3] (GwenDragon) * The list of TLDs for schemeless matching has been updated. [github 3] (GwenDragon) Bug Fixes * Handle balanced [], {} and quotes in addition to (). [rt.cpan.org 85053] (Schwern) * Don't mangle IPv6 URLs. [rt.cpan.org 85053] (Schwern) * Schemeless is more accurate about two letter TLDs. [github 3] (GwenDragon) Distribution Changes * Switched the issue tracker to Github. (Schwern) 20111103 Thu Nov 3 12:14:21 PDT 2011 Bug Fixes * URI::URL::strict will no longer leak out of find() if the callback or filter fails. [rt.cpan.org 71153] (Carl Chambers) 20111020 Thu Oct 20 17:31:56 PDT 2011 Bug Fixes * Things which look like URIs, but aren't, are now properly escaped like other text. [rt.cpan.org 71658] New Features * Balanced parens in URIs are no longer stripped. Example: "http://example.com/foo(bar)" (Merten Falk) 20100505 Wed May 5 18:48:44 PDT 2010 Test Fixes * Fixed t/urifind/find.t on Windows 20100504.1039 Tue May 4 10:39:23 PDT 2010 Doc Fixes * Forgot to mention that we ship with urifind now. 20100504 Tue May 4 10:29:52 PDT 2010 New Features * Added a urifind program. (Darren Chamberlain) Bug Fixes * The final semi-colon was being strippped form URLs found in HTML that ended with HTML entities. (Michael Peters) Example: http://google.com/search?q=<html> * URLs with leading dots, pluses and minuses are now found. [rt.cpan.org 57032] Example: stuff...http://example.com 20100211 Thu Feb 11 04:02:26 PST 2010 Bug Fixes * Finding URIs inside brackets was pretty badly broken by the last release. (Michael Peters) 20090319 Thu Mar 19 12:17:53 PDT 2009 Bug Fixes * Schemeless now ignores the case of the TLD. New Features * Updated the list of accepted domains for finding schemeless URIs from the latest ICANN list. 
Docs * Add LICENSE section * Remove wildly out of date CAVEATS * Added an example of how to get a list of all URIs. * Updated INSTALL section to reflect new dependencies and Module::Build installation process * Regenerated the README file 20090316 Mon Mar 16 16:18:10 PDT 2009 New Features * Added optional replacement function to find(). Now you can not only replace URLs found, but also the rest of the text around them in one fell swoop. (Mike Schilli) [rt.cpan.org 20486] * Whitespace inside <...> is now ignored as per the suggestion of RFC 3986 appendix C. [rt.cpan.org 20483] Other * Michael G Schwern is now primary maintainer again. Thanks for all your work, Roderick! * Repository moved to http://github.com/schwern/uri-find * Now requires Test::More * Verisoning scheme changed to ISO date integers * Minimum Perl version is now 5.6.0. 0.16 Fri Jul 22 06:00:24 EDT 2005 - Oops, make the URI::Find::Schemeless->top_level_domain_re case insensitive, as it should be and the docs claimed it was. Thanks to Todd Eigenschink. 0.15 Tue Mar 22 07:23:17 EST 2005 - Have all functions croak if invoked with the wrong number of arguments. Add URI::Find->badinvo. https://rt.cpan.org/NoAuth/Bug.html?id=1845 - Mention DARREN's urifind script in the man page. - Oops, URI::URL::strict was turned on and left on. Put it back the way you found it. Thanks to Chris Nandor. https://rt.cpan.org/NoAuth/Bug.html?id=11906 - Schemeless.pm: - Find 'intag.com'. - Get $tldRe from a new class method, ->top_level_domain_re. - Update top level domain list. 0.14 Sat Oct 9 08:20:04 EDT 2004 - Add copyright notice. - Add ] to main $cruftSet, } to schemeless $cruftSet, for [http://square.com] and {brace.com}. - quotemeta() $cruftSet. 0.13 Mon Jul 1 10:37:54 EDT 2002 - Don't find any schemeless URIs with a plain URI::Find. Previously it'd find ones which started with "ftp." and "www.", but it was more prone to false positives than URI::Find::Schemeless. - Have schemeless_to_schemed use http:// except in the specific case in which it uses ftp://. Remove URI::Find::Schemeless's version. 0.12 Wed Mar 20 14:39:21 EST 2002 - Improve the "wrap each URI found in an HTML anchor" example. - Release a new version so CPAN sees the maintainer change. 0.11 Thu Jul 26 14:43:49 EDT 2001 - Michael passed the module to Roderick for maintenance. - Improve test suite. - Tweak URI::Find::Schemeless not to find Foo.p[ml]. 0.10 Mon Jul 10 20:14:08 EDT 2000 - Rearchitected the internals to allow simple subclassing - Added URI::Find::Schemeless (thanks Roderick) 0.04 Sat Feb 26 09:05:11 GMT 2000 - Added # to the uric set of characters so HTML anchors are caught. 0.03 Tue Feb 1 16:15:22 EST 2000 - Added some heuristic discussion to the docs. - Added some heuristics to avoid picking up perl module names - Improved schemeless URI heuristic to avoid picking up usenet board names. - Handling the case better as suggested in RFC 2396 Apdx E - Added ; to the cruft heuristic 0.02 Tue Feb 1 13:11:56 EST 2000 - Added heuristic to handle 'URL:http://www.foo.com' - Added heuristic to handle trailing quotes. 0.01 Mon Jan 31 19:12:23 EST 2000 - First working version released to CPAN. URI-Find-20140709/INSTALL000444000765000024 153512357350402 14427 0ustar00schwernstaff000000000000WHAT IS THIS? This is URI::Find, a perl module. Please see the README that comes with this distribution. HOW DO I INSTALL IT? 
To install this module, cd to the directory that contains this README file and type the following: perl Build.PL ./Build ./Build test ./Build install To install this module into a specific directory, do: perl Build.PL --install_base /name/of/the/directory ...the rest is the same... Please also read the perlmodinstall man page, if available. WHAT VERSION OF PERL DO I NEED? perl 5.8.9 or higher WHAT MODULES DO I NEED? To build, test and install the module you need: Module::Build 0.30 or higher Test::More 0.88 or higher To run the module you need: URI 1.60 or higher They can all be found on http://search.cpan.org/ or by running your CPAN shell.URI-Find-20140709/MANIFEST000444000765000024 45112357350402 14503 0ustar00schwernstaff000000000000bin/urifind Build.PL Changes INSTALL lib/URI/Find.pm lib/URI/Find/Schemeless.pm MANIFEST This list of files MANIFEST.SKIP META.json META.yml README t/filter.t t/Find.t t/html.t t/is_schemed.t t/load-schemeless.t t/rfc3986_appendix_c.t t/urifind/find.t t/urifind/pod.t t/urifind/sciencenews TODO URI-Find-20140709/MANIFEST.SKIP000444000765000024 215512357350402 15273 0ustar00schwernstaff000000000000 #!start included /Users/schwern/perl5/perlbrew/perls/perl-5.14.1/lib/5.14.1/ExtUtils/MANIFEST.SKIP # Avoid version control files. \bRCS\b \bCVS\b \bSCCS\b ,v$ \B\.svn\b \B\.git\b \B\.gitignore\b \b_darcs\b \B\.cvsignore$ # Avoid VMS specific MakeMaker generated files \bDescrip.MMS$ \bDESCRIP.MMS$ \bdescrip.mms$ # Avoid Makemaker generated and utility files. \bMANIFEST\.bak \bMakefile$ \bblib/ \bMakeMaker-\d \bpm_to_blib\.ts$ \bpm_to_blib$ \bblibdirs\.ts$ # 6.18 through 6.25 generated this # Avoid Module::Build generated and utility files. \bBuild$ \b_build/ \bBuild.bat$ \bBuild.COM$ \bBUILD.COM$ \bbuild.com$ # Avoid temp and backup files. ~$ \.old$ \#$ \b\.# \.bak$ \.tmp$ \.# \.rej$ # Avoid OS-specific files/dirs # Mac OSX metadata \B\.DS_Store # Mac OSX SMB mount metadata files \B\._ # Avoid Devel::Cover and Devel::CoverX::Covered files. \bcover_db\b \bcovered\b # Avoid MYMETA files ^MYMETA\. #!end included /Users/schwern/perl5/perlbrew/perls/perl-5.14.1/lib/5.14.1/ExtUtils/MANIFEST.SKIP # Avoid patches and diff files lying around \.patch$ \.diff$ # Don't ship the Travis config. 
^\.travis\.yml$ URI-Find-20140709/META.json000444000765000024 255512357350402 15022 0ustar00schwernstaff000000000000{ "abstract" : "Find URIs in arbitrary text", "author" : [ "Michael G Schwern " ], "dynamic_config" : 1, "generated_by" : "Module::Build version 0.4205", "license" : [ "perl_5" ], "meta-spec" : { "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec", "version" : "2" }, "name" : "URI-Find", "prereqs" : { "build" : { "requires" : { "Module::Build" : "0.30", "Test::More" : "0.88" } }, "configure" : { "requires" : { "Module::Build" : "0.30" } }, "runtime" : { "requires" : { "URI" : "1.60", "perl" : "v5.8.9" } } }, "provides" : { "URI::Find" : { "file" : "lib/URI/Find.pm", "version" : "20140709" }, "URI::Find::Schemeless" : { "file" : "lib/URI/Find/Schemeless.pm", "version" : "20140709" } }, "release_status" : "stable", "resources" : { "bugtracker" : { "web" : "http://github.com/schwern/URI-Find/issues/" }, "homepage" : "http://search.cpan.org/dist/URI-Find", "license" : [ "http://dev.perl.org/licenses/" ], "repository" : { "url" : "http://github.com/schwern/URI-Find/" } }, "version" : "20140709" } URI-Find-20140709/META.yml000444000765000024 152212357350402 14643 0ustar00schwernstaff000000000000--- abstract: 'Find URIs in arbitrary text' author: - 'Michael G Schwern ' build_requires: Module::Build: '0.30' Test::More: '0.88' configure_requires: Module::Build: '0.30' dynamic_config: 1 generated_by: 'Module::Build version 0.4205, CPAN::Meta::Converter version 2.141520' license: perl meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: '1.4' name: URI-Find provides: URI::Find: file: lib/URI/Find.pm version: '20140709' URI::Find::Schemeless: file: lib/URI/Find/Schemeless.pm version: '20140709' requires: URI: '1.60' perl: v5.8.9 resources: bugtracker: http://github.com/schwern/URI-Find/issues/ homepage: http://search.cpan.org/dist/URI-Find license: http://dev.perl.org/licenses/ repository: http://github.com/schwern/URI-Find/ version: '20140709' URI-Find-20140709/README000444000765000024 461712357350402 14262 0ustar00schwernstaff000000000000NAME URI::Find - Find URIs in arbitrary text SYNOPSIS require URI::Find; my $finder = URI::Find->new(\&callback); $how_many_found = $finder->find(\$text); DESCRIPTION This module does one thing: Finds URIs and URLs in plain text. It finds them quickly and it finds them all (or what URI::URL considers a URI to be.) It only finds URIs which include a scheme (http:// or the like), for something a bit less strict have a look at URI::Find::Schemeless. For a command-line interface, see Darren Chamberlain's "urifind" script. It's available from his CPAN directory, . EXAMPLES Store a list of all URIs (normalized) in the document. my @uris; my $finder = URI::Find->new(sub { my($uri) = shift; push @uris, $uri; }); $finder->find(\$text); Print the original URI text found and the normalized representation. my $finder = URI::Find->new(sub { my($uri, $orig_uri) = @_; print "The text '$orig_uri' represents '$uri'\n"; return $orig_uri; }); $finder->find(\$text); Check each URI in document to see if it exists. use LWP::Simple; my $finder = URI::Find->new(sub { my($uri, $orig_uri) = @_; if( head $uri ) { print "$orig_uri is okay\n"; } else { print "$orig_uri cannot be found\n"; } return $orig_uri; }); $finder->find(\$text); Turn plain text into HTML, with each URI found wrapped in an HTML anchor. 
    use CGI qw(escapeHTML);
    use URI::Find;

    my $finder = URI::Find->new(sub {
        my($uri, $orig_uri) = @_;
        return qq|<a href="$uri">$orig_uri</a>|;
    });
    $finder->find(\$text, \&escapeHTML);
    print "<pre>$text</pre>
"; AUTHOR Michael G Schwern with insight from Uri Gutman, Greg Bacon, Jeff Pinyan, Roderick Schertler and others. Roderick Schertler maintained versions 0.11 to 0.16. LICENSE Copyright 2000, 2009 by Michael G Schwern . This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See http://www.perlfoundation.org/artistic_license_1_0 SEE ALSO URI::Find::Schemeless, URI::URL, URI, RFC 3986 Appendix C URI-Find-20140709/TODO000444000765000024 75412357350402 14050 0ustar00schwernstaff000000000000- parameterize top level domain list in Schemeless.pm? - shouldn't have picked this out: $url = 'http://'.rand(1000000).'@anonymizer.com/'.$url (/url 63); - find email addresses - $text =~ s((?$1)g; - see also Email::Find - I'd think this should either be leaving off the parenthesized part or including the close paren: http://www.tbjck.com(86-10-85893372) -> http://www.tbjck.com(86-10-85893372/ URI-Find-20140709/bin000755000765000024 012357350402 14005 5ustar00schwernstaff000000000000URI-Find-20140709/bin/urifind000444000765000024 1615612357350402 15556 0ustar00schwernstaff000000000000#!/usr/local/bin/perl -w # ---------------------------------------------------------------------- # urifind - find URIs in a document and dump them to STDOUT. # Copyright (C) 2003 darren chamberlain # ---------------------------------------------------------------------- use strict; our $VERSION = 20140709; use File::Basename qw(basename); use Getopt::Long qw(GetOptions); use IO::File; use URI::Find; use URI::Find::Schemeless; # What to do, and how my $help = 0; my $version = 0; my $sort = 0; my $reverse = 0; my $unique = 0; my $prefix = 0; my $noprefix = 0; my @pats = (); my @schemes = (); my $dump = 0; my $schemeless = 0; Getopt::Long::Configure(qw{no_ignore_case bundling}); GetOptions( 's!' => \$sort, 'u!' => \$unique, 'p!' => \$prefix, 'n!' => \$noprefix, 'r!' => \$reverse, 'h!' => \$help, 'v!' => \$version, 'd!' => sub { $dump = 1 }, 'D!' => sub { $dump = 2 }, 'P=s@' => \@pats, 'S=s@' => \@schemes, 'schemeless!' => \$schemeless, ); if ($help || $version) { my $prog = basename($0); if ($help) { print < 1) { my $prog = basename $0; die "Can't specify -p and -n at the same time; try $prog -h\n"; } # Print filename with matches? -p / -n # If there is more than one file, then show filenames by # default, unless explicitly asked not to (-n) if (@ARGV > 1) { $prefix = 1 unless $noprefix; } else { $prefix = 0 unless $prefix; } # Add schemes to the list of regexen if (@schemes) { unshift @pats => sprintf '^(\b%s\b):' => join '\b|\b' => @schemes; } # If we are dumping (-d, -D), then dump. Exit if -D. if ($dump) { print STDERR "\$scheme = '" . (defined $pats[0] ? $pats[0] : '') . "'\n"; print STDERR "\@pats = ('" . join("', '", @pats) . "')\n"; exit if $dump == 2; } # Find the URIs for my $argv (@ARGV) { my ($name, $fh, $data); $argv = \*STDIN if ($argv eq '-'); if (ref $argv eq 'GLOB') { local $/; $data = <$argv>; $name = '' } else { local $/; $fh = IO::File->new($argv) or die "Can't open $argv: $!"; $data = <$fh>; $name = $argv; } my $class = $schemeless ? 
"URI::Find::Schemeless" : "URI::Find"; my $finder = $class->new(sub { push @uris => [ $name, $_[0] ] }); $finder->find(\$data); } # Apply patterns, in @pats for my $pat (@pats) { @uris = grep { $_->[1] =~ /$pat/ } @uris; } # Remove redundant links if ($unique) { my %unique; @uris = grep { ++$unique{$_->[1]} == 1 } @uris; } # Sort links, possibly in reverse if ($sort || $reverse) { if ($reverse) { @uris = sort { $b->[1] cmp $a->[1] } @uris; } else { @uris = sort { $a->[1] cmp $b->[1] } @uris; } } # Flatten the arrayrefs if ($prefix) { @uris = map { join ': ' => @$_ } @uris; } else { @uris = map { $_->[1] } @uris; } print map { "$_\n" } @uris; exit 0; __END__ =head1 NAME urifind - find URIs in a document and dump them to STDOUT. =head1 SYNOPSIS $ urifind file =head1 DESCRIPTION F is a simple script that finds URIs in one or more files (using C), and outputs them to to STDOUT. That's it. To find all the URIs in F, use: $ urifind file1 To find the URIs in multiple files, simply list them as arguments: $ urifind file1 file2 file3 F will read from C if no files are given or if a filename of C<-> is specified: $ wget http://www.boston.com/ -O - | urifind When multiple files are listed, F prefixes each found URI with the file from which it came: $ urifind file1 file2 file1: http://www.boston.com/index.html file2: http://use.perl.org/ This can be turned on for single files with the C<-p> ("prefix") switch: $urifind -p file3 file1: http://fsck.com/rt/ It can also be turned off for multiple files with the C<-n> ("no prefix") switch: $ urifind -n file1 file2 http://www.boston.com/index.html http://use.perl.org/ By default, URIs will be displayed in the order found; to sort them ascii-betically, use the C<-s> ("sort") option. To reverse sort them, use the C<-r> ("reverse") flag (C<-r> implies C<-s>). $ urifind -s file1 file2 http://use.perl.org/ http://www.boston.com/index.html mailto:webmaster@boston.com $ urifind -r file1 file2 mailto:webmaster@boston.com http://www.boston.com/index.html http://use.perl.org/ Finally, F supports limiting the returned URIs by scheme or by arbitrary pattern, using the C<-S> option (for schemes) and the C<-P> option. Both C<-S> and C<-P> can be specified multiple times: $ urifind -S mailto file1 mailto:webmaster@boston.com $ urifind -S mailto -S http file1 mailto:webmaster@boston.com http://www.boston.com/index.html C<-P> takes an arbitrary Perl regex. It might need to be protected from the shell: $ urifind -P 's?html?' file1 http://www.boston.com/index.html $ urifind -P '\.org\b' -S http file4 http://www.gnu.org/software/wget/wget.html Add a C<-d> to have F dump the refexen generated from C<-S> and C<-P> to C. C<-D> does the same but exits immediately: $ urifind -P '\.org\b' -S http -D $scheme = '^(\bhttp\b):' @pats = ('^(\bhttp\b):', '\.org\b') To remove duplicates from the results, use the C<-u> ("unique") switch. =head1 OPTION SUMMARY =over 4 =item -s Sort results. =item -r Reverse sort results (implies -s). =item -u Return unique results only. =item -n Don't include filename in output. =item -p Include filename in output (0 by default, but 1 if multiple files are included on the command line). =item -P $re Print only lines matching regex '$re' (may be specified multiple times). =item -S $scheme Only this scheme (may be specified multiple times). =item -h Help summary. =item -v Display version and exit. =item -d Dump compiled regexes for C<-S> and C<-P> to C. =item -D Same as C<-d>, but exit after dumping. 
=back =head1 AUTHOR darren chamberlain Edarren@cpan.orgE =head1 COPYRIGHT (C) 2003 darren chamberlain This library is free software; you may distribute it and/or modify it under the same terms as Perl itself. =head1 SEE ALSO L URI-Find-20140709/lib000755000765000024 012357350402 14003 5ustar00schwernstaff000000000000URI-Find-20140709/lib/URI000755000765000024 012357350402 14442 5ustar00schwernstaff000000000000URI-Find-20140709/lib/URI/Find.pm000444000765000024 3453012357350402 16042 0ustar00schwernstaff000000000000# Copyright (c) 2000, 2009 Michael G. Schwern. All rights reserved. # This program is free software; you can redistribute it and/or modify # it under the same terms as Perl itself. package URI::Find; require 5.006; use strict; use base qw(Exporter); use vars qw($VERSION @EXPORT); $VERSION = 20140709; @EXPORT = qw(find_uris); use constant YES => (1==1); use constant NO => !YES; use Carp qw(croak); require URI; my $reserved = q(;/?:@&=+$,[]); my $mark = q(-_.!~*'()); my $unreserved = "A-Za-z0-9\Q$mark\E"; my $uric = quotemeta($reserved) . '\p{isAlpha}' . $unreserved . "%"; # URI scheme pattern without the non-alpha numerics. # Those are extremely uncommon and interfere with the match. my($schemeRe) = qr/[a-zA-Z][a-zA-Z0-9\+]*/; my($uricSet) = $uric; # use new set # Some schemes which URI.pm does not explicitly support. my $extraSchemesRe = qr{^(?:git|svn|ssh|svn\+ssh)$}; # We need to avoid picking up 'HTTP::Request::Common' so we have a # subset of uric without a colon ("I have no colon and yet I must poop") my($uricCheat) = __PACKAGE__->uric_set; $uricCheat =~ tr/://d; # Identifying characters accidentally picked up with a URI. my($cruftSet) = q{])\},.'";}; #'# =head1 NAME URI::Find - Find URIs in arbitrary text =head1 SYNOPSIS require URI::Find; my $finder = URI::Find->new(\&callback); $how_many_found = $finder->find(\$text); =head1 DESCRIPTION This module does one thing: Finds URIs and URLs in plain text. It finds them quickly and it finds them B (or what URI.pm considers a URI to be.) It only finds URIs which include a scheme (http:// or the like), for something a bit less strict have a look at L. For a command-line interface, L is provided. =head2 Public Methods =over 4 =item B my $finder = URI::Find->new(\&callback); Creates a new URI::Find object. &callback is a function which is called on each URI found. It is passed two arguments, the first is a URI object representing the URI found. The second is the original text of the URI found. The return value of the callback will replace the original URI in the text. =cut sub new { @_ == 2 || __PACKAGE__->badinvo; my($proto, $callback) = @_; my($class) = ref $proto || $proto; my $self = bless {}, $class; $self->{callback} = $callback; return $self; } =item B my $how_many_found = $finder->find(\$text); $text is a string to search and possibly modify with your callback. Alternatively, C can be called with a replacement function for the rest of the text: use CGI qw(escapeHTML); # ... my $how_many_found = $finder->find(\$text, \&escapeHTML); will not only call the callback function for every URL found (and perform the replacement instructions therein), but also run the rest of the text through C. This makes it easier to turn plain text which contains URLs into HTML (see example below). 
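As a quick sketch of how the callback and the escape function divide up the
text (both subs below are purely illustrative and not part of the module),
the escape function sees only the text surrounding the URIs, so the
following would uppercase everything except the URIs themselves:

    my $finder = URI::Find->new(sub {
        my($uri, $orig_uri) = @_;
        return $orig_uri;               # leave each URI exactly as written
    });
    $finder->find(\$text, sub { uc $_[0] });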
=cut sub find { @_ == 2 || @_ == 3 || __PACKAGE__->badinvo; my($self, $r_text, $escape_func) = @_; # Might be slower, but it makes the code simpler $escape_func ||= sub { return $_[0] }; # Store the escape func in the object temporarily for use # by other methods. local $self->{escape_func} = $escape_func; $self->{_uris_found} = 0; # Yes, evil. Basically, look for something vaguely resembling a URL, # then hand it off to URI for examination. If it passes, throw # it to a callback and put the result in its place. local $SIG{__DIE__} = 'DEFAULT'; my $uri_cand; my $uri; my $uriRe = sprintf '(?:%s|%s)', $self->uri_re, $self->schemeless_uri_re; $$r_text =~ s{ (.*?) (?:(<(?:URL:)?)(.+?)(>)|($uriRe)) | (.+?)$ }{ my $replace = ''; if( defined $6 ) { $replace = $escape_func->($6); } else { my $maybe_uri = ''; $replace = $escape_func->($1) if length $1; if( defined $2 ) { $maybe_uri = $3; my $is_uri = do { # Don't alter $1... $maybe_uri =~ s/\s+//g; $maybe_uri =~ /^$uriRe/; }; if( $is_uri ) { $replace .= $escape_func->($2); $replace .= $self->_uri_filter($maybe_uri); $replace .= $escape_func->($4); } else { # the whole text inside of the <...> was not a url, but # maybe it has a url (like an HTML link) my $has_uri = do { # Don't alter $1... $maybe_uri = $3; $maybe_uri =~ /$uriRe/; }; if( $has_uri ) { my $pre = $2; my $post = $4; do { $self->find(\$maybe_uri, $escape_func) }; $replace .= $escape_func->($pre); $replace .= $maybe_uri; # already escaped by find() $replace .= $escape_func->($post); } else { $replace .= $escape_func->($2.$3.$4); } } } else { $replace .= $self->_uri_filter($5); } } $replace; }gsex; return $self->{_uris_found}; } sub _uri_filter { my($self, $orig_match) = @_; # A heuristic. Often you'll see things like: # "I saw this site, http://www.foo.com, and its really neat!" # or "Foo Industries (at http://www.foo.com)" # We want to avoid picking up the trailing paren, period or comma. # Of course, this might wreck a perfectly valid URI, more often than # not it corrects a parse mistake. $orig_match = $self->decruft($orig_match); my $replacement = ''; if( my $uri = $self->_is_uri(\$orig_match) ) { # It's a URI $self->{_uris_found}++; $replacement = $self->{callback}->($uri, $orig_match); } else { # False alarm $replacement = $self->{escape_func}->($orig_match); } # Return recrufted replacement return $self->recruft($replacement); } =back =head2 Protected Methods I got a bunch of mail from people asking if I'd add certain features to URI::Find. Most wanted the search to be less restrictive, do more heuristics, etc... Since many of the requests were contradictory, I'm letting people create their own custom subclasses to do what they want. The following are methods internal to URI::Find which a subclass can override to change the way URI::Find acts. They are only to be called B a URI::Find subclass. Users of this module are NOT to use these methods. =over =item B my $uri_re = $self->uri_re; Returns the regex for finding absolute, schemed URIs (http://www.foo.com and such). This, combined with schemeless_uri_re() is what finds candidate URIs. Usually this method does not have to be overridden. =cut sub uri_re { @_ == 1 || __PACKAGE__->badinvo; my($self) = shift; return sprintf '%s:[%s][%s#]*', $schemeRe, $uricCheat, $self->uric_set; } =item B my $schemeless_re = $self->schemeless_uri_re; Returns the regex for finding schemeless URIs (www.foo.com and such) and other things which might be URIs. 
By default this will match nothing (though it used to try to find schemeless URIs which started with C and C). Many people will want to override this method. See L for a subclass does a reasonable job of finding URIs which might be missing the scheme. =cut sub schemeless_uri_re { @_ == 1 || __PACKAGE__->badinvo; my($self) = shift; return qr/\b\B/; # match nothing } =item B my $uric_set = $self->uric_set; Returns a set matching the 'uric' set defined in RFC 2396 suitable for putting into a character set ([]) in a regex. You almost never have to override this. =cut sub uric_set { @_ == 1 || __PACKAGE__->badinvo; return $uricSet; } =item B my $cruft_set = $self->cruft_set; Returns a set of characters which are considered garbage. Used by decruft(). =cut sub cruft_set { @_ == 1 || __PACKAGE__->badinvo; return $cruftSet; } =item B my $uri = $self->decruft($uri); Sometimes garbage characters like periods and parenthesis get accidentally matched along with the URI. In order for the URI to be properly identified, it must sometimes be "decrufted", the garbage characters stripped. This method takes a candidate URI and strips off any cruft it finds. =cut my %balanced_cruft = ( '(' => ')', '{' => '}', '[' => ']', '"' => '"', q['] => q['], ); sub decruft { @_ == 2 || __PACKAGE__->badinvo; my($self, $orig_match) = @_; $self->{start_cruft} = ''; $self->{end_cruft} = ''; if( $orig_match =~ s/([\Q$cruftSet\E]+)$// ) { # urls can end with HTML entities if found in HTML so let's put back semicolons # if this looks like the case my $cruft = $1; if( $cruft =~ /^;/ && $orig_match =~ /\&(\#[1-9]\d{1,3}|[a-zA-Z]{2,8})$/) { $orig_match .= ';'; $cruft =~ s/^;//; } while( my($open, $close) = each %balanced_cruft ) { $self->recruft_balanced(\$orig_match, \$cruft, $open, $close); } $self->{end_cruft} = $cruft if $cruft; } return $orig_match; } sub recruft_balanced { my $self = shift; my($orig_match, $cruft, $open, $close) = @_; my $open_count = () = $$orig_match =~ m{\Q$open}g; my $close_count = () = $$orig_match =~ m{\Q$close}g; if ( $$cruft =~ /\Q$close\E$/ && $open_count == ( $close_count + 1 ) ) { $$orig_match .= $close; $$cruft =~ s/\Q$close\E$//; } return; } =item B my $uri = $self->recruft($uri); This method puts back the cruft taken off with decruft(). This is necessary because the cruft is destructively removed from the string before invoking the user's callback, so it has to be put back afterwards. =cut #'# sub recruft { @_ == 2 || __PACKAGE__->badinvo; my($self, $uri) = @_; return $self->{start_cruft} . $uri . $self->{end_cruft}; } =item B my $schemed_uri = $self->schemeless_to_schemed($schemeless_uri); This takes a schemeless URI and returns an absolute, schemed URI. The standard implementation supplies ftp:// for URIs which start with ftp., and http:// otherwise. =cut sub schemeless_to_schemed { @_ == 2 || __PACKAGE__->badinvo; my($self, $uri_cand) = @_; $uri_cand =~ s|^( $obj->is_schemed($uri); Returns whether or not the given URI is schemed or schemeless. True for schemed, false for schemeless. =cut sub is_schemed { @_ == 2 || __PACKAGE__->badinvo; my($self, $uri) = @_; return scalar $uri =~ /^ __PACKAGE__->badinvo($extra_levels, $msg) This is used to complain about bogus subroutine/method invocations. The args are optional. =cut sub badinvo { my $package = shift; my $level = @_ ? shift : 0; my $msg = @_ ? " (" . shift() . 
")" : ''; my $subname = (caller $level + 1)[3]; croak "Bogus invocation of $subname$msg"; } =back =head2 Old Functions The old find_uri() function is still around and it works, but its deprecated. =cut # Old interface. sub find_uris (\$&) { @_ == 2 || __PACKAGE__->badinvo; my($r_text, $callback) = @_; my $self = __PACKAGE__->new($callback); return $self->find($r_text); } =head1 EXAMPLES Store a list of all URIs (normalized) in the document. my @uris; my $finder = URI::Find->new(sub { my($uri) = shift; push @uris, $uri; }); $finder->find(\$text); Print the original URI text found and the normalized representation. my $finder = URI::Find->new(sub { my($uri, $orig_uri) = @_; print "The text '$orig_uri' represents '$uri'\n"; return $orig_uri; }); $finder->find(\$text); Check each URI in document to see if it exists. use LWP::Simple; my $finder = URI::Find->new(sub { my($uri, $orig_uri) = @_; if( head $uri ) { print "$orig_uri is okay\n"; } else { print "$orig_uri cannot be found\n"; } return $orig_uri; }); $finder->find(\$text); Turn plain text into HTML, with each URI found wrapped in an HTML anchor. use CGI qw(escapeHTML); use URI::Find; my $finder = URI::Find->new(sub { my($uri, $orig_uri) = @_; return qq|$orig_uri|; }); $finder->find(\$text, \&escapeHTML); print "
<pre>$text</pre>
"; =cut sub _is_uri { @_ == 2 || __PACKAGE__->badinvo; my($self, $r_uri_cand) = @_; my $uri = $$r_uri_cand; # Translate schemeless to schemed if necessary. $uri = $self->schemeless_to_schemed($uri) if $uri =~ $self->schemeless_uri_re and $uri !~ /^new($uri); # Throw out anything with an invalid scheme. my $has_invalid_scheme = $uri->isa("URI::_foreign") && $uri->scheme !~ $extraSchemesRe; # Toss out things like http:// but keep file:/// my $is_empty = $uri =~ m{^$schemeRe://$}; undef $uri if $has_invalid_scheme || $is_empty; }; if($@ || !defined $uri) { # leave everything untouched, its not a URI. return NO; } else { # Its a URI. return $uri; } } =head1 NOTES Will not find URLs with Internationalized Domain Names or pretty much any non-ascii stuff in them. See L =head1 AUTHOR Michael G Schwern with insight from Uri Gutman, Greg Bacon, Jeff Pinyan, Roderick Schertler and others. Roderick Schertler maintained versions 0.11 to 0.16. Darren Chamberlain wrote urifind. =head1 LICENSE Copyright 2000, 2009-2010 by Michael G Schwern Eschwern@pobox.comE. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See F =head1 SEE ALSO L, L, L, RFC 3986 Appendix C =cut 1; URI-Find-20140709/lib/URI/Find000755000765000024 012357350402 15322 5ustar00schwernstaff000000000000URI-Find-20140709/lib/URI/Find/Schemeless.pm000444000765000024 3021212357350402 20126 0ustar00schwernstaff000000000000# Copyright (c) 2000, 2009 Michael G. Schwern. All rights reserved. # This program is free software; you can redistribute it and/or modify # it under the same terms as Perl itself. package URI::Find::Schemeless; use strict; use base qw(URI::Find); # base.pm error in 5.005_03 prevents it from loading URI::Find if I'm # required first. use URI::Find (); use vars qw($VERSION); $VERSION = 20140709; my($dnsSet) = '\p{isAlpha}A-Za-z0-9-'; # extended for IDNA domains my($cruftSet) = __PACKAGE__->cruft_set . '<>?}'; my($tldRe) = __PACKAGE__->top_level_domain_re; my($uricSet) = __PACKAGE__->uric_set; =head1 NAME URI::Find::Schemeless - Find schemeless URIs in arbitrary text. =head1 SYNOPSIS require URI::Find::Schemeless; my $finder = URI::Find::Schemeless->new(\&callback); The rest is the same as URI::Find. =head1 DESCRIPTION URI::Find finds absolute URIs in plain text with some weak heuristics for finding schemeless URIs. This subclass is for finding things which might be URIs in free text. Things like "www.foo.com" and "lifes.a.bitch.if.you.aint.got.net". The heuristics are such that it hopefully finds a minimum of false positives, but there's no easy way for it know if "COMMAND.COM" refers to a web site or a file. =cut sub schemeless_uri_re { @_ == 1 || __PACKAGE__->badinvo; return qr{ # Originally I constrained what couldn't be before the match # like this: don't match email addresses, and don't start # anywhere but at the beginning of a host name # (?()\{\}\[\]]) ) # hostname (?: [$dnsSet]+(?:\.[$dnsSet]+)*\.$tldRe | (?:\d{1,3}\.){3}\d{1,3} ) # not inet_aton() complete (?: (?=[\s\Q$cruftSet\E]) # followed by unrelated thing (?!\.\w) # but don't stop mid foo.xx.bar (?top_level_domain_re; Returns the regex for matching top level DNS domains. The regex shouldn't be anchored, it shouldn't do any capturing matches, and it should make itself ignore case. 
=cut sub top_level_domain_re { @_ == 1 || __PACKAGE__->badinvo; my($self) = shift; use utf8; # Updated from http://www.iana.org/domains/root/db/ with new TLDs my $plain = join '|', qw( AERO ARPA ASIA BIZ CAT COM COOP EDU GOV INFO INT JOBS MIL MOBI MUSEUM NAME NET ORG PRO TEL TRAVEL ac academy accountants active actor ad ae aero af ag agency ai airforce al am an ao aq ar archi army arpa as asia associates at attorney au audio autos aw ax axa az ba bar bargains bayern bb bd be beer berlin best bf bg bh bi bid bike bio biz bj bl black blackfriday blue bm bmw bn bo boutique bq br brussels bs bt build builders buzz bv bw by bz bzh ca cab camera camp capetown capital cards care career careers cash cat catering cc cd center ceo cf cg ch cheap christmas church ci citic ck cl claims cleaning clinic clothing club cm cn co codes coffee college cologne com community company computer condos construction consulting contractors cooking cool coop country cr credit creditcard cruises cu cv cw cx cy cz dance dating de degree democrat dental dentist desi diamonds digital directory discount dj dk dm dnp do domains durban dz ec edu education ee eg eh email engineer engineering enterprises equipment er es estate et eu eus events exchange expert exposed fail farm feedback fi finance financial fish fishing fitness fj fk flights florist fm fo foo foundation fr frogans fund furniture futbol ga gal gallery gb gd ge gf gg gh gi gift gives gl glass global globo gm gmo gn gop gov gp gq gr graphics gratis green gripe gs gt gu guide guitars guru gw gy hamburg haus hiphop hiv hk hm hn holdings holiday homes horse host house hr ht hu id ie il im immobilien in industries info ink institute insure int international investments io iq ir is it je jetzt jm jo jobs joburg jp juegos kaufen ke kg kh ki kim kitchen kiwi km kn koeln kp kr kred kw ky kz la land lawyer lb lc lease li life lighting limited limo link lk loans london lotto lr ls lt lu luxe luxury lv ly ma maison management mango market marketing mc md me media meet menu mf mg mh miami mil mini mk ml mm mn mo mobi moda moe monash mortgage moscow motorcycles mp mq mr ms mt mu museum mv mw mx my mz na nagoya name navy nc ne net neustar nf ng nhk ni ninja nl no np nr nu nyc nz okinawa om onl org organic ovh pa paris partners parts pe pf pg ph photo photography photos physio pics pictures pink pk pl plumbing pm pn post pr press pro productions properties ps pt pub pw py qa qpon quebec re recipes red rehab reise reisen ren rentals repair report republican rest reviews rich rio ro rocks rodeo rs ru ruhr rw ryukyu sa saarland sb sc schule scot sd se services sexy sg sh shiksha shoes si singles sj sk sl sm sn so social software sohu solar solutions soy space sr ss st su supplies supply support surf surgery sv sx sy systems sz tattoo tax tc td technology tel tf tg th tienda tips tirol tj tk tl tm tn to today tokyo tools town toys tp tr trade training travel tt tv tw tz ua ug uk um university uno us uy uz va vacations vc ve vegas ventures versicherung vet vg vi viajes villas vision vlaanderen vn vodka vote voting voto voyage vu wang watch webcam website wed wf wien wiki works ws wtc wtf 测试 परीक्षा 集团 在线 한국 ভারত موقع বাংলা 公益 公司 移动 我爱你 москва испытание қаз онлайн сайт срб 테스트 орг 삼성 சிங்கப்பூர் 商标 商城 дети мкд טעסט 中文网 中信 中国 中國 భారత్ ලංකා 測試 ભારત भारत آزمایشی பரிட்சை संगठन 网络 укр 香港 δοκιμή إختبار 台湾 台灣 мон الجزائر عمان ایران امارات بازار پاکستان الاردن بھارت المغرب السعودية سودان مليسيا شبكة გე 机构 组织机构 ไทย سورية рф تونس みんな 世界 ਭਾਰਤ 网址 游戏 مصر قطر இலங்கை இந்தியா 新加坡 فلسطين テスト 政务 xxx 
xyz yachts ye yokohama yt za zm zone zw ); return qr/(?:$plain)/i; } =head1 AUTHOR Original code by Roderick Schertler , adapted by Michael G Schwern . Currently maintained by Roderick Schertler . =head1 SEE ALSO L =cut 1; URI-Find-20140709/t000755000765000024 012357350402 13500 5ustar00schwernstaff000000000000URI-Find-20140709/t/filter.t000444000765000024 261212357350402 15310 0ustar00schwernstaff000000000000#!/usr/bin/perl -w # Test the filter function use strict; use Test::More 'no_plan'; use URI::Find; my @tasks = ( ["Foo&Bar http://abc.com.", "Foo&Bar xx&."], ["http://abc.com. http://abc.com.", "xx&. xx&."], ["http://abc.com?foo=bar&baz=foo", "xx&"], ["& http://abc.com?foo=bar&baz=foo", "& xx&"], ["http://abc.com?foo=bar&baz=foo &", "xx& &"], ["Foo&Bar http://abc.com", "Foo&Bar xx&"], ["http://abc.com. Foo&Bar", "xx&. Foo&Bar"], ["Foo&Bar http://abc.com. Foo&Bar", "Foo&Bar xx&. Foo&Bar"], ["Foo&Bar\nhttp://abc.com.\nFoo&Bar", "Foo&Bar\nxx&.\nFoo&Bar"], ["Foo&Bar\nhttp://abc.com. http://def.com.\nFoo&Bar", "Foo&Bar\nxx&. xx&.\nFoo&Bar"], # Thing which looks like a URL but isn't ["noturi:& should also be escaped", "noturi:& should also be escaped"], # Thing which looks like a URL inside brackets, but isn't ["Something & whatever", "Something & whatever"], # Non-URL nested inside brackets [q{}, q{}], ); for my $task (@tasks) { my($str, $result) = @$task; my $org = $str; my $f = URI::Find->new(sub { return "xx&" }); $f->find(\$str, \&simple_escape); is($str, $result, "escape $org"); } sub simple_escape { my($toencode) = @_; $toencode =~ s{&}{&}gso; return $toencode; } URI-Find-20140709/t/Find.t000444000765000024 2355512357350402 14734 0ustar00schwernstaff000000000000#!/usr/bin/perl -w use strict; use open ':std', ':encoding(utf8)'; use Test::More 'no_plan'; use_ok 'URI::Find'; use_ok 'URI::Find::Schemeless'; my $No_joined = @ARGV && $ARGV[0] eq '--no-joined' ? shift : 0; # %Run contains one entry for each type of finder. Keys are mnemonics, # required to be a single letter. The values are hashes, keys are names # (used only for output) and values are the subs which actually run the # tests. Each is invoked with a reference to the text to scan and a # code reference, and runs the finder on that text with that callback, # returning the number of matches. my %Run; BEGIN { %Run = ( # plain P => { old_interface => sub { run_function(\&find_uris, @_) }, regular => sub { run_object('URI::Find', @_) }, }, # schemeless S => { schemeless => sub { run_object('URI::Find::Schemeless', @_) }, }, ); die if grep { length != 1 } keys %Run; } # A spec is a reference to a 2-element list. The first is a string # which contains the %Run keys which will find the URL, the second is # the URL itself. Eg: # # [PS => 'http://www.foo.com/'] # found by both P and S # [S => 'http://asdf.foo.com/'] # only found by S # # %Tests maps from input text to a list of specs which describe the URLs # which will be found. If the value is a reference to an empty list, no # URLs will be found in the key. # # As a special case, a %Tests value can be initialized as a string. # This will be replaced with a spec which indicates that all finders # will locate that as the only URL in the key. 
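# For example, a hypothetical entry (not one of the real cases below) looks like:
#
#   'see http://www.example.com and www.example.org' =>
#       [[PS => 'http://www.example.com'], [S => 'http://www.example.org']],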
my %Tests; BEGIN { my $all = join '', keys %Run; use utf8; %Tests = ( 'Something something something.travel and stuff' => [[ S => 'http://something.travel' ]], '' => 'http://www.perl.com', '' => 'ftp://ftp.site.org', '' => [[ S => 'ftp://ftp.site.org' ]], 'Make sure "http://www.foo.com" is caught' => 'http://www.foo.com', 'http://www.foo.com' => 'http://www.foo.com', 'www.foo.com' => [[ S => 'http://www.foo.com' ]], 'ftp.foo.com' => [[ S => 'ftp://ftp.foo.com' ]], 'gopher://moo.foo.com' => 'gopher://moo.foo.com', 'I saw this site, http://www.foo.com, and its really neat!' => 'http://www.foo.com', 'Foo Industries (at http://www.foo.com)' => 'http://www.foo.com', 'Oh, dear. Another message from Dejanews. http://www.deja.com/%5BST_rn=ps%5D/qs.xp?ST=PS&svcclass=dnyr&QRY=lwall&defaultOp=AND&DBS=1&OP=dnquery.xp&LNG=ALL&subjects=&groups=&authors=&fromdate=&todate=&showsort=score&maxhits=25 How fun.' => 'http://www.deja.com/%5BST_rn=ps%5D/qs.xp?ST=PS&svcclass=dnyr&QRY=lwall&defaultOp=AND&DBS=1&OP=dnquery.xp&LNG=ALL&subjects=&groups=&authors=&fromdate=&todate=&showsort=score&maxhits=25', 'Hmmm, Storyserver from news.com. http://news.cnet.com/news/0-1004-200-1537811.html?tag=st.ne.1002.thed.1004-200-1537811 How nice.' => [[S => 'http://news.com'], [$all => 'http://news.cnet.com/news/0-1004-200-1537811.html?tag=st.ne.1002.thed.1004-200-1537811']], '$html = get("http://www.perl.com/");' => 'http://www.perl.com/', q|my $url = url('http://www.perl.com/cgi-bin/cpan_mod');| => 'http://www.perl.com/cgi-bin/cpan_mod', 'http://www.perl.org/support/online_support.html#mail' => 'http://www.perl.org/support/online_support.html#mail', 'irc.lightning.net irc.mcs.net' => [[S => 'http://irc.lightning.net'], [S => 'http://irc.mcs.net']], 'foo.bar.xx/~baz/' => [], 'foo.bar.xx/~baz/ abcd.efgh.mil, none.such/asdf/ hi.there.org' => [[S => 'http://abcd.efgh.mil'], [S => 'http://hi.there.org']], 'foo:<1.2.3.4>' => [[S => 'http://1.2.3.4']], 'mail.eserv.com.au? failed before ? designated end' => [[S => 'http://mail.eserv.com.au']], 'foo.info/himom ftp.bar.biz' => [[S => 'http://foo.info/himom'], [S => 'ftp://ftp.bar.biz']], '(http://round.com)' => 'http://round.com', '[http://square.com]' => 'http://square.com', '{http://brace.com}' => 'http://brace.com', '' => 'http://angle.com', '(round.com)' => [[S => 'http://round.com' ]], '[square.com]' => [[S => 'http://square.com' ]], '{brace.com}' => [[S => 'http://brace.com' ]], '' => [[S => 'http://angle.com' ]], 'intag.com' => [[S => 'http://intag.com' ]], '[mailto:somebody@company.ext]' => 'mailto:somebody@company.ext', 'HTtp://MIXED-Case.Com' => 'HTtp://MIXED-Case.Com', "The technology of magnetic energy has become so powerful an entire ". 
"house can...http://bit.ly/8yEdeb" => "http://bit.ly/8yEdeb", 'http://www.foo.com/bar((baz)blah)' => 'http://www.foo.com/bar((baz)blah)', 'https://[2607:5300:60:1509::228d:413a]' => 'https://[2607:5300:60:1509::228d:413a]', '[https://[2607:5300:60:1509::228d:413a]]' => 'https://[2607:5300:60:1509::228d:413a]', # Tests for file: "origin file:///Users/schwern/devel/URI-Find/ (fetch)" => 'file:///Users/schwern/devel/URI-Find/', "This is how you express the root path file:/// as a URL" => 'file:///', # Tests for git: 'GwenDragon git://github.com/GwenDragon/uri-find.git (fetch)' => 'git://github.com/GwenDragon/uri-find.git', # Tests for svn+ssh: "URLs like svn+ssh://example.net aren't found" => 'svn+ssh://example.net', # Tests for IDNA domains 'http://müller.de' => 'http://xn--mller-kva.de', 'http://موقع.وزارة-الاتصالات.مصر' => 'http://xn--4gbrim.xn----ymcbaaajlc6dj7bxne2c.xn--wgbh1c', 'http://правительство.рф' => 'http://xn--80aealotwbjpid2k.xn--p1ai', 'http://北京大学.中國' => 'http://xn--1lq90ic7fzpc.xn--fiqz9s', 'http://北京大学.cn' => 'http://xn--1lq90ic7fzpc.cn', # Test new TLDs 'http://my.test.transport' => 'http://my.test.transport', 'http://regierung.bayern' => 'http://regierung.bayern', 'http://kaiser-senf.gmbh/shop/' => 'http://kaiser-senf.gmbh/shop/', 'Have vacation in lovely Bavaria and visit tourist.in.bayern to go to King Ludwig New Schwanstein. For political information see website regierung.bayern to get more.' => [[S => 'http://tourist.in.bayern' ], [S => 'http://regierung.bayern' ]], 'The mießlich-österlich-mück.ag was established in 2032 by M. Ostrich.' => [[S => 'http://xn--mielich-sterlich-mck-dwb52cye.ag' ]], # False tests 'HTTP::Request::Common' => [], 'comp.infosystems.www.authoring.cgi' => [], 'MIME/Lite.pm' => [], 'foo@bar.baz.com' => [], 'Foo.pm' => [], 'Foo.pl' => [], 'hi Foo.pm Foo.pl mom' => [], 'x comp.ai.nat-lang libdb.so.3 x' => [], 'x comp.ai.nat-lang libdb.so.3 x' => [], 'www.marselisl www.info@skive-hallerne.dk' => [], 'bogusscheme://foo.com/' => [], 'http:' => [], 'http://' => [], # XXX broken # q{$url = 'http://'.rand(1000000).'@anonymizer.com/'.$url;} # => [], ); # Convert plain string values to a list of 1 spec which indicates # that all finders will find that as the only URL. for (@Tests{keys %Tests}) { $_ = [[$all, $_]] if !ref; } # Run everything together as one big test. $Tests{join "\n", keys %Tests} = [map { @$_ } values %Tests] unless $No_joined; # Each test yields 3 tests for each finder (return value matches # number returned, matches equal expected matches, text was not # modified). my $finders = 0; $finders += keys %{ $Run{$_} } for keys %Run; } # Given a run type and a list of specs, return the URLs which that type # should find. 
sub specs_to_urls { my ($this_type, @spec) = @_; my @out; for (@spec) { my ($found_by_types, $url) = @$_; push @out, $url if index($found_by_types, $this_type) >= 0; } return @out; } sub run_function { my ($rfunc, $rtext, $callback) = @_; return $rfunc->($rtext, $callback); } sub run_object { my ($class, $rtext, $callback) = @_; my $finder = $class->new($callback); return $finder->find($rtext); } sub run { my ($orig_text, @spec) = @_; note "# testing [$orig_text]\n"; for my $run_type (keys %Run) { note "# run type $run_type\n"; while( my($run_name, $run_sub) = each %{ $Run{$run_type} } ) { note "# running $run_name\n"; my @want = specs_to_urls $run_type, @spec; my $text = $orig_text; my @out; my $n = $run_sub->(\$text, sub { push @out, $_[0]; $_[1] }); is $n, @out, "return value length"; is_deeply \@out, \@want, "output" or diag("Original text: $text"); is $text, $orig_text, "text unmodified"; } } } while( my($text, $rspec_list) = each %Tests ) { run $text, @$rspec_list; } URI-Find-20140709/t/html.t000444000765000024 225612357350402 14773 0ustar00schwernstaff000000000000#!/usr/bin/perl -w use strict; use warnings; my $Example = <<"END"; Yes, Jim, I found it under "http://www.w3.org/Addressing/", but you can probably pick it up from the RFC. Note the warning. Also . Search for some entities. END # Which should find these URIs my @Uris = ( "http://www.w3.org/Addressing/", "ftp://foo.example.com/rfc/", "http://www.ics.uci.edu/pub/ietf/uri/historical.html#WARNING", "http://google.com/search?q=<html>", ); use Test::More tests => 5; use URI::Find; my @found; my $finder = URI::Find->new(sub { my($uri) = @_; push @found, $uri; return "Link " . scalar @found; }); $finder->find(\$Example); is_deeply \@found, \@Uris, "found links in HTML"; like($Example, qr/"Link 1"/, 'link 1 replaced'); like($Example, qr/ 1], ["foo.com" => 0], ); for my $test (@tests) { my($uri, $want) = @$test; is !!URI::Find->is_schemed($uri), !!$want, "is_schemed($uri)"; } URI-Find-20140709/t/load-schemeless.t000444000765000024 42012357350402 17046 0ustar00schwernstaff000000000000#!/usr/bin/perl -w # An error in base.pm in 5.005_03 causes it not to load URI::Find when # invoked from URI::Find::Schemeless. Prevent regression. use strict; use Test::More tests => 2; require_ok 'URI::Find::Schemeless'; new_ok 'URI::Find::Schemeless' => [sub {}]; URI-Find-20140709/t/rfc3986_appendix_c.t000444000765000024 171612357350402 17325 0ustar00schwernstaff000000000000#!/usr/bin/perl -w use strict; # RFC 3986 Appendix C covers "Delimiting a URI in Context" # and it has this example... my $Example = <<"END"; Yes, Jim, I found it under "http://www.w3.org/Addressing/", but you can probably pick it up from . Note the warning in . Also . END # Which should find these URIs my @Uris = ( "http://www.w3.org/Addressing/", "ftp://foo.example.com/rfc/", "http://www.ics.uci.edu/pub/ietf/uri/historical.html#WARNING", ); use Test::More tests => 4; use URI::Find; my @found; my $finder = URI::Find->new(sub { my($uri) = @_; push @found, $uri; return "Link " . 
scalar @found; }); $finder->find(\$Example); is_deeply \@found, \@Uris, "RFC 3986 Appendix C example"; like($Example, qr/"Link 1"/, 'replaced link 1'); like($Example, qr//, 'replaced link 2'); like($Example, qr//, 'replaced link 3'); URI-Find-20140709/t/urifind000755000765000024 012357350402 15140 5ustar00schwernstaff000000000000URI-Find-20140709/t/urifind/find.t000444000765000024 434012357350402 16403 0ustar00schwernstaff000000000000#!/usr/bin/perl -w # vim:set ft=perl: use strict; use Test::More; use File::Spec; ok(my $ifile = File::Spec->catfile(qw(t urifind sciencenews)), "Test file found"); my $urifind = File::Spec->catfile(qw(blib script urifind)); my @data = `$^X $urifind $ifile`; is(@data, 13, "Correct number of elements"); is((grep /mailto:/ => @data), 4, "Found 4 mailto links"); is((grep /http:/ => @data), 9, "Found 9 mailto links"); @data = `$^X $urifind $ifile -p`; my $count = 0; is(@data, 13, "*Still* correct number of elements"); is((grep /^\Q$ifile/ => @data), @data, "All elements are prefixed with the path when $urifind invoked with -p"); @data = `$^X $urifind -n $ifile /dev/null`; is(@data, 13, "*Still* correct number of elements"); is((grep !/^\Q$ifile/ => @data), (@data), "All elements are not prefixed with the path when ($urifind,". " '/dev/null') invoked with -n"); @data = `$^X $urifind -S http $ifile`; is(@data, 9, "Correct number of 'http' elements"); @data = `$^X $urifind -S mailto $ifile`; is(@data, 4, "Correct number of 'mailto' elements"); @data = `$^X $urifind -S mailto -S http $ifile`; is(@data, 13, "Correct number of ('http', 'mailto') elements"); @data = `$^X $urifind < $ifile`; is(@data, 13, "Correct number elements when given data on STDIN"); @data = `$^X $urifind -S http -P \.org $ifile`; is(@data, 8, "Correct number elements when invoked with -P \.org -S http"); @data = `$^X $urifind --schemeless $ifile`; chomp @data; is_deeply \@data, [map { "$_" } qw( http://66.33.90.123 http://efwd.dnsix.com mailto:eletter@lists.sciencenews.org mailto:eletter-help@lists.sciencenews.org mailto:eletter-unsubscribe@lists.sciencenews.org mailto:eletter-subscribe@lists.sciencenews.org http://www.sciencenews.org http://www.sciencenews.org/20030705/fob1.asp http://www.sciencenews.org/20030705/fob5.asp http://www.sciencenews.org/20030705/bob8.asp http://www.sciencenews.org/20030705/mathtrek.asp http://www.sciencenews.org/20030705/food.asp http://www.sciencenews.org http://www.sciencenews.org/20030705/toc.asp http://www.sciencenews.org http://www.sciencenews.org/20030705/fob2.asp http://www.sciencenews.org/20030705/fob3.asp http:/% )]; done_testing; URI-Find-20140709/t/urifind/pod.t000444000765000024 56412357350402 16231 0ustar00schwernstaff000000000000#!/usr/bin/perl # vim: set ft=perl: # Stolen from Andy Lester, from # use Test::More; use File::Spec; use strict; eval "use Test::Pod 0.95"; if ($@) { plan skip_all => "Test::Pod v0.95 required for testing POD"; } else { plan tests => 1; Test::Pod::pod_file_ok(File::Spec->catfile(qw(blib script urifind))); } URI-Find-20140709/t/urifind/sciencenews000444000765000024 1103612357350402 17547 0ustar00schwernstaff000000000000From eletter-return-306-dlc+sciencenews=sevenroot.org@lists.sciencenews.org Mon Jul 7 07:33:57 2003 Return-Path: Received: from [66.33.90.123] (helo=ns2.dbinteractive.com) by efwd.dnsix.com with smtp (Exim 3.36 #1) id 19YtS5-0007Qd-00 for xxxxx@xxxxxxxxx.xxx Sat, 05 Jul 2003 13:16:09 -0700 Received: (qmail 27761 invoked by uid 502); 5 Jul 2003 16:00:01 -0000 Mailing-List: contact eletter-help@lists.sciencenews.org; 
run by ezmlm From: e-LETTER@lists.sciencenews.org Subject: Science News e-LETTER Precedence: bulk X-No-Archive: yes List-Post: List-Help: List-Unsubscribe: List-Subscribe: Delivered-To: mailing list eletter@lists.sciencenews.org Message-Id: Bcc: Date: Sat, 05 Jul 2003 13:16:09 -0700 X-Bogosity: Ham, spamicity=0.000000, algorithm=fisher Status: RO Content-Length: 3542 Lines: 69 WEEKLY e-LETTER from SCIENCE NEWS July 5, 2003 This week's articles focus on the detection of five-quark particles, the rise of dengue fever in the Americas, the first example of an animal navigating by moonlight polarity, the use of viruses, bacteria, and fungi to engineer new structures, and more. The cover story looks at how human ancestors settled into one ecosystem after another. Food for Thought ponders the anticholesterol benefits of soy greens. MathTrek puzzles over alphamagic squares. ================================== Science News is an award-winning weekly newsmagazine covering the most important research in all fields of science. Published since 1922, its 16 pages are packed with short, accurate articles that appeal to both general readers and scientists. ---------------------------------- To subscribe to Science News magazine, go to www.sciencenews.org ================================== THIS WEEK'S FEATURED ARTICLES: [Physics] Wild Bunch: First five-quark particle turns up Physicists have uncovered strong evidence for a family of five-quark particles after decades of finding no subatomic particles with more than three of the fundamental building blocks known as quarks. http://www.sciencenews.org/20030705/fob1.asp [Behavior] Till IL-6 Do Us Part: Elderly caregivers show harmful immune effect Elderly people caring for their incapacitated spouses experienced dramatic average increases in the blood concentration of a protein involved in immune regulation, a trend that puts them at risk for a variety of serious illnesses. http://www.sciencenews.org/20030705/fob5.asp [Materials Science] Microbial Materials: Scientists co-opt viruses, bacteria, and fungi to build new structures Microorganisms can be coaxed into producing high-tech components and can themselves serve as valuable ingredients in new classes of materials. http://www.sciencenews.org/20030705/bob8.asp THIS WEEK'S ONLINE FEATURES: [MATHTREK] Alphamagic Squares http://www.sciencenews.org/20030705/mathtrek.asp [FOOD FOR THOUGHT] Soy Greens--The Coming Health Food? http://www.sciencenews.org/20030705/food.asp ---------------------------------- To subscribe to Science News magazine, go to www.sciencenews.org ---------------------------------- Week of July 5, 2003; Vol. 164 No. 1 THIS WEEK'S TABLE OF CONTENTS: http://www.sciencenews.org/20030705/toc.asp References and sources for all articles are available online at www.sciencenews.org *********************************** REGISTERED SUBSCRIBERS to the print edition of Science News also have online access to the full text of the following articles: [Biology] A Matter of Taste: Mutated fruit flies bypass the salt By creating mutant fruit flies with an impaired capacity to taste salt, researchers have identified several genes that contribute to this sensory system in insects. http://www.sciencenews.org/20030705/fob2.asp [Biomedicine] Lethal Emergence: Tracing the rise of dengue fever in the Americas Using the genetics of viruses, scientists have tracked a virulent form of dengue virus in Latin America back to its roots in India. 
http://www.sciencenews.org/20030705/fob3.asp [Zoology] Moonlighting: Beetles navigate by lunar polarity A south African dung beetle is the first animal found to align its path by detecting the polarization of moonlight. http:/% --------------------------------------------------------------------- To unsubscribe, e-mail: eletter-unsubscribe@lists.sciencenews.org For additional commands, e-mail: eletter-help@lists.sciencenews.org