debian/control

Source: libstring-tokenizer-perl
Section: perl
Priority: optional
Build-Depends: debhelper (>= 8)
Build-Depends-Indep: perl, libtest-pod-perl, libtest-pod-coverage-perl
Maintainer: Debian Perl Group
Uploaders: Ben Webb
Standards-Version: 3.9.2
Homepage: http://search.cpan.org/dist/String-Tokenizer/
Vcs-Git: git://git.debian.org/pkg-perl/packages/libstring-tokenizer-perl.git
Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-perl/packages/libstring-tokenizer-perl.git

Package: libstring-tokenizer-perl
Architecture: all
Depends: ${misc:Depends}, ${perl:Depends}
Description: simple string tokenizer
 String::Tokenizer is a simple string tokenizer which takes a string and
 splits it on whitespace. It also optionally takes a string of characters
 to use as delimiters, and returns them with the token set as well. This
 allows for splitting the string in many different ways.
 .
 This is a very basic tokenizer, so more complex needs should be addressed
 either with a custom-written tokenizer or by post-processing the output
 generated by this module. It will not fill everyone's needs, but it spans
 a gap between a simple split / /, $string and the other options that
 involve much larger and more complex modules.
 .
 Also note that this is not a lexical analyser. Many people confuse
 tokenization with lexical analysis: a tokenizer merely splits its input
 into specific chunks, while a lexical analyzer classifies those chunks.
 Sometimes these two steps are combined, but not here.

debian/copyright

Format-Specification: http://svn.debian.org/wsvn/dep/web/deps/dep5.mdwn?op=file&rev=135
Maintainer: Stevan Little,
Source: http://search.cpan.org/dist/String-Tokenizer/
Name: String-Tokenizer

Files: *
Copyright: 2004, Infinity Interactive, Inc.
License: Artistic or GPL-1+

Files: debian/*
Copyright: 2011, Ben Webb
License: Artistic or GPL-1+

License: Artistic
 This program is free software; you can redistribute it and/or modify
 it under the terms of the Artistic License, which comes with Perl.
 .
 On Debian GNU/Linux systems, the complete text of the Artistic License
 can be found in `/usr/share/common-licenses/Artistic'.

License: GPL-1+
 This program is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation; either version 1, or (at your option)
 any later version.
 .
 On Debian GNU/Linux systems, the complete text of version 1 of the
 General Public License can be found in `/usr/share/common-licenses/GPL-1'.
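For reference, here is a minimal sketch of how the module shipped by this package might be used, based only on the behaviour described in the package description above and the synopsis quoted in the spelling patch below. The sample input strings are made up for illustration; new() and getTokens() are the calls that appear in the quoted synopsis.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use String::Tokenizer;

    # Plain whitespace tokenization, as described in the package description.
    my $st = String::Tokenizer->new('this is a simple test');
    print join(', ', $st->getTokens()), "\n";   # this, is, a, simple, test

    # Passing a string of delimiter characters splits on those characters
    # too, and returns the delimiters along with the token set.
    my $expr = String::Tokenizer->new('(5 + (100 * (20 - 35)) + 4)', '()');
    print join(' ', $expr->getTokens()), "\n";  # ( 5 + ( 100 * ( 20 - 35 ) ) + 4 )
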
debian/watch

version=3
http://search.cpan.org/dist/String-Tokenizer/ .*/String-Tokenizer-v?(\d[\d.-]+)\.(?:tar(?:\.gz|\.bz2)?|tgz|zip)$

debian/patches/manpage_spelling.patch

Description: spelling fixes
Origin: vendor
Forwarded: no
Author: Ben Webb
Last-Update: 2011-08-02

--- a/lib/String/Tokenizer.pm
+++ b/lib/String/Tokenizer.pm
@@ -278,13 +278,13 @@
     # create tokenizer which retains whitespace
     my $st = String::Tokenizer->new(
-                'this is a test with, (signifigant) whitespace',
+                'this is a test with, (significant) whitespace',
                 ',()',
                 String::Tokenizer->RETAIN_WHITESPACE
                 );

     # this will print:
-    # 'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'signifigant', ')', ' ', 'whitespace'
+    # 'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'significant', ')', ' ', 'whitespace'
     print "'" . (join "', '" => $tokenizer->getTokens()) . "'";

     # get a token iterator
@@ -309,9 +309,9 @@
 A simple string tokenizer which takes a string and splits it on whitespace. It also optionally takes a string of characters to use as delimiters, and returns them with the token set as well. This allows for splitting the string in many different ways.

-This is a very basic tokenizer, so more complex needs should be either addressed with a custom written tokenizer or post-processing of the output generated by this module. Basically, this will not fill everyones needs, but it spans a gap between simple C and the other options that involve much larger and complex modules.
+This is a very basic tokenizer, so more complex needs should be either addressed with a custom written tokenizer or post-processing of the output generated by this module. Basically, this will not fill everyone's needs, but it spans a gap between simple C and the other options that involve much larger and complex modules.

-Also note that this is not a lexical analyser. Many people confuse tokenization with lexical analysis. A tokenizer mearly splits its input into specific chunks, a lexical analyzer classifies those chunks. Sometimes these two steps are combined, but not here.
+Also note that this is not a lexical analyser. Many people confuse tokenization with lexical analysis. A tokenizer merely splits its input into specific chunks, a lexical analyzer classifies those chunks. Sometimes these two steps are combined, but not here.

 =head1 METHODS
@@ -331,15 +331,15 @@
 =item B

-Takes a C<$string> to tokenize, and optionally a set of C<$delimiter> characters to facilitate the tokenization and the type of whitespace handling with C<$handle_whitespace>. The C<$string> parameter and the C<$handle_whitespace> parameter are pretty obvious, the C<$delimiter> parameter is not as transparent. C<$delimiter> is a string of characters, these characters are then seperated into individual characters and are used to split the C<$string> with.
+Takes a C<$string> to tokenize, and optionally a set of C<$delimiter> characters to facilitate the tokenization and the type of whitespace handling with C<$handle_whitespace>. The C<$string> parameter and the C<$handle_whitespace> parameter are pretty obvious, the C<$delimiter> parameter is not as transparent. C<$delimiter> is a string of characters, these characters are then separated into individual characters and are used to split the C<$string> with.
 So given this string:

     (5 + (100 * (20 - 35)) + 4)

-The C method without a C<$delimiter> parameter would return the following comma seperated list of tokens:
+The C method without a C<$delimiter> parameter would return the following comma separated list of tokens:

     '(5', '+', '(100', '*', '(20', '-', '35))', '+', '4)'

-However, if you were to pass the following set of delimiters C<(, )> to C, you would get the following comma seperated list of tokens:
+However, if you were to pass the following set of delimiters C<(, )> to C, you would get the following comma separated list of tokens:

     '(', '5', '+', '(', '100', '*', '(', '20', '-', '35', ')', ')', '+', '4', ')'
@@ -349,17 +349,17 @@
 as some languages do. Then you would give this delimiter C<+*-()> to arrive at the same result.

-If you decide that whitespace is signifigant in your string, then you need to specify that like this:
+If you decide that whitespace is significant in your string, then you need to specify that like this:

     my $st = String::Tokenizer->new(
-                'this is a test with, (signifigant) whitespace',
+                'this is a test with, (significant) whitespace',
                 ',()',
                 String::Tokenizer->RETAIN_WHITESPACE
                 );

 A call to C on this instance would result in the following token set.

-    'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'signifigant', ')', ' ', 'whitespace'
+    'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'significant', ')', ' ', 'whitespace'

 All running whitespace is grouped together into a single token, we make no attempt to split it into its individual parts.
@@ -375,7 +375,7 @@

 =head1 INNER CLASS

-A B instance is returned from the B's C method and serves as yet another means of iterating through an array of tokens. The simplest way would be to call C and just manipulate the array yourself, or push the array into another object. However, iterating through a set of tokens tends to get messy when done manually. So here I have provided the B to address those common token processing idioms. It is basically a bi-directional iterator which can look ahead, skip and be reset to the begining.
+A B instance is returned from the B's C method and serves as yet another means of iterating through an array of tokens. The simplest way would be to call C and just manipulate the array yourself, or push the array into another object. However, iterating through a set of tokens tends to get messy when done manually. So here I have provided the B to address those common token processing idioms. It is basically a bi-directional iterator which can look ahead, skip and be reset to the beginning.

 B B is an inner class, which means that only B objects can create an instance of it. That said, if B's C method is called from outside of the B package, an exception is thrown.
@@ -388,7 +388,7 @@

 =item B

-This will reset the interal counter, bringing it back to the begining of the token list.
+This will reset the internal counter, bringing it back to the beginning of the token list.

 =item B
@@ -396,7 +396,7 @@

 =item B

-This will return true (1) if the begining of the token list has been reached, and false (0) otherwise.
+This will return true (1) if the beginning of the token list has been reached, and false (0) otherwise.

 =item B
@@ -478,7 +478,7 @@

 =item B

-Along with being a tokenizer, it also provides a means of moving through the resulting tokens, allowing for skipping of tokens and such. But this module looks as if it hasnt been updated from 0.01 and that was uploaded in since 2002. The author (Simon Cozens) includes it in the section of L entitled "The Embarrassing Past". From what I can guess, he does not intend to maintain it anymore.
+Along with being a tokenizer, it also provides a means of moving through the resulting tokens, allowing for skipping of tokens and such. But this module looks as if it hasn't been updated from 0.01 and that was uploaded in since 2002. The author (Simon Cozens) includes it in the section of L entitled "The Embarrassing Past". From what I can guess, he does not intend to maintain it anymore.

 =item B

debian/patches/series

manpage_spelling.patch

debian/compat

8

debian/changelog

libstring-tokenizer-perl (0.05-1) unstable; urgency=low

  * Initial Release. (closes: #636288)

 -- Ben Webb  Tue, 02 Aug 2011 15:04:34 +0000

debian/source/format

3.0 (quilt)

debian/rules

#!/usr/bin/make -f
%:
	dh $@
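
The POD excerpts in manpage_spelling.patch above also describe a bi-directional token iterator that the tokenizer hands back for walking the token list. The exact method names are not visible in those excerpts, so the iterator(), hasNextToken(), nextToken() and reset() calls below are assumed names for that interface rather than anything confirmed by the files shown here; treat this as an illustrative sketch only.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use String::Tokenizer;

    my $st = String::Tokenizer->new('(5 + (100 * (20 - 35)) + 4)', '()');

    # NOTE: iterator(), hasNextToken(), nextToken() and reset() are assumed
    # method names; the excerpts above only document that such an iterator
    # exists and can look ahead, skip and be reset to the beginning.
    my $iter = $st->iterator();
    while ($iter->hasNextToken()) {
        my $token = $iter->nextToken();
        print "token: $token\n";
    }

    # Rewind to the beginning of the token list and walk it again if needed.
    $iter->reset();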