
This queue is for tickets about the String-Tokenizer CPAN distribution.

Report information
The Basics
Id: 70040
Status: new
Priority: 0
Queue: String-Tokenizer

People
Owner: Nobody in particular
Requestors: bjwebb67 [...] googlemail.com
Cc:
AdminCc:

Bug Information
Severity: Normal
Broken in: 0.05
Fixed in: (no value)



Subject: Manpage Spelling Mistakes
The attached patch, which was created for the Debian package libstring-tokenizer-perl (now in sid), fixes some minor spelling mistakes in the manpage.
Subject: manpage_spelling.patch
Description: spelling fixes
Origin: vendor
Forwarded: no
Author: Ben Webb <bjwebb67@googlemail.com>
Last-Update: 2011-08-02
--- a/lib/String/Tokenizer.pm
+++ b/lib/String/Tokenizer.pm
@@ -278,13 +278,13 @@
   # create tokenizer which retains whitespace
   my $st = String::Tokenizer->new(
-              'this is a test with, (signifigant) whitespace',
+              'this is a test with, (significant) whitespace',
               ',()',
               String::Tokenizer->RETAIN_WHITESPACE
               );
 
   # this will print:
-  # 'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'signifigant', ')', ' ', 'whitespace'
+  # 'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'significant', ')', ' ', 'whitespace'
   print "'" . (join "', '" => $tokenizer->getTokens()) . "'";
 
   # get a token iterator
@@ -309,9 +309,9 @@
 A simple string tokenizer which takes a string and splits it on whitespace. It also optionally takes a string of characters to use as delimiters, and returns them with the token set as well. This allows for splitting the string in many different ways.
 
-This is a very basic tokenizer, so more complex needs should be either addressed with a custom written tokenizer or post-processing of the output generated by this module. Basically, this will not fill everyones needs, but it spans a gap between simple C<split / /, $string> and the other options that involve much larger and complex modules.
+This is a very basic tokenizer, so more complex needs should be either addressed with a custom written tokenizer or post-processing of the output generated by this module. Basically, this will not fill everyone's needs, but it spans a gap between simple C<split / /, $string> and the other options that involve much larger and complex modules.
 
-Also note that this is not a lexical analyser. Many people confuse tokenization with lexical analysis. A tokenizer mearly splits its input into specific chunks, a lexical analyzer classifies those chunks. Sometimes these two steps are combined, but not here.
+Also note that this is not a lexical analyser. Many people confuse tokenization with lexical analysis. A tokenizer merely splits its input into specific chunks, a lexical analyzer classifies those chunks. Sometimes these two steps are combined, but not here.
 
 =head1 METHODS
@@ -331,15 +331,15 @@
 =item B<tokenize ($string, $delimiters, $handle_whitespace)>
 
-Takes a C<$string> to tokenize, and optionally a set of C<$delimiter> characters to facilitate the tokenization and the type of whitespace handling with C<$handle_whitespace>. The C<$string> parameter and the C<$handle_whitespace> parameter are pretty obvious, the C<$delimiter> parameter is not as transparent. C<$delimiter> is a string of characters, these characters are then seperated into individual characters and are used to split the C<$string> with. So given this string:
+Takes a C<$string> to tokenize, and optionally a set of C<$delimiter> characters to facilitate the tokenization and the type of whitespace handling with C<$handle_whitespace>. The C<$string> parameter and the C<$handle_whitespace> parameter are pretty obvious, the C<$delimiter> parameter is not as transparent. C<$delimiter> is a string of characters, these characters are then separated into individual characters and are used to split the C<$string> with. So given this string:
 
   (5 + (100 * (20 - 35)) + 4)
 
-The C<tokenize> method without a C<$delimiter> parameter would return the following comma seperated list of tokens:
+The C<tokenize> method without a C<$delimiter> parameter would return the following comma separated list of tokens:
 
   '(5', '+', '(100', '*', '(20', '-', '35))', '+', '4)'
 
-However, if you were to pass the following set of delimiters C<(, )> to C<tokenize>, you would get the following comma seperated list of tokens:
+However, if you were to pass the following set of delimiters C<(, )> to C<tokenize>, you would get the following comma separated list of tokens:
 
   '(', '5', '+', '(', '100', '*', '(', '20', '-', '35', ')', ')', '+', '4', ')'
@@ -349,17 +349,17 @@
 as some languages do. Then you would give this delimiter C<+*-()> to arrive at the same result.
 
-If you decide that whitespace is signifigant in your string, then you need to specify that like this:
+If you decide that whitespace is significant in your string, then you need to specify that like this:
 
   my $st = String::Tokenizer->new(
-              'this is a test with, (signifigant) whitespace',
+              'this is a test with, (significant) whitespace',
               ',()',
               String::Tokenizer->RETAIN_WHITESPACE
               );
 
 A call to C<getTokens> on this instance would result in the following token set.
 
-  'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'signifigant', ')', ' ', 'whitespace'
+  'this', ' ', 'is', ' ', 'a', ' ', 'test', ' ', 'with', ' ', '(', 'significant', ')', ' ', 'whitespace'
 
 All running whitespace is grouped together into a single token, we make no attempt to split it into its individual parts.
@@ -375,7 +375,7 @@
 =head1 INNER CLASS
 
-A B<String::Tokenizer::Iterator> instance is returned from the B<String::Tokenizer>'s C<iterator> method and serves as yet another means of iterating through an array of tokens. The simplest way would be to call C<getTokens> and just manipulate the array yourself, or push the array into another object. However, iterating through a set of tokens tends to get messy when done manually. So here I have provided the B<String::Tokenizer::Iterator> to address those common token processing idioms. It is basically a bi-directional iterator which can look ahead, skip and be reset to the begining.
+A B<String::Tokenizer::Iterator> instance is returned from the B<String::Tokenizer>'s C<iterator> method and serves as yet another means of iterating through an array of tokens. The simplest way would be to call C<getTokens> and just manipulate the array yourself, or push the array into another object. However, iterating through a set of tokens tends to get messy when done manually. So here I have provided the B<String::Tokenizer::Iterator> to address those common token processing idioms. It is basically a bi-directional iterator which can look ahead, skip and be reset to the beginning.
 
 B<NOTE:> B<String::Tokenizer::Iterator> is an inner class, which means that only B<String::Tokenizer> objects can create an instance of it. That said, if B<String::Tokenizer::Iterator>'s C<new> method is called from outside of the B<String::Tokenizer> package, an exception is thrown.
@@ -388,7 +388,7 @@
 =item B<reset>
 
-This will reset the interal counter, bringing it back to the begining of the token list.
+This will reset the internal counter, bringing it back to the beginning of the token list.
 
 =item B<hasNextToken>
@@ -396,7 +396,7 @@
 =item B<hasPrevToken>
 
-This will return true (1) if the begining of the token list has been reached, and false (0) otherwise.
+This will return true (1) if the beginning of the token list has been reached, and false (0) otherwise.
 
 =item B<nextToken>
@@ -478,7 +478,7 @@
 =item B<String::Tokeniser>
 
-Along with being a tokenizer, it also provides a means of moving through the resulting tokens, allowing for skipping of tokens and such. But this module looks as if it hasnt been updated from 0.01 and that was uploaded in since 2002. The author (Simon Cozens) includes it in the section of L<Acme::OneHundredNotOut> entitled "The Embarrassing Past". From what I can guess, he does not intend to maintain it anymore.
+Along with being a tokenizer, it also provides a means of moving through the resulting tokens, allowing for skipping of tokens and such. But this module looks as if it hasn't been updated from 0.01 and that was uploaded in since 2002. The author (Simon Cozens) includes it in the section of L<Acme::OneHundredNotOut> entitled "The Embarrassing Past". From what I can guess, he does not intend to maintain it anymore.
 
 =item B<Parse::Tokens>
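For reference, the attachment is a quilt/DEP-3 style patch as shipped in Debian packages, applied with -p1 from the root of the unpacked source tree. The sketch below is a self-contained demo of that workflow; the demo directory, file contents, and one-line patch are invented for illustration, assuming GNU patch is available.

```shell
# Build a throwaway tree with a deliberate misspelling
# (paths mirror the real lib/String/Tokenizer.pm layout).
mkdir -p demo/lib/String
printf 'whitespace is signifigant here\n' > demo/lib/String/Tokenizer.pm

# A one-line spelling patch in the same a/ b/ unified-diff form
# as the attached manpage_spelling.patch.
cat > demo/fix.patch <<'EOF'
--- a/lib/String/Tokenizer.pm
+++ b/lib/String/Tokenizer.pm
@@ -1 +1 @@
-whitespace is signifigant here
+whitespace is significant here
EOF

# Apply from the tree root, stripping the a/ and b/ prefixes.
(cd demo && patch -p1 < fix.patch)
cat demo/lib/String/Tokenizer.pm
```

The attached patch would be applied the same way against the String-Tokenizer-0.05 source directory.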