On 2012-02-04T23:47:34Z, NEILB wrote:
> This sort of module might be used to generate passwords for users of a
> system, and it could currently very easily generate an offensive
> password.
What is or is not offensive varies between contexts. There is nothing
to stop you checking for offensive words, according to your own
definition of what is offensive, after the passphrase has been generated.
Regexp::Common::profanity provides a regular expression that matches
most words commonly thought to be offensive, and may help you here.
The pod already documents that the passphrases are potentially
offensive. I'd be happy to accept a documentation patch that goes into
more detail, perhaps with a code sample demonstrating how to filter
passphrases.
Even passphrases made up entirely of inoffensive words could end up
arranged in offensive ways. For example, the five-word passphrase "i
did you from behind" could be randomly generated (each of those five
words is in the English dictionary).
If post-processing isn't your cup of tea, pre-processing is also
possible: it is quite simple to subclass Crypt::XkcdPassword::Words::EN
and filter the word list up front. For example:
use v5.10;
use Crypt::XkcdPassword;

{
    package Crypt::XkcdPassword::Words::EN::Nice;

    use Regexp::Common;
    use parent 'Crypt::XkcdPassword::Words::EN';

    my @words;

    sub words {
        my ($class) = @_;
        unless (@words) {
            # Take the parent word list and drop anything that
            # matches the profanity pattern.
            my $parent_list = $class->SUPER::words;
            @words = grep { $_ !~ $RE{profanity} } @$parent_list;
        }
        return \@words;
    }
}

my $generator = Crypt::XkcdPassword->new(words => 'EN::Nice');
say $generator->make_password;
So for now, I'll reject the issue, but I'm happy to have it reopened
with a patch.