* Ken Williams via RT [03/11/2013 03:12 PM]:
> <URL: https://rt.cpan.org/Ticket/Display.html?id=83880 >
>
> Thanks Adam.
>
> By opening $fh the way we do, it should automatically be closed when it
> goes out of scope, which is preferable to manually closing. Do you have
> an example that demonstrates the problem?
Hi Ken,
The problem was with my production code running in an NFS environment,
so I am not sure I will be able to create a minimal test case that
shows this on a local disk. The code was doing something like this:
my $directory_with_files = Path::Class::Dir->new( <some directory> );
while ( my $file = $directory_with_files->next ) {
    # skip anything but *.file
    next if substr( $file->stringify, -5, 5 ) ne '.file';
    my @lines = $file->slurp( chomp => 1 );
    # here process the @lines array
}
# system("ls -la $directory_with_files");
$directory_with_files->rmtree;
# system("ls -la $directory_with_files");
The error was coming from the File::Path module. I debugged this by
listing the files with the shell 'ls -la' command: one of the files
that had been opened had been renamed to ".nfs<some_random_number_here>",
which means the file was still open while the removal operation was in
progress. The $file->slurp() call was the only thing opening these
files.
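The same check can be done from Perl. Here is a sketch (untested) that
uses Path::Class's children and basename methods to warn about any
silly-renamed leftovers before the rmtree:

# Sketch: look for NFS "silly renamed" leftovers before removing the tree.
my @nfs_leftovers = grep { !$_->is_dir and $_->basename =~ /^\.nfs/ }
                    $directory_with_files->children;
warn "still-open files: @nfs_leftovers\n" if @nfs_leftovers;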
When I changed the code like this, it started to work without any problems:
while ( my $file = $directory_with_files->next ) {
    # skip anything but *.file
    next if substr( $file->stringify, -5, 5 ) ne '.file';
    my $fh = $file->openr;
    foreach my $line (<$fh>) {
        # here process the lines of each file
    }
    $fh->close;
}
$directory_with_files->rmtree;
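An alternative to the explicit close would be to confine the handle to
a bare block inside the loop, so Perl releases it at the closing brace,
before rmtree ever runs; a sketch:

# Sketch: a bare block limits $fh's lifetime; it closes at the brace.
{
    my $fh = $file->openr;
    foreach my $line (<$fh>) {
        # here process the lines of each file
    }
}   # $fh goes out of scope and is closed here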
I will try to send you a real case and its output on an NFS filesystem
later today.
Thanks,
/Adam