

Report information
The Basics
Id: 8133
Status: new
Priority: 0
Queue: Storable

People
Owner: Nobody in particular
Requestors: jbisbee [...] yahoo.com
Cc:
AdminCc:

Bug Information
Severity: Critical
Broken in:
  • 1.0.13
  • 1.0.14
  • 2.00
  • 2.02
  • 2.03
  • 2.04
  • 2.05
  • 2.06
  • 2.07
  • 2.08
  • 2.09
  • 2.10
  • 2.11
  • 2.12
  • 2.13
Fixed in: (no value)

Attachments


Subject: Instant Perl "Out of Memory" Error When Attempting to "retrieve" Small Storable File
I've verified this issue with both perl 5.6.0 / Storable 1.0.13 and perl 5.6.2 / Storable 2.13. I looked at the Storable.xs code in 2.13, turned on the DEBUGME flag, and reran my test retrieve code (test.pl):

    #!/usr/bin/perl
    use Storable qw(retrieve);
    $Storable::DEBUGME = 1;
    retrieve(shift);

Then I ran it:

    ./test.pl out-of-memory.storable

and got this output (truncated to show the error):

    ** extending kbuf to -184127888 bytes (had 128)
    Out of memory!

I tracked it down to retrieve_hash() and this code:

    KBUFCHK((STRLEN)size);    /* Grow hash key read pool if needed */

Storable files that trigger this bug are created every couple of days. Because these are profiles and we have no way of trapping the exception, we have users killing httpd processes each time they attempt to load our site, until we remove the offending storable file.

I've attached the 'out-of-memory.storable' file so you can easily reproduce this problem.

From my basic knowledge (I stress basic), I'm wondering if we could just ASSERT, when attempting to grow the hash key read pool, that the requested size is a positive integer.

Thanks,
Jeff
Attachment: out-of-memory.storable (application/octet-stream, 8.1k)
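
As an application-level stopgap (a minimal sketch, not from the ticket), wrapping retrieve() in eval traps Storable's ordinary die()-style errors for truncated or unrecognized files, but it cannot trap this bug: the "Out of memory!" message comes from perl's allocator aborting the process, not from a Perl exception, which is why the httpd processes die outright.

    #!/usr/bin/perl
    # Sketch: eval catches retrieve()'s normal trappable errors,
    # but NOT the allocation abort described in this ticket.
    use strict;
    use warnings;
    use Storable qw(retrieve);

    my $file = shift or die "usage: $0 <storable-file>\n";
    my $data = eval { retrieve($file) };
    if (!defined $data) {
        warn $@ ? "retrieve died (trappable): $@"
                : "retrieve returned undef (I/O error?)\n";
    }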


From: David Pisoni <dpisoni at shopzilla dot com>
I had a similar situation to this, and I traced it out in Storable.xs. When I ran my test data, the trace went like this (paraphrasing):

    Found hash ref declaration, step in
    found scalar value declaration of length <some astronomical number>
    heap is too small to fit scalar, allocate <some astronomical number of bytes> to fit
    OUT OF MEMORY

There's no arguing that the frozen data was corrupted – it should not have declared insane lengths. But in this case Storable should throw an exception that can be caught – not "out of memory". So it is a bug – there is no boundary checking on the size of data segments – though such a check would likely only matter when the data is corrupted in this particular way. The only real-world scenario I could come up with to cause this would be freezing a huge (5GB) value on a 64-bit system and then attempting to unfreeze it on a 32-bit system.
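
The "astronomical" length, and the "-184127888 bytes" in the DEBUGME trace above, are consistent with a 32-bit length field whose high bit is set: the length is read into a signed I32, so any value over 2**31 (whether from corruption or from an oversized string frozen on a 64-bit box) comes out negative. A minimal sketch of that reinterpretation in plain Perl (the byte value is illustrative, not taken from the attached file):

    #!/usr/bin/perl
    # Sketch: the same four bytes seen as unsigned vs. signed 32-bit values.
    use strict;
    use warnings;

    my $raw = pack("l", -184127888);   # four bytes as they might sit in the file
    my $u32 = unpack("L", $raw);       # the length the writer "meant"
    my $i32 = unpack("l", $raw);       # what the reader sees and passes to KBUFCHK
    printf "unsigned: %u  signed: %d\n", $u32, $i32;

    # A bounds check (rejecting values <= 0, or larger than the remaining input)
    # would let Storable croak with a trappable "corrupted data" error instead
    # of aborting the whole process.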