

Report information
The Basics
Id: 34962
Status: resolved
Priority: 0
Queue: Tie-Cache-LRU

People
Owner: Nobody in particular
Requestors: jpl [...] research.att.com
Cc:
AdminCc:

Bug Information
Severity: (no value)
Broken in: (no value)
Fixed in: (no value)



CC: jpl [...] research.att.com
Subject: Tie::Cache::LRU
Date: Sun, 13 Apr 2008 14:51:31 -0400 (EDT)
To: bug-Tie-Cache-LRU [...] rt.cpan.org
From: "John P. Linderman" <jpl [...] research.att.com>
> in the future just mail it all directly to bug-Tie-Cache-LRU@rt.cpan.org
Roger. A bullet-proof (but space-intensive) solution to the each() problem would be to implement FIRSTKEY by building an array of keys (hey, we already have two copies, why not a third?) in the desired order, then popping/shifting to implement NEXTKEY. DELETE should check the last/first element, if it exists, to honor the deletion that perldoc -f each says is allowed.

This seems perfectly acceptable to me, but another alternative would be to have FIRSTKEY and NEXTKEY look one key ahead, so references to the "current" key wouldn't matter (but references to other keys might cause the index to "jump" unacceptably). There probably should be a count maintained of how many keys are "left", to avoid any possibility of loops. The count/sort implementation "lucks out" on this: the counts may change, but the positions in the cache array do not, so I can safely take the element following the last key, as Array and LinkedList currently do.

Truth be told, I don't expect to use keys(), values() or each() on the cache, and probably not delete(). I just want to have fast access to the last several items. So I'm less upset that their implementation might be a trifle expensive, if I can hold down the expense of ordinary FETCH and STORE operations. But it makes sense to implement all the operations, and, obviously, to implement them correctly.

Enjoy your travels! -- jpl
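
For reference, a minimal self-contained sketch of the snapshot approach described above. The package name and internal slots (order, data, snap) are invented for illustration, LRU eviction is left out, and none of this is the distribution's actual code; it only shows FIRSTKEY taking a third copy of the keys, NEXTKEY shifting it out, and DELETE checking the front of the snapshot:

#!/usr/bin/perl
use strict;
use warnings;

package Sketch::SnapshotIter;

# FIRSTKEY copies the keys in least- to most-recent order (the "third
# copy"), NEXTKEY shifts the copy out, and DELETE drops the deleted key
# from the front of the snapshot if it is the next one queued, so the
# iterator never hands back a key that no longer exists.
sub TIEHASH { bless { order => [], data => {}, snap => [] }, shift }

sub STORE {
    my ($self, $key, $val) = @_;
    @{ $self->{order} } = grep { $_ ne $key } @{ $self->{order} };
    push @{ $self->{order} }, $key;            # most recent at the end
    $self->{data}{$key} = $val;
}

sub FETCH  { $_[0]{data}{ $_[1] } }
sub EXISTS { exists $_[0]{data}{ $_[1] } }

sub FIRSTKEY {
    my $self = shift;
    $self->{snap} = [ @{ $self->{order} } ];   # snapshot of the keys
    return shift @{ $self->{snap} };
}

sub NEXTKEY { shift @{ $_[0]{snap} } }

sub DELETE {
    my ($self, $key) = @_;
    shift @{ $self->{snap} }
        if @{ $self->{snap} } && $self->{snap}[0] eq $key;
    @{ $self->{order} } = grep { $_ ne $key } @{ $self->{order} };
    return delete $self->{data}{$key};
}

package main;

tie my %h, 'Sketch::SnapshotIter';
$h{$_} = $_ for 1 .. 3;
while (my ($k, $v) = each %h) {
    delete $h{$k} if $v == 2;    # deleting the current key is safe
}
print join(' ', sort keys %h), "\n";   # 1 3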
CC: bug-Tie-Cache-LRU [...] rt.cpan.org
Subject: Re: Tie::Cache::LRU::Array
Date: Thu, 23 Oct 2008 08:30:50 -0400
To: Michael G Schwern <schwern [...] pobox.com>
From: "John P. Linderman" <jpl [...] research.att.com>
Consider yourself poked :-). The dates haven't changed on CPAN, so I'm assuming the each() bug is still there. There are two problems, as I recall. each() never terminates; that's definitely bad. And keys() returns items in most-recent to least-recent order, so if you access the cache using those keys, you reverse the order of the items.

Of course, there's no telling what people will do with the keys, so there's no way to prevent the cached items from getting scrambled. But for the common case of accessing *all* the items *in the order that keys() returns them*, a least-recent to most-recent order would leave the cache order unchanged. People may be depending on the existing order, so that's a tougher call. But the side effect should at least be documented. -- jpl
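
A hypothetical illustration of the reversal described above, assuming keys() hands the items back most-recent-first as reported (the five-entry size and the a/b/c keys are arbitrary):

use strict;
use warnings;
use Tie::Cache::LRU::Array;

# Re-fetching every item in the most-recent-first order that keys()
# returns promotes the oldest entry last, so the cache ends up ordered
# exactly backwards from where it started.
tie my %cache, 'Tie::Cache::LRU::Array', 5;
$cache{$_} = $_ for 'a' .. 'c';   # LRU order, oldest to newest: a b c

for my $k (keys %cache) {         # reportedly returns c, b, a
    my $v = $cache{$k};           # each FETCH makes $k the most recent
}
# The LRU order, oldest to newest, is now c b a -- reversed.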
> I'm currently traveling and don't have a lot of time for email. I'll look at
> this stuff eventually, haven't seen anything about Tie::Cache::LRU in a while,
> but there's a much greater chance I'll get back to it if it's in the bug
> tracker. I'll forward what you've sent so far, but in the future just mail it
> all directly to bug-Tie-Cache-LRU@rt.cpan.org
>
> Thanks for sending in the comments, I will get to it. Feel free to poke me if
> I don't.
>
> --
> But there's no sense crying over every mistake.
> You just keep on trying till you run out of cake.
>     -- Jonathan Coulton, "Still Alive"
Done. A new version has been uploaded. Since I didn't want to spend too much time on this, I went with the simplest, most robust implementation: FIRSTKEY caches a snapshot of the node list and NEXTKEY doles them out. Mixing each + delete works as promised in perldoc.
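
A small usage sketch of that promise, assuming the fixed release (Tie-Cache-LRU-20081023 or later) is installed; the cache size and keys here are arbitrary:

use strict;
use warnings;
use Tie::Cache::LRU;

# Deleting the key most recently returned by each() is the one deletion
# "perldoc -f each" sanctions during iteration; with the snapshot held
# by FIRSTKEY it does not derail the loop.
tie my %cache, 'Tie::Cache::LRU', 5;
$cache{$_} = $_ * 10 for 'a' .. 'd';

while (my ($k, $v) = each %cache) {
    delete $cache{$k} if $v >= 30;   # only ever delete the current key
}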
Subject: Re: [rt.cpan.org #34962] Resolved: Tie::Cache::LRU
Date: Fri, 24 Oct 2008 06:13:37 -0400
To: bug-Tie-Cache-LRU [...] rt.cpan.org
From: "John P. Linderman" <jpl [...] research.att.com>
hmmmm. Clicking on download got me this...

The requested URL /pub/CPAN/authors/id/M/MS/MSCHWERN/Tie-Cache-LRU-20081023.2116.tar.gz was not found on this server.

Looks like the Tie-Cache-LRU-0.21.tar.gz is still there. -- jpl
Subject: Re: [rt.cpan.org #34962] Resolved: Tie::Cache::LRU
Date: Fri, 24 Oct 2008 03:27:31 -0700
To: bug-Tie-Cache-LRU [...] rt.cpan.org
From: Michael G Schwern <schwern [...] pobox.com>
John P. Linderman via RT wrote:
> Queue: Tie-Cache-LRU
> Ticket <URL: http://rt.cpan.org/Ticket/Display.html?id=34962 >
>
> hmmmm. Clicking on download got me this...
>
> The requested URL /pub/CPAN/authors/id/M/MS/MSCHWERN/Tie-Cache-LRU-20081023.2116.tar.gz
> was not found on this server.
>
> Looks like the Tie-Cache-LRU-0.21.tar.gz is still there. -- jpl
CPAN mirrors take time to update. funet.fi is always first.

http://ftp.funet.fi/pub/languages/perl/CPAN/authors/id/M/MS/MSCHWERN/Tie-Cache-LRU-20081023.2116.tar.gz

--
The interface should be as clean as newly fallen snow and its behavior
as explicit as Japanese eel porn.
CC: jpl [...] research.att.com
Subject: Re: [rt.cpan.org #34962] Resolved: Tie::Cache::LRU
Date: Fri, 24 Oct 2008 08:28:47 -0400
To: bug-Tie-Cache-LRU [...] rt.cpan.org
From: "John P. Linderman" <jpl [...] research.att.com>
Got it, thanks. Iterating over values(%cache) still reverses the cache order, but that's not really a bug. -- jpl

use Tie::Cache::LRU::Array;

tie %cache, 'Tie::Cache::LRU::Array', 5;

$cache{first} = 1;
$cache{second} = 2;

print $_, "\n" for (values(%cache)); # 2 1
print $_, "\n" for (values(%cache)); # 1 2
> CPAN mirrors take time to update. funet.fi is always first.
> http://ftp.funet.fi/pub/languages/perl/CPAN/authors/id/M/MS/MSCHWERN/Tie-Cache-LRU-20081023.2116.tar.gz
>
> --
> The interface should be as clean as newly fallen snow and its behavior
> as explicit as Japanese eel porn.
Closing this as there's a decent fix for each() released and the module is largely obsolete. Consider CHI.