CC: | jpl [...] research.att.com |
Subject: | Tie::Cache::LRU |
Date: | Sun, 13 Apr 2008 14:51:31 -0400 (EDT) |
To: | bug-Tie-Cache-LRU [...] rt.cpan.org |
From: | "John P. Linderman" <jpl [...] research.att.com> |
> in the future just mail it
> all directly to bug-Tie-Cache-LRU@rt.cpan.org
Roger. A bullet-proof (but space intensive) solution to the
each() problem would be to implement FIRSTKEY by building an
array of keys (hey, we already have two copies, why not a
third?) in the desired order, then popping/shifting to
implement NEXTKEY. DELETE should check the last/first
element, if it exists, to honor the delete-while-iterating
allowance implied by perldoc -f each.
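For concreteness, here is a minimal sketch of that snapshot idea as
tied-hash methods. The names ordered_keys, {pending} and {data} are
placeholders rather than Tie::Cache::LRU's real internals, and the
DELETE handling is just one reading of the check described above:

    sub FIRSTKEY {
        my ($self) = @_;
        # Snapshot every key, in the desired order, into a private array.
        $self->{pending} = [ $self->ordered_keys ];   # placeholder method
        return shift @{ $self->{pending} };
    }

    sub NEXTKEY {
        my ($self, $lastkey) = @_;
        return shift @{ $self->{pending} };           # hand out the snapshot one key at a time
    }

    sub DELETE {
        my ($self, $key) = @_;
        if (my $pending = $self->{pending}) {
            # Drop the key from the snapshot too, so an in-progress
            # each() never returns a key that has since been deleted.
            @$pending = grep { $_ ne $key } @$pending;
        }
        return delete $self->{data}{$key};            # placeholder storage; the real cache also fixes its list
    }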
This seems perfectly acceptable to me, but another alternative
would be to have FIRSTKEY and NEXTKEY look one key ahead,
so references to the "current" key wouldn't matter (but
references to other keys might cause the index to "jump"
unacceptably). A count of how many keys are "left" should
probably be maintained, to rule out any possibility of
looping forever.
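A similarly hedged sketch of that look-one-ahead variant, with invented
helpers (first_key, key_after, key_count) standing in for however a
backend actually walks its keys:

    sub FIRSTKEY {
        my ($self) = @_;
        my $first = $self->first_key;                 # placeholder: first key in cache order
        return undef unless defined $first;
        $self->{left}     = $self->key_count - 1;     # keys still to hand out after this one
        $self->{next_key} = $self->key_after($first); # look one key ahead
        return $first;
    }

    sub NEXTKEY {
        my ($self) = @_;
        return undef if $self->{left}-- <= 0;         # hard cap: can never loop forever
        my $key = $self->{next_key};
        return undef unless defined $key;
        $self->{next_key} = $self->key_after($key);   # successor of the key we're about to return
        return $key;
    }

Because the key about to be returned has already been looked up, deleting
the "current" key between calls never breaks the chain, and the countdown
bounds the iteration even if other accesses reorder the list.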
The count/sort implementation "lucks out" on this.
The counts may change, but the positions in the cache array
do not, so I can safely take the element following the
last key, as Array and LinkedList currently do.
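Purely as an illustration of why that is safe, assuming the entries live
in a fixed-position array held in a hypothetical {nodes} field (not the
module's actual layout), "take the element following the last key" can be
a cursor that just scans forward:

    sub FIRSTKEY {
        my ($self) = @_;
        $self->{cursor} = -1;          # start just before the first slot
        return $self->NEXTKEY;
    }

    sub NEXTKEY {
        my ($self) = @_;
        my $nodes = $self->{nodes};    # hypothetical fixed-position array of [key, value] slots
        for my $i ($self->{cursor} + 1 .. $#$nodes) {
            next unless defined $nodes->[$i];   # skip holes left by deletions
            $self->{cursor} = $i;
            return $nodes->[$i][0];
        }
        return undef;                  # ran off the end of the cache
    }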
Truth be told, I don't expect to use keys(), values() or
each() on the cache, and probably not delete(). I just
want to have fast access to the last several items.
So I'm less upset that their implementation might be
a trifle expensive, if I can hold down the expense of
ordinary FETCH and STORE operations. But it makes sense
to implement all the operations, and, obviously, to
implement them correctly.
Enjoy your travels! -- jpl