On 7/10/06, Jay Savage <[EMAIL PROTECTED]> wrote:
On 7/10/06, Charles K. Clarkson <[EMAIL PROTECTED]> wrote:
> Mr. Shawn H. Corey wrote:

>
>     We could do a unique check only when the array is accessed
> instead of every time a value is added. Then we use the cached
> result until another element is added.
>

I'd be inclined to flip that around: check for uniqueness on add,
sort, and do something like:

   #!/usr/bin/perl -w
   use strict;

   my %recent;      # item => timestamp of its most recent add
   my $limit = 10;  # how many items get_list() returns

   sub get_list {
       # most recently added first
       my @sorted = sort { $recent{$b} <=> $recent{$a} } keys %recent;

       # truncate on use: drop everything past the first $limit items
       foreach (@sorted[$limit .. $#sorted]) {
          delete $recent{$_};
       }

       # don't pad with undefs when there are fewer than $limit keys
       return @sorted > $limit ? @sorted[0 .. $limit - 1] : @sorted;
   }

   # example usage:

   foreach ('a'..'z') {
      $recent{$_} = time;   # record the add time
      sleep 1;              # keep the one-second timestamps distinct
   }

   print join(' ', get_list()), "\n";   # the 10 most recently added

   # only the $limit survivors remain in %recent after truncation
   while (my ($k, $v) = each %recent) {
      print "$k: $v\n";
   }
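
(For the record: with the join() above, the first print gives
"z y x w v u t s r q", and the while loop then lists those same ten
keys with their timestamps, since get_list() pruned the rest.)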


Lost a little bit of the message there somehow. It should be:

  "check for uniqueness on add, sort and truncate on use."

I also meant to say something about the fact that this doesn't keep
the list at a fixed length. If the time between calls to get_list()
is long relative to the rate at which items are added, the hash could
grow quite large, which will also decrease the efficiency of the
sort. On the whole, though, it should be pretty speedy. It will also
yield unpredictable results when the interval between adds is less
than a second, since two items can end up with identical timestamps,
but you can get around that with Time::HiRes::gettimeofday().
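
Something like this would cover both points; it's just a sketch, and
the prune threshold of 10 * $limit is an arbitrary choice, nothing
magic:

   use Time::HiRes qw(gettimeofday);

   # companion to get_list() above: record an add with a
   # sub-second timestamp
   sub add {
       my $item = shift;

       # gettimeofday() returns (seconds, microseconds), so adds
       # within the same second still sort predictably
       my ($sec, $usec) = gettimeofday();
       $recent{$item} = $sec + $usec / 1_000_000;

       # keep the hash from growing without bound between calls
       # to get_list(); the threshold here is arbitrary
       get_list() if keys %recent > 10 * $limit;
   }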

--
--------------------------------------------------
This email and attachment(s): [  ] blogable; [ x ] ask first; [  ]
private and confidential

daggerquill [at] gmail [dot] com
http://www.tuaw.com  http://www.dpguru.com  http://www.engatiki.org

values of β will give rise to dom!
