What about this?
my @numbers = 1..100;
while (@numbers) {
    my $index = int rand @numbers;    # was "rand $numofques" -- $numofques is never defined
    my $element = $numbers[$index];
    $numbers[$index] = $numbers[-1];  # replace the number we used with the last element...
    pop @numbers;                     # ...then shrink the array (popping *before* the
                                      # assignment re-extends the array when $index is the
                                      # last slot, and that element gets printed twice)
    printf "%-3d ", $element;
}
print "\n";
Wouldn't this run faster?
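It should: splice() has to shift everything after the deleted slot, while the swap-with-last trick is O(1) per pick. A minimal harness with the Benchmark module to check it (the sub names spliceOut and swapLast are mine, and the array size is arbitrary):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $SIZE = 1_000;

# O(n) per pick: splice() closes the gap by shifting the tail.
sub spliceOut {
    my @numbers = 1 .. $SIZE;
    my @picked;
    push @picked, splice( @numbers, int rand @numbers, 1 ) while @numbers;
    return @picked;
}

# O(1) per pick: overwrite the chosen slot with the last element, then pop.
sub swapLast {
    my @numbers = 1 .. $SIZE;
    my @picked;
    while (@numbers) {
        my $index = int rand @numbers;
        push @picked, $numbers[$index];
        $numbers[$index] = $numbers[-1];
        pop @numbers;
    }
    return @picked;
}

# Run each sub for about one CPU second and print a comparison table.
cmpthese( -1, { spliceOut => \&spliceOut, swapLast => \&swapLast } );
```

The gap between the two should widen as $SIZE grows, for the same reason Bob's 1..10000 run below punishes the splice() version.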
> -----Original Message-----
> From: drieux [mailto:[EMAIL PROTECTED]]
> Sent: Monday, April 29, 2002 5:27 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Scripts picks random elements from array, but it repeats
> sometimes
>
>
>
> On Monday, April 29, 2002, at 02:03 , Bob Showalter wrote:
>
> > splice() will be slower as the size of the array grows. If I take
> > your benchmark and change to array from 1..100 to 1..10000, I get
> > the following results for 100 iterations (on an old Pentium 266):
> >
> > Benchmark: timing 100 iterations of consume, decLoop...
> >    consume: 51 wallclock secs (50.72 usr + 0.03 sys = 50.75 CPU) @ 1.97/s (n=100)
> >    decLoop: 19 wallclock secs (19.45 usr + 0.01 sys = 19.46 CPU) @ 5.14/s (n=100)
>
> yeah - I started to think about that since I am also looking at
> felix's dwarfs of fisher_yates_shuffle( \@array ) approach - which
> starts making more sense when your array starts growing into
> strings of stuff....
>
> details when I update the benchmark test here in a bit -
>
> since clearly we want to start looking at larger bodies of data....
>
>
>
> ciao
> drieux
>
> ---
>
>
>
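For the record, since fisher_yates_shuffle keeps coming up: the version in perlfaq4 shuffles in place in O(n), so you can shuffle once up front and then just walk (or pop) the array, with no per-pick bookkeeping at all. A sketch:

```perl
use strict;
use warnings;

# Fisher-Yates shuffle, per perlfaq4: walks the array from the top,
# swapping each slot with a random slot at or below it, so every
# permutation is equally likely. Runs in O(n), in place.
sub fisher_yates_shuffle {
    my $array = shift;
    for ( my $i = @$array; --$i; ) {
        my $j = int rand( $i + 1 );
        @$array[ $i, $j ] = @$array[ $j, $i ];
    }
}

my @numbers = 1 .. 100;
fisher_yates_shuffle( \@numbers );
printf "%-3d ", $_ for @numbers;   # every number exactly once, random order
print "\n";
```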
--------------------------------------------------------------------------------
The views and opinions expressed in this email message are the sender's
own, and do not necessarily represent the views and opinions of Summit
Systems Inc.
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]