Send Beginners mailing list submissions to
        [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
        http://www.haskell.org/mailman/listinfo/beginners
or, via email, send a message with subject or body 'help' to
        [email protected]

You can reach the person managing the list at
        [email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Beginners digest..."


Today's Topics:

   1. Re:  sorting almost sorted list (Yitzchak Gale)
   2. Re:  sorting almost sorted list (Yitzchak Gale)
   3. Re:  sorting almost sorted list (Daniel Fischer)
   4. Re:  sorting almost sorted list (Daniel Fischer)
   5. Re:  sorting almost sorted list (Yitzchak Gale)
   6. Re:  sorting almost sorted list (Daniel Fischer)
   7. Re:  sorting almost sorted list (David Fletcher)
   8. Re:  sorting almost sorted list (Daniel Fischer)


----------------------------------------------------------------------

Message: 1
Date: Tue, 27 Sep 2011 14:46:52 +0300
From: Yitzchak Gale <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: Daniel Fischer <[email protected]>
Cc: Haskell Beginners <[email protected]>
Message-ID:
        <caorualzumfvouv_h8gyyup6vmkfmbutz9y_gmzwxudkopue...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

I wrote:
> After running some tests, it seems empirically that
> you can use m `div` 2 instead of m for lists of even
> length, which gives you a nice speedup for larger
> values of m. For lists of prime length, the best
> you can do seems to be m-1. It's more complicated
> for lists of non-prime odd length.
> I can't immediately see why any of that is true.

Well, obviously, that weird behavior has to do with
the regularity of my test data. If I randomize more,
the results become more believable. It seems that
m-1 or m-2 work empirically, but that could just
be because the probability of a collision is extremely
low.

The result for the more regularly shaped test data,
where the results have an interesting dependence
on prime factorization, is still fascinating.
I hope Daniel will comment. :)

Here is the more randomized test data
(sorry, I really should be using QuickCheck
for this instead of doing it by hand):

test n k = fmap (all isSorted . map (sortAlmostBy n compare)) $ rands2 k

isSorted xs = and . zipWith (<=) xs $ drop 1 xs

rands1 d = do
    cs <- replicateM (2000 `div` d) $ randomRS (0, d-1)
    fmap concat $ zipWithM mkChunk (scanl (+) 0 cs) cs
  where
    mkChunk x y = replicateM y $ randomRS (x, x+y-1)

rands2 d = fmap (evalState . replicateM 100 $ rands1 d) newStdGen



------------------------------

Message: 2
Date: Tue, 27 Sep 2011 14:50:56 +0300
From: Yitzchak Gale <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: Daniel Fischer <[email protected]>
Cc: Haskell Beginners <[email protected]>
Message-ID:
        <CAOrUaLaoj3P5PCfJ4vSeqjLYLEMX2UdTB8sdX=xvnb_syj-...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Sorry to keep replying to myself here.

I wrote:
> Here is the more randomized test data:

I forgot one function:

randomRS = state . randomR

"state" and "evalState" are from
Control.Monad.Trans.State.
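
For reference, the snippets assume imports along these lines (a
reconstruction; the original messages didn't include them):

import Control.Monad (replicateM, zipWithM)
import Control.Monad.Trans.State (state, evalState)
import System.Random (newStdGen, randomR)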

> test n k = fmap (all isSorted . map (sortAlmostBy n compare)) $ rands2 k
>
> isSorted xs = and . zipWith (<=) xs $ drop 1 xs
>
> rands1 d = do
>     cs <- replicateM (2000 `div` d) $ randomRS (0, d-1)
>     fmap concat $ zipWithM mkChunk (scanl (+) 0 cs) cs
>   where
>     mkChunk x y = replicateM y $ randomRS (x, x+y-1)
>
> rands2 d = fmap (evalState . replicateM 100 $ rands1 d) newStdGen



------------------------------

Message: 3
Date: Tue, 27 Sep 2011 16:04:08 +0200
From: Daniel Fischer <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: [email protected], [email protected]
Message-ID: <[email protected]>
Content-Type: Text/Plain;  charset="iso-8859-1"

On Tuesday 27 September 2011, 11:32:35, Yitzchak Gale wrote:
> I wrote:
> > Can you guarantee for some value of m that for each note N,
> > only the first m notes following N might end earlier than N?
> > If so, then the following simple algorithm is linear
> > and runs in constant space. You could then use:
> > sortAlmostBy m (comparing endTime)
> > ...You might be able to do a little better than this.
> 
> After running some tests, it seems empirically that
> you can use m `div` 2 instead of m for lists of even
> length, which gives you a nice speedup for larger
> values of m. For lists of prime length, the best
> you can do seems to be m-1. It's more complicated
> for lists of non-prime odd length.
> 
> I can't immediately see why any of that is true.
> 
> Daniel to the rescue perhaps?

I have to confess, I'm not even sure what you mean.
Okay, m is the maximal number of following notes that may end earlier than 
the current one. But to which list(s) does the length refer?

> 
> test n k = fmap (all isSorted . map (sortAlmostBy n compare)) $ rands k

With what parameters did you call test?

Prelude Almost System.Random> test 12 16
True
(0.12 secs, 178509776 bytes)
Prelude Almost System.Random> test 12 17
False
(0.00 secs, 592904 bytes)

The only big variation in runtimes I find is due to a violation of the 
preconditions of sortAlmostBy, in which case the 'all' test short-circuits.
(But most of the runtime is used for generating the lists. If I 
completely evaluate such a list of lists prior to sorting, then m has a 
measurable impact on time even when 'all' doesn't short-circuit; however, 
in that situation the runtime is basically monotonic in m, as expected.)

> 
> isSorted xs = and . zipWith (<=) xs $ drop 1 xs
> 
> rands d = fmap (take 100 . map (concat . zipWith (map . (+)) [0,d ..]
> . chunksOf d) . chunksOf 1000 . randomRs (0, d-1)) newStdGen

You generate 100 lists of length 1000, such that the entries at indices i
with k*d <= i < (k+1)*d lie between k*d (inclusive) and (k+1)*d (exclusive).

Now, if you split those lists into chunks of length m, the preconditions of 
sortAlmostBy are only guaranteed to hold if each of the d-long chunks from 
the generation meets at most two m-long chunks.
That is the case if (and only if, if the entire list is long enough)

m + gcd m d >= d

If m + gcd m d < d, you will sooner or later encounter a d-chunk meeting 
three (or more) m-chunks. The probability that the first element of that 
d-chunk is larger than the last is nearly 1/2 [more precisely, (d-1)/(2d)], 
so it doesn't take many such situations until you get a non-sorted list 
from sortAlmostBy m.

So, if running time is monotonic in m, under the assumption that the 
preconditions of sortAlmostBy hold, the optimal choice of m is

min { m \in N : m + gcd m d >= d }

For even d, that's d/2; for prime d, it's d-1; in general, it's (1 - 1/p)*d, 
where p is the smallest prime factor of d.
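
As a quick sanity check of that formula (a small sketch; the helper names 
are mine, not from the thread):

optimalM :: Int -> Int
optimalM d = head [m | m <- [1 ..], m + gcd m d >= d]

smallestPrimeFactor :: Int -> Int
smallestPrimeFactor d = head [p | p <- [2 ..], d `mod` p == 0]

-- optimalM 12 == 6   (d/2 for even d)
-- optimalM 13 == 12  (d-1 for prime d)
-- optimalM 15 == 10  ((1 - 1/smallestPrimeFactor 15) * 15 == 10)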

I haven't analysed your merging function, but at first glance, merging c 
chunks of length m each is O(c*m); so, since sorting each chunk is 
O(m*log m), it'd overall be

O(c*m*log m) = O(n*log m) = O(n*log (n/c))

It has larger overhead than sortBy, so if n is small or m is large, sortBy 
is likely to be better, but if n is large and m relatively small, 
sortAlmostBy can be significantly (but not dramatically, excluding extreme 
cases where you can do much better) faster.



------------------------------

Message: 4
Date: Tue, 27 Sep 2011 16:51:42 +0200
From: Daniel Fischer <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: [email protected]
Cc: Haskell Beginners <[email protected]>
Message-ID: <[email protected]>
Content-Type: Text/Plain;  charset="iso-8859-1"

On Tuesday 27 September 2011, 13:46:52, Yitzchak Gale wrote:
> I wrote:
> > After running some tests, it seems empirically that
> > you can use m `div` 2 instead of m for lists of even
> > length, which gives you a nice speedup for larger
> > values of m. For lists of prime length, the best
> > you can do seems to be m-1. It's more complicated
> > for lists of non-prime odd length.
> > I can't immediately see why any of that is true.
> 
> Well, obviously, that weird behavior has to do with
> the regularity of my test data. If I randomize more,
> the results become more believable. It seems that
> m-1 or m-2 work empirically, but that could just
> be because the probability of a collision is extremely
> low.

Yes. With less regular data, we need to consider all chunks of length d in 
the input, not only those starting at index k*d.
If any of them meets three m-chunks (where m is the parameter to 
sortAlmostBy), the precondition of sortAlmostBy may be violated.
The smallest m to guarantee the preconditions is then m = d-1.
But with less regular data, if the list is almost monotonically increasing, 
the probability that the last element of a d-chunk is smaller than the 
first is significantly lower than 1/2, so you have a much bigger chance 
that m = d-2 will work than in the regular case for odd d.
[To disambiguate: parenthesise as ... than (in the ... odd d).]



------------------------------

Message: 5
Date: Tue, 27 Sep 2011 19:34:01 +0300
From: Yitzchak Gale <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: Daniel Fischer <[email protected]>
Cc: [email protected]
Message-ID:
        <CAOrUaLbV26imViL=q2zsndnqxakefckb5zhoj1qxsrnommf...@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

Hi Daniel,

Thanks for clearing up the mystery
about the dependence on prime factorization!
Very interesting.

> at first glance, merging c
> chunks of length m each is O(c*m); so, since sorting each chunk is
> O(m*log m), it'd overall be
>
> O(c*m*log m) = O(n*log m) = O(n*log (n/c))

No. In the given use case m is a small constant and
c is proportional to n, so the overall complexity is O(n).

> It has larger overhead than sortBy,

That's true. But it's asymptotically better.

Given Dennis' use case, it sounds like
m will probably always be < 10.
Input size will typically be a few hundred,
but could be a few thousand. sortBy
would indeed be just fine in those cases.

> so if n is small or m is large, sortBy
> is likely to be better, but if n is large and m relatively small,
> sortAlmostBy can be significantly (but not dramatically,
> excluding extreme cases where you can do much better) faster.

At most m will be on the order of 100 if we
need to process all the parts of a large
symphony orchestra combined together
(a very unlikely use case admittedly),
and in that case the input size could be as
much as 10^6, or possibly even more for
something like the Mahler.

(Come to think of it, it's pretty amazing that
Mahler wrote so many notes and actually got
people to play them.)

There I believe sortAlmostBy could be quite
an improvement. I'd be interested to hear
ideas to make it even better though.

Thanks,
Yitz



------------------------------

Message: 6
Date: Tue, 27 Sep 2011 20:16:18 +0200
From: Daniel Fischer <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: [email protected]
Cc: [email protected]
Message-ID: <[email protected]>
Content-Type: Text/Plain;  charset="iso-8859-1"

On Tuesday 27 September 2011, 18:34:01, Yitzchak Gale wrote:
> Hi Daniel,
> 
> Thanks for clearing up the mystery
> about the dependence on prime factorization!
> Very interesting.
> 
> > at first glance, merging c
> > chunks of length m each is O(c*m); so, since sorting each chunk is
> > O(m*log m), it'd overall be
> > 
> > O(c*m*log m) = O(n*log m) = O(n*log (n/c))
> 
> No. In the given use case m is a small constant and
> c is proportional to n, so the overall complexity is O(n).

Not a contradiction. If m is a constant (> 1), O(n*log m) = O(n)
(and sorting chunks of length m is O(m*log m) = O(1) then).

> 
> > It has larger overhead than sortBy,
> 
> That's true. But it's asymptotically better.
> 
> Given Dennis' use case, it sounds like
> m will probably always be < 10.
> Input size will typically be a few hundred,
> but could be a few thousand. sortBy
> would indeed be just fine in those cases.
> 
> > so if n is small or m is large, sortBy
> > is likely to be better, but if n is large and m relatively small,
> > sortAlmostBy can be significantly (but not dramatically,
> > excluding extreme cases where you can do much better) faster.
> 
> At most m will be on the order of 100 if we
> need to process all the parts of a large
> symphony orchestra combined together
> (a very unlikely use case admittedly),
> and in that case the input size could be as
> much as 10^6, or possibly even more for
> something like the Mahler.

Yeah, but log (10^6) is still a fairly small number, so counting reduction 
steps, it wouldn't be dramatically better. However, I forgot to take memory 
usage into account, so in terms of wall-clock (or CPU) time, it could be 
dramatically better even for not too large n and not too small m.

> 
> (Come to think of it, it's pretty amazing that
> Mahler wrote so many notes and actually got
> people to play them.)

People do stranger things. People listen to Wagner, for example.

> 
> There I believe sortAlmostBy could be quite
> an improvement.

Certainly. (I just included Data.List.sort in the benchmark; for these 
short lists, it takes less than twice as long as sortAlmostBy.)

> I'd be interested to hear ideas to make it even better though.

-- Merges a list of sorted chunks, assuming each element's final
-- position lies in its own chunk or an adjacent one; only the heads
-- of two neighbouring chunks are compared at any time.
mergeAABy :: (a -> a -> Ordering) -> [[a]] -> [a]
mergeAABy cmp ((x:xs):(y:ys):zss) = foo x xs y ys zss
  where
    foo u us v vs wss =
      case cmp u v of
        GT -> v : case vs of
                    (b:bs) -> foo u us b bs wss
                    [] -> u : us ++ mergeAABy cmp wss
        _ -> u : case us of
                   (c:cs) -> foo c cs v vs wss
                   [] -> case wss of
                           ((w:ws):kss) -> foo v vs w ws kss
                           _ -> v : vs
mergeAABy _ [xs] = xs
mergeAABy _ _ = []

seems to be a bit faster (I have made the lists longer, substituting 5000 
for 2000):

dafis@schwartz:~/Haskell/BeginnersTesting> ./benchAlmost -nis 20 -kis 20
(20,20)  -- print parameters to make sure they're evaluated
284090563  -- print sum of all random Ints to make sure they're evaluated
warming up
estimating clock resolution...
mean is 2.183860 us (320001 iterations)
found 41404 outliers among 319999 samples (12.9%)
  2443 (0.8%) low severe
  38961 (12.2%) high severe
estimating cost of a clock call...
mean is 50.65112 ns (14 iterations)
found 2 outliers among 14 samples (14.3%)
  2 (14.3%) high mild

benchmarking Yitz
collecting 100 samples, 1 iterations each, in estimated 6.537795 s
mean: 67.05025 ms, lb 66.59363 ms, ub 67.62786 ms, ci 0.950
std dev: 2.616184 ms, lb 2.179318 ms, ub 3.100722 ms, ci 0.950

benchmarking Mine
collecting 100 samples, 1 iterations each, in estimated 5.566311 s
mean: 56.91069 ms, lb 56.54846 ms, ub 57.37602 ms, ci 0.950
std dev: 2.100948 ms, lb 1.722233 ms, ub 2.585056 ms, ci 0.950

With n = k = 50:

benchmarking Yitz
collecting 100 samples, 1 iterations each, in estimated 7.366776 s
mean: 75.63603 ms, lb 75.17202 ms, ub 76.20199 ms, ci 0.950
std dev: 2.617970 ms, lb 2.227238 ms, ub 3.040563 ms, ci 0.950

benchmarking Mine
collecting 100 samples, 1 iterations each, in estimated 6.517696 s
mean: 68.00708 ms, lb 67.26089 ms, ub 69.04105 ms, ci 0.950
std dev: 4.463628 ms, lb 3.484169 ms, ub 5.950722 ms, ci 0.950

n = k = 100:

benchmarking Yitz
collecting 100 samples, 1 iterations each, in estimated 9.756303 s
mean: 87.39906 ms, lb 86.90397 ms, ub 87.99232 ms, ci 0.950
std dev: 2.763955 ms, lb 2.382008 ms, ub 3.168909 ms, ci 0.950

benchmarking Mine
collecting 100 samples, 1 iterations each, in estimated 7.505393 s
mean: 77.05801 ms, lb 76.51954 ms, ub 77.74727 ms, ci 0.950
std dev: 3.117756 ms, lb 2.594126 ms, ub 3.769922 ms, ci 0.950
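
For reference, a quick check of mergeAABy on hand-made pre-sorted chunks, 
where each element belongs in its own chunk or a neighbouring one:

mergeAABy compare [[1,3,4],[2,5,7],[6,8,9]]
  == [1,2,3,4,5,6,7,8,9]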



------------------------------

Message: 7
Date: Tue, 27 Sep 2011 21:33:06 +0100
From: "David Fletcher" <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: [email protected]
Message-ID: <[email protected]>
Content-Type: text/plain; charset=utf-8; format=flowed; delsp=yes

On Mon, 26 Sep 2011 19:46:29 -0700 Dennis Raddle wrote:

> I have a problem from music. I have a list of notes sorted by begin
> time. I want to sort them by end time.
>
> Notes that are sorted by begin time are usually close to sorted by end
> time, because notes tend to cluster within a small range of durations.
>
> What algorithms are available to me (hopefully in a library) that are
> good with this kind of thing?
>

How about something like this?  It keeps an output queue sorted by the end
time.  We allow notes to exit the queue and become part of the result as
soon as we can guarantee we aren't going to see any notes with an earlier
end time.


import Data.List (insertBy)
import Data.Ord (comparing)

data Note = Note { begin :: Integer, end :: Integer }

noteSort :: [Note] -> [Note]
noteSort = noteSort' []

noteSort' :: [Note] -> [Note] -> [Note]
noteSort' outq [] = outq
noteSort' outq (x:xs) = out ++ noteSort' outq'' xs
           where (out, outq') = span (`entirelyBefore` x) outq
                 outq'' = insertBy (comparing end) x outq'

entirelyBefore :: Note -> Note -> Bool
a `entirelyBefore` b = end a < begin b


Here I've used a sorted list for the queue, so in the worst case (with a
note that lasts for the whole piece), it degrades into an insertion sort
of the whole thing.  But you could replace that list with a heap, and then
it would degrade into a heapsort.
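
As a quick check with some made-up notes (input sorted by begin time,
output sorted by end time):

map end (noteSort [Note 0 4, Note 1 2, Note 2 3, Note 3 6])
  == [2,3,4,6]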

David.



------------------------------

Message: 8
Date: Wed, 28 Sep 2011 00:00:06 +0200
From: Daniel Fischer <[email protected]>
Subject: Re: [Haskell-beginners] sorting almost sorted list
To: [email protected]
Message-ID: <[email protected]>
Content-Type: Text/Plain;  charset="iso-8859-1"

On Tuesday 27 September 2011, 04:46:29, Dennis Raddle wrote:
> I have a problem from music. I have a list of notes sorted by begin
> time. I want to sort them by end time.
> 
> Notes that are sorted by begin time are usually close to sorted by end
> time, because notes tend to cluster within a small range of durations.
> 
> What algorithms are available to me (hopefully in a library) that are
> good with this kind of thing?

Data.List.sort/sortBy should be fine most of the time. Most algorithms that 
perform significantly better in some situations are rather specific and not 
available in libraries. Nobody implements them until they're needed, and 
then the implementor may consider them too specialised to be worth 
publishing as a library. Thus I doubt you'll find a library function 
adapted to your special needs.

Leaving that aside, which algorithms are best depends on various things.
The most important are, I think, what kind of almost-sortedness you have 
and what your space and laziness requirements are.
Most sorting algorithms require O(n) space and can't lazily produce the 
output[1], but in your situation, that is possible (assuming the data is 
sufficiently nice; if the last note to begin is the first to end, you need 
O(n) space and can't produce incremental output).

Regarding the kind of almost-sortedness, if you have long monotonic runs 
with few out-of-place elements in between, like

[2 .. 1000] ++ [1] ++ [1002 .. 5000] ++ [1001] ++ [5001 .. 10000],

Data.List.sort[By] will be quite good, less so if you have some jittering 
superimposed on a monotonic list, like

concat [[n+1,n] | n <- [0, 2 .. 10000]].

I suppose your situation is more like the second.

Then an insertion sort usually does rather well; it can often outperform 
algorithms with lower (worst-case) complexity. (If the average displacement 
needed for sorting is d, insertion sort takes O(n*(d+1)) time; if d is much 
smaller than log n, insertion sort is very good.)
Like Data.List's mergesort, insertion sort is generic, needs O(n) space, 
and can't produce incremental output. It is easy to implement,

-- a right fold: ins inserts each element into the already sorted
-- result of sorting the rest of the list
inSortBy cmp = foldr ins []
  where
    ins x [] = [x]
    ins x (y:ys) = case cmp x y of
                     GT -> y : ins x ys
                     _ -> x : y : ys

but all is not roses; for long input lists, that builds a large thunk which 
may blow your stack. Insertion sort works best on mutable arrays where you 
don't have the problem of building large thunks. However, it also works 
well on short enough lists. Benchmarking with Yitz's list generator 
(version 2), it can be more than twice as fast as Yitz's sortAlmostBy when 
the lists are at most a few thousand elements long and the average 
displacement is small, but it is much slower if the lists are a few tens 
of thousands of elements long or the average displacement is large.
Better at dealing with longer lists and larger displacements is the left-
fold version of insertion sort,

-- a left fold (foldl' is from Data.List); the accumulator is kept in
-- descending order, hence the final reverse
linSortBy cmp = reverse . foldl' ins []
  where
    ins [] a = [a]
    ins l@(b:bs) a =
      case cmp a b of
        LT -> case ins bs a of
                k@(_:_) -> b : k
                [] -> error "impossible"
        _  -> a : l

(Note: this is adapted to the case of an almost sorted list; for an almost 
reverse-sorted list, you'd use the obvious modification of ins and wouldn't 
need the reverse at the end, giving better performance. For a more or less 
random list, no version of insertion sort does well.)
The strictness is essential: using foldl instead of foldl' leads to worse 
performance than the right fold gives. Making the insertion ins less strict 
isn't nearly as bad, but it still hurts - a bit if the displacements are 
small, more if they're large.
This version of insertion sort can keep up with sortAlmostBy much longer, 
but it suffers from the same problem as the right-fold insertion sort, only 
less.

Moving to specifically tailored sorting algorithms, we have Yitz's nice 
sortAlmostBy. It requires that you know a bound for the number of notes 
beginning not before but ending before any given note, but if you do, it 
doesn't suffer too badly if you overestimate (as long as your estimate is 
still small relative to the length of the list). Of course, if you 
underestimate, it'll probably produce wrong results.
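
Yitz's implementation appeared earlier in the thread and isn't reproduced 
in this digest; a plausible reconstruction of the idea (a sketch, not 
necessarily his exact code) sorts the input in chunks of length m, keeps a 
sorted buffer of the last m merged elements, and emits everything before 
that buffer, which the precondition guarantees to be final:

import Data.List (sortBy)

sortAlmostBy :: Int -> (a -> a -> Ordering) -> [a] -> [a]
sortAlmostBy m cmp = go [] . chunksOf m
  where
    chunksOf _ [] = []
    chunksOf n xs = let (c, r) = splitAt n xs in c : chunksOf n r
    -- standard merge of two sorted lists, stable (buffer elements
    -- come first on ties)
    merge [] ys = ys
    merge xs [] = xs
    merge xxs@(x:xs) yys@(y:ys) =
      case cmp x y of
        GT -> y : merge xxs ys
        _  -> x : merge xs yys
    -- merge each freshly sorted chunk into the buffer; all but the
    -- last m elements of the result are already in final position
    go buf [] = buf
    go buf (c:cs) =
      let merged = merge buf (sortBy cmp c)
          (done, keep) = splitAt (length merged - m) merged
      in done ++ go keep cs

This runs in O(n*log m) time and O(m) space and produces output 
incrementally, matching the properties claimed for it in this thread.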

Then, more specifically tailored to the problem, there is the algorithm 
posted by David Fletcher earlier, which takes advantage of the fact that 
you know the starting times of the notes, that a note can't end before it 
begins, and that the list is sorted by starting time.
My (more generic) implementation of it:

{-# LANGUAGE BangPatterns #-}  -- needed for the strict accumulator in go

sortABy :: (a -> a -> Ordering) -> (a -> a -> Ordering) -> [a] -> [a]
sortABy _ _ [] = []
sortABy cmp1 cmp2 (x:xs) = go [] x xs
  where
    ins a [] = [a]
    ins a l@(b:bs) = case cmp1 a b of
                       LT -> a : l
                       _ -> case ins a bs of
                              k@(_:_) -> b : k
                              [] -> error "oops"
    go !store y [] = y : store
    go store y zzs@(z:zs) =
        case cmp2 y z of
          GT -> case cmp1 y z of
                  GT -> go (ins y store) z zs
                  _  -> go (ins z store) y zs
          _  -> y : case store of
                      (u:us) -> go us u zzs
                      [] -> go [] z zs

The first comparison function compares the end times, the second compares 
the end time of the first argument to the starting time of the second.
As for the left-fold insertion sort, making the insertion stricter gains a 
bit of speed (or more than a bit for larger displacements).
It has the advantage over sortAlmostBy that you don't need to know a bound, 
and under favourable circumstances (if there are never many notes beginning 
during any given note's lifetime), it can be much faster (much meaning 
something like a factor of 1.5 or so).
However, it doesn't take larger displacements well, and in extreme cases 
degrades to a badly adapted insertion sort.
You could prevent that by using a good heap/priority queue for the store 
instead of the list. That would make it slower in the good cases, but 
guarantee better worst-case behaviour.
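
For illustration, a minimal pairing heap that could serve as such a store 
(a sketch; the names are mine, and note that unlike the sorted list it 
doesn't preserve the order of equal elements):

data Heap a = Empty | Node a [Heap a]

hinsert :: (a -> a -> Ordering) -> a -> Heap a -> Heap a
hinsert cmp x h = hmerge cmp (Node x []) h

hmerge :: (a -> a -> Ordering) -> Heap a -> Heap a -> Heap a
hmerge _ Empty h = h
hmerge _ h Empty = h
hmerge cmp h1@(Node x hs1) h2@(Node y hs2)
  | cmp x y == GT = Node y (h1 : hs2)
  | otherwise     = Node x (h2 : hs1)

-- extract the minimum, merging the children pairwise
hview :: (a -> a -> Ordering) -> Heap a -> Maybe (a, Heap a)
hview _ Empty = Nothing
hview cmp (Node x hs) = Just (x, mergePairs hs)
  where
    mergePairs (a:b:rest) = hmerge cmp (hmerge cmp a b) (mergePairs rest)
    mergePairs [h] = h
    mergePairs [] = Empty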

Both specialised algorithms can produce incremental output and run in 
constant space if the preconditions are satisfied. Which is better depends 
on the nature of your data.

Cheers,
Daniel


[1] Generic sorting algorithms can produce their first output element only 
after the entire input has been at least partially processed, since the 
minimum could occur at any position. Thus they necessarily use at least 
O(n) space.
[ignoring obscenities like

sort [] = []
sort xs = let (x,ct) = findMinCount xs
          in replicate ct x ++ sort' x (recalculate xs)

findMinCount (x:xs) = go 1 x xs
  where
    go k x [] = (x,k)
    go k x (y:ys)
      | y < x = go 1 y ys
      | y == x = go (k+1) x ys
      | otherwise = go k x ys

sort' x xs =
  case dropWhile (<= x) xs of
    [] -> []
    (y:ys) -> let (z,ct) = findMinCount1 x 1 y ys
              in replicate ct z ++ sort' z (recalculate xs)

findMinCount1 x k y [] = (y,k)
findMinCount1 x k y (z:zs)
  | z <= x = findMinCount1 x k y zs
  | z < y  = findMinCount1 x 1 z zs
  | z == y = findMinCount1 x (k+1) y zs
  | otherwise = findMinCount1 x k y zs

which relies on (==) identifying only truly indistinguishable values and a 
hypothetical 'recalculate' which recalculates the list lazily in O(1) 
space. But it was fun to come up with.]



------------------------------

_______________________________________________
Beginners mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/beginners


End of Beginners Digest, Vol 39, Issue 35
*****************************************
