On 6/19/06, Sara Kenedy <[EMAIL PROTECTED]> wrote:
Hello,
I want to create a function sumList as in the examples below:
sumList [("s1",2),("s2",4),("s3",3),("s4",2)] = ("#", 1/2 + 1/4 + 1/3 + 1/2)
sumList [("s1",2),("s2",4),("s3",3)] = ("#", 1/2 + 1/4 + 1/3)
I attempted it as follows:
sumList :: (Fractional a) => [(String,a)] -> (String, a)
sumList xs = ("#", sum [1/n | (_, n) <- xs])
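In case it helps, a direct recursive sketch consistent with the examples above (the `("#", 0)` result for an empty list is my assumption; the examples do not cover that case):

```haskell
-- A sketch of sumList by direct recursion: ignore each label, sum the
-- reciprocals of the numbers, and tag the total with "#".
sumList :: (Fractional a) => [(String, a)] -> (String, a)
sumList []              = ("#", 0)  -- assumed base case
sumList ((_, n) : rest) = ("#", 1 / n + snd (sumList rest))
```

For example, sumList [("s1",2),("s2",4)] evaluates to ("#", 0.75).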
G'day all.
Quoting Mathew Mills <[EMAIL PROTECTED]>:
> I guess I don't get any points for an approximate solution, ay?
If only there was an iterative algorithm. Then you could use your
method to get a great initial estimate...
Cheers,
Andrew Bromage
Hi
hugs (winhugs) is rather fast, loading several KLOCs per second, and in the
end you can compile the program with ghc
WinHugs (but not Hugs) also has a feature called auto-reload - if you
change and save any of the source code files it will automatically
detect and reload them - no more ":r" :)
Hello Joel,
Sunday, June 18, 2006, 4:30:35 PM, you wrote:
> I think that developing HTML scrapers requires short tweak-compile-
> run cycles and is probably best done in Perl, Python, Ruby, i.e.
> dynamic languages but I wonder if someone has found otherwise.
hugs (winhugs) is rather fast, loading several KLOCs per second, and in the
end you can compile the program with ghc
Has anyone explored destructuring HTML with Parsec? Any other ideas
on how to best do this?
I'm looking to scrape bits of information from more or less
unstructured HTML pages. I'm looking to structure, tag and classify
the content afterwards.
I think that developing HTML scrapers requires short tweak-compile-run
cycles and is probably best done in Perl, Python, Ruby, i.e. dynamic
languages, but I wonder if someone has found otherwise.
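To make the Parsec question concrete, here is a rough sketch (the names `openTag` and `tagNames` are mine, and it assumes the parsec package is available; real-world HTML is far messier than this grammar admits):

```haskell
-- A rough sketch of destructuring HTML with Parsec: collect the names
-- of opening tags and discard everything else.
import Data.Maybe (catMaybes)
import Text.Parsec
import Text.Parsec.String (Parser)

-- Parse one opening tag and return its name, e.g. "<p>" yields "p".
openTag :: Parser String
openTag = char '<' >> many1 letter <* manyTill anyChar (char '>')

-- Walk the input, keeping tag names and skipping any other character.
tagNames :: Parser [String]
tagNames = catMaybes <$>
  many ((Just <$> try openTag) <|> (anyChar >> return Nothing))

main :: IO ()
main = print (parse tagNames "" "<html><body><p>hi</p></body></html>")
```

This is only a starting point; closing tags, attributes, and malformed markup would all need more care.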
Hello Sara,
Sunday, June 18, 2006, 9:04:05 AM, you wrote:
you should write the stopping alternatives first, because Haskell tries
the alternatives sequentially, topmost first:
listOfString :: [(String,String)] -> [String]
listOfString [("","")] = []
listOfString [(s1,s2)] = s1 : listOfString (lex s2)
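The definition above can be exercised directly — a small runnable sketch (Prelude's `lex :: String -> [(String, String)]` peels off one lexeme at a time, and `lex ""` yields `[("","")]`, which is why that stopping alternative must come first):

```haskell
-- The definition from the message above, with the stopping alternative
-- listed first so it is matched before the general case.
listOfString :: [(String, String)] -> [String]
listOfString [("", "")] = []  -- lex "" yields [("","")]: stop here
listOfString [(s1, s2)] = s1 : listOfString (lex s2)

main :: IO ()
main = print (listOfString (lex "f x = x + 1"))
-- prints ["f","x","=","x","+","1"]
```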