On Friday, 18 September 2009 04:06:11, Ryan Ingram wrote:
> I am confused about why this thread is talking about unsafePerformIO at
> all.  It seems like everything you all want to do can be accomplished with
> the much less evil unsafeInterleaveIO instead.  (Which is still a bit evil;
> but it's the difference between stealing cookies from the cookie jar and
> committing genocide)

I find that remark in rather bad taste.

>
> I wrote this function recently for a quick'n'dirty script:
> > readFiles :: [FilePath] -> IO String
> > readFiles [] = return ""
> > readFiles (f:fs) = do
> >     f_data <- readFile f
> >     rest <- unsafeInterleaveIO (readFiles fs)
> >     return (f_data ++ rest)
>
> It lazily reads from many files and concatenates all the input.  But I
> probably wouldn't use it in a serious application.
>
>   -- ryan

But that does something completely different from what Cristiano wants to do.
He wants to read many files quasi-in-parallel.
As far as I can tell, he needs to read a small chunk from the beginning of
every file, and then, depending on what he got from that, read the rest of
some of the files.
If he reads all the files lazily, he (maybe) runs into the open-file limit
(a semi-closed handle is still open from the OS' point of view, isn't it?).
So he has to close the first files before he opens the Nth.
But what if later he finds out that he has to read the body of a previously
closed file?

I would separate the reading of headers and bodies, reopening the files whose
body is needed, unless for some (maybe compelling) reason he wants to do it
differently.
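A minimal sketch of what I mean, assuming a fixed-size textual header for
simplicity; `headerSize`, `readSelected` and the predicate are made-up names,
not anything Cristiano has shown us. The point is that each `withFile` closes
its handle before returning, so only one handle is open at a time, and the
second pass simply reopens the files whose body is wanted:

```haskell
import System.IO

-- Hypothetical header length; the real value depends on the file format.
headerSize :: Int
headerSize = 16

-- Read the first headerSize characters strictly; withFile closes the
-- handle before returning, so no handle stays open afterwards.
readHeader :: FilePath -> IO String
readHeader path = withFile path ReadMode $ \h -> do
    s <- hGetContents h
    let hdr = take headerSize s
    length hdr `seq` return hdr

-- Read a whole file strictly, again closing the handle immediately.
readBody :: FilePath -> IO String
readBody path = withFile path ReadMode $ \h -> do
    s <- hGetContents h
    length s `seq` return s

-- First pass: read all headers (one open handle at a time).
-- Second pass: reopen only the files whose body turns out to be needed,
-- as decided by the caller-supplied predicate on the header.
readSelected :: (String -> Bool) -> [FilePath] -> IO [String]
readSelected wanted paths = do
    headers <- mapM readHeader paths
    mapM readBody [p | (p, hdr) <- zip paths headers, wanted hdr]
```

Reopening costs a second open per selected file, but it sidesteps both the
open-file limit and the "what if I closed it too early" problem entirely.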

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
