Re: [Haskell] Classes with no data type

2006-10-10 Thread Chad Scherrer

Bob,

If you want to display them differently depending on how they were
generated, I would wrap each one in a newtype. Then you can make the
newtypes instances of different typeclasses. The wrapper remembers how
the structure was generated, and using typeclasses, you should be able
to treat each in the appropriate way pretty easily.
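For example, here is a minimal sketch of what I mean (all of the names
below are made up purely for illustration):

-- The shared underlying structure.
data Thing = Thing [Int]

-- Wrappers that remember which generator produced the structure.
newtype FromFoo = FromFoo Thing
newtype FromBar = FromBar Thing

-- One class, with one instance per wrapper, each displaying its own way.
class Display a where
  display :: a -> String

instance Display FromFoo where
  display (FromFoo (Thing xs)) = "foo-generated: " ++ show xs

instance Display FromBar where
  display (FromBar (Thing xs)) = "bar-generated: " ++ unwords (map show xs)

main :: IO ()
main = do
  putStrLn (display (FromFoo (Thing [1,2,3])))
  putStrLn (display (FromBar (Thing [1,2,3])))

The newtype wrappers cost nothing at runtime; your generating functions
just return FromFoo or FromBar values instead of bare Things.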

-Chad


Hi,
   I've met an interesting problem in terms of how to type a data
structure and the functions that operate upon it.

The problem centres around a single data type.  This data type can be
constructed in multiple ways using different functions, depending on
the options the user specifies.  That's all simple enough.  The
problem really comes later on.  Depending on the function used to
generate the data structure, I want to use different functions later
on, for example to display the data.

Thus I have a typical classes problem, in that I have several
implementations of essentially the same function for different
circumstances.  The problem is, they must all operate on the same
data type, so I cannot define them as separate instances.

Anyone got any ideas how to type this?

Bob






--

Chad Scherrer

"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] ByteString tokens

2006-10-02 Thread Chad Scherrer

On 10/2/06, Duncan Coutts <[EMAIL PROTECTED]> wrote:


If you need the previous implementation you can use this:

-- | Like 'splitWith', except that sequences of adjacent separators are
-- treated as a single separator. eg.
--
-- > tokens (=='a') "aabbaca" == ["bb","c"]
--
tokens :: (Word8 -> Bool) -> ByteString -> [ByteString]
tokens f = List.filter (not.null) . splitWith f

Duncan


Ok, I'll just do it that way. Thanks!
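
For anyone finding this in the archives, a self-contained version of that
workaround (sketched against the strict Char8 interface rather than Word8)
would be:

import qualified Data.ByteString.Char8 as B
import qualified Data.List as List

-- Like splitWith, but runs of adjacent separators are collapsed by
-- dropping the empty pieces they leave behind.
tokens :: (Char -> Bool) -> B.ByteString -> [B.ByteString]
tokens f = List.filter (not . B.null) . B.splitWith f

main :: IO ()
main = print (tokens (== 'a') (B.pack "aabbaca"))   -- ["bb","c"]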

--

Chad Scherrer

"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] ByteString tokens

2006-10-02 Thread Chad Scherrer

The Haddock documentation says there is a function

tokens :: (Char -> Bool) -> ByteString -> [ByteString]
in Data.ByteString.Lazy.Char8

But in ghci, I get this:


% ./ghci
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.5.20061001, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude> :m Data.ByteString.Lazy.Char8
Prelude Data.ByteString.Lazy.Char8> :t tokens

:1:0: Not in scope: `tokens'

Any idea where it went?

Thanks,

--

Chad Scherrer

"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Speed of ByteString.Lazy

2006-06-29 Thread Chad Scherrer
No. I suppose "man wc" would have helped, but this has been entertaining, anyway. 
Times for lc and wc -l seem comparable over a couple of runs. So in any
case, it's encouraging that it's so easy to reach speeds comparable to
(presumably) highly-tuned C code like this.
-Chad
On 6/29/06, Robby Findler <[EMAIL PROTECTED]> wrote:
> Just out of curiosity, did you try "wc -l"?
>
> Robby
>
> On Jun 29, 2006, at 1:18 PM, Chad Scherrer wrote:
> > I have a bunch of data files where each line represents a data
> > point. It's nice to be able to quickly tell how many data points I
> > have. I had been using wc, like this:
> >
> > % cat *.txt | /usr/bin/time wc
> > 2350570 4701140 49149973
> > 5.81user 0.03system 0:06.08elapsed 95%CPU (0avgtext+0avgdata
> > 0maxresident)k
> > 0inputs+0outputs (152major+18minor)pagefaults 0swaps
> >
> > I only really care about the line count and the time it takes. For
> > larger data sets, I was getting tired of waiting for wc, and I
> > wondered whether ByteString.Lazy could help me do better. So I
> > wrote a 2-liner:
> >
> > import qualified Data.ByteString.Lazy.Char8 as L
> > main = L.getContents >>= print . L.count '\n'
> >
> > ... and compiled this as "lc". It doesn't get much simpler than
> > that. How does it perform?
> >
> > % cat *.txt | /usr/bin/time lc
> > 2350570
> > 0.09user 0.13system 0:00.24elapsed 89%CPU (0avgtext+0avgdata
> > 0maxresident)k
> > 0inputs+0outputs (199major+211minor)pagefaults 0swaps
> >
> > Wow. 64 times as fast for this run, with almost no effort on my
> > part. Granted, wc is doing more work, but the number of words and
> > characters aren't interesting to me in this case, anyway. I can't
> > imagine (implementation time)*(execution time) being much shorter.
> > Thanks, Don!
> >
> > --
> >
> > Chad Scherrer
> >
> > "Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
> > ___
> > Haskell mailing list
> > Haskell@haskell.org
> > http://www.haskell.org/mailman/listinfo/haskell

--
Chad Scherrer

"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Speed of ByteString.Lazy

2006-06-29 Thread Chad Scherrer
I have a bunch of data files where each line represents a data point.
It's nice to be able to quickly tell how many data points I have. I had
been using wc, like this:

% cat *.txt | /usr/bin/time wc
2350570 4701140 49149973
5.81user 0.03system 0:06.08elapsed 95%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (152major+18minor)pagefaults 0swaps

I only really care about the line count and the time it takes. For
larger data sets, I was getting tired of waiting for wc, and I wondered
whether ByteString.Lazy could help me do better. So I wrote a 2-liner:

import qualified Data.ByteString.Lazy.Char8 as L
main = L.getContents >>= print . L.count '\n'

... and compiled this as "lc". It doesn't get much simpler than that. How does it perform?

% cat *.txt | /usr/bin/time lc
2350570
0.09user 0.13system 0:00.24elapsed 89%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (199major+211minor)pagefaults 0swaps

Wow. 64 times as fast for this run, with almost no effort on my part.
Granted, wc is doing more work, but the number of words and characters
aren't interesting to me in this case, anyway. I can't imagine
(implementation time)*(execution time) being much shorter. Thanks, Don!
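
For comparison, a wc-like variant that also reports words and characters
is only a little longer -- just a sketch, not benchmarked (and since it
walks the lazy input more than once, it ends up holding the whole file in
memory):

import qualified Data.ByteString.Lazy.Char8 as L

-- Roughly what wc prints: line, word, and character (byte) counts.
main :: IO ()
main = do
  s <- L.getContents
  putStrLn $ unwords
    [ show (L.count '\n' s)
    , show (length (L.words s))
    , show (L.length s)
    ]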
--
Chad Scherrer

"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] installing streams library

2006-05-19 Thread Chad Scherrer
Thanks, Bulat. I'm looking forward to trying it out this weekend.

Is there any indication what fast IO approach might work its way into
the standard libraries? It would be nice for idiomatic Haskell to be
really fast by default, and I'd love to be able to show off the language
shootout implications to coworkers.

Cabal doesn't seem obvious to me. Currently my uninformed point of view
is that the whole thing is too complicated to be worth the benefit it
provides. Since it generally seems popular, I'm guessing that means it's
either much simpler than it seems, or else the benefit provided is just
enormous. Can anyone point me to a sales pitch, or provide some
motivation for me to RTFM?
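
From what little I've read so far, a .cabal file seems to be just a short
package description (fields like name, version, build-depends, and
exposed-modules), and the Setup.hs the manual mentions is only a two-line
driver -- something like this, if I'm reading it right:

-- Setup.hs: the standard driver for a Simple Cabal package.
import Distribution.Simple

main :: IO ()
main = defaultMain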

Thanks,

Chad

On Sat, 2006-05-20 at 10:17 +0400, Bulat Ziganshin wrote:
> Hello Chad,
> 
> Friday, May 19, 2006, 10:40:56 PM, you wrote:
> 
> > It sounds like Bulat has gotten some impressive I/O speedups with
> > his Streams library. I'd like to try this out, but I'm having some
> > trouble installing it. I'm using GHC on Linux.
> 
> yes, and the current (still unpublished) version is even better than the
> previous one
> 
> > My first attempt was looking around on this page:
> > http://www.haskell.org/haskellwiki/Library/Streams
> 
> > There's a really nice description, but no signs of where to
> > actually get the library.
> 
> the old hawiki didn't allow publishing links to the site narod.ru,
> because it's a public site that is also used by spammers. I plan to
> request some space on the haskell.org site for my projects to make
> publishing links possible
> 
> > From here I was able to download it, but there's no information
> > regarding how this needs to be set up. There are directories "Data",
> > "Examples", and "System", which I assume are supposed to be plugged
> > into the hierarchical module structure, but how do I do that? I
> > thought this might have something to do with Cabal (I've not yet
> > used that), but the Cabal manual talks about a .cabal file, which doesn't 
> > exist here.
> 
> > Does this follow some standard approach that I'm not familiar with?
> > Where should I look to learn more?
> 
> you will be laughing, but I still haven't learned how Cabal should be
> used, and just pack all the library modules together. you can either copy
> the Data & System dirs to the place where your program lies or use the "-i"
> ghc switch to point the compiler to where it can find additional modules:
> 
> ghc --make -i/Haskell/StreamsLib  YourProgram.hs
> 
> in this case, ghc will search for, e.g., the module "Data.Ref" in the file
> /Haskell/StreamsLib/Data/Ref.hs (after looking in ./Data/Ref.hs)
> 
> the directory Examples contains examples of using the library; you should
> copy the files from this dir to the root library directory in order to
> compile them
> 
> 
> 

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] installing streams library

2006-05-19 Thread Chad Scherrer
It sounds like Bulat has gotten some impressive I/O speedups with his
Streams library. I'd like to try this out, but I'm having some trouble
installing it. I'm using GHC on Linux.

My first attempt was looking around on this page:
http://www.haskell.org/haskellwiki/Library/Streams

There's a really nice description, but no signs of where to actually get
the library. Eventually (thanks to Google) I tracked down this message:
http://article.gmane.org/gmane.comp.lang.haskell.general/13625

From here I was able to download it, but there's no information regarding
how this needs to be set up. There are directories "Data", "Examples", and
"System", which I assume are supposed to be plugged into the hierarchical
module structure, but how do I do that? I thought this might have something
to do with Cabal (I've not yet used that), but the Cabal manual talks about
a .cabal file, which doesn't exist here.

Does this follow some standard approach that I'm not familiar with? Where
should I look to learn more?

Thanks,

--
Chad Scherrer

"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Re: Haskell Digest, Vol 33, Issue 3

2006-05-09 Thread Chad Scherrer
Phil, thanks for the response.
 
> I was thinking about the dynamic behavior of par, and there's something
> that's a little confusing to me. Am I right in understanding that (x
> `par` y) evaluates to y, and may or may not evaluate x along the way?

The reason that par doesn't necessarily evaluate its 1st argument is that
typical parallel Haskell programs contain vast amounts of potential
parallelism. So rather than create a relatively heavyweight thread and be
forced to administer it, e.g. schedule it for every possible expression
that could be evaluated in parallel, the expression is 'sparked', i.e. a
lightweight action that simply notes that the expression *could* be
evaluated in parallel.

I guess this comes down to how par is used. If par is used where
evaluation would otherwise be entirely lazy, I wouldn't worry too much
about reasoning about whether the sparked thread actually starts or
not. But it would be nice to take code that has had seq introduced in
some places to avoid stack overflows from excessive laziness and be
able to replace seq with something else that would keep the same
strictness, but evaluate in parallel. Of course, this could be done at
a higher level than the cut-and-paste approach.
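
Concretely, the kind of substitution I have in mind looks something like
this (a sketch against the Control.Parallel interface; I haven't measured
it, and it only pays off when run on multiple capabilities):

import Control.Parallel (par, pseq)

-- Sequential version: seq keeps x and y from piling up as thunks.
total :: [Int] -> [Int] -> Int
total xs ys =
  let x = sum xs
      y = sum ys
  in x `seq` (y `seq` (x + y))

-- Parallel variant: spark x, force y on the current thread, then combine.
-- Strictness in x and y is unchanged, but x may now be evaluated in parallel.
totalPar :: [Int] -> [Int] -> Int
totalPar xs ys =
  let x = sum xs
      y = sum ys
  in x `par` (y `pseq` (x + y))

main :: IO ()
main = print (totalPar [1..1000000] [1..1000000])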

While I've got your attention, I wonder if you could help me
understand the current state of GPH, and how it relates to recently
available shared-memory parallelism available in GHC. It seems to me
the ease of parallelism in Haskell could be its "killer app", at least
for the folks I interact with. It would be nice if the same code could
be used on GPH and parallel GHC, but I'm not sure whether that's the
case. Having GHC run in parallel out of the box is great, but if it's
moved to a distributed-memory system, I'm also not sure to what extent
increased latency is taken into account. 
Thanks,
Chad Scherrer"Time flies like an arrow; fruit flies like a banana" -- Groucho Marx
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell