Re: [Haskell-cafe] debugging memory corruption

2012-12-02 Thread Evan Laforge
Thanks for the response.

On Sat, Dec 1, 2012 at 5:23 PM, Alexander Kjeldaas
alexander.kjeld...@gmail.com wrote:

 What I've mostly done in similar circumstances (jni)

 1. Create an interface (virtual functions or template) for the FFI in C++
 that covers everything you use. Then create one test implementation and one
 real implementation. The test implementation must allocate resources
 whenever the real FFI does so. Doing memory allocation works. This makes it
 possible to test all your FFI in C++ using valgrind.

If I understand correctly, this sounds like what I was talking about,
i.e. to stub out the C++ side and drive that from haskell to try to
repro.  That way I don't have to have windows popping up and do the
simulation at the level of mouse clicks.  The danger is that it turns
out to be lots of work to implement, but still somehow doesn't
reproduce the problem.  That could happen if the bug is in C++, but
only turns up during manual manipulation.

Or maybe you're talking about the other way around, stub out the
haskell and replace it with C++ and then run that in valgrind?  That
seems unlikely to be helpful, because if the bug is in the haskell FFI
code then rewriting that all in C++ is just going to replace it with
possibly also buggy C++ code.

It seems to me like valgrind just plain doesn't work for haskell,
maybe because the ghc runtime uses its own allocator?  So if the bug
is in haskell I can't find it with valgrind.  If the bug is in C++,
well, I already have a pure C++ version (that talks to the C++
interface in a very simplistic way), and it can run under valgrind,
which doesn't turn up any out of bounds errors.

 2. Add tracing support to the real implementation and replay support to the
 test implementation.

I'm not sure this would work, since the whole thing is that the bug is
nondeterministic.  I feel like the only way to get it to come out is
to do a bunch of random stuff for a period of time.  It's likely that
whether it happens or not depends on the memory layout for that
particular run, and as far as I know you can't make that consistent.
Or can you?

 3. Upload to Hackage.

Is the suggestion that people who love debugging hard problems will
swarm out of the woodwork and help me find the problem?  I should be
so lucky :)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Fwd: Conduit and pipelined protocol processing using a threadpool

2012-12-02 Thread Alexander V Vershilov
Forwarding to the mailing list:


Hi, Nicolas.

Just my 2 cents.
You can try to 'break' your server-side application into two parts, a
reader and a writer; then you can have:

main = do
    ch1 <- ... -- inbound channel of type (message, outchannel)
    replicateM n $ forkIO $ sourceTBMChan ch1 $$ loop1
    -- ^^ you can move this code into myApp if you need per-client workers
    runTCPServer config $ \ad -> do
      ch2 <- ... -- outbound channel
      -- now we start 2 threads, a reader and a writer
      bracket (forkIO $ sourceTBMChan ch2 $$ appSink ad)          -- writer: drain ch2 to the client
              (\t -> killThread t >> atomically (closeTBMChan ch2))
              (const $ appSource ad $$ loop2 ch2)                 -- reader: feed client data into loop2
  where
    loop1 = do
      mm <- await
      case mm of
        Just (m, out) -> process m >>= atomically . writeTBMChan out >> loop1
        Nothing       -> return ()
    {- using conduit-stm in loop2 will fail, as it will close the channel
       at the end of the processing -}
    loop2 ch = await >>= \x -> when (isJust x)
                 (writeTBMChan ch (fromJust x, ch) >> loop2 ch)

I have not checked this code, but the same approach works in two projects.
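
For comparison, a self-contained, hedged sketch of the same reader/worker/closing
shape using only stm-chans, with the conduit plumbing left out so the closing
discipline is easy to see; the channel sizes and the doubling "work" are made-up
placeholders, not anything from the projects mentioned above:

import Control.Concurrent (forkIO)
import Control.Concurrent.STM (atomically)
import Control.Concurrent.STM.TBMChan
import Control.Monad (replicateM_)

-- A worker drains the inbound channel until it is closed and empty,
-- pushing results onto the outbound channel.
worker :: TBMChan Int -> TBMChan Int -> IO ()
worker inChan outChan = loop
  where
    loop = do
        mx <- atomically (readTBMChan inChan)   -- Nothing once closed and drained
        case mx of
            Nothing -> return ()
            Just x  -> atomically (writeTBMChan outChan (x * 2)) >> loop

main :: IO ()
main = do
    inChan  <- newTBMChanIO 16
    outChan <- newTBMChanIO 16
    replicateM_ 4 (forkIO (worker inChan outChan))
    mapM_ (atomically . writeTBMChan inChan) [1 .. 10]
    atomically (closeTBMChan inChan)            -- lets the workers terminate
    mapM_ (const (atomically (readTBMChan outChan) >>= print)) [1 .. 10 :: Int]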

--
Alexander Vershilov

On 28 November 2012 09:21, Nicolas Trangez nico...@incubaid.com wrote:
 On Wed, 2012-11-28 at 09:17 +0200, Michael Snoyman wrote:



 On Tue, Nov 27, 2012 at 7:25 PM, Nicolas Trangez
 nico...@incubaid.com wrote:
 Michael,

 On Tue, 2012-11-27 at 17:14 +0200, Michael Snoyman wrote:
  I think the stm-conduit package[1] may be helpful for this
 use case.
  Each time you get a new command, you can fork a thread and
 give it the
  TBMChan to write to, and you can use sourceTBMChan to get a
 source to
  send to the client.


 That's +- what I had in mind. I did find stm-conduit before
 and did try
 to get the thing working using it, but these attempts failed.

 I attached an example which might clarify what I intend to do.
 I'm aware
 it contains several potential bugs (leaking threads etc), but
 that's
 beside the question ;-)

 If only I could figure out what to put on the 3 lines of
 comment I left
 in there...

 Thanks for your help,

 Nicolas



 The issue is that you're trying to put everything into a single
 Conduit, which forces reading and writing to occur in a single thread
 of execution. Since you want your writing to be triggered by a
 separate event (data being available on the Chan), you're running into
 limitations.


 The reason network-conduit provides a Source for the incoming data and
 a Sink for outgoing data is specifically to address your use case. You
 want to take the data from the Source and put it into the Chan in one
 thread, and take the data from the other Chan and put it into the Sink
 in a separate thread. Something like:


 myApp appdata = do
     chan1 <- ...
     chan2 <- ...
     replicateM_ 5 $ forkIO $ worker chan1 chan2
     forkIO $ appSource appdata $$ sinkTBMChan chan1
     sourceTBMChan chan2 $$ appSink appdata


 You'll also want to make sure to close chan1 and chan2 to make sure
 that your threads stop running.

 Thanks, I +- figured that out last night. Only thing left is managing
 the handshake before forking off the workers, if possible (otherwise
 things become very messy IMHO), but ResumableSources etc might be of use
 here.

 Maybe there's a library somewhere in here...

 Thanks a bunch,

 Nicolas



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



--
Best regards,
Alexander Vershilov ( mail-to: alexander.vershi...@gmail.com )



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can not use ST monad with polymorphic function

2012-12-02 Thread Heinrich Apfelmus

David Menendez wrote:

On Thu, Nov 29, 2012 at 7:50 AM, Dmitry Kulagin dmitry.kula...@gmail.comwrote:


Thank you, MigMit!

If I replace your type FoldSTVoid with:
data FoldMVoid = FoldMVoid {runFold :: Monad m => (Int -> m ()) -> m ()}

then everything works magically with any monad!
That is exactly what I wanted, though I still do not quite understand why
wrapping the type solves the problem



Short answer: It's because GHC's type system is predicative.

Basically, quantified types can't be given as arguments to type
constructors (other than ->, which is its own thing). I'm not entirely sure
why, but it apparently makes the type system very complicated from a
theoretical standpoint. By wrapping the quantified type in a newtype, the
argument to IO becomes simple enough not to cause problems.


GHC has an extension -XImpredicativeTypes that lifts this restriction, 
but in my experience, it doesn't work very well. A newtype


 data Foo = Foo { bar :: forall a . baz a }

usually works best.
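
A hedged, self-contained illustration of both points (the names mirror Dmitry's
FoldMVoid but are otherwise made up): once the quantified type lives inside a
newtype field, passing it through IO, or any other type constructor, no longer
needs ImpredicativeTypes.

{-# LANGUAGE RankNTypes #-}

newtype FoldMVoid = FoldMVoid
  { runFold :: forall m. Monad m => (Int -> m ()) -> m () }

-- Returning the wrapped fold from IO is fine; returning the bare type
-- IO (forall m. Monad m => (Int -> m ()) -> m ()) would be rejected
-- without ImpredicativeTypes.
makeFold :: [Int] -> IO FoldMVoid
makeFold xs = return (FoldMVoid (\k -> mapM_ k xs))

main :: IO ()
main = do
    fold <- makeFold [1, 2, 3]
    runFold fold print   -- here the inner monad is instantiated at IO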


Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Design of a DSL in Haskell

2012-12-02 Thread Joerg Fritsch
This is probably a very basic question.

I am working on a DSL that eventually would allow me to say:

import language.cwmwl
main = runCWMWL $ do
eval (isFib::, 1000, ?BOOL)

I have just started to work on the interpreter function runCWMWL, and I wonder
whether it is possible to escape to real Haskell somehow (and how?) either
inside or outside the do-block.

I thought of providing a default wrapper for some required Prelude functions
(such as print) inside my interpreter, but I wonder if there are more elegant
ways to co-locate a DSL and Haskell without falling back to being a normal
library only.

--Joerg
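
A minimal sketch of one common shape for this, assuming the DSL monad can
simply wrap IO; runCWMWL, liftHaskell and evalIsFib below are made-up
illustrative names, not the actual language.cwmwl API. The do-block then
accepts both DSL primitives and lifted plain-Haskell actions:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- The DSL lives in its own monad, here just a newtype over IO.
newtype CWMWL a = CWMWL { unCWMWL :: IO a }
  deriving (Functor, Applicative, Monad)

runCWMWL :: CWMWL a -> IO a
runCWMWL = unCWMWL

-- The escape hatch: splice any plain Haskell IO action into the DSL.
liftHaskell :: IO a -> CWMWL a
liftHaskell = CWMWL

-- A stand-in for one of the DSL's own primitives.
evalIsFib :: Integer -> CWMWL Bool
evalIsFib n = return (n `elem` takeWhile (<= n) fibs)
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = runCWMWL $ do
    ok <- evalIsFib 1000
    liftHaskell (print ok)   -- ordinary Haskell, used inside the DSL do-block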

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design of a DSL in Haskell

2012-12-02 Thread Rustom Mody
On Sun, Dec 2, 2012 at 7:31 PM, Joerg Fritsch frit...@joerg.cc wrote:

 This is probably a very basic question.

 I am working on a DSL that eventually would allow me to say:

 import language.cwmwl

 main = runCWMWL $ do

 eval (isFib::, 1000, ?BOOL)


 I have just started to work on the interpreter-function runCWMWL and I
 wonder whether it is possible to escape to real Haskell somehow (and how?)
 either inside or outside the do-block.

 I thought of providing a default wrapper for some required Prelude
 functions (such as print) inside my interpreter but I wonder if there are
 more elegant ways to co-locate a DSL and Haskell without falling back to
 being a normal library only.

 --Joerg



+1
I am also interested in the DSL-in-Haskell possibilities

[I am assuming Joerg that you're familiar with the basic ideas and
terminology like
http://martinfowler.com/bliki/DomainSpecificLanguage.html and the links
therein]

Rusi

-- 
http://www.the-magus.in
http://blog.languager.org
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Is there any movement/need in new 'base' package and co?

2012-12-02 Thread Евгений Пермяков
It is clear that the current 'core' Haskell packages bear some weight of
legacy and backwards compatibility, and accumulate garbage and problems over
time. Some of the garbage can be dropped, and some has been here for so long
that it can't be dropped without a major fuss.


For example, we have the *Monad* type class, which is hardwired into Haskell
syntax. However, it includes an arbitrary fail method by definition, and not
all monads can sensibly provide that operation. Still, it is almost impossible
to move the method into a separate interface, as every *Monad* instance would
have to be rewritten.
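
As a hedged sketch of the kind of split meant here (the names are illustrative;
base did later adopt roughly this shape as Control.Monad.Fail):

-- A separate class for the monads that really can recover from a failed
-- pattern match, instead of every Monad being forced to define fail.
class Monad m => MonadPatternFail m where
    failPattern :: String -> m a

instance MonadPatternFail Maybe where
    failPattern _ = Nothing

instance MonadPatternFail [] where
    failPattern _ = []

-- Code that needs failure asks for it explicitly.
headMay :: MonadPatternFail m => [a] -> m a
headMay (x:_) = return x
headMay []    = failPattern "headMay: empty list"

main :: IO ()
main = print (headMay [1, 2, 3 :: Int] :: Maybe Int)   -- Just 1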


The lack of a standard packed text/binary string type is also a problem. It is
not so much a problem in itself as a problem of support: different IO packages,
parser tools and so on have to choose which string type to use, which leads to
fragmentation of the library ecosystem.


The hierarchy of numeric classes is also often questioned.

More problem areas can be found if one studies Hackage long enough.

So, the questions arise:
- When will these problems start to ruin the language?
- When, and which, actions are needed to avoid this?
- If a major rewrite is initiated, what problems should it target?
- And how can the transition be made easier?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there any movement/need in new 'base' package and co?

2012-12-02 Thread Stephen Tetley
Haskell 2010 avoided library revisions - 12 years had elapsed since
the last language definition and updating the language was deemed the
priority. There have been suggestions on the Libraries list that the
next major language revision should also look at the core libraries.



On 2 December 2012 18:59, Евгений Пермяков permea...@gmail.com wrote:
 So, the questions arise:
 - When will these problems start to ruin the language?
 - When, and which, actions are needed to avoid this?
 - If a major rewrite is initiated, what problems should it target?
 - And how can the transition be made easier?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Design of a DSL in Haskell

2012-12-02 Thread Joerg Fritsch
Rusi,

I have read Fowler's book (which focuses on Java, by the way) and could not
find the answer there; I think it is a typical textbook.
I think this is a good start, by the way:
http://www.cse.chalmers.se/edu/year/2011/course/TIN321/lectures/bnfc-tutorial.html

--Joerg


On Dec 2, 2012, at 5:45 PM, Rustom Mody wrote:

 On Sun, Dec 2, 2012 at 7:31 PM, Joerg Fritsch frit...@joerg.cc wrote:
 This is probably a very basic question.
 
 I am working on a DSL that eventually would allow me to say:
 
 
 import language.cwmwl
 main = runCWMWL $ do
 eval (isFib::, 1000, ?BOOL)
 
 I have just started to work on the interpreter-function runCWMWL and I wonder 
 whether it is possible to escape to real Haskell somehow (and how?) either 
 inside or outside the do-block.
 
 I thought of providing a default wrapper for some required Prelude functions
 (such as print) inside my interpreter but I wonder if there are more elegant
 ways to co-locate a DSL and Haskell without falling back to being a normal
 library only.
 
 --Joerg
 
 
 
 +1
 I am also interested in the DSL-in-Haskell possibilities
 
 [I am assuming Joerg that you're familiar with the basic ideas and 
 terminology like
 http://martinfowler.com/bliki/DomainSpecificLanguage.html and the links 
 therein]
 
 Rusi
 
 -- 
 http://www.the-magus.in
 http://blog.languager.org
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to incrementally update list

2012-12-02 Thread Albert Y. C. Lai

On 12-11-30 01:16 PM, Mark Thom wrote:

Is there a paper or other single resource that will help me thoroughly
understand non-strictness in Haskell?


See my http://www.vex.net/~trebla/haskell/lazy.xhtml

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Help optimize fannkuch program

2012-12-02 Thread Branimir Maksimovic

Well, playing with Haskell I have literally translated my C++ program
http://shootout.alioth.debian.org/u64q/program.php?test=fannkuchredux&lang=gpp&id=3
and got decent performance, but not that good in comparison with C++: on my
machine the Haskell version runs in 52 seconds while the C++ one takes 30
seconds. (There is a Haskell entry that is fastest and on par with C++, but
unfortunately it does not run on the test machine:
http://shootout.alioth.debian.org/u64q/program.php?test=fannkuchredux&lang=ghc&id=3)
There is something I am missing, since the programs are identical. As with
previous entries, you gurus here helped a lot with both help and learning
experience. I simply love Haskell ;) I plan to contribute this program, as it
is much faster than the currently running entry
http://shootout.alioth.debian.org/u64q/program.php?test=fannkuchredux&lang=ghc&id=2
even though that one is multithreaded and mine is not.

This is the program:
{-# LANGUAGE CPP, BangPatterns #-}
{-  The Computer Language Benchmarks Game
    http://shootout.alioth.debian.org/
    contributed by Branimir Maksimovic
-}
import System.Environment
import Text.Printf
import Data.Bits

import qualified Data.Vector.Unboxed.Mutable as VM
import qualified Data.Vector.Generic.Mutable as VG
import qualified Data.Vector.Unboxed as V

main = do
    n <- getArgs >>= readIO . head
    (checksum, maxflips) <- fannkuch n
    printf "%d\nPfannkuchen(%d) = %d\n" checksum n maxflips

fannkuch n = do
    !perm  <- V.unsafeThaw $ V.fromList [1..n]
    !tperm <- VG.new n
    !cnt   <- VG.replicate n 0
    let loop :: Int -> Int -> Int -> IO (Int, Int)
        loop !c !m !pc = do
            !b <- next_permutation perm n cnt
            if b == False
                then return (c, m)
                else do
                    VM.unsafeCopy tperm perm
                    !flips <- count_flips tperm 0
                    loop (c + (if pc .&. 1 == 0 then flips else -flips))
                         (max m flips)
                         (pc + 1)
    r <- loop 0 0 1
    return r

next_permutation :: VM.IOVector Int -> Int -> VM.IOVector Int -> IO Bool
next_permutation !perm !n !cnt = do
    !i <- loop 1
    if i >= n
        then return False
        else do
            !v <- VM.unsafeRead cnt i
            VM.unsafeWrite cnt i (v + 1)
            return True
  where
    loop :: Int -> IO Int
    loop !i
        | i < n = do
            !tmp <- VM.unsafeRead perm 0
            let rotate :: Int -> IO ()
                rotate !j =
                    if j >= i
                        then do
                            VM.unsafeWrite perm i tmp
                            return ()
                        else do
                            !v <- VM.unsafeRead perm (j + 1)
                            VM.unsafeWrite perm j v
                            rotate (j + 1)
            rotate 0
            !v <- VM.unsafeRead cnt i
            if v >= i
                then do
                    VM.unsafeWrite cnt i 0
                    loop (i + 1)
                else return i
        | otherwise = return i

count_flips :: VM.IOVector Int -> Int -> IO Int
count_flips !tperm !flips = do
    !f <- VM.unsafeRead tperm 0
    if f == 1
        then return flips
        else do
            VG.reverse $ VM.unsafeSlice 0 f tperm
            count_flips tperm (flips + 1)


  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Vector mysteries

2012-12-02 Thread Fixie Fixie
Hi

I am trying to implement Show for a storable-mutable-vector.

Does anyone have a clue why the type-system is killing me :-)

Code is below...

Cheers

Felix



{-# LANGUAGE ExistentialQuantification #-}

import Control.Monad (liftM2)
import qualified Data.Vector.Unboxed as V
import qualified Data.Vector.Unboxed.Mutable as MV
import Data.Vector.Storable.Mutable
import GHC.Prim (RealWorld)
import Control.Monad.Primitive
import Control.Monad
import qualified Data.Vector as VEC
import qualified Data.Vector.Generic.Mutable as GM
import Data.Int
import Data.Typeable.Internal
import Data.Primitive

data Row = forall m s. (Storable s, MV.Unbox s, Prim s, PrimMonad m) =>
    Row (MV.MVector (PrimState m) s)

instance Show Row where
    show (Row row) = do
                    let xx = MV.read row 0
                    "Done"

main :: IO ()
main = do
    print "Done"

I get this error:

    Could not deduce (PrimState m ~ PrimState m0)
    from the context (Storable s, MV.Unbox s, Prim s, PrimMonad m)
      bound by a pattern with constructor
                 Row :: forall (m :: * -> *) s.
                        (Storable s, MV.Unbox s, Prim s, PrimMonad m) =>
                        MV.MVector (PrimState m) s -> Row,
               in an equation for `show'
    NB: `PrimState' is a type function, and may not be injective
    Expected type: MV.MVector (PrimState m0) s
      Actual type: MV.MVector (PrimState m) s
    In the first argument of `MV.read', namely `row'
    In the expression: MV.read row 0
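
One hedged way around this, as a sketch rather than a definitive fix (Row' and
showRow are made-up names): pin the state token to RealWorld so there is no
ambiguous PrimState m left, and perform the read in IO instead of inside a pure
Show instance.

{-# LANGUAGE ExistentialQuantification #-}

import qualified Data.Vector.Unboxed.Mutable as MV
import Control.Monad.Primitive (RealWorld)

data Row' = forall s. (MV.Unbox s, Show s) => Row' (MV.MVector RealWorld s)

showRow :: Row' -> IO String
showRow (Row' row) = do
    x <- MV.read row 0          -- fine here: the state token is RealWorld, so m ~ IO
    return ("Row starting with " ++ show x)

main :: IO ()
main = do
    v <- MV.replicate 3 (42 :: Int)
    showRow (Row' v) >>= putStrLn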
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to incrementally update list

2012-12-02 Thread Mark Thom
Thanks Albert, I believe that's the second time you've helped me this
weekend. I'm meiji11 in #haskell.

Cheers.

On Sun, Dec 2, 2012 at 1:34 PM, Albert Y. C. Lai tre...@vex.net wrote:

 On 12-11-30 01:16 PM, Mark Thom wrote:

 Is there a paper or other single resource that will help me thoroughly
 understand non-strictness in Haskell?


 See my 
 http://www.vex.net/~trebla/haskell/lazy.xhtml


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to incrementally update list

2012-12-02 Thread Mark Thom
And thanks to everyone for the links and other suggestions, of course..

On Sun, Dec 2, 2012 at 5:09 PM, Mark Thom markjordant...@gmail.com wrote:

 Thanks Albert, I believe that's the second time you've helped me this
 weekend. I'm meiji11 in #haskell.

 Cheers.


 On Sun, Dec 2, 2012 at 1:34 PM, Albert Y. C. Lai tre...@vex.net wrote:

 On 12-11-30 01:16 PM, Mark Thom wrote:

 Is there a paper or other single resource that will help me thoroughly
 understand non-strictness in Haskell?


 See my 
 http://www.vex.net/~trebla/haskell/lazy.xhtml


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [ANN] xmobar 0.16 released

2012-12-02 Thread Jose A. Ortega Ruiz

I've just released xmobar 0.16, the lightweight text-based system
monitor written in Haskell.  See http://projects.haskell.org/xmobar, and
full release notes below.

As always, thanks a bunch to the many contributors, who keep helping me
improve xmobar and make it more and more useful.  The continuous
flow of patches and suggestions is one of the nicest things about hacking
on and maintaining xmobar.

This is a somewhat risky release, because some users are reporting
problems with GHC 7.6.1 and xmonad (see links below), but there are so
many new features and fixes, and happy users of the unstable version,
that i thought a release was in order.  Please, do not hesitate to
complain about problems you might encounter: i'll do my best to fix them
in subsequent releases.

Thanks!

## Version 0.16 (Dec 3, 2012)

_New features_

  - New monitor `AutoMPD`, which uses asynchronous events to display
MPD status (thanks to Ben Boeckel).
  - New monitor `BufferedPipeReader` displaying data from multiple
pipes (thanks to Jochen Keil).
  - New monitor `DynNetwork`, which detects the active interface
automatically, by Reto Hablützel
  - New monitor, `Locks`, displaying the status of lock keys, by
Patrick Chilton.
  - Extension for DBUS signal handling (Jochen Keil)
  - Hide/Reveal: one can send signals to xmobar and make it (un)hide
itself (Jochen again).
  - `PipeReader`'s default text is now configurable, by Reto Hablützel.
  - Dependencies updated to latest mtl and libmpd (thanks to Sergei
Trofimovich).
  - Dependencies on the deprecated dbus-core removed in favour of
dbus 0.10 (thanks to Jochen Keil).
  - MPris2 now includes genre and composer among its fields.

_Bug fixes_

  - `DiskIO` now can report overall activity in all partitions of a device
which is not mounted itself (e.g., sda when sda1, sda3, etc. are
the mounted partitions).  Thanks to John Soros. See [github #73].
  - `DiskU`, the disk usage monitor, works again correctly on Linux,
instead of randomly crashing every now and then, and reporting
wrong used size.
  - When using antialiased fonts, we were causing a memory leak in the
X server by repeatedly allocating colors that, apparently, the
server doesn't know how to get rid of (even when told so!).  We're
caching them now and X server memory doesn't grow.
  - Compilation errors and warnings with GHC 7.6 removed (thanks to
Raghavendra D Prabhu for his reports in [github #71]).

_Known problems_

Some users have reported problems with xmobar compiled with GHC 7.6 in
ArchLinux: see [github #78] and pointers therein.  Please, send
reports of any problems or successes in that regard so that we can fix
any remaining issues.  Thanks!

[github #71]: https://github.com/jaor/xmobar/issues/71
[github #73]: https://github.com/jaor/xmobar/issues/73
[github #78]: https://github.com/jaor/xmobar/issues/78

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is anyone working on a sparse matrix library in Haskell?

2012-12-02 Thread wren ng thornton

On 12/1/12 11:58 PM, Kim-Ee Yeoh wrote:

On Sun, Dec 2, 2012 at 10:52 AM, wren ng thornton w...@freegeek.org wrote:

My goal for all this is in setting up the type system, not performance.
I figure there are other folks who know and care a lot more about the
numerical tricks of giving the actual implementations.


You've got my support -- old-school optimizations, numerical, compiler, or
otherwise, are deadly boring. Death to them, I say! If we don't explore
uncharted waters, who will?


Well, there are interesting things in optimization [1]; it's just that 
that's not my main area and I have few enough round tuits as it is. I 
also don't spend much time thinking about hardware, but I'm terribly 
glad there're other folks who really care about it.



[1] For example, while matrix multiplication is associative, how exactly 
you associate things has a major impact on performance. 
Performance-minded compilers for linear algebra thus choose how to 
associate things by running an algorithm which is essentially the same 
as the chart-parsing algorithms in NLP. As an NLPer, I think that's 
awesome; and since I'm not sure if anyone else has made that connection 
before, it'd be nice to see what each side could learn from the other. 
One thing I'd like to get out of the type classes I'm working on is the 
ability to define a DSL which allows this sort of optimization.
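
As a hedged aside illustrating footnote [1] (not from the original post): the
optimal association is found with the classic matrix-chain-ordering dynamic
programme, whose triangular table over spans is essentially the chart in
CKY-style chart parsing. chainCost and dims below are made-up names.

import qualified Data.Map as M

-- dims [d0, d1, ..., dn] describes a chain A1 (d0 x d1), ..., An (d(n-1) x dn).
-- chainCost returns the minimal number of scalar multiplications over all
-- parenthesisations; the table over spans (i, j) is the "chart".
chainCost :: [Int] -> Int
chainCost dims = table M.! (1, n)
  where
    n = length dims - 1
    d i = dims !! i
    -- Lazily filled chart: entry (i, j) is the best cost for Ai .. Aj.
    table = M.fromList [ ((i, j), cost i j) | i <- [1 .. n], j <- [i .. n] ]
    cost i j
      | i == j    = 0
      | otherwise = minimum
          [ table M.! (i, k) + table M.! (k + 1, j) + d (i - 1) * d k * d j
          | k <- [i .. j - 1] ]

main :: IO ()
main = print (chainCost [10, 100, 5, 50])   -- 7500: (A1 A2) A3 beats A1 (A2 A3)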


--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is anyone working on a sparse matrix library in Haskell?

2012-12-02 Thread Mark Flamer
Thanks to all for the feedback. As I investigate the structures for
organizing a library of sparse matrix representations a bit more and look
into the repa 3 library, I can't help but wonder if these sparse matrix types
could just be additional instances of Source and Target in repa. Does anyone
know of any reason why this would be a bad idea? I see that the lib was
designed to be extended with new matrix representations. Just a thought.



--
View this message in context: 
http://haskell.1045720.n5.nabble.com/Is-anyone-working-on-a-sparse-matrix-library-in-Haskell-tp5721452p5721679.html
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-02 Thread Clark Gaebel
Hello cafe,

I've recently started learning about CUDA and heterogeneous programming, and
have been using accelerate [1] to help me out. Right now, I'm running into
trouble in that I can't call parallel code from sequential code. Turns out
GPUs aren't exactly like Repa =P.

Here's what I have so far:

import qualified Data.Array.Accelerate as A
import Data.Array.Accelerate ( (:.)(..)
 , Acc
 , Vector
 , Scalar
 , Elt
 , fold
 , slice
 , constant
 , Array
 , Z(..), DIM1, DIM2
 , fromList
 , All(..)
 , generate
 , lift, unlift
 , shape
 )
import Data.Array.Accelerate.Interpreter ( run )

dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys

type Matrix a = Array DIM2 a

getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
getRow n mat = slice mat . constant $ Z :. n :. All

-- Naive matrix multiplication:
--
-- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
    \ix ->
      let (Z :. i :. j) = unlift ix
       in getRow i a `dotP` getRow j b
  where
    b = A.transpose b' -- I assume row indexing is faster than column indexing...
    (Z :. nrows :.   _  ) = unlift $ shape a
    (Z :.   _   :. ncols) = unlift $ shape b


This, of course, gives me errors right now because I'm calling getRow and
dotP from within the generation function, which expects Exp[ression]s, not
Acc[elerated computation]s.

So maybe I need to replace that line with an inner for loop? Is there an
easy way to do that with Accelerate?

Thanks for your help,
  - Clark

[1] http://hackage.haskell.org/package/accelerate
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] To my boss: The code is cool, but it is about 100 times slower than the old one...

2012-12-02 Thread Ivan Salazar
I see.

I read some chapters from Purely Functional Data Structures when I was in
college in order to understand some tree algorithms, but not the whole book.
Do you think that could help me to understand performance problems with
code (poorly) written in Haskell?

From reading your post, I can guess the book is a good start, but what
about Haskell/GHC specific hacks?
Is there a good written source for gathering the required knowledge?

For example, I remember I struggled a lot with Arrays once, and just days
after digging I found that Arrays are a no-no in Haskell. That was before
finding this mailing list, though.

On Dec 1, 2012 9:18 PM, wren ng thornton w...@freegeek.org wrote:

 On 11/29/12 2:17 PM, Ivan Salazar wrote:

 The bad side is that direct translation of algorithms are almost always
 very slow and the work needed to make them perform is very mind bending.


 Indeed. The thing is, all algorithms make (implicit) assumptions about
the cost model of the underlying language. The problem comes about because
the assumptions made in most algorithms are not valid in Haskell. This
isn't just an issue of laziness; the entire computational model of Haskell
(e.g., STG) is radically different from imperative languages.

 For example, in many cases just passing arguments around (a la the State
monad) is much faster than using ST and explicit mutation. GHC does a lot
of clever code reorganization, but mutation breaks countless purity laws
and so it inhibits the application of these optimizations. Unfortunately,
the vast majority of algorithms out there assume you're working in a
language where mutation is essentially free. I'm not talking about any
significant theoretical issues about using mutation or not; I'm just
talking about the implicit assumptions that algorithm implementers make. If
you believe mutation is essentially free then it makes sense to create an
initial object and then incrementally mutate it this way and that until you
get to where you want. But, a great deal of the time there's nothing
actually stopping us from gathering all the information and allocating the
final result directly. However, if you're not used to thinking in those
terms, then the conceptual reorganization required to see how to gather all
the data at once is indeed mind-bending.

 --
 Live well,
 ~wren


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
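
As a hedged, minimal illustration of wren's point above (not from the thread):
the same sum written once with an accumulator threaded as an argument and once
with ST mutation. The claim is only about the shapes involved; the first is a
plain strict loop that GHC optimises readily.

{-# LANGUAGE BangPatterns #-}
import Control.Monad.ST
import Data.STRef

-- Accumulator passed as an argument: no mutation, just a strict loop.
sumAccum :: [Int] -> Int
sumAccum = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

-- The "allocate, then mutate in place" habit, via ST.
sumST :: [Int] -> Int
sumST xs = runST $ do
    ref <- newSTRef 0
    mapM_ (\x -> modifySTRef' ref (+ x)) xs
    readSTRef ref

main :: IO ()
main = print (sumAccum [1 .. 1000000], sumST [1 .. 1000000])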

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Naive matrix multiplication with Accelerate

2012-12-02 Thread Trevor L. McDonell
Hi Clark,

The trick is that most accelerate operations work over multidimensional arrays, 
so you can still get around the fact that we are limited to flat 
data-parallelism only.

Here is matrix multiplication in Accelerate, lifted from the first Repa paper 
[1].


import Data.Array.Accelerate as A

type Matrix a = Array DIM2 a

matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
matMul arr brr
  = A.fold (+) 0
  $ A.zipWith (*) arrRepl brrRepl
  where
    Z :. rowsA :. _ = unlift (shape arr) :: Z :. Exp Int :. Exp Int
    Z :. _ :. colsB = unlift (shape brr) :: Z :. Exp Int :. Exp Int

    arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All) arr
    brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All) (A.transpose brr)


If you use github sources rather than the hackage package, those intermediate 
replicates will get fused away.
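
A small, hedged usage sketch (not part of Trevor's message), repeating the
definitions above so it stands alone, with arbitrary sizes, and evaluated with
the reference interpreter:

import Data.Array.Accelerate             as A
import Data.Array.Accelerate.Interpreter ( run )

type Matrix a = Array DIM2 a

matMul :: (IsNum e, Elt e) => Acc (Matrix e) -> Acc (Matrix e) -> Acc (Matrix e)
matMul arr brr
  = A.fold (+) 0
  $ A.zipWith (*) arrRepl brrRepl
  where
    Z :. rowsA :. _ = unlift (shape arr) :: Z :. Exp Int :. Exp Int
    Z :. _ :. colsB = unlift (shape brr) :: Z :. Exp Int :. Exp Int
    arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All) arr
    brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All) (A.transpose brr)

main :: IO ()
main = print (run (matMul (use a) (use b)))   -- prints the 2x2 product
  where
    a = fromList (Z :. 2 :. 3) [1 .. 6] :: Matrix Double   -- 2x3
    b = fromList (Z :. 3 :. 2) [1 .. 6] :: Matrix Double   -- 3x2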


Cheers,
-Trev

 [1] http://www.cse.unsw.edu.au/~chak/papers/KCLPL10.html




On 03/12/2012, at 5:07 PM, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Hello cafe,
 
 I've recently started learning about CUDA and heterogeneous programming, and 
 have been using accelerate [1] to help me out. Right now, I'm running into 
 trouble in that I can't call parallel code from sequential code. Turns out 
 GPUs aren't exactly like Repa =P.
 
 Here's what I have so far:
 
 import qualified Data.Array.Accelerate as A
 import Data.Array.Accelerate ( (:.)(..)
  , Acc
  , Vector
  , Scalar
  , Elt
  , fold
  , slice
  , constant
  , Array
  , Z(..), DIM1, DIM2
  , fromList
  , All(..)
  , generate
  , lift, unlift
  , shape
  )
 import Data.Array.Accelerate.Interpreter ( run )
 
 dotP :: (Num a, Elt a) => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
 dotP xs ys = fold (+) 0 $ A.zipWith (*) xs ys
 
 type Matrix a = Array DIM2 a
 
 getRow :: Elt a => Int -> Acc (Matrix a) -> Acc (Vector a)
 getRow n mat = slice mat . constant $ Z :. n :. All
 
 -- Naive matrix multiplication:
 --
 -- index (i, j) is equal to the ith row of 'a' `dot` the jth row of 'b'
 matMul :: A.Acc (Matrix Double) -> A.Acc (Matrix Double) -> A.Acc (Matrix Double)
 matMul a b' = A.generate (constant $ Z :. nrows :. ncols) $
     \ix ->
       let (Z :. i :. j) = unlift ix
        in getRow i a `dotP` getRow j b
   where
     b = A.transpose b' -- I assume row indexing is faster than column indexing...
     (Z :. nrows :.   _  ) = unlift $ shape a
     (Z :.   _   :. ncols) = unlift $ shape b
 
 
 This, of course, gives me errors right now because I'm calling getRow and 
 dotP from within the generation function, which expects Exp[ression]s, not 
 Acc[elerated computation]s.
 
 So maybe I need to replace that line with an inner for loop? Is there an easy 
 way to do that with Accelerate?
 
 Thanks for your help,
   - Clark
 
 [1] http://hackage.haskell.org/package/accelerate
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe