Re: [Haskell-cafe] Combining Network Descriptions in Reactive.Banana

2012-07-10 Thread wren ng thornton

On 6/27/12 2:57 PM, Alexander Foremny wrote:

Sweet! Thank you very much!

Just out of curiosity: how does this differ from the following, non-compiling
type signature?


library :: forall t. NetworkDescription t (Behavior t String) -> IO ()


The type:

(forall a. F a) -> G

is isomorphic to:

exists a. (F a -> G)

which differs in the obvious way from:

forall a. (F a -> G)



Or, from a game-theoretic perspective: with the first type it's the 
callee that gets to decide which type A is; so the caller only knows 
that such an A exists, but they have no knowledge or control over what 
it is. Whereas with the latter type, it's the caller that gets to choose 
A, and the callee has to deal with it no matter which type it is.
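
To see the game concretely, here is a small compilable sketch (an editorial
illustration, not code from this thread; F and G are just placeholder types):

{-# LANGUAGE RankNTypes #-}

newtype F a = F [a]     -- placeholder structure
type G = Int

-- Rank-2: the callee chooses which 'a' to use; the caller must supply an
-- argument that works at *every* 'a'.
callee :: (forall a. F a) -> G
callee f = let F xs = f :: F Bool   -- the callee instantiates a ~ Bool
           in length xs

-- Prenex: the caller chooses 'a'; the callee must cope with any 'a'.
caller :: forall a. F a -> G
caller (F xs) = length xs

demo :: (G, G)
demo = (callee (F []), caller (F [True, False]))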


--
Live well,
~wren



Re: [Haskell-cafe] Interest in typed relational algebra library?

2012-07-10 Thread oleg

 And yes to first order predicate calculus too!

Just two weeks ago Chung-chieh Shan and I were explaining at NASSLLI
the embedding in Haskell of the higher-order predicate logic with two
base types (so-called Ty2). The embedding supports type-safe
simplification of formulas (which was really needed for our
applications). The embedding is extensible: you can add models and
more constants.

http://okmij.org/ftp/gengo/NASSLLI10/course.html#semantics
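
For readers who have not seen such an embedding, a very rough final-tagless
sketch of the idea (two base types: individuals and truth values) could look
like the following. This is purely illustrative, not the NASSLLI course code;
all of the names are made up:

{-# LANGUAGE EmptyDataDecls #-}

data Entity    -- base type e (individuals); the other base type is Bool

class Ty2 repr where
  -- connectives and quantification over individuals
  neg     :: repr Bool -> repr Bool
  conj    :: repr Bool -> repr Bool -> repr Bool
  forall_ :: (repr Entity -> repr Bool) -> repr Bool
  -- higher-order ingredients: typed abstraction and application
  lam     :: (repr a -> repr b) -> repr (a -> b)
  app     :: repr (a -> b) -> repr a -> repr b
  -- a sample non-logical constant; the class stays open for more
  like    :: repr (Entity -> Entity -> Bool)

-- "everyone likes themselves", well-typed by construction
everyoneLikesSelf :: Ty2 repr => repr Bool
everyoneLikesSelf = forall_ (\x -> app (app like x) x)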




Re: [Haskell-cafe] Connecting Travis CI and hackage

2012-07-10 Thread Alessandro Vermeulen
You have full internet access on the Travis machines. The problem is that
you only have the data from your public repo. You could upload the package
from the command line with cabal, but that means storing your credentials in
the public repository, which is obviously a bad idea.

In short: contact the people from Travis and they will be happy to help
you. Maybe you can store some environment variables in your account that
are made available to the build script.

- Alessandro
On Jul 8, 2012 7:28 PM, Dmitry Malikov malikov@gmail.com wrote:

  But in the meantime, regarding what you suggest here, couldn't it be done
 through tags rather than branches? Tagging a release version could trigger
 testing and, if testing runs fine, upload to hackage.
 Testing should be done on the Travis virtual machine. So this question is not
 about `when' or `where from' the new release should be generated. It's about
 how uploading the new package to Hackage could be done from the Travis
 machine.

 --
 Best regards,
 dmitry malikov
 !





Re: [Haskell-cafe] DPH matrix product

2012-07-10 Thread Manuel M T Chakravarty
Firstly, especially when you are talking about performance, please provide 
detailed information on (a) the versions of the compiler and libraries that you 
used and (b) the command line options that you used for compilation.

Secondly, your function 'transposeP' doesn't make for a good nested 
data-parallel program. I haven't looked at the generated code, but I wouldn't 
be surprised if it doesn't optimise very well.

The core benefit of nested data parallelism is that the sub-arrays in a nested 
array of arrays can be of varying size leading to irregular parallelism. 
However, that flexibility comes at a price, namely that it is a fairly 
inefficient representation for *rectangular arrays*, such as regular 
two-dimensional matrices (as in your example). It shouldn't be quite as 
inefficient as what you report, but it will never be competitive with a 
dedicated regular representation.

Hence, for code such as yours, we recommend using our Repa library: 
http://hackage.haskell.org/package/repa

It generates very fast code for regular array problems, see also 
http://www.cse.unsw.edu.au/~chak/papers/LCKP12.html
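
For a flavour of the regular-array style, here is a rough Repa 3 sketch of the
same computation (illustrative only and untested here; repa-algorithms also
ships a ready-made matrix multiply):

import Data.Array.Repa as R

-- Dense matrix product on flat, unboxed, rank-2 arrays: replicate 'a' along a
-- new colsB axis and the transpose of 'b' along a new rowsA axis, multiply
-- elementwise, and sum over the innermost (inner-product) dimension.
mmult :: Monad m
      => Array U DIM2 Double -> Array U DIM2 Double
      -> m (Array U DIM2 Double)
mmult a b = R.sumP (R.zipWith (*) aRepl bRepl)
  where
    Z :. rowsA :. _     = R.extent a
    Z :. _     :. colsB = R.extent b
    aRepl = R.extend (Any :. All :. colsB :. All) a
    bRepl = R.extend (Any :. rowsA :. All :. All) (R.transpose b)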

Manuel


mblanco blanco...@gmail.com:
 Hi, I'm trying to implement a matrix product example using DPH. This is the 
 code:
 ---
 type MMultType = Double
 type Matrix = [:[:MMultType:]:]
 type MVector = [:MMultType:]
 type Matrix_wrapper = PArray (PArray MMultType)
 
 {-# NOINLINE matMult_wrapper #-}
 matMult_wrapper :: Matrix_wrapper -> Matrix_wrapper -> Matrix_wrapper
 matMult_wrapper mA mB = toPArrayP (mapP toPArrayP
   (matMult (fromNestedPArrayP mA) (fromNestedPArrayP mB)))
 
 matMult :: Matrix -> Matrix -> Matrix
 matMult mA mB = mapP (\row -> mapP (\col -> dotp row col) (transposeP mB)) mA
 
 dotp :: MVector -> MVector -> MMultType
 dotp row col = D.sumP (zipWithP (D.*) row col)
 
 transposeP :: Matrix -> Matrix
 transposeP m =
   let
     h  = lengthP m
     w  = lengthP (m !: 0)
     rh = I.enumFromToP 0 (h I.- 1)
     rw = I.enumFromToP 0 (w I.- 1)
   in
     if h I.== 0 then [: :]
     else mapP (\y -> mapP (\x -> m !: x !: y) rh) rw
 ---
 
 My problem is at execution time: on matrices of size 300*300 the program does 
 finish (although it is very slow), but on 700*700 it consumes GBs of RAM 
 until the process is aborted.
 
 In the paper "Work Efficient Higher-Order Vectorisation" it is explained that 
 a work-complexity problem (which involved unnecessary array replication) was 
 recently addressed. So at first I thought the code implementation related to 
 the paper had not been uploaded to Hackage. But as I understand it, it must 
 have been, as that seems to be the motivation for the dph-lifted-vseg package.
 
 Does anybody notice the problem with the example, or whether the problem is 
 related to the subject treated in the paper?
 
 Thanks in advance!


Re: [Haskell-cafe] DPH matrix product

2012-07-10 Thread Mauro Blanco
Thanks for both answers

I have used Repa with the newer interface for the same example, but I
wanted to have another example using DPH. I know Repa is better suited for
regular representations, but I wanted to express the same program in DPH,
where I don't have to worry about nested parallel computations.

transposeP did not seem to be the problem, as executing only transposeP
on big matrices generated no memory issues. But something like this (on
square matrices) still has memory problems:
matMult :: Matrix -> Matrix -> Matrix
matMult mA mB = mapP (\row -> mapP (\col -> dotp row col) mB) mA

dotp :: MVector -> MVector -> MMultType
dotp row col = D.sumP (zipWithP (D.*) row col)

Later today (or tomorrow) I will post the exact OS, GHC and library versions,
as well as the command line options and execution information for the
simplified example.

Thanks again

-- 
Mauro Blanco


[Haskell-cafe] Any good tool to write Haskell documents including tests?

2012-07-10 Thread Takayuki Muranushi
Hello,

I have been a forgetful person, and lots of things I have only
pretended to understand. I want to change this. So, to educate myself,
I'd like to write documented tests for many libraries I meet, and also
publish them onto the web so that others may find them useful or find
mistakes for me. OK, blog articles are good, but they have no (forced)
tests.

Maybe some of you have practiced this or are developing such tools. I see
some candidate tools, too. What is your suggestion for this?

I have tried doctest, because I liked its read–eval–print loop (REPL) style.

https://github.com/nushio3/practice/tree/master/control-monad-loop

It produces HTML, as attached to this mail. It's pretty, but I'd like
to have more control over the HTML.
Maybe Gitit + Doctest in Pandoc is a good alternative?
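
For readers who have not seen the doctest style: the ">>>" lines inside a
Haddock comment are executed by doctest and the lines below them are compared
against the actual output. A minimal made-up example:

-- | One Collatz step.
--
-- >>> collatz 6
-- 3
-- >>> collatz 7
-- 22
collatz :: Int -> Int
collatz n
  | even n    = n `div` 2
  | otherwise = 3 * n + 1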



I'd also like to know a good way to publish small executable
examples with automated dependency installation. Can you point
out any problems with the following Makefile + cabal setup? What are better
ways to do this?

https://github.com/nushio3/practice/blob/master/control-monad-loop/Loop.cabal
https://github.com/nushio3/practice/blob/master/control-monad-loop/Makefile

Thanks in advance,

-- 
Takayuki MURANUSHI
The Hakubi Center for Advanced Research, Kyoto University
http://www.hakubi.kyoto-u.ac.jp/02_mem/h22/muranushi.html




Re: [Haskell-cafe] Any good tool to write Haskell documents including tests?

2012-07-10 Thread Strake
On 10/07/2012, Takayuki Muranushi muranu...@gmail.com wrote:
 Hello,

 I have been a forgetful person, and lots of things I have only
 pretended to understand. I want to change this. So, to educate myself,
 I'd like to write documented tests for many libraries I meet, and also
 publish them onto the web so that others may find them useful or find
 mistakes for me. OK, blog articles are good, but they have no (forced)
 tests.

 Maybe some of you have practiced this or developping such tools. I see
 some candidate tools, too. What is your suggestion for this?

I like SmallCheck myself. Define testable properties of the library,
and SmallCheck will verify them for all cases to a given depth. The
tests can be documented with Haddock like any Haskell code.

http://hackage.haskell.org/package/smallcheck

Another similar option is QuickCheck, which generates random test cases
rather than enumerating them.

http://hackage.haskell.org/package/QuickCheck
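
For a flavour of both, here is a tiny made-up property checked with each
driver (smallCheck's exhaustive depth bound versus quickCheck's random cases):

import Test.QuickCheck (quickCheck)
import Test.SmallCheck (smallCheck)

-- | Reversing twice gives back the original list.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

main :: IO ()
main = do
  smallCheck 5 prop_reverseInvolutive  -- exhaustively, up to depth 5
  quickCheck prop_reverseInvolutive    -- 100 random cases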

Cheers,
Strake



[Haskell-cafe] hiring haskell programmer.

2012-07-10 Thread Vagif Verdi
Hello. 
My company is looking to hire a Haskell developer. Southern California 
(San Dimas), full-time job, local only (no telecommute). 

We use Yesod (a Haskell web framework) for internal web applications and 
web services, and Compojure (a Clojure web framework) for the 
customer-facing web site. 

All development will be either with Haskell (if libraries permit) or 
Clojure on the JVM. 

If interested, please contact me. 

Regards, 
Vagif Verdi 


[Haskell-cafe] Memory corruption issues when using newAlignedPinnedByteArray, GC kicking in?

2012-07-10 Thread Nicolas Trangez
All,

While working on my vector-simd library, I noticed that the memory I'm
using somehow gets corrupted/overwritten. I reworked this into a test case, and
would love to get some help on how to fix this.

Previously I used some custom FFI calls to C to allocate aligned memory,
which yields correct results, but this has a significant (+- 10x)
performance impact on my benchmarks. Later on I discovered the
newAlignedPinnedByteArray# function, and wrote some code using this.

Here's what I did in the test case: I created an MVector instance, with
the exact same implementation as vector's
Data.Vector.Storable.Mutable.MVector instance, except for basicUnsafeNew
where I pass one more argument to mallocVector [1].

I also use 3 different versions of mallocVector (depending on
compile-time flags):

mallocVectorOrig [2]: This is the upstream version, discarding the
integer argument I added.

Then here's my first attempt, very similar to the implementation of
mallocPlainForeignPtrBytes [3] at [4] using GHC.* libraries.

Finally there's something similar at [5] which uses the 'primitive'
library.

The test case creates vectors of increasing size, then checks whether
they contain the expected values. For the default implementation this
works correctly. For both others it fails at some random size, and the
values stored in the vector are not exactly what they should be.

I don't understand what's going on here. I suspect I lack a reference
(or something along those lines) so GC kicks in, or maybe the buffer
gets relocated, whilst it shouldn't.

Basically I'd need something like

GHC.ForeignPtr.mallocPlainAlignedForeignPtrBytes :: Int -> Int -> IO (ForeignPtr a)
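
For concreteness, a sketch of the shape of such a function, modelled on
GHC.ForeignPtr.mallocPlainForeignPtrBytes (roughly what my attempt at [4]
does; shown only to illustrate the signature I'm after, not as a verified fix
for the corruption):

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts       (Int(I#), newAlignedPinnedByteArray#,
                       byteArrayContents#, unsafeCoerce#)
import GHC.ForeignPtr (ForeignPtr(..), ForeignPtrContents(PlainPtr))
import GHC.IO         (IO(..))

-- Allocate 'size' pinned bytes aligned to 'align'; the ForeignPtr carries no
-- finalizer, the pinned byte array itself keeps the memory alive.
mallocPlainAlignedForeignPtrBytes :: Int -> Int -> IO (ForeignPtr a)
mallocPlainAlignedForeignPtrBytes (I# size) (I# align) = IO $ \s ->
  case newAlignedPinnedByteArray# size align s of
    (# s', mbarr# #) ->
      (# s', ForeignPtr (byteArrayContents# (unsafeCoerce# mbarr#))
                        (PlainPtr mbarr#) #)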

Thanks,

Nicolas

[1] https://gist.github.com/3084806#LC37
[2] https://gist.github.com/3084806#LC119
[3]
http://hackage.haskell.org/packages/archive/base/latest/doc/html/src/GHC-ForeignPtr.html
[4] https://gist.github.com/3084806#LC100
[5] https://gist.github.com/3084806#LC81






Re: [Haskell-cafe] Any good tool to write Haskell documents including tests?

2012-07-10 Thread Takayuki Muranushi
Thank you for your quick response!

I have used QuickCheck before, but not SmallCheck. Thank you! Then I'll
try to build such tests into Gitit.


-- 
Takayuki MURANUSHI
The Hakubi Center for Advanced Research, Kyoto University
http://www.hakubi.kyoto-u.ac.jp/02_mem/h22/muranushi.html



Re: [Haskell-cafe] Memory corruption issues when using newAlignedPinnedByteArray, GC kicking in?

2012-07-10 Thread Thomas Schilling
I think you should ask this question on the glasgow-haskell-users
mailing list: http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




-- 
Push the envelope. Watch it bend.



Re: [Haskell-cafe] Any good tool to write Haskell documents including tests?

2012-07-10 Thread Michael Orlitzky
I know this isn't what you asked for, but: please submit these tests
upstream when you're done. The lack of basic examples for library
functions is a huge barrier to entry for almost every library on Hackage.

I think it would be a big help -- the fact that the code actually
executes and can be checked automatically makes it easy for the
maintainer to include them.



[Haskell-cafe] ANN: unification-fd 0.8.0

2012-07-10 Thread wren ng thornton


-- unification-fd 0.8.0


The unification-fd package offers generic functions for single-sorted 
first-order structural unification (think of programming in Prolog, or 
of the metavariables in type inference)[1][2]. The library *is* 
sufficient for implementing higher-rank type systems a la [Peyton Jones, 
Vytiniotis, Weirich, Shields], but bear in mind that unification 
variables are the metavariables of type inference--- not the 
type-variables. As of this version, the library is also sufficient for 
implementing (non-recursive) feature structure unification.


An effort has been made to make the package as portable as possible. 
However, because it uses the ST monad and the mtl-2 package it can be 
neither H98 nor H2010. It only uses the following common extensions, 
which should be well supported[3]:


Rank2Types
MultiParamTypeClasses
FunctionalDependencies -- Alas, necessary for type inference
FlexibleContexts   -- Necessary for practical use of MPTCs
FlexibleInstances  -- Necessary for practical use of MPTCs
UndecidableInstances   -- For Show instances due to two-level types



-- Changes (since 0.7.0)


This release is another API-breaking release, though hopefully a minor one. In 
particular, the type of the zipMatch method for Unifiable has changed from:


zipMatch :: t a -> t b -> Maybe (t (a, b))

to:

zipMatch :: t a -> t a -> Maybe (t (Either a (a, a)))

With the new type each of the unification subproblems can be declared as 
fully resolved (Left xy) or as still pending solution (Right (x,y))--- 
whereas the previous type only allowed the latter. This extension is 
necessary for implementing unification of feature structures (e.g., t ~ 
Map k). Given two feature structures which do not fully overlap, we can 
now declare that the non-intersecting parts have been unified and only 
require further unification for the parts which intersect--- whereas 
previously we would've been forced to declare the non-intersecting parts 
as requiring unification with themselves, which will always succeed and 
introduces extra work.


This feature is one I had available in earlier unpublished versions of 
the code, but errantly removed it when simplifying things for publishing 
on Hackage. To fix extant Unifiable instances, just wrap each tuple with 
Right. For those who care about such things, I have not been able to 
discern any noticeable difference in performance between the two 
versions. If you can provide benchmarks demonstrating otherwise, I'd be 
pleased to look at them.



In addition, the Show instance for Fix has been adjusted to use 
showsPrec in lieu of show, correcting some infelicities with the output.




-- Description


The unification API is generic in the type of the structures being 
unified and in the implementation of unification variables, following 
the two-level types pearl of Sheard (2001). This style mixes well with 
Swierstra (2008), though an implementation of the latter is not included 
in this package.


That is, all you have to do is define the functor whose fixed-point is 
the recursive type you're interested in:


-- The non-recursive structure of terms
data S a = ...

-- The recursive term type
type Term = Fix S

And then provide an instance for Unifiable, where zipMatch performs one 
level of equality testing for terms and returns the one-level spine 
filled with pairs of subterms to be recursively checked, or Nothing if 
this level doesn't match. Each subterm can be separately marked as being 
resolved, Left xy, or as requiring further unification, Right(x,y).


class (Traversable t) => Unifiable t where
    zipMatch :: t a -> t a -> Maybe (t (Either a (a, a)))
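
For instance, a toy term functor and its instance under the new type could
look like this (an illustrative sketch, not taken from the package's
documentation; it assumes Unifiable is imported from Control.Unification):

{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
import Data.Foldable       (Foldable)
import Data.Traversable    (Traversable)
import Control.Unification (Unifiable(..))

-- A toy term functor: constants and binary application.
data S a = Con String | App a a
  deriving (Functor, Foldable, Traversable, Show)

instance Unifiable S where
  zipMatch (Con c1)  (Con c2) | c1 == c2 = Just (Con c1)   -- equal leaves: already resolved
  zipMatch (App l r) (App l' r')         = Just (App (Right (l, l')) (Right (r, r')))
  zipMatch _         _                   = Nothing         -- constructor mismatch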

The choice of which variable implementation to use is defined by 
similarly simple classes Variable and BindingMonad. We store the 
variable bindings in a monad, for obvious reasons. In case it's not 
obvious, see Dijkstra et al. (2008) for benchmarks demonstrating the 
cost of naively applying bindings eagerly.


There are currently two implementations of variables provided: one based 
on STRefs, and another based on a state monad carrying an IntMap. The 
former has the benefit of O(1) access time, but the latter is plenty 
fast and has the benefit of supporting backtracking. Backtracking itself 
is provided by the logict package and is described in Kiselyov et al. 
(2005).


In addition to this modularity, unification-fd implements a number of 
optimizations over the algorithm presented in Sheard (2001)--- which is 
also the algorithm presented in Cardelli (1987).


* Their implementation uses path compression, which we retain. Though we 
modify the compression algorithm in order to make sharing 

[Haskell-cafe] Getting a segmentation fault when starting/stopping the RTS, from C, several times.

2012-07-10 Thread Captain Freako
Hi experts,

Should I expect the following C code to run to completion, or am I trying
to do something that was never intended?

Thanks,
-db

C code:

#include <stdio.h>
#include <stdlib.h>
#include "HsFFI.h"

int main()
{
    int argc = 1, i;
    char* argv[] = {"ghcDll", NULL}; // argv must end with NULL
    char** args = argv;

    for (i = 0; i < 10; i++) {
        fprintf(stderr, "Starting up the RTS...\n");
        hs_init(&argc, &args);
        printf("Shutting down the RTS...\n");
        hs_exit();
    }
}

This is what I get, when I compile and run it:

dbanas@dbanas-eeepc:~/prj/haskell/amitool$ make test_rts
rm -f test_rts
ghc -o test_rts -lHSrts -lm -lffi -lrt test_rts.o
dbanas@dbanas-eeepc:~/prj/haskell/amitool$ ./test_rts
Starting up the RTS...
Shutting down the RTS...
Starting up the RTS...
Shutting down the RTS...
Segmentation fault


Re: [Haskell-cafe] Getting a segmentation fault when starting/stopping the RTS, from C, several times.

2012-07-10 Thread Brandon Allbery
On Tue, Jul 10, 2012 at 10:55 PM, Captain Freako capn.fre...@gmail.com wrote:

 Hi experts,

 Should I expect the following C code to run to completion, or am I trying
 to do something that was never intended?


Quoth the Fine Manual (8.2.1.1. Using your own main(),
http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html#using-own-main):

There can be multiple calls to hs_init(), but each one should be matched by
one (and only one) call to hs_exit() [14:
http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html#ftn.id740774].

So this should theoretically work.

-- 
brandon s allbery  allber...@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms


Re: [Haskell-cafe] DPH matrix product

2012-07-10 Thread Mauro Blanco
Hi again.

This is the system information:
- Ubuntu 12.04 32-bit
- Intel® Core™2 Duo CPU T5270 @ 1.40GHz × 2
- 2.9 GiB RAM

GHC version:
- GHC 7.4.1

DPH libraries:
- dph-base-0.6.1.1
- (dph-lifted-base-0.6.1.1)
- (dph-lifted-vseg-0.6.1.2)
- (dph-prim-interface-0.6.1.1)
- (dph-prim-par-0.6.1.1)
- (dph-prim-seq-0.6.1.1)

Compilation flags:
I'm using two combinations of flags, taken from different sources. In both
cases results are identical:
- From https://github.com/ghc/packages-dph: -rtsopts -threaded -fllvm
-optlo-O3 -Odph -fcpr-off -fno-liberate-case -package dph-lifted-vseg
- From dph-examples: -rtsopts -threaded -fllvm -Odph -package
dph-lifted-vseg -fcpr-off -fno-liberate-case -fsimpl-tick-factor=1000

Execution flags:
+RTS -N

Tests:
Computing the product of two 400*400 matrices takes 6.037993 seconds.
Computing the product of two 600*600 matrices yields out of memory
(requested 1728053248 bytes).

DPH code:
-
{-# NOINLINE matMult_wrapper #-}
matMult_wrapper :: Matrix_wrapper -> Matrix_wrapper -> Matrix_wrapper
matMult_wrapper mA mB = toPArrayP (mapP toPArrayP
  (matMult (fromNestedPArrayP mA) (fromNestedPArrayP mB)))

matMult :: Matrix -> Matrix -> Matrix
matMult mA mB = mapP (\row -> mapP (\col -> dotp row col) mB) mA
-- I removed the call to transposeP, so it's no longer an actual matrix product

dotp :: MVector -> MVector -> MMultType
dotp row col = D.sumP (zipWithP (D.*) row col)
-
I'm attaching the files with the full example code.
If any other information is needed, please let me know.

Any help is much appreciated.


[Haskell-cafe] There is some article mentioning that FP is mainly good for mathematical and list programming.

2012-07-10 Thread KC
There is some article mentioning that functional programming is mainly good
for mathematical programming and list programming, because otherwise it is
too cryptic for business and database applications.

Being semi-intelligent, I thought I could reduce what others think of as
cryptic but maybe instead of 18 wheels I have only 17 wheels. :D

-- 
--
Regards,
KC


Re: [Haskell-cafe] Bad interface problem.

2012-07-10 Thread Magicloud Magiclouds
I cleaned out everything, no luck

On Fri, Jul 6, 2012 at 2:14 AM, Albert Y. C. Lai tre...@vex.net wrote:
 On 12-07-03 04:19 AM, Magicloud Magiclouds wrote:

 $ cabal --upgrade-dependencies --enable-documentation
 --force-reinstalls --solver=topdown QuickCheck-2.5
 Test/QuickCheck/All.hs:15:1:
  Bad interface file:

 /home/magicloud/.cabal/lib/template-haskell-2.6.0.0/ghc-7.4.2/Language/Haskell/TH.hi
  Something is amiss; requested module
 template-haskell-2.6.0.0:Language.Haskell.TH differs from name found
 in the interface file template-haskell:Language.Haskell.TH


 I think things are so messed up that it is time to clean out everything. See
 my
 http://www.vex.net/~trebla/haskell/sicp.xhtml#remove

 In fact, time to read the whole article and avoid unsafe re-installs and
 upgrades.




-- 
Dense bamboo does not hinder the flowing water;
high mountains do not block the wild clouds.

And for G+, please use magiclouds#gmail.com.



Re: [Haskell-cafe] Bad interface problem.

2012-07-10 Thread Claude Heiland-Allen

Hi,

On 11/07/12 05:51, Magicloud Magiclouds wrote:
 I cleaned out everything, no luck
 
 On Fri, Jul 6, 2012 at 2:14 AM, Albert Y. C. Lai tre...@vex.net
 wrote:
 On 12-07-03 04:19 AM, Magicloud Magiclouds wrote:
 template-haskell-2.6.0.0:Language.Haskell.TH differs from name
 found in the interface file
 template-haskell:Language.Haskell.TH

You installed a bad template-haskell version.  You can only use a
version corresponding to your ghc version.

I had a similar problem recently.  My solution process was as follows:

1. check which template-haskell version came with my ghc:

$ ghc -V
The Glorious Glasgow Haskell Compilation System, version 7.4.2
$ ghc-pkg list template-haskell
/home/claude/opt/lib/ghc-7.4.2/package.conf.d
   template-haskell-2.7.0.0
/home/claude/.ghc/x86_64-linux-7.4.2/package.conf.d
$

2. make sure to forbid every other version of template-haskell
(because it will break horribly, as you found):

$ cabal install --constraint='template-haskell==2.7.0.0' foo

3. if foo fails to install because it thinks it needs a different
version of template-haskell, try adjusting dependencies in foo.cabal

4. if foo installs and works with the adjusted dependencies, let the
maintainer know

 I think things are so messed up that it is time to clean out
 everything. See my 
 http://www.vex.net/~trebla/haskell/sicp.xhtml#remove
 
 In fact, time to read the whole article and avoid unsafe
 re-installs and upgrades.

It's a good read for sure!  Perhaps it could be updated to add a
problem I ran into recently:

cabal install --solver=modular --avoid-reinstalls sounds perfect, if
sicp.xhtml scared you properly.   But excessively avoiding reinstalls
is bad, as cabal-install seems to install a different allowable
version instead.  The result for me was horrible diamond dependency
problems - half my packages were built with one version of
mtl/transformers, and the other half with a different version of
mtl/transformers.

When I then tried to ghci using some modules from both halves of my
installed packages, I got very very confusing type errors complaining
about almost-identical-looking types not being identical.


Claude
