Re: [Haskell-cafe] GHC 6.11 missing time package?

2009-02-01 Thread Lyle Kopnicky
I tried building it from hackage. I got an error:
Setup.hs: sh: createProcess: does not exist (No such file or directory)

...which is very similar to the error I get if I try to build it using 6.10:

Setup.hs: sh: runGenProcess: does not exist (No such file or directory)

I don't know if there's something wrong with the package, or I don't have
something set up right to build it on Windows.

On Sat, Jan 31, 2009 at 10:13 PM, Antoine Latter aslat...@gmail.com wrote:

 2009/1/31 Lyle Kopnicky li...@qseep.net:
  Hi folks,
  I'm getting ready to release a piece of software. Unfortunately due to a
 bug
  in GHC 6.10 on Windows it does not handle Ctrl+C properly. Since the bug
 has
  been fixed (thank you Simon Marlow), I figured I'd download a 6.11 build
 (I
  grabbed the 2009-01-29 version).
  Unfortunately, my project won't build with it because it's missing the
  time-1.1 package. This is sad because I had gone through the trouble to
  rewrite my use of oldtime to use time, thinking it was more future-proof.
 Is
  this an oversight in the nightly build, or is this package going out of
 GHC?
  Thanks,
  Lyle

 Hackage seems to have it:

 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/time

 Does that work?

 -Antoine

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] circular dependencies in cabal

2009-02-01 Thread Valentyn Kamyshenko

Hello all,

when I tried to install the plugins package with cabal, I got the
following error:


# sudo cabal install plugins --global
Resolving dependencies...
cabal: dependencies conflict: ghc-6.10.1 requires process ==1.0.1.1 however
process-1.0.1.1 was excluded because ghc-6.10.1 requires process ==1.0.1.0


It looks like both versions of the process package are currently required:

# ghc-pkg unregister process-1.0.1.0
ghc-pkg: unregistering process-1.0.1.0 would break the following  
packages: haddock-2.3.0 ghc-6.10.1 Cabal-1.6.0.1 gnuplot-0.2  
pandoc-1.0.0.1 Graphalyze-0.5 haddock-2.4.1 kibro-0.4.2  
panda-2008.11.7 (use --force to override)


# ghc-pkg unregister process-1.0.1.1
ghc-pkg: unregistering process-1.0.1.1 would break the following  
packages: haddock-2.3.0 ghc-6.10.1 haskell-src-1.0.1.3 polyparse-1.1  
Graphalyze-0.5 cpphs-1.6 hscolour-1.10.1 haddock-2.4.1 HaXml-1.19.4  
hcheat-2008.11.6 rss-3000.1.0 kibro-0.4.2 panda-2008.11.7  
haskell98-1.0.1.0 hxt-8.2.0 hcheat-2008.11.14 hxt-filter-8.2.0 xml-parsec-1.0.3 graphviz-2008.9.20 readline-1.0.1.0 uulib-0.9.5
derive-0.1.4 hslogger-1.0.6 MissingH-1.0.2.1  
HStringTemplateHelpers-0.0.3 HSHHelpers-0.17 haskell-src-exts-0.4.4  
haskell-src-exts-0.4.4.1 haskell-src-exts-0.4.5 ConfigFile-1.0.4  
HStringTemplateHelpers-0.0.4 haskell-src-exts-0.4.6 kibro-0.4.3  
panda-2008.12.16 HStringTemplateHelpers-0.0.6 SybWidget-0.4.0  
wxcore-0.10.5 wx-0.10.5 xtc-1.0 HStringTemplateHelpers-0.0.8  
wxcore-0.10.7 wx-0.10.6 HNM-0.1 HNM-0.1.1 wxcore-0.10.12 wxcore-0.11.0  
wx-0.11.0 HSHHelpers-0.18 haskell-src-exts-0.4.8 darcs-2.2.0  
hslogger-1.0.7 MissingH-1.0.3 HSH-1.2.6 HStringTemplateHelpers-0.0.10  
HSHHelpers-0.19 hscolour-1.11 HNM-0.1.2 pandoc-1.1 mps-2008.11.6  
hcheat-2008.11.25 panda-2009.1.20 testpack-1.0.0 convertible-1.0.1  
gnuplot-0.3 HDBC-2.0.0 HDBC-2.0.1 HDBC-sqlite3-2.0.0.0 HDBC-postgresql-2.0.0.0 (use --force to override)


Any suggestions?

-- Valentyn


[Haskell-cafe] Question re: denotational semantics in SPJ's Implementation of FPLs book

2009-02-01 Thread Devin Mullins
I'm reading SPJ's The Implementation of Functional Programming Languages, and
on page 32, it defines the multiplication operator in its extended lambda
calculus as:
  Eval[[ * ]] a   b   = a x b
  Eval[[ * ]] _|_ b   = _|_
  Eval[[ * ]] a   _|_ = _|_

Is that complete? What are the denotational semantics of * applied to things
not in the domain of the multiplication operator x, such as TRUE (in the
extended lambda defined by this book) and functions (in normal lambda calc)? Do
these things just eval to bottom? Or is this just to be ignored, since the
extended calculus will only be applied to properly typed expressions in the
context of this book?
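For intuition, the two _|_ equations say Eval[[ * ]] is strict in both arguments. That strictness can be modelled executably by using Maybe as a stand-in for the lifted domain — a sketch added for illustration, not something from the book:

```haskell
-- Sketch: model the lifted integer domain with Maybe, letting Nothing
-- play the role of _|_ (bottom). Ill-typed arguments such as TRUE are
-- simply not representable here, mirroring the "properly typed
-- expressions only" reading of the book's semantics.
multE :: Maybe Integer -> Maybe Integer -> Maybe Integer
multE (Just a) (Just b) = Just (a * b)
multE _        _        = Nothing   -- strict in both arguments

main :: IO ()
main = do
  print (multE (Just 6) (Just 7))   -- Just 42
  print (multE Nothing  (Just 7))   -- Nothing
```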


Re: [Haskell-cafe] circular dependencies in cabal

2009-02-01 Thread Marc Weber
  Any suggestions?

a) ignore it and hope you don't get segfaults or problems.

b) choose one of the process libraries and rebuild the other packages using
that one.


About a):
I'm not totally sure what could happen. I can only say that I've used
different versions side by side and it went fine. I guess the problem is:

A uses P-1.0
B uses P-1.2

and you use both A and B, passing data indirectly from A to B, while A and
B were built with different compilation options or have different
behaviour. I'm not too sure about this; maybe it gives you an idea of what
could happen. On the other hand, if you only use runXY or system from
process, without passing data from one to the other, chances are good that
your app will work nevertheless.

Marc Weber


Re: [Haskell-cafe] circular dependencies in cabal

2009-02-01 Thread Valentyn Kamyshenko
well, the first and most immediate problem is that I cannot even
fetch the package from hackage using cabal:


# cabal fetch plugins
Resolving dependencies...
cabal: dependencies conflict: ghc-6.10.1 requires process ==1.0.1.1 however
process-1.0.1.1 was excluded because ghc-6.10.1 requires process ==1.0.1.0


So, although I believe the problem of different versions of the same
package co-existing on my computer may be ignorable in many cases, this
case makes life very inconvenient.


-- Valentyn

On Feb 1, 2009, at 1:57 AM, Marc Weber wrote:


Any suggestions?


a) ignore it and hope you don't get segfaults or problems.

b) choose one process libraries and rebuild the other packages using
that one.


About a):
I'm not totally sure what could happen. I can only say that I've used
different versions side by side and it went fine. I guess the problem is:

A uses P-1.0
B uses P-1.2

and you use both A and B, passing data indirectly from A to B, while A and
B were built with different compilation options or have different
behaviour. I'm not too sure about this; maybe it gives you an idea of what
could happen. On the other hand, if you only use runXY or system from
process, without passing data from one to the other, chances are good that
your app will work nevertheless.

Marc Weber




Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Duncan Coutts
On Sat, 2009-01-31 at 22:47 +0100, Peter Verswyvelen wrote:
 I should have mentioned that my tests have been done only on Windows
 and OSX.

Ah, right. Well there are Win32 and Quartz backends too.

 I guess I would have to try on a system that supports XRender to
 compare. 

 Unfortunately, the target audience of our application are mostly
 windows and OSX users, so although it would be great that Cairo
 performs fast on unix variants, it would be of little value to us,
 unless of course XRender also runs on Windows/OSX somehow :)

I have heard that there is a mismatch in the semantics of Cairo and
Quartz which requires falling back to software rendering in some cases.
It may be that you're hitting those cases. You could send your
examples to the cairo folks:

http://cairographics.org/FAQ/#profiling


Duncan




Re: [Haskell-cafe] hslogger bugs or features?

2009-02-01 Thread Brandon S. Allbery KF8NH

On 2009 Jan 31, at 20:28, Marc Weber wrote:

 tmp %./test1
 /tmp nixos
 ALERT test, should be shown and should create the sublogger
 ALERT test, should not be shown cause we have changed to EMERGENCY

which is quite confusing, because I haven't told hslogger explicitly
to use a log level printing ALERTs on A.B.C., so I'd expect that only
the first message is shown. This behaviour is explained by the
inheritance of the log level when hslogger creates the subloggers
(without attaching handlers) automatically.

I don't want the logging behaviour to depend on whether a log line has
been emitted before or not.
Do you agree? Have I missed something?



At least some of what you've missed is that this is inherited from the
C syslog library; possibly this should be using a withSysLog (options)
$ ... wrapper to bracket a use of syslog with appropriate
openlog()/closelog() calls, and changing top-level options should only
be attempted in the openlog() call.
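A rough sketch of that bracketing idea, using Control.Exception.bracket; the withSysLog name and the stand-in open/close actions here are hypothetical, not hslogger's or the C library's actual API:

```haskell
import Control.Exception (bracket)

-- Hypothetical sketch: pair up openlog()/closelog() around a block of
-- logging, so the close action always runs even if the body throws.
withSysLog :: IO h -> (h -> IO ()) -> (h -> IO a) -> IO a
withSysLog open close = bracket open close

main :: IO ()
main = withSysLog (putStrLn "openlog")
                  (\() -> putStrLn "closelog")
                  (\() -> putStrLn "logging a message")
```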


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon universityKF8NH






Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Duncan Coutts
On Fri, 2009-01-30 at 18:29 -0600, John Goerzen wrote:
 On Fri, Jan 30, 2009 at 03:50:30PM -0800, Michael Snoyman wrote:
   [3 of 7] Compiling Database.HDBC.Statement ( Database/HDBC/Statement.hs,
   dist/build/Database/HDBC/Statement.o )
  
   Database/HDBC/Statement.hs:113:9:
  Type constructor `Exception' used as a class
  In the instance declaration for `Exception SqlError'
   cabal: Error: some packages failed to install:
   HDBC-2.0.0 failed during the building phase. The exception was:
   exit: ExitFailure 1
 
 That's *WEIRD*.  
 
 It's as if you have the old base from GHC 6.8.  Is cabal-install doing
 something (weird|evil|smart) here?

Yes.

 Leads me to think even more that there's cabal-install trickery here.

Yup.

 Can someone enlighten us on that?  Is cabal-install playing tricks
 with base?

Yes.

The difference (in the released version of cabal-install) between the
configure and install commands is that configure uses a dumb algorithm
for deciding what versions of dependencies to use while install uses a
smarter algorithm. Obviously this is confusing and in the next release
configure will use the smart algorithm too.

At the moment configure picks the latest versions of all packages
irrespective of whether this is likely to work or not. For developers
using ghc-6.10 with base 3 and 4 this means they end up with base 4.
They then likely do not notice that their package actually needs base
version 4.

The install command uses the constraint solver and takes into account
some global preferences from hackage. These are soft preferences / soft
constraints. They do not override constraints specified in the .cabal
file or on the command line. However for the huge number of packages on
hackage that worked with ghc-6.8 but failed to specify a constraint on
the version of base, using a preference of base 3 over base 4 enables
those packages to continue to build.

So in the next cabal-install release (which should be pretty soon now)
configure will do the same thing and pick base 3 unless you specify
build-depends base >= 4.

So the solution is for HDBC-2.0 to specify the version of base that it
needs more accurately. It appears that it does need version 4 so it
should use:

build-depends: base == 4.*


On a similar issue, I am going to make Hackage enforce that packages
specify an upper bound on the version of base. Hopefully doing that
will make people also consider the appropriate lower bound. I don't see
an obvious way of automatically requiring accuracy in selecting the base
version between 3 and 4. It is perfectly possible to have a package that
works with both. Perhaps we should make not-specifying an upper bound an
error and not specifying a lower bound a warning.

Thoughts?

Duncan



Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Duncan Coutts
On Fri, 2009-01-30 at 18:31 -0600, John Goerzen wrote:

 I can't hard-code base >= 4 into .cabal because that would break for
 GHC 6.8 users.  I have CPP code that selects what to compile based on
 GHC version.

Ahh, but the version of base is no longer determined by the version of
GHC, so using cpp tests on the ghc version is not right (besides it does
not work for non-ghc if that is relevant).

In future (from Cabal-1.6 onwards) you can use macros to test the
version of the package rather than the version of ghc:

#if MIN_VERSION_base(4,0,0)
...
#endif

but since you are trying to support ghc-6.8 (and Cabal-1.2) you cannot
rely on these macros yet. You can use this trick:

flag base4

library
  if flag(base4)
    build-depends: base >= 4
    cpp-options:   -DBASE4
  else
    build-depends: base < 4
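On the source side, the -DBASE4 define set by that flag can then be tested with CPP. A minimal self-contained sketch (the printed strings are illustrative only; a real package would switch imports or instances here):

```haskell
{-# LANGUAGE CPP #-}
-- Sketch: branch on the BASE4 define that the cabal flag above sets.
main :: IO ()
main =
#ifdef BASE4
  putStrLn "built against base >= 4"
#else
  putStrLn "built against base < 4"
#endif
```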

Duncan



Re: [Haskell-cafe] circular dependencies in cabal

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 01:33 -0800, Valentyn Kamyshenko wrote:
 Hello all,
 
 when I tried to install plugins package with cabal, I've got the  
 following error:
 
 # sudo cabal install plugins --global
 Resolving dependencies...
 cabal: dependencies conflict: ghc-6.10.1 requires process ==1.0.1.1  
 however
 process-1.0.1.1 was excluded because ghc-6.10.1 requires process  
 ==1.0.1.0

For the most part I refer you to:

http://haskell.org/pipermail/haskell-cafe/2009-January/054523.html

However the difference is that you've got this problem only within the
global package db rather than due to overlap in the global and user
package db.

 It looks like both versions of process package are currently required:

It looks like you installed process-1.0.1.1 and then rebuilt almost
every other package against it. Of course you cannot rebuild the ghc
package but you did rebuild some of its dependencies which is why it now
depends on multiple versions of the process package.

Generally rebuilding a package without also rebuilding the packages that
depend on it is a bit dodgy (it can lead to linker errors or segfaults).
Unfortunately cabal-install does not prevent you from shooting yourself
in the foot in these circumstances.

 Any suggestions?

Aim for a situation where you only have one version of the various core
packages. If you do not need to install packages globally then
installing them per-user means you at least cannot break the global
packages.

Duncan



Re: [Haskell-cafe] GHC 6.11 missing time package?

2009-02-01 Thread Krzysztof Skrzętnicki
It looks like you need MSYS or Cygwin to complete this build. Here you can
find instructions regarding MSYS (and also GLUT, but you can ignore that
part):
http://www.haskell.org/pipermail/haskell-cafe/2007-September/031535.html

All best

Christopher Skrzętnicki

2009/2/1 Lyle Kopnicky li...@qseep.net

 I tried building it from hackage. I got an error:
 Setup.hs: sh: createProcess: does not exist (No such file or directory)

 ...which is very similar to the error I get if I try to build it using
 6.10:

 Setup.hs: sh: runGenProcess: does not exist (No such file or directory)

 I don't know if there's something wrong with the package, or I don't have
 something set up right to build it on Windows.

 On Sat, Jan 31, 2009 at 10:13 PM, Antoine Latter aslat...@gmail.com wrote:

 2009/1/31 Lyle Kopnicky li...@qseep.net:
  Hi folks,
  I'm getting ready to release a piece of software. Unfortunately due to a
 bug
  in GHC 6.10 on Windows it does not handle Ctrl+C properly. Since the bug
 has
  been fixed (thank you Simon Marlow), I figured I'd download a 6.11 build
 (I
  grabbed the 2009-01-29 version).
  Unfortunately, my project won't build with it because it's missing the
  time-1.1 package. This is sad because I had gone through the trouble to
  rewrite my use of oldtime to use time, thinking it was more
 future-proof. Is
  this an oversight in the nightly build, or is this package going out of
 GHC?
  Thanks,
  Lyle

 Hackage seems to have it:

 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/time

 Does that work?

 -Antoine







Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Niklas Broberg
 So in the next cabal-install release (which should be pretty soon now)
 configure will do the same thing and pick base 3 unless you specify
  build-depends base >= 4.

... and so there will never be any incentive for these many packages
to migrate to base-4, which also has consequences for packages that do
want to use base-4, but also want to depend on such packages. And so
base-3 will live on in eternity, and there was never any point in
doing that new base release at all.

I really really think this is the wrong way to go. Occasional
destruction is desperately needed for progress, else things will
invariably stagnate.

I would suggest as a less stagnating approach to issue a warning/hint
when a package with no explicit version dependency for base fails to
build. The hint could suggest to the user trying to build the package
that they can use 'cabal install --package=base-3'.

Cheers,

/Niklas


Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-02-01 Thread Duncan Coutts
On Sat, 2009-01-31 at 14:02 -0800, Don Stewart wrote:

 not really :) e.g. my output on a Windows Vista system with GHC
 6.10.1
 cabal install sdl

 Configuring SDL-0.5.4...
 setup.exe: sh: runGenProcess: does not exist (No such file or directory)

 Isn't this missing C library dependencies, which cabal head now warns
 about?

No, it's about packages using configure scripts which require MSYS on
Windows.

In principle we should be able to notice this while doing the package
dependency planning and report that we cannot install the package
because it needs sh.exe.

Duncan



[Haskell-cafe] Panic with GTK2HS 0.10.0 RC under Vista with GHCi 6.10.1; GHCi or GTK bug?

2009-02-01 Thread Peter Verswyvelen
Often when trying to run my GTK2HS application the first time with GHCi I
get

: panic! (the 'impossible' happened)
 (GHC version 6.10.1 for i386-unknown-mingw32):
loadObj: failed

Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug


This only occurs when I forget to set my current directory correctly; then
GHCi gives an error since it can't find my modules:
GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.

Boxes.hs:23:17:
   Could not find module `NM8.GUI.Layout':
 Use -v to see a list of the files searched for.
Failed, modules loaded: none.

If I then correctly set my current directory using the :cd command, and try
again, I get the panic crash.

When I start GHCi again and immediately set the correct current directory,
it works fine.

I haven't tried to isolate the problem, but maybe others have experienced
this problem?

It's not really a show-stopper, just a bit annoying.


Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 15:56 +0100, Niklas Broberg wrote:
  So in the next cabal-install release (which should be pretty soon now)
  configure will do the same thing and pick base 3 unless you specify
  build-depends base >= 4.
 
 ... and so there will never be any incentive for these many packages
 to migrate to base-4, which also has consequences for packages that do
 want to use base-4, but also want to depend on such packages. And so
 base-3 will live on in eternity, and there was never any point in
 doing that new base release at all.

 I really really think this is the wrong way to go. Occasional
 destruction is desperately needed for progress, else things will
 invariably stagnate.

I disagree. Having everything fail (we measured it as ~90% of hackage)
when people upgraded to ghc-6.10 would have been a disaster. Do you
recall the screaming, wailing and gnashing of teeth after the release of
ghc-6.8 when most of hackage broke? We (I mean ghc and cabal hackers)
got a lot of flak for not making the upgrade process easier and
needlessly breaking everyone's  perfectly good packages.

This time round we went to a lot of effort to make the upgrade process
smooth. And for the most part it was. Only a small proportion of hackage
packages broke.

Now I agree that there is a problem with new packages where the
configure selects base 4 but install selects base 3. I've improved that
in the darcs version.

You're also right that during the lifespan of base 4 we need to
encourage new releases to start working with it because we cannot stick
with base 3 for ever. Doing that with warnings hints etc is the way to
go. Destruction is not such a friendly approach. We do not need to make
the users suffer, we just need to inform and persuade developers
uploading new releases to do the right thing.

Duncan



Re: [Haskell-cafe] Verifying Haskell Programs

2009-02-01 Thread Thomas DuBuisson
On Sun, Feb 1, 2009 at 12:54 PM, Paulo J. Matos pocma...@gmail.com wrote:
 What's the state of the art of automatically
 verifying properties of programs written in Haskell?

This is a large field that isn't as black and white as many people
frame it.  You can write a proof [1] then translate that into Haskell,
you can write Haskell then prove key functions, using a case totality
checker you could prove it doesn't have any partial functions that
will cause an abnormal exit [2], some research has been performed into
information flow at UPenn [3], and SPJ/Xu have been looking at static
contract checking [4] for some time now - which I hope sees the light
of day in GHC 6.12.  While this work has been going on, folks at
Portland State and a few others (such as Andy Gill [8], NICTA [5], and
Peng Li to an extent) have been applying FP to the systems world [6]
[7].

Hope this helps,
Thomas

[1] Perhaps using Isabelle, isabelle.in.tum.de.
[2] Neil built CATCH for just this purpose (though it isn't in GHC
yet), www-users.cs.york.ac.uk/~ndm/catch/
[3] www.cis.upenn.edu/~stevez/
[4] www.cl.cam.ac.uk/~nx200/
[5] http://ertos.nicta.com.au/research/l4/
[6] Strongly typed memory areas, http://web.cecs.pdx.edu/~mpj/pubs/bytedata.html
[7] Some work on non-inference as well as thoughts on building a
hypervisor, http://web.cecs.pdx.edu/~rebekah/
[8] Timber language - no, I haven't looked at it yet myself.
http://timber-lang.org/
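At the very lightweight end of this spectrum, properties of key functions can at least be stated executably in plain Haskell and checked over sample inputs — a sketch added for illustration (this is evidence, not a proof, and not one of the tools above; QuickCheck-style tools automate generating such samples):

```haskell
-- An executable property: reversing twice is the identity.
-- Checking finitely many samples is testing, not verification.
prop_revrev :: [Int] -> Bool
prop_revrev xs = reverse (reverse xs) == xs

main :: IO ()
main = print (all prop_revrev [[], [1], [1, 2, 3], [3, 2, 1, 0]])   -- True
```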


[Haskell-cafe] Verifying Haskell Programs

2009-02-01 Thread Paulo J. Matos
Hi all,

It is often said that Haskell, being purely functional, is easier
to verify.
However, most of the research I have seen in software verification
(either through model checking or theorem proving) targets C/C++ or
subsets of these. What's the state of the art of automatically
verifying properties of programs written in Haskell?

Cheers,

-- 
Paulo Jorge Matos - pocmatos at gmail.com
Webpage: http://www.personal.soton.ac.uk/pocm


Re: [Haskell-cafe] type and data constructors in CT

2009-02-01 Thread Ben Moseley


On 31 Jan 2009, at 20:54, Gregg Reynolds wrote:

On Sat, Jan 31, 2009 at 1:02 PM, Ben Moseley ben_mose...@mac.com  
wrote:
You can view a polymorphic unary type constructor of type :: a -> T as a
polymorphic function.


Shouldn't that be :: a -> T a ?


Yes, you're right. And when I say polymorphic unary type constructor  
I really mean polymorphic unary /data/ constructor ...



In general, polymorphic functions correspond roughly to natural
transformations (in this case from the identity functor to T).



Are you saying a type constructor is a nat trans and not a functor
(component)?


Nope ... what I was trying to say is that the data constructor bit is
like a nat trans. (You're right that a unary type constructor often
does correspond to a functor - provided there's a corresponding
arrow/function component).



 Seems much more like a plain ol' functor mapping of
object to object to me - the objects being types.  Can you clarify
what you mean about the correspondence with natural transformations?


So, the idea is that any polymorphic Haskell function (including data
constructors) can be seen as a natural transformation: a function from
any object (i.e. type) to an arrow (i.e. function). So, take
listToMaybe :: [a] -> Maybe a ... this can be seen as a natural
transformation from the list functor (the [] type constructor) to the
Maybe functor (the Maybe type constructor), which is a function from any
type a (e.g. Int) to an arrow (i.e. a Haskell function), e.g.
listToMaybe :: [Int] -> Maybe Int.
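Concretely, the naturality square for listToMaybe can be checked for one instance in plain Haskell (an illustration added here, not part of the original exchange): applying fmap f after listToMaybe agrees with applying listToMaybe after map f.

```haskell
import Data.Maybe (listToMaybe)

-- Both paths around the naturality square for listToMaybe agree:
--   fmap f . listToMaybe  ==  listToMaybe . map f
lhs, rhs :: Maybe Int
lhs = (fmap (+ 1) . listToMaybe) [1, 2, 3]
rhs = (listToMaybe . map (+ 1)) [1, 2, 3]

main :: IO ()
main = print (lhs == rhs)   -- True
```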


Hope that makes somewhat more sense.

Cheers,

--Ben


I admit I haven't thought through polymorphic function, mainly
because there doesn't seem to be any such beast in category theory,
and to be honest I've always thought the metaphor is misleading.
After all, a function by definition cannot be polymorphic.  It seems
like a fancy name for a syntactic convenience to me - a way to express
/intensionally/ a set of equations without writing them all out
explicitly.

Thanks,

gregg



--Ben

On 31 Jan 2009, at 17:00, Gregg Reynolds wrote:


Hi,

I think I've finally figured out what a monad is, but there's one
thing I  haven't seen addressed in category theory stuff I've found
online.  That is the relation between type constructors and data
constructors.

As I understand it, a type constructor Tcon a is basically the object

component of a functor T that maps one Haskell type to another.
Haskell types are construed as the objects of category  
HaskellType.

I think that's a pretty straightforward interpretation of the CT
definition of functor.

But a data constructor Dcon a is an /element/ mapping taking elements

(values) of one type to elements of another type.  So it too can be
construed as a functor, if each type itself is construed as a
category.

So this gives us two functors, but they operate on different things,
and I don't see how to get from one to the other in CT terms.  Or
rather, they're obviously related, but I don't see how to express that
relation formally.
relation formally.

If somebody could point me in the right direction I'd be grateful.
Might even write a tutorial.  Can't have too many of those.

Thanks,

Gregg







Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-02-01 Thread Duncan Coutts
On Sat, 2009-01-31 at 16:50 -0800, Don Stewart wrote:

 Windows people need to set up a wind...@haskell.org to sort out their
 packaging issues, like we have for debian, arch, gentoo, freebsd and
 other distros.
 
 Unless people take action to get things working well on their platform,
 it will be slow going.

Actually instead of going off into another mailing list I would
encourage them to volunteer on the cabal-devel mailing list to help out.
There is lots we could do to improve the experience on Windows and half
the problem is we do not have enough people working on it or testing
things.

Duncan



Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Niklas Broberg
 I really really think this is the wrong way to go. Occasional
 destruction is desperately needed for progress, else things will
 invariably stagnate.

 I disagree. Having everything fail (we measured it as ~90% of hackage)
 when people upgraded to ghc-6.10 would have been a disaster. Do you
 recall the screaming, wailing and gnashing of teeth after the release of
 ghc-6.8 when most of hackage broke? We (I mean ghc and cabal hackers)
 got a lot of flak for not making the upgrade process easier and
 needlessly breaking everyone's  perfectly good packages.

90%? That sounds like an awful lot for the relatively minor changes
with base-4. I guess a number of packages will use exceptions, and
others with use Data and Typeable, but 90%? I don't doubt you when you
say that you measured it though, just marvelling.

If 90% of hackage would truly break, then I agree that we might need
more caution than my radical approach. But I'm not fully convinced
there either. After all, unlike with the ghc-6.8 release, all that's
needed for a package to work again is to upload a new .cabal that
makes the dependency on base-3 explicit, if an author really doesn't
want to update the codebase. And even for those packages where library
authors don't do that simple step, all that's needed of the user of
the library is to specify the base-3 constraint when running
cabal-install.

My main complaint is really that there is currently no incentive
whatsoever for library authors to migrate. If we make base-4 the
default, it will require just a little bit of work to make packages
that depend on base-3 work anyway, as seen above. It's not so much
work that it should incur any screaming, wailing and teeth gnashing.
But it should be just enough work to encourage action in one way or
another, either truly migrating the code or just making the dependency
explicit as I noted above. I think it would be hard to find a more
accurate, and non-intrusive, incentive. :-)

I definitely agree with your suggestion to make hackage require an
upper bound on base. But that's to make us future proof, it won't
solve the issue here and now.

Cheers,

/Niklas


Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 15:56 +0100, Niklas Broberg wrote:
  So in the next cabal-install release (which should be pretty soon now)
  configure will do the same thing and pick base 3 unless you specify
  build-depends base >= 4.
 
 ... and so there will never be any incentive for these many packages
 to migrate to base-4, which also has consequences for packages that do
 want to use base-4, but also want to depend on such packages.

Actually a package that uses base 4 can depend on other packages that
use base 3. They all co-exist fine. This already happens now.

 I would suggest as a less stagnating approach to issue a warning/hint
 when a package with no explicit version dependency for base fails to
 build.

So my plan is to make hackage require an upper bound on the version of
base for all new packages. That should avoid the need to use the
preferences hack the next time around.

As for what mechanisms we use to persuade package authors to use base 4
over base 3 for new releases, I'm open to suggestions.

One possibility is to warn when configuring if we used the preference to
select base 3 rather than 4. That is, if the solver had a free choice
between 3 and 4 then that's probably a sign that it needs updating. Of
course that doesn't help for packages that work with 3 or 4 perfectly
well.

We could try a mechanism for base not using the preference system but
something more special. For example we could only apply the base 3
preference if there is no upper bound on the version of base. So if the
package says:

build-depends: base >= 3 && < 5

then we say great, and don't use any default preference (and thus pick
base 4). If on the other hand the package says:

build-depends: base

Then we pick base 3 and we reject all new packages uploaded to hackage
like this. They must all specify an upper bound. We could also warn at
configuration time.

So that's the suggestion, we'd only use the base 3 preference if there
is no upper bound on the version of base. That means it should continue
to work for old packages and new ones will default to base 4.

What do you think?

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: regex-xmlschema

2009-02-01 Thread Uwe Schmidt
I'm pleased to announce (yet another) package for
processing text with regular expressions: regex-xmlschema

The W3C XML Schema specification (http://www.w3.org/TR/xmlschema11-2/#regexs)
defines a language for regular expressions. This language is used in the
XML Schema spec when defining the data type library part.

This regex-xmlschema package contains a complete implementation of this spec.
It is implemented using the technique of derivatives of regular expressions.

Main features are:
* full support of Unicode including all Unicode code blocks
  and character properties
* a purely functional interface
* 100% Haskell, no other packages except parsec needed
* cabal build file
* extensions for intersection, set difference, exclusive or and interleave
  of regular sets (regular expressions),
* extensions for subexpression matches
* functions for matching, for grep like searching,
  for stream like editing (sed like) and for tokenizing.

With this package, it becomes rather easy to build lightweight tokenizers
e.g. for colourizing arbitrary programming languages, like hscolour does it
for Haskell.

The package is available from Hackage:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/regex-xmlschema,
there's a darcs repo for the latest source:
http://darcs2.fh-wedel.de/hxt/regex/
and a wiki page, describing the extension and giving some examples
for using the library:
http://www.haskell.org/haskellwiki/Regular_expressions_for_XML_Schema

Cheers,

  Uwe

-- 

Uwe Schmidt
FH Wedel
http://www.fh-wedel.de/~si/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type and data constructors in CT

2009-02-01 Thread Gregg Reynolds
On Sun, Feb 1, 2009 at 8:26 AM, Ben Moseley ben_mose...@mac.com wrote:

 So, the idea is that any polymorphic Haskell function (including Data
 constructors) can be seen as a natural transformation - so a function from
 any object (ie type) to an arrow (ie function). So, take listToMaybe :: [a]
 -> Maybe a ... this can be seen as a natural transformation from the List
 functor ([] type constructor) to the Maybe functor (Maybe type constructor)
 which is a function from any type a (eg 'Int') to an arrow (ie Haskell
 function) eg listToMaybe :: [Int] -> Maybe Int.


Aha, hadn't thought of that.  In other terms, a natural transformation
is how one gets from one lifted value to a different lift of the same
value - from a lift to a hoist, as it were.   Just calling it a
function doesn't do it justice.  Very enlightening, thanks.

I'm beginning to think Category Theory is just about the coolest thing
since sliced bread.  If I understand correctly, we can represent a
data constructor in two ways (at least):

   qua functor:   Dcon : a -> T a

  qua nat trans  Dcon : Id a -> T a

and thanks to the magic of CT we can think of these as equivalent
(involving some kind of isomorphism).  BTW, I should mention I'm not a
mathematician, in case it's not blindingly obvious.  My interest is
literary - I'm just naive enough to think this stuff could be
explained in plain English, but I'm not quite there yet.
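As a concrete sketch of the naturality condition discussed above (an
editorial illustration, not code from the thread; it uses only
listToMaybe from Data.Maybe, and the example values are arbitrary):

```haskell
import Data.Maybe (listToMaybe)

-- listToMaybe is a natural transformation from [] to Maybe:
-- for every f :: a -> b the square
--     fmap f . listToMaybe  ==  listToMaybe . fmap f
-- commutes, i.e. it does not matter whether we apply f before
-- or after moving from the list functor to the Maybe functor.

main :: IO ()
main = do
  let f  = (* 2) :: Int -> Int
      xs = [3, 4, 5]
  print (fmap f (listToMaybe xs))             -- Just 6
  print (listToMaybe (fmap f xs))             -- Just 6
  print (fmap f (listToMaybe ([] :: [Int])))  -- Nothing
  print (listToMaybe (fmap f ([] :: [Int])))  -- Nothing
```

Both sides of the square agree on every input, which is exactly what
makes the polymorphic function a natural transformation rather than an
arbitrary family of functions.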

Thanks much,
-gregg
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 17:22 +0100, Niklas Broberg wrote:
  I really really think this is the wrong way to go. Occasional
  destruction is desperately needed for progress, else things will
  invariably stagnate.
 
  I disagree. Having everything fail (we measured it as ~90% of hackage)
  when people upgraded to ghc-6.10 would have been a disaster. Do you
  recall the screaming, wailing and gnashing of teeth after the release of
  ghc-6.8 when most of hackage broke? We (I mean ghc and cabal hackers)
  got a lot of flak for not making the upgrade process easier and
  needlessly breaking everyone's  perfectly good packages.
 
 90%? That sounds like an awful lot for the relatively minor changes
 with base-4. I guess a number of packages will use exceptions, and
 others with use Data and Typeable, but 90%? I don't doubt you when you
 say that you measured it though, just marvelling.

http://www.haskell.org/pipermail/glasgow-haskell-users/2008-October/015654.html

Most non-trivial packages either use exceptions or depend on a package
that uses exceptions (or syb).

The 90% is from memory however. I can't find our exact measurements. I
think we may have given up trying to measure precisely after it became
obvious that nothing was working.

 If 90% of hackage would truly break, then I agree that we might need
 more caution than my radical approach. But I'm not fully convinced
 there either.

I think in the end we managed to get it down to around 5% breakage which
I felt was a pretty good result.

 After all, unlike with the ghc-6.8 release, all that's
 needed for a package to work again is to upload a new .cabal that
 makes the dependency on base-3 explicit,

For the most part that was all that was required for 6.8 too, to add
dependencies on the new packages split out of base. People still
complained. A lot. :-)

 if an author really doesn't want to update the codebase. And even for
 those packages where library authors don't do that simple step, all
 that's needed of the user of the library is to specify the base-3
 constraint when running cabal-install.

But people do not know that. It has to work by default (even if it
warns).

 My main complaint is really that there is currently no incentive
 whatsoever for library authors to migrate. If we make base-4 the
 default, it will require just a little bit of work to make packages
 that depend on base-3 work anyway, as seen above. It's not so much
 work that it should incur any screaming, wailing and teeth gnashing.
 But it should be just enough work to encourage action in one way or
 another, either truly migrating the code or just making the dependency
 explicit as I noted above. I think it would be hard to find a more
 accurate, and non-intrusive, incentive. :-)

I think the right place to put incentives is in checks in new uploads to
hackage. Making existing packages break should be avoided when possible,
even when the changes the developer needs to make are trivial. Many many
end users have no idea how to make these trivial changes and it is not
our intention to punish them. From the perspective of an end user just
trying to install something it either works or it doesn't. When a large
proportion stop working is when the teeth gnashing starts. Also remember
that many packages do not have very active maintainers.

 I definitely agree with your suggestion to make hackage require an
 upper bound on base. But that's to make us future proof,

True.

 it won't solve the issue here and now.

What do you think about the suggestions in my other reply? I hope that
is a reasonable suggestion to encourage a transition to base 4 without
punishing end users.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: gitit 0.5.1

2009-02-01 Thread John MacFarlane
I've just uploaded gitit 0.5.1 to HackageDb. Gitit is a wiki program
that uses git or darcs as a filestore and HAppS as a server.

Changes:

* Major code reorganization, making gitit more modular.
* Gitit can now optionally be built using Happstack instead of HAppS
  (just use -fhappstack when cabal installing).
* Fixed bug with directories that had the same names as pages.
* Added code from HAppS-Extra to fix cookie parsing problems.
* New command-line options for --port, --debug.
* New debug feature prints the date, the raw request, and
  the processed request data to standard output on each request.
* Files with .page extension can no longer be uploaded.
* Apostrophes and quotation marks now allowed in page names.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 16:50 +, Sebastian Sylvan wrote:

  Isn't this missing C library dependencies, which cabal head now warns
  about?
 
  No, it's about packages using configure scripts which require MSYS on
  Windows.
 
  In principle we should be able to notice this while doing the package
  dependency planning and report that we cannot install the package
  because it needs sh.exe.
 
 
 I wonder *why* packages need sh though? Isn't cabal supposed to allow you to 
 do that kind of scripting in Haskell rather than calling into 
 platform-dependent shell scripts?

It does enable people to make portable packages, yes. It does not
prevent people from using ./configure scripts, though as a community we
do try to encourage people only to use them where it is essential.

 Are there any specific reasons why people feel the need to use sh?

If you look at existing packages that use configure scripts you'll see
some are pretty tricky, doing lots of checks in system header files for
sizes of structures, values of constants and the presence of C functions
in various header files.

Some are trivial and should be done away with. For example the ones that
just check if a C header / lib is present are unnecessary (and typically
do not work correctly). The next point release of Cabal can do these
checks automatically, eg:

Configuring foo-1.0...
cabal: Missing dependencies on foreign libraries:
* Missing header file: foo.h
* Missing C libraries: foo, bar, baz
This problem can usually be solved by installing the system
packages that provide these libraries (you may need the -dev
versions). If the libraries are already installed but in a
non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where
they are.

 If the package is unix-only for other reasons (e.g. bindings to X or 
 whatever) then it's obviously not a problem, but it appears to me that 
 there's lots of packages that should be portable in principle that won't 
 build on windows because it needs to run sh...

We need to do a survey of the existing packages that use configure
scripts and identify what they are doing exactly. What we want to do is
work out what checks they are performing and which ones could be turned
into portable checks in cabal or in Setup.hs scripts. We want to know,
if we added feature X to Cabal, then Y packages could give up their
configure scripts.

This would be an excellent task for some volunteer to take on.
http://hackage.haskell.org/trac/hackage/ticket/482


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkitsdoesn't make any sense

2009-02-01 Thread Claus Reinke

A common standard would be useful, but OpenVG doesn't look
like ready soon. On the declarative side, there's also SVG..


Seems I've got to qualify that remark somewhat - apart from some
commercial/closed-source implementations, there is also a open
source ANSI C one (implemented on top of OpenGL, so it isn't
quite the same as direct OpenGL hardware support, but it offers
the same API and might be a good target for a Haskell binding..):

http://ivanleben.blogspot.com/2007/07/shivavg-open-source-ansi-c-openvg.html

Also, an OpenVG backend for Cairo, which seems to have so 
many backends that it might be the safest user-level choice?


http://lists.cairographics.org/archives/cairo/2008-January/012833.html
http://lists.cairographics.org/archives/cairo/2008-January/012840.html

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 1,000 packages, so let's build a few!

2009-02-01 Thread Sebastian Sylvan



--
From: Duncan Coutts duncan.cou...@worc.ox.ac.uk
Sent: Sunday, February 01, 2009 2:59 PM
To: Don Stewart d...@galois.com
Cc: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] 1,000 packages, so let's build a few!


On Sat, 2009-01-31 at 14:02 -0800, Don Stewart wrote:


not really :) e.g. my output on a Windows Vista system with GHC
6.10.1
cabal install sdl



Configuring SDL-0.5.4...
setup.exe: sh: runGenProcess: does not exist (No such file or 
 directory)



Isn't this missing C library dependencies, which cabal head now warns
about?


No, it's about packages using configure scripts which require MSYS on
Windows.

In principle we should be able to notice this while doing the package
dependency planning and report that we cannot install the package
because it needs sh.exe.



I wonder *why* packages need sh though? Isn't cabal supposed to allow you to 
do that kind of scripting in Haskell rather than calling into 
platform-dependent shell scripts? Are there any specific reasons why people 
feel the need to use sh?
If the package is unix-only for other reasons (e.g. bindings to X or 
whatever) then it's obviously not a problem, but it appears to me that 
there's lots of packages that should be portable in principle that won't 
build on windows because it needs to run sh...




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type and data constructors in CT

2009-02-01 Thread Gregg Reynolds
On Sat, Jan 31, 2009 at 3:14 PM, David Menendez d...@zednenem.com wrote:

 There's a paper about defining catamorphisms for GADTs and nested
 recursive types that models type constructors that way.

If you recall a title or author I'll google it.

 So this gives us two functors, but they operate on different things,
 and I don't see how to get from one to the other in CT terms.  Or
 rather, they're obviously related, but I don't see how to express that
 relation formally.

 Again, what sort of relationship are you thinking of? Data

Ok, good question.  I guess the problem I'm having is one of
abstraction management.  CT prefers to disregard the contents of its
objects, imposing a kind of blood-brain barrier between the object and
its internal structure.  Typical definitions of functor, for example,
make no reference to the elements of an object; a functor is just a
pair of morphisms, one taking objects to objects, the other morphisms
to morphisms.  This leaves the naive reader (i.e. me) to wonder how it
is that the internal stuff is related to the functor stuff.

For example: is it true that the object component of a functor
necessarily has a bundle of specific functions relating the internal
elements of the objects?  If so, is the object component merely an
abstraction of the bundle?  Or is it ontologically a different thing?
Hence my question about constructors in Haskell: the type constructor
operates on the opaque object (type); the data constructor operates on
the values (type as transparent object?).  A type and its values seem
to be different kinds of things, so there must be some way to
explicitly account for their relationship.  I figure either I'm
missing something about functors, or I need a more sophisticated
understanding of type-as-category.

Anyway, thanks to you and the other responders I have enough to go
away and figure it out (I hope).

Thanks,

gregg
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Antoine Latter
Funny story,

If I do the following three things, I get errors on my Intel Mac OS 10.5:

 * Build an executable with Cabal
 * Have the executable have a build-dep of pcre-light in the .cabal
 * use haskeline in the executable itself

I get crazy linker errors relating to haskeline and libiconv:

Shell output:

$ cabal clean && cabal configure && cabal build
cleaning...
Configuring test-0.0.0...
Preprocessing executables for test-0.0.0...
Building test-0.0.0...
[1 of 1] Compiling Main ( test.hs, dist/build/test/test-tmp/Main.o )
Linking dist/build/test/test ...
Undefined symbols:
  _iconv_open, referenced from:
  _s9Qa_info in libHShaskeline-0.6.0.1.a(IConv.o)
  _iconv_close, referenced from:
  _iconv_close$non_lazy_ptr in libHShaskeline-0.6.0.1.a(IConv.o)
  _iconv, referenced from:
  _sa0K_info in libHShaskeline-0.6.0.1.a(IConv.o)
ld: symbol(s) not found
collect2: ld returned 1 exit status


But all three above conditions need to be true - if I build using 'ghc
--make' everything works great, even if the executable imports
pcre-light and haskeline.  If I have build-deps on haskeline and
pcre-light, but don't actually import haskeline, everything also works
great.

Here are the files I've used:

test.hs:

import System.Console.Haskeline

main :: IO ()
main = print "Hello!"


test.cabal

Name:test
version: 0.0.0
cabal-version:   >= 1.2
build-type:  Simple

Executable test
main-is: test.hs
build-depends:   base, haskeline, pcre-light >= 0.3


Is there some way I need to be building haskeline on OS X to make this work?

Thanks,
Antoine

more details:


$ cabal --version
cabal-install version 0.6.0
using version 1.6.0.1 of the Cabal library



$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.10.0.20081007


links:
pcre-light: 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pcre-light
haskeline: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/haskeline
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Thomas Davie
This is caused by OS X's libiconv being entirely CPP macros, the FFI  
has nothing to get hold of.  IIRC there's a ghc bug report open for it.


Bob

On 1 Feb 2009, at 18:57, Antoine Latter wrote:




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Antoine Latter
On Sun, Feb 1, 2009 at 12:01 PM, Thomas Davie tom.da...@gmail.com wrote:
 This is caused by OS X's libiconv being entirely CPP macros, the FFI has
 nothing to get hold of.  IIRC there's a ghc bug report open for it.

 Bob


So why does it sometimes work?  I can write and compile executables
using haskeline, both with 'ghc --make' and 'cabal configure && cabal
build'.

This sounds like something I can patch haskeline to account for, then?

Thanks,

Antoine
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type and data constructors in CT

2009-02-01 Thread Gregg Reynolds
On Sat, Jan 31, 2009 at 4:26 PM, wren ng thornton w...@freegeek.org wrote:

 But a data constructor Dcon a is an /element/ mapping taking elements
 (values) of one type to elements of another type.  So it too can be
 construed as a functor, if each type itself is construed as a
 category.

 Actually no, it's not a functor. It's a (collection of) morphism(s). Let's
 again assume a single-argument Dcon for simplicity. The Haskell type |Dcon
 :: forall a. a -> Tcon a| is represented by a collection of morphisms
 |Dcon_{X} : X -> Tcon X| for each X in Ob(Hask).

Ok, I see I elided that step.  So my question is about the relation
between the individual (specialized) Dcon and the associated Tcon.
I.e. Dcon 3 is a value of type Tcon Int, inferred by the type system.
So it looks to me like the relation between the Tcon functor and the
Dcon functions is basically ad hoc.  You use Tcon to construct new
types; you can define any function you wish to map values into that
type.  The Haskell data notation is just a syntactic convenience for
doing both at once, but the functor has no necessary relation to the
functions.  (Flailing wildly...)

 It's important to remember that Tcon is the object half of an *endo*functor
 |Tcon : Hask -> Hask| and not just any functor. We can view the object half
 of an endofunctor as a collection of morphisms on a category; not
 necessarily real morphisms that exist within that category, but more like an
 overlay on a graph. In some cases, this overlay forms a subcategory (that
 is, they are all indeed morphisms in the original category). And this is
 what we have with data constructors: they are (pieces of) the image of the
 endofunctor from within the category itself.

(unknotting arms...) Uh, er, hmm. I'm still having abstraction
vertigo, since we have (I think) data constructors qua generic
thingees that work at the level of the category, and the same,
specialized to a type, qua functions that operate on the internals of
the categorical objects.  It's the moving back and forth from type and
value that escapes me, and I'm not sure I'm even describing the issue
properly.   I shall go away and thing about this and then write the
answer down.

Thanks much, you've given me a lot to think about.

-gregg
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type and data constructors in CT

2009-02-01 Thread Gregg Reynolds
On Sat, Jan 31, 2009 at 5:11 PM, Derek Elkins derek.a.elk...@gmail.com wrote:

 But a data constructor Dcon a is an /element/ mapping taking elements
 (values) of one type to elements of another type.  So it too can be
 construed as a functor, if each type itself is construed as a
 category.

 What are elements of a type?  How are you construing a type as a
 category?  One answer is that you are viewing types as sets.  Ignoring

That's a good question.  I've been hoping that viewing types as sets
is good enough but that doesn't seem to be the case.

 Most articles that apply CT to Haskell take one of two approaches.  They
 either talk about a category of Haskell types and functions with no
 explanation of what those actually are, i.e. the understanding that it
 behaves like (idealized) Haskell, or they refer to some existing
 approach to (idealized) semantics, e.g. sets or domains.  In either
 case, the meaning of the objects and arrows is effectively taken for
 granted.

You can say that again.

 An approach along the lines you are suggesting would be useful for a
 categorical semantics of Haskell, but it would just be one possible
 semantics among many.  For most of the aforementioned articles, the only
 value of such a thing would be to be able to formally prove that the
 constructions talked about exist (except that they usually don't for
 technical reasons.)  Usually in those articles, the readers are assumed
 to know Haskell and to not know much about category theory, so trying to
 explain types and functions to them categorically is unnecessary and
 obfuscatory.  It would make sense if you were going the other way,
 explaining Haskell to categorists.

Actually, what I have in mind using the concepts and terminology of CT
to explain Haskell to ordinary programmers.  The idea is not
necessarily to prove stuff, but to build the intuitions needed to
think about computing at a higher level of abstraction before
proceeding to Haskell.  I suspect this might be the quicker road to
Haskell mastery; even if not it's a fun writing project.  Call me
nutty, but I think the basic concepts of CT - category, functor,
natural transformation, monad - are actually pretty simple, even
though it took much gnashing of teeth for me to acquire a basic
intuition.  You don't have to be a mathematician to see the basic
structural ideas and get some idea of their usefulness, /if/ you have
appropriate pedagogical material.  Alas, I've found that clear, simple
explanations are scattered across lots of different documents.

Thanks for your help; I can see I've got a few more things to work on
(like type).

-gregg
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Antoine Latter
On Sun, Feb 1, 2009 at 12:04 PM, Antoine Latter aslat...@gmail.com wrote:
 On Sun, Feb 1, 2009 at 12:01 PM, Thomas Davie tom.da...@gmail.com wrote:
 This is caused by OS X's libiconv being entirely CPP macros, the FFI has
 nothing to get hold of.  IIRC there's a ghc bug report open for it.

 Bob


 So why does it sometimes work?  I can write and compile executables
 using haskeline, both with 'ghc --make' and 'cabal configure && cabal
 build'.

 This sounds like something I can patch haskeline to account for, then?



After a bit of digging, I saw this snippet in the .cabal file for the
iconv package on hackage:


  -- We need to compile via C because on some platforms (notably darwin)
  -- iconv is a macro rather than real C function. doh!
  ghc-options: -fvia-C -Wall


But it looks like the 'iconv' package is broken in the exact same way
for me - I get the same crazy linker errors.

Thanks,

Antoine
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Howto debug erros occuring just by linking to non-Haskell libraries

2009-02-01 Thread Mads Lindstrøm
Hi Haskeleers

I am trying to track down the cause of an error occurring in the Linux
version of wxHaskell. The error occurs before the main function is
executed. That is, it occurs if one imports the wxHaskell libraries, and
it occurs even if one does not execute any wxHaskell function.
Specifically, GLib complains that an initialization function has not been
called.

My guess is, that it has something to do with the wxWidgets (and gtk or
glib) libraries wxHaskell links to.

Some people have suggested it may be an error in the way GHC links with
C libraries. But that is all guesswork, I would need something more
solid if I were to file GHC bugreport.

Here is a simple Haskell program to illustrate the problem:

  module Main where

  import Graphics.UI.WX

  main = print "sdfjkl"

and when we compile and execute this program we get:

  (process:13986): GLib-GObject-CRITICAL **: gtype.c:2240: initialization 
assertion failed, use IA__g_type_init() prior to this function
  
  (process:13986): Gdk-CRITICAL **: gdk_cursor_new_for_display: assertion 
`GDK_IS_DISPLAY (display)' failed
  
  (process:13986): GLib-GObject-CRITICAL **: gtype.c:2240: initialization 
assertion failed, use IA__g_type_init() prior to this function
  
  (process:13986): Gdk-CRITICAL **: gdk_cursor_new_for_display: assertion 
`GDK_IS_DISPLAY (display)' failed
  sdfjkl

So even if I do not call any wxHaskell functions, just by linking to
wxHaskell, I get these errors.

As Jeroen Janssen reports here
http://www.mail-archive.com/wxhaskell-us...@lists.sourceforge.net/msg00540.html 
, he is experiencing the same error, but his wxPython programs work. Thus, 
wxHaskell (or GHC or ...) must do something different.

Could this error be related to the order which external libraries are
loaded in? If so, does anybody know how to specify the load order?

Can anybody help me with a good approach for debugging this error?


Greetings,

Mads Lindstrøm



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Complex C99 type in Foreign

2009-02-01 Thread Don Stewart
briqueabraque:
 Hi,
 
 Are there plans to include C99 'complex' type
 in Foreign, maybe as CFloatComplex, CDoubleComplex
 and CLongDoubleComplex? This seems an easy addition
 to the standard and would allow binding of a few
 interesting libraries, like GSL.
 

A separate library for new types to add to Foreign would be the easiest
way forward. Just put the foreign-c99 package on Hackage?
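For the CDoubleComplex case suggested above, such a package would mostly
consist of a Storable instance. A minimal sketch (an editorial
illustration, not code from any existing package; it relies on C99's
guarantee that a complex value is stored as two consecutive values of
the corresponding real type, real part first):

```haskell
import Control.Applicative ((<$>), (<*>))
import Foreign.Marshal.Alloc (alloca)
import Foreign.Storable

-- C99 lays out a 'double complex' as two consecutive doubles,
-- real part first, so marshalling needs no C helper functions.
data CDoubleComplex = CDoubleComplex
  { realPart :: !Double
  , imagPart :: !Double
  } deriving (Eq, Show)

instance Storable CDoubleComplex where
  sizeOf    _ = 2 * sizeOf (undefined :: Double)
  alignment _ = alignment (undefined :: Double)
  peek p      = CDoubleComplex
                  <$> peekByteOff p 0
                  <*> peekByteOff p (sizeOf (undefined :: Double))
  poke p (CDoubleComplex re im) = do
    pokeByteOff p 0 re
    pokeByteOff p (sizeOf (undefined :: Double)) im

-- Round-trip a value through raw memory as a sanity check.
main :: IO ()
main = alloca $ \p -> do
  poke p (CDoubleComplex 1.5 (-2.5))
  print =<< peek p
```

A real binding would also want Float and long double variants, and
matching CFloat/CDouble field types for portability, but the layout
trick is the same.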

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Thomas Davie


On 1 Feb 2009, at 19:43, Antoine Latter wrote:

After a bit of digging, I saw this snippet in the .cabal file for the
iconv package on hackage:



 -- We need to compile via C because on some platforms (notably  
darwin)

 -- iconv is a macro rather than real C function. doh!
 ghc-options: -fvia-C -Wall


But it looks like the 'iconv' package is broken in the exact same way
for me - I get the same crazy linker errors.


Yep, darwin is OS X :)

Bob


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Judah Jacobson
On Sun, Feb 1, 2009 at 10:04 AM, Antoine Latter aslat...@gmail.com wrote:
 On Sun, Feb 1, 2009 at 12:01 PM, Thomas Davie tom.da...@gmail.com wrote:
 This is caused by OS X's libiconv being entirely CPP macros, the FFI has
 nothing to get hold of.  IIRC there's a ghc bug report open for it.

 Bob


 So why does it sometimes work?  I can write and compile executables
 using haskeline, both with 'ghc --make' and 'cabal configure && cabal
 build'.

 This sounds like something I can patch haskeline to account for, then?


The OS X system libiconv is actually OK; it's the MacPorts libiconv
that has the CPP macros.  When the cabal package depends on pcre-light
it pulls in all of pcre-light's options; and like me, you probably
compiled pcre-light to link against /opt/local/lib.

To confirm this I ran 'ghc --make -L/opt/local/lib test.hs' on my OS X
machine and saw the same error as you.

Thanks for the report; I'm not sure of what the right solution is, but
I opened a ticket on Haskeline's bug tracker:

http://trac.haskell.org/haskeline/ticket/74

Feel free to add comments, propose a fix or suggest a workaround.

-Judah


[Haskell-cafe] pure crisis :)

2009-02-01 Thread Bulat Ziganshin
Hello haskell-cafe,

pure functional denotation for crisis:

(_|_)

-- 
Best regards,
 Bulat  mailto:bulat.zigans...@gmail.com



[Haskell-cafe] Network.UrlDisp

2009-02-01 Thread Pieter Laeremans
Hello,

Does anyone have some example usages of Network.UrlDisp?

thanks in advance,

Pieter

-- 
Pieter Laeremans pie...@laeremans.org


[Haskell-cafe] Re: Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Antoine Latter
On Sun, Feb 1, 2009 at 11:57 AM, Antoine Latter aslat...@gmail.com wrote:
 Funny story,

 If I do the following three things, I get errors on my Intel Mac OS 10.5:

  * Build an executable with Cabal
  * Have the executable have a build-dep of pcre-light in the .cabal
  * use haskeline in the executable itself

 I get crazy linker errors relating to haskeline and libiconv:

 Shell output:

 $ cabal clean && cabal configure && cabal build
 cleaning...
 Configuring test-0.0.0...
 Preprocessing executables for test-0.0.0...
 Building test-0.0.0...
 [1 of 1] Compiling Main ( test.hs, 
 dist/build/test/test-tmp/Main.o )
 Linking dist/build/test/test ...
 Undefined symbols:
  _iconv_open, referenced from:
  _s9Qa_info in libHShaskeline-0.6.0.1.a(IConv.o)
  _iconv_close, referenced from:
  _iconv_close$non_lazy_ptr in libHShaskeline-0.6.0.1.a(IConv.o)
  _iconv, referenced from:
  _sa0K_info in libHShaskeline-0.6.0.1.a(IConv.o)
 ld: symbol(s) not found
 collect2: ld returned 1 exit status
 

 But all three above conditions need to be true - if I build using 'ghc
 --make' everything works great, even if the executable imports
 pcre-light and haskeline.  If I have build-deps on haskeline and
 pcre-light, but don't actually import haskeline, everything also works
 great.

 Here are the files I've used:

 test.hs:

 import System.Console.Haskeline

 main :: IO ()
 main = print "Hello!"
 

 test.cabal

 Name:test
 version: 0.0.0
 cabal-version:   >= 1.2
 build-type:  Simple

 Executable test
main-is: test.hs
 build-depends:   base, haskeline, pcre-light >= 0.3
 

 Is there some way I need to be building haskeline on OS X to make this work?

 Thanks,
 Antoine

 more details:


 $ cabal --version
 cabal-install version 0.6.0
 using version 1.6.0.1 of the Cabal library
 


 $ ghc --version
 The Glorious Glasgow Haskell Compilation System, version 6.10.0.20081007
 

 links:
 pcre-light: 
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pcre-light
 haskeline: 
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/haskeline


For folks following along at home, the above example starts working
again if I hard-code some LD flags into my example .cabal file:


Name:test
version: 0.0.0
cabal-version:   >= 1.2
build-type:  Simple

Executable test
main-is: test.hs
build-depends:   base, iconv, pcre-light >= 0.3
ld-options:  -L/usr/lib -L/opt/local/lib


This way I make sure that I'm linking against the good version of
iconv instead of the MacPorts version.

I'm not sure what a general solution is.

-Antoine


Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread John Goerzen
Duncan Coutts wrote:
 On Sun, 2009-02-01 at 15:56 +0100, Niklas Broberg wrote:
 So in the next cabal-install release (which should be pretty soon now)
 configure will do the same thing and pick base 3 unless you specify
 build-depends: base >= 4.
 ... and so there will never be any incentive for these many packages
 to migrate to base-4, which also has consequences for packages that do
 want to use base-4, but also want to depend on such packages.
 
 Actually a package that uses base 4 can depend on other packages that
 use base 3. They all co-exist fine. This already happens now.
 
 I would suggest as a less stagnating approach to issue a warning/hint
 when a package with no explicit version dependency for base fails to
 build.
 
 So my plan is to make hackage require an upper bound on the version of
 base for all new packages. That should avoid the need to use the
 preferences hack the next time around.

Hrm.  I can see why you might do that, if you keep the old base around.

On the other hand, what if base 5 introduces less invasive changes?

We have had, for a while, a number of Haskell packages in Debian that
require a version of GHC greater than version x and less than version
x+1.  This has not worked out entirely well for us.  Granted, Cabal is
different because GHC 6.10 has two versions of base.

But still, I feel queasy about stating that my package won't work with
base 5 when I haven't even seen base 5 yet.

While we're at it, isn't this a more general problem that could occur
elsewhere?  Does base really need to be a special case?  What if, say,
utf8-string, binary, or some other commonly-used package had to break API?

 As for what mechanisms we use to persuade package authors to use base 4
 over base 3 for new releases, I'm open to suggestions.

Well, I'd say this will be difficult to achieve for at least two years,
since GHC 6.8 is in current and future shipping versions of numerous
Linux distributions, and such may well be the version of GHC most
readily accessible to quite a few users.

I am taking the approach of supporting *both*, but that is certainly not
the most simple approach (it's not all that complex either, once you
know how).
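For reference, one way to support both looks roughly like this (a hypothetical .cabal fragment: a flag selects the base version, and a CPP define lets the code pick the matching exception API; cabal-install can flip the flag automatically during dependency resolution):

```cabal
flag base4
  description: build against base >= 4 (new Control.Exception)
  default:     True

library
  -- hypothetical module name
  exposed-modules: Data.Example
  extensions:      CPP
  if flag(base4)
    build-depends: base >= 4 && < 5
    cpp-options:   -DBASE4
  else
    build-depends: base < 4
```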

-- John


Re: [Haskell-cafe] Re: Why binding to existing widget toolkitsdoesn't make any sense

2009-02-01 Thread Peter Verswyvelen
Nice! Well well well, we have exciting times ahead!

If this implementation is somewhat stable, I guess we need to put a binding
on Hackage :)

Of course Microsoft provides the new Direct2D API for something similar, but
having a multi platform solution is preferable.

I also played a bit with an implementation for rendering conics on the GPU:
http://staffwww.itn.liu.se/~stegu/GLSL-conics/

That runs about 100 times faster than software-rendered Cairo or GDI+ on
my PC.

On Sun, Feb 1, 2009 at 6:21 PM, Claus Reinke claus.rei...@talk21.com wrote:

 A common standard would be useful, but OpenVG doesn't look
 like it will be ready soon. On the declarative side, there's also SVG..


 Seems I've got to qualify that remark somewhat - apart from some
 commercial/closed-source implementations, there is also a open
 source ANSI C one (implemented on top of OpenGL, so it isn't
 quite the same as direct OpenGL hardware support, but it offers
 the same API and might be a good target for a Haskell binding..):


 http://ivanleben.blogspot.com/2007/07/shivavg-open-source-ansi-c-openvg.html

 Also, an OpenVG backend for Cairo, which seems to have so many backends
 that it might be the safest user-level choice?

 http://lists.cairographics.org/archives/cairo/2008-January/012833.html
 http://lists.cairographics.org/archives/cairo/2008-January/012840.html

 Claus




Re: [Haskell-cafe] pure crisis :)

2009-02-01 Thread Paul Johnson

Bulat Ziganshin wrote:

Hello haskell-cafe,

pure functional denotation for crisis:

(_|_)
  

See also:
  http://www.haskell.org/haskellwiki/Humor/Enron

  
http://paulspontifications.blogspot.com/2008/09/why-banks-collapsed-and-how-paper-on.html


Paul.



Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Antony Courtney
Hi Conal,

On Sat, Jan 31, 2009 at 12:10 AM, Conal Elliott co...@conal.net wrote:
 Hopefully some enterprising Haskell hacker will wrap Cairo in a nice
 purely functional API.

 Jefferson Heard is working on such a thing, called Hieroglyph. [...]

 In the process, I realized more clearly that the *very goal* of making a
 purely functional wrapper around an imperative library leads to muddled
 thinking.  It's easy to hide the IO without really eliminating it from the
 semantics, especially if the goal is defined in terms of an IO-based
 library.  Much harder, and I think much more rewarding, is to design
 semantically, from the ground up, and then figure out how to implement the
 elegant semantics with the odds  ends at hand (like Cairo, OpenGL, GPU
 architectures, ...).

Exciting!

I was very much trying to achieve this with Haven back in 2002:

   http://www.haskell.org/haven

As the slides on that page state pretty explicitly, I tried to focus
on the semantics and to design a purely functional representation of a
vector graphics scene that was not tied to any particular
implementation.  My primary claim for success is that the
representation of Picture in Haven type checks and doesn't appeal to
IO; IO only creeps in when we attempt to render a Picture.

Does the Haven API live up to your goal of semantic purity for a
vector graphics library?  If not, where specifically does it fall
short?

I look forward to seeing and reading more about Hieroglyph.  The
typography and visual presentation of Jefferson's online booklet looks
fantastic.  A high quality, purely functional vector graphics API for
Haskell with portable and robust implementations will be a great thing
for the Haskell world.

Regards,

-Antony


Re: [Haskell-cafe] ANN: HDBC v2.0 now available

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 13:58 -0600, John Goerzen wrote:

  So my plan is to make hackage require an upper bound on the version of
  base for all new packages. That should avoid the need to use the
  preferences hack the next time around.
 
 Hrm.  I can see why you might do that, if you keep the old base around.
 
 On the other hand, what if base 5 introduces less invasive changes?

Then it would not be base 5, it would be 4.something. That's the point
of the versioning policy.

http://haskell.org/haskellwiki/Package_versioning_policy

 We have had, for awhile, a number of Haskell packages in Debian that
 require a version of GHC greater than version x and less than version
 x+1.  This has not worked out entirely well for us.  Granted, Cabal is
 different because GHC 6.10 has two versions of base.
 
 But still, I feel queasy about stating that my package won't work with
 base 5 when I haven't even seen base 5 yet.

But it's easy to change it when you do see base 5, whereas it's annoying
for everyone when it fails messily. If you take the conservative approach
it's clear to end users and everyone else. We can track the status much
more easily.

 While we're at it, isn't this a more general problem that could occur
 elsewhere?  Does base really need to be a special case?  What if, say,
 utf8-string, binary, or some other commonly-used package had to break API?

That is what the package versioning policy is for. Any package can
opt-in to following it. In which case other packages that depend on said
package can rely on it and use appropriate version constraints.
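Concretely, since the policy makes A.B the major version, a dependent package can guard against API breakage with a bound like (hypothetical fragment):

```cabal
-- accepts patch and minor releases of base 4.0.x,
-- rejects the next API-breaking major version
build-depends: base >= 4.0 && < 4.1
```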

All the core and platform packages follow the versioning policy.

The plan is to add tool support to let packages declare that they follow
the policy (eg: version-policy: PVP-1) and then we can check that the
package really is following the policy and we can also make helpful
suggestions to other packages that depend on it (ie tell them what
version constraints to use).

http://hackage.haskell.org/trac/hackage/ticket/434#comment:1

  As for what mechanisms we use to persuade package authors to use base 4
  over base 3 for new releases, I'm open to suggestions.
 
 Well, I'd say this will be difficult to achieve for at least two years,
 since GHC 6.8 is in current and future shipping versions of numerous
 Linux distributions, and such may well be the version of GHC most
 readily accessible to quite a few users.

However I'm not sure that it will be possible for ghc-6.12 to support
base 3, 4 and 5. Some kinds of changes (especially to types) make
supporting multiple versions impossible.

 I am taking the approach of supporting *both*, but that is certainly not
 the most simple approach (it's not all that complex either, once you
 know how).

I'm not advocating dropping support for base 3. So I guess I am
advocating supporting both. I'm also not sure we need everyone to switch
now, but it'll become more important when we get nearer to ghc-6.12 next
autumn.

Duncan



[Haskell-cafe] System.Posix.Files.isDirectory and System.Posix.Files.isSymbolicLink

2009-02-01 Thread Erik de Castro Lopo
Hi all,

The following code creates a symbolic link in the current directory
and then uses System.Posix.Files.getFileStatus to get the status of
the link.

However, isDirectory returns True and isSymbolicLink returns False
which is very different from what the stat() system call on a POSIX
system would return. I consider this a bug.

I'm using ghc-6.8.2. Has this been fixed in a later version?

Cheers,
Erik


module Main where

import qualified System.Directory
import qualified System.Posix.Files

main :: IO ()
main = do
    let linkName = "tmp-link"
    cwd <- System.Directory.getCurrentDirectory
    System.Posix.Files.createSymbolicLink "/tmp" linkName
    stat <- System.Posix.Files.getFileStatus linkName
    if System.Posix.Files.isDirectory stat
        then putStrLn "Is a directory?"
        else putStrLn "Not a directory."
    if System.Posix.Files.isSymbolicLink stat
        then putStrLn "Is a symlink"
        else putStrLn "Not a symlink?"



-- 
-
Erik de Castro Lopo
-
Anyone who considers arithmetical methods of producing random
digits is, of course, in a state of sin. - John Von Neumann (1951)


Re: [Haskell-cafe] Takusen 0.8.3 install problems

2009-02-01 Thread Alistair Bayley
  You can probably just remove the Setup.lhs and build with defaults
  (we're doing that at galois, we use Takusen).
 
  -- Don

I'm surprised this works, unless you also change the imports of
Control.Exception to Control.OldException. The new exception module is
part of the reason it's taking me a while to port to 6.10.1. Nearly
there though; only the haddock failures to fix and then we can
release.

Alistair


Re: [Haskell-cafe] Takusen 0.8.3 install problems

2009-02-01 Thread Don Stewart
alistair:
   You can probably just remove the Setup.lhs and build with defaults
   (we're doing that at galois, we use Takusen).
  
   -- Don
 
 I'm surprised this works, unless you also change the imports of
 Control.Exception to Control.OldException. The new exception module is
 part of the reason it's taking me a while to port to 6.10.1. Nearly
 there though; only the haddock failures to fix and then we can
 release.

build-depends: base < 4


Re: [Haskell-cafe] System.Posix.Files.isDirectory and System.Posix.Files.isSymbolicLink

2009-02-01 Thread Erik de Castro Lopo
Erik de Castro Lopo wrote:

 The following code creates a symbolic link in the current directory
 and then uses System.Posix.Files.getFileStatus to get the status of
 the link.

If I use getSymbolicLinkStatus instead of getFileStatus I get the
result I expect. However, using getSymbolicLinkStatus instead of
getFileStatus is highly counter-intuitive.

Furthermore System.Directory.doesDirectoryExist seems to use
getFileStatus so that if one tries to walk a directory tree one will
also follow symlinks and if those links are circular you get an infinite
loop.
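A minimal sketch of the workaround (getSymbolicLinkStatus wraps lstat(), so it reports on the link itself rather than on its target):

```haskell
import qualified System.Posix.Files as Files

-- True only for a real directory, never for a symlink that
-- happens to point at one.
isRealDirectory :: FilePath -> IO Bool
isRealDirectory path = do
    stat <- Files.getSymbolicLinkStatus path  -- lstat(), not stat()
    return (Files.isDirectory stat)
```

A tree walker built on a check like this will not follow (possibly circular) symlinks.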

Erik
-- 
-
Erik de Castro Lopo
-
Christianity: The belief that some cosmic Jewish Zombie can make you live
forever if you symbolically eat his flesh and telepathically tell him that
you accept him as your master, so he can remove an evil force from your
soul that is present in humanity because a rib-woman was convinced by a
talking snake to eat from a magical tree.
-- http://uncyclopedia.org/wiki/Christianity


[Haskell-cafe] cabal list can't find Glob.cabal file?

2009-02-01 Thread Dougal Stanton
I get a curious message when trying to run 'cabal list':

$ cabal list

  omit some lines...
  ..
  Latest version available: 0.3
  Category: Network
  Synopsis: Pure bindings for the MaxMind IP database.
  License:  OtherLicense

cabal: Couldn't read cabal file ./Glob/0.1/Glob.cabal


Any ideas? As far as Hackage and the local index are concerned, 0.1
isn't even a recent version of Glob. Why should it be looking for the
file?

cabal-install version 0.5.1
using version 1.4.0.1 of the Cabal library



-- 
Dougal Stanton
dou...@dougalstanton.net // http://www.dougalstanton.net


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Jeff Heard
Everyone, I'll be releasing Hieroglyph this week.  Right now I'm unit
testing and I've been out of town this past weekend without much
opportunity to work on it.  It's not yet a complete functional
re-working of Cairo -- for instance, right now patterns aren't
supported, and Pango layouts aren't either -- but it should become so.
 I'll also be forking Hieroglyph to develop a complete,
pure-functional 2D graphics toolkit.

-- Jeff

2009/1/31 Peter Verswyvelen bugf...@gmail.com:
 Hi Conal,
 Do you have any links to this interesting work of Jefferson Heard? Blogs or
 something? I failed to Google it, I mainly found his OpenGL TrueType
 bindings on Hackage and his
 beautiful http://bluheron.europa.renci.org/docs/BeautifulCode.pdf
 Regarding semantics, modern GPUs are able to render 2D graphics (e.g. filled
 or stroked curves) as real functions / relations; you don't need fine
 tessellation anymore since these computational monsters have become so fast
 that per pixel inside / outside testing are feasible now. It's basically a
 simple form of real-time ray-tracing :)  A quick search revealed another
 paper using these
 techniques http://alice.loria.fr/publications/papers/2005/VTM/vtm.pdf
 Cheers,
 Peter
 2009/1/31 Conal Elliott co...@conal.net

 Hi Antony,


 Hopefully some enterprising Haskell hacker will wrap Cairo in a nice
 purely functional API.

 Jefferson Heard is working on such a thing, called Hieroglyph.  Lately
 I've been helping him simplify the design and shift it toward a clear,
 composable semantic basis, i.e. genuinely functional (as in the Fruit
 paper), meaning that it can be understood  reasoned about in precise terms
 via model that is much simpler than IO.

 In the process, I realized more clearly that the *very goal* of making a
 purely functional wrapper around an imperative library leads to muddled
 thinking.  It's easy to hide the IO without really eliminating it from the
 semantics, especially if the goal is defined in terms of an IO-based
 library.  Much harder, and I think much more rewarding, is to design
 semantically, from the ground up, and then figure out how to implement the
 elegant semantics with the odds  ends at hand (like Cairo, OpenGL, GPU
 architectures, ...).

 Regards,

 - Conal

 On Fri, Jan 30, 2009 at 1:56 PM, Antony Courtney
 antony.court...@gmail.com wrote:

 On Fri, Jan 30, 2009 at 4:25 PM, Bryan O'Sullivan b...@serpentine.com
 wrote:
  On Fri, Jan 30, 2009 at 1:11 PM, Antony Courtney
  antony.court...@gmail.com
  wrote:
 
  A 2-D vector graphics library such as Java2D ( or Quartz on OS/X or
  GDI+ on Windows ) supports things like computing tight bounding
  rectangles for arbitrary shapes, hit testing for determining whether a
  point is inside or outside a shape and constructive area geometry for
  shape compositing and clipping without dropping down to a raster
  representation.
 
  These are the kinds of capabilities provided by Cairo, which is very
  pleasant to use (PDF-style imaging model) and quite portable. There are
  already Cairo bindings provided by gtk2hs, too.
 

 Hi Bryan,

 Nice to hear from you!  Been a while...

 Just had a quick look and it does indeed appear that Cairo now
 supports some of the features I mention above (bounds calculations and
 hit testing).  Cairo has clearly come a long way from when I was last
 working on Fruit and Haven in 2003/2004;  back then it looked like it
 only provided a way to render or rasterize vector graphics on to
 bitmap surfaces and not much else.

 It's not clear to me if the Cairo API in its current form supports
 vector-level clipping or constructive area geometry, and it looks like
 the API is still pretty render-centric (e.g. is it possible to obtain
 the vector representation of rendering text in a particular font?).
 That might make it challenging to use Cairo for something like the
 Haven API, but maybe one can live without that level of generality.

 In any case: delighted to see progress on this front!  Hopefully some
 enterprising Haskell hacker will wrap Cairo in a nice purely
 functional API.

-Antony


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Jeff Heard
Thanks, Peter, for the paper link...  I'll look at this, as it's
exactly what it sounds like I want for the future of Hieroglyph...

2009/1/31 Peter Verswyvelen bugf...@gmail.com:
 [...]


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Jeff Heard
Oh, and by functional, I mean that it isn't a complete re-wrapping of
the library, not that you have IO creeping in all over the place.
Pardon my unclearness.

On Sun, Feb 1, 2009 at 6:10 PM, Jeff Heard jefferson.r.he...@gmail.com wrote:
 [...]


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Peter Verswyvelen
Cool! Looking forward to it.
On Mon, Feb 2, 2009 at 12:10 AM, Jeff Heard jefferson.r.he...@gmail.comwrote:

 Everyone, I'll be releasing Hieroglyph this week.  Right now I'm unit
 testing and I've been out of town this past weekend without much
 opportunity to work on it.  It's not yet a complete functional
 re-working of Cairo -- for instance, right now patterns aren't
 supported, and Pango layouts aren't either -- but it should become so.
  I'll also be forking Hieroglyph to develop a complete,
 pure-functional 2D graphics toolkit.

 -- Jeff

 2009/1/31 Peter Verswyvelen bugf...@gmail.com:
  Hi Conal,
  Do you have any links to this interesting work of Jefferson Heard? Blogs
 or
  something? I failed to Google it, I mainly found his OpenGL TrueType
  bindings on Hackage and his
  beautiful http://bluheron.europa.renci.org/docs/BeautifulCode.pdf
  Regarding semantics, modern GPUs are able to render 2D graphics (e.g.
 filled
  or stroked curves) as real functions / relations; you don't need fine
  tessellation anymore since these computational monsters have become so
 fast
  that per pixel inside / outside testing are feasible now. It's basically
 a
  simple form of real-time ray-tracing :)  A quick search revealed another
  paper using these
  techniques http://alice.loria.fr/publications/papers/2005/VTM/vtm.pdf
  Cheers,
  Peter
  2009/1/31 Conal Elliott co...@conal.net
 
  Hi Antony,
 
 
  Hopefully some enterprising Haskell hacker will wrap Cairo in a nice
  purely functional API.
 
  Jefferson Heard is working on such a thing, called Hieroglyph.  Lately
  I've been helping him simplify the design and shift it toward a clear,
  composable semantic basis, i.e. genuinely functional (as in the Fruit
  paper), meaning that it can be understood & reasoned about in precise
 terms
  via a model that is much simpler than IO.
 
  In the process, I realized more clearly that the *very goal* of making a
  purely functional wrapper around an imperative library leads to muddled
  thinking.  It's easy to hide the IO without really eliminating it from
 the
  semantics, especially if the goal is defined in terms of an IO-based
  library.  Much harder, and I think much more rewarding, is to design
  semantically, from the ground up, and then figure out how to implement
 the
  elegant semantics with the odds & ends at hand (like Cairo, OpenGL, GPU
  architectures, ...).
 
  Regards,
 
  - Conal
 
  On Fri, Jan 30, 2009 at 1:56 PM, Antony Courtney
  antony.court...@gmail.com wrote:
 
  On Fri, Jan 30, 2009 at 4:25 PM, Bryan O'Sullivan b...@serpentine.com
  wrote:
   On Fri, Jan 30, 2009 at 1:11 PM, Antony Courtney
   antony.court...@gmail.com
   wrote:
  
   A 2-D vector graphics library such as Java2D ( or Quartz on OS/X or
   GDI+ on Windows ) supports things like computing tight bounding
   rectangles for arbitrary shapes, hit testing for determining whether
 a
   point is inside or outside a shape and constructive area geometry
 for
   shape compositing and clipping without dropping down to a raster
   representation.
  
   These are the kinds of capabilities provided by Cairo, which is very
   pleasant to use (PDF-style imaging model) and quite portable. There
 are
   already Cairo bindings provided by gtk2hs, too.
  
 
  Hi Bryan,
 
  Nice to hear from you!  Been a while...
 
  Just had a quick look and it does indeed appear that Cairo now
  supports some of the features I mention above (bounds calculations and
  hit testing).  Cairo has clearly come a long way from when I was last
  working on Fruit and Haven in 2003/2004;  back then it looked like it
  only provided a way to render or rasterize vector graphics on to
  bitmap surfaces and not much else.
 
  It's not clear to me if the Cairo API in its current form supports
  vector-level clipping or constructive area geometry, and it looks like
  the API is still pretty render-centric (e.g. is it possible to obtain
  the vector representation of rendering text in a particular font?).
  That might make it challenging to use Cairo for something like the
  Haven API, but maybe one can live without that level of generality.
 
  In any case: delighted to see progress on this front!  Hopefully some
  enterprising Haskell hacker will wrap Cairo in a nice purely
  functional API.
 
 -Antony
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] UDP

2009-02-01 Thread Andrew Coppin

John Van Enk wrote:

Try something like this:

module Main where

import Network.Socket

main = withSocketsDo $ do
-- Make a UDP socket
s <- socket AF_INET Datagram defaultProtocol

-- We want to listen on all interfaces (0.0.0.0)
bindAddr <- inet_addr "0.0.0.0"

-- Bind to 0.0.0.0:3
bindSocket s (SockAddrInet 3 bindAddr)

-- Read a message of max length 1000 from someone
(msg, len, from) <- recvFrom s 1000

putStrLn $ "Got the following message from " ++ show from
putStrLn msg

Does this help? As Stephan said, you missed the bind step.


That works great, thanks.

Yeah, I just assumed that the bind step was only necessary for
connection-oriented protocols. (Interestingly enough, the matching
"send" program doesn't bind at all, yet seems to work fine...)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Complex C99 type in Foreign

2009-02-01 Thread Maurício

Are there plans to include C99 'complex' type
in Foreign, maybe as CFloatComplex, CDoubleComplex
and CLongDoubleComplex? This seems an easy addition
to the standard and would allow binding of a few
interesting libraries, like GSL.


A separate library for new types to add to Foreign would be the easiest
way forward. Just put the foreign-c99 package on Hackage?


As far as I know, this is not possible. (I tried for
a long time to do that, actually, until I realized
it could not be done.)

If it's not true, i.e., I could actually have some
arbitrary sized parameter as argument to a function
or as a return value (and not its pointer), what
did I see wrong? I understand only Foreign.C.C*
types or, for all a, Foreign.Ptr.Ptr a can be used
like that.

Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Complex C99 type in Foreign

2009-02-01 Thread Don Stewart
briqueabraque:
 Are there plans to include C99 'complex' type
 in Foreign, maybe as CFloatComplex, CDoubleComplex
 and CLongDoubleComplex? This seems an easy addition
 to the standard and would allow binding of a few
 interesting libraries, like GSL.
 
 A separate library for new types to add to Foreign would be the easiest
 way forward. Just put the foreign-c99 package on Hackage?
 
 As far as I know, this is not possible. (I tried for
 a long time to do that, actually, until I reallized
 it could not be done.)
 
 If it's not true, i.e., I could actually have some
 arbitrary sized parameter as argument to a function
 or as a return value (and not its pointer), what
 did I see wrong? I understand only Foreign.C.C*
 types or, for all a, Foreign.Ptr.Ptr a can be used
 like that.

Oh, you mean you need to teach the compiler about unboxed complex types?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] cabal list can't find Glob.cabal file?

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 23:04 +, Dougal Stanton wrote:
 I get a curious message when trying to run 'cabal list':
 
 $ cabal list
 
   omit some lines...
   ..
   Latest version available: 0.3
   Category: Network
   Synopsis: Pure bindings for the MaxMind IP database.
   License:  OtherLicense
 
 cabal: Couldn't read cabal file ./Glob/0.1/Glob.cabal

Cabal-1.4 cannot read some .cabal files that use new syntactic
constructs added in Cabal-1.6. The cabal-install program does not handle
this fact very gracefully.

 Any ideas? As far as Hackage and the local index are concerned, 0.1
 isn't even a recent version of Glob. Why should it be looking for the
 file?

It's a good point, it should not need to parse the .cabal file of the
older version. However that's just a performance thing, the newer
versions will have the same issue.

 cabal-install version 0.5.1
 using version 1.4.0.1 of the Cabal library

The solution is to upgrade:

$ cabal install cabal-install

$ cabal --version
cabal-install version 0.6.0
using version 1.6.0.1 of the Cabal library 


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] System.Posix.Files.isDirectory and System.Posix.Files.isSymbolicLink

2009-02-01 Thread Duncan Coutts
On Mon, 2009-02-02 at 09:49 +1100, Erik de Castro Lopo wrote:
 Hi all,
 
 The following code creates a symbolic link in the current directory
 and then uses System.Posix.Files.getFileStatus to get the status of
 the link.
 
 However, isDirectory returns True and isSymbolicLink returns False
 which is very different from what the stat() system call on a POSIX
 system would return. I consider this a bug.

No, it is the correct POSIX behaviour. You are thinking of lstat() which
is what getSymbolicLinkStatus uses. The getFileStatus function calls
stat().

The documentation makes this clear:
http://www.haskell.org/ghc/docs/latest/html/libraries/unix/System-Posix-Files.html#5

getFileStatus :: FilePath -> IO FileStatus

getFileStatus path gets the FileStatus information (user
ID, size, access times, etc.) for the file path.

Note: calls stat.


getSymbolicLinkStatus :: FilePath -> IO FileStatus

Acts as getFileStatus except when the FilePath refers to a
symbolic link. In that case the FileStatus information of the
symbolic link itself is returned instead of that of the file it
points to.

Note: calls lstat.
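The difference is easy to demonstrate directly. Here is a short sketch using the unix package (it creates and removes a scratch file and symlink in the current directory; the file names are made up for the example):

```haskell
import System.Posix.Files

main :: IO ()
main = do
  writeFile "stat-demo-target" ""
  createSymbolicLink "stat-demo-target" "stat-demo-link"
  followed <- getFileStatus "stat-demo-link"          -- stat(): follows the link
  direct   <- getSymbolicLinkStatus "stat-demo-link"  -- lstat(): the link itself
  print (isSymbolicLink followed, isSymbolicLink direct)  -- (False,True)
  removeLink "stat-demo-link"
  removeLink "stat-demo-target"
```

The first status describes the regular file the link points at, so isSymbolicLink is False there and True only for the lstat-style call.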


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Duncan Coutts
[re-sending the cc to -cafe as I sent from the wrong address the first time]

On Sun, 2009-02-01 at 12:43 -0600, Antoine Latter wrote:

 After a bit of digging, I saw this snippet in the .cabal file for the
 iconv package on hackage:
 
 
   -- We need to compile via C because on some platforms (notably darwin)
   -- iconv is a macro rather than real C function. doh!
   ghc-options: -fvia-C -Wall
 
 
 But it looks like the 'iconv' package is broken in the exact same way
 for me - I get the same crazy linker errors.

Yes, the workaround of using -fvia-C stopped working in ghc-6.10. I will
have to adapt the iconv package to use a C wrapper.

Someone said that it is just the macports version of iconv that has this
problem but I don't understand that at all. If we're using default
ghc/gcc then we should not be looking in any non-standard include
directories at all.

The other thing that makes no sense is that
the /usr/lib/libiconv.dywhatever file apparently contains both
_iconv_open and _libiconv_open so why can't we link to the ordinary
_iconv_open one?

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Network.UrlDisp

2009-02-01 Thread Sterling Clover
I'm not working on hvac at the moment, but the UrlDisp code was split  
out of it by Artyom Shalkhakov. So some of the old example code from  
hvac should still be of interest. The following, for example, was an  
implementation of a threaded message board:


http://code.haskell.org/~sclv/hvac/Examples/hvac-board.hs

Note that this code uses the infix combinators. After a chorus of
"it's worse than Perl!" went up, standard message dispatch
combinators were added as well. I explained briefly how they worked
here:


http://fmapfixreturn.wordpress.com/2008/05/21/some-concepts-behind-hvac/#comment-94


The validation stuff used both in the code example and the blog post  
is not part of UrlDisp, by the way.


Cheers,
S.

On Feb 1, 2009, at 2:46 PM, Pieter Laeremans wrote:


Hello,

Has anyone some exampe usages of : Network.UrlDisp ?

thanks in advance,

Pieter

--
Pieter Laeremans pie...@laeremans.org
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Stephen Tetley
Hello

I've written a Haskell binding to the Shiva-VG OpenVG implementation.

Hopefully it should appear on Hackage in the next couple of days - but
for the moment it is available here:


http://slackwise.org/spt/files/OpenVG-0.1.tar.gz

I've tested it on MacOSX leopard and Windows with MinGW / MSys, if
anyone could check it on Linux that would be handy.
Thanks.


Best regards

Stephen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Judah Jacobson
On Sun, Feb 1, 2009 at 4:07 PM, Duncan Coutts
duncan.cou...@worc.ox.ac.uk wrote:
 [re-sending the cc to -cafe as I sent from the wrong address the first time]

 On Sun, 2009-02-01 at 12:43 -0600, Antoine Latter wrote:

 After a bit of digging, I saw this snippet in the .cabal file for the
 iconv package on hackage:

 
   -- We need to compile via C because on some platforms (notably darwin)
   -- iconv is a macro rather than real C function. doh!
   ghc-options: -fvia-C -Wall
 

 But it looks like the 'iconv' package is broken in the exact same way
 for me - I get the same crazy linker errors.

 Yes, the workaround of using -fvia-C stopped working in ghc-6.10. I will
 have to adapt the iconv package to use a C wrapper.

 Someone said that it is just the macports version of iconv that has this
 problem but I don't understand that at all. If we're using default
 ghc/gcc then we should not be looking in any non-standard include
 directories at all.

The pcre library isn't installed by default, so an OS X user might
get it from MacPorts (which installs it in /opt/local/lib).  And when
building the Haskell pcre-light package, they'd do something like

cabal install pcre-light --extra-lib-dirs=/opt/local/lib

But then any other package that depends on pcre-light will also get
the same linker option.

 The other thing that makes no sense is that
 the /usr/lib/libiconv.dywhatever file apparently contains both
 _iconv_open and _libiconv_open so why can't we link to the ordinary
 _iconv_open one?

The problem is that with -L/opt/local/lib (which is now passed to any
package depending on pcre-light), the linker uses
/opt/local/lib/libiconv.*  and ignores /usr/lib/libiconv.* altogether.

Hope that helps explain it better,
-Judah
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Question re: denotational semantics in SPJ's Implementation of FPLs book

2009-02-01 Thread Bernie Pope

On 01/02/2009, at 8:49 PM, Devin Mullins wrote:

I'm reading SPJ's "The Implementation of Functional Programming
Languages", and on page 32, it defines the multiplication operator in
its extended lambda calculus as:

 Eval[[ * ]] a   b   = a × b
 Eval[[ * ]] _|_ b   = _|_
 Eval[[ * ]] a   _|_ = _|_

Is that complete? What are the denotational semantics of * applied to
things not in the domain of the multiplication operator ×, such as TRUE
(in the extended lambda defined by this book) and functions (in normal
lambda calc)? Do these things just eval to bottom? Or is this just to be
ignored, since the extended calculus will only be applied to properly
typed expressions in the context of this book?


Hi Devin,

I don't think that the section of the book in question is intended to  
be rigorous (page 30: "this account is greatly simplified").


As noted in the text, the domain of values produced by Eval is not  
specified, so it is hard to be precise (though there is a reference to  
Scott's Domain Theory).


However, I agree with you that the equation for multiplication looks  
under-specified. Obviously any reasonable domain will include
(non-bottom) values on which multiplication is not defined. Without any
explicit quantification, it is hard to say what values 'a' and 'b'  
range over. It is possible that they range over the entire domain, or  
some proper subset. The text also assumes we all agree on the  
definition of the 'x' (multiplication) function. Though it ought to be  
specified in a more rigorous exposition.


As you suggest, it may be possible to work around this by imposing  
some kind of typing restriction to the language, though this does not  
appear to be stated in this particular section of the book. Perhaps  
this is mentioned elsewhere; I have not checked.


Another possible solution is to sweep it all into the definition of  
the 'x' function. But if that were so, why bother handling bottom  
explicitly, but not the other possible cases?


I guess the best we can say is that the meaning of multiplication  
applied to non-bottom non-numeric arguments is not defined (in this  
section of the text).
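For what it's worth, GHC's own (*) on Int agrees with the two bottom equations: if either argument is bottom, so is the product. A small illustrative sketch (mine, not from the book), using exceptions as an observable stand-in for _|_:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Force a value to WHNF; report "_|_" if that raises an exception.
describe :: Int -> IO String
describe x = do
  r <- try (evaluate x) :: IO (Either SomeException Int)
  return (either (const "_|_") show r)

main :: IO ()
main =
  mapM_ (\x -> describe x >>= putStrLn)
        [undefined * 2, 2 * undefined, 2 * 3]
```

This prints _|_ for the first two products and 6 for the last, matching the strict equations (though, as discussed, it says nothing about non-numeric arguments, which the type system rules out here).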


Cheers,
Bernie.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Complex C99 type in Foreign

2009-02-01 Thread Maurício

Are there plans to include C99 'complex' type
in Foreign, maybe as CFloatComplex, CDoubleComplex
and CLongDoubleComplex? (...)



A separate library for new types to add to Foreign would be the easiest
way forward. (...)



If it's not true, i.e., I could actually have some
arbitrary sized parameter as argument to a function
or as a return value (and not its pointer), what
did I see wrong? (...)



Oh, you mean you need to teach the compiler about
unboxed complex types?

I think so. Take this, for instance:


#include <complex.h>
double complex ccos(double complex z);
float complex ccosf(float complex z);
long double complex ccosl(long double complex z);


To bind to ccos* functions I believe I would need
a native CComplex. The GSL numeric library also
makes use of something like that, although it
defines its own 'complex' structure.

I'm writing a binding to the standard C library,
and at the same time collecting a list of standard
types that could be usefull in Foreign. I'm thinking
about writing a ticket to ask for inclusion of a few
of them. 'int32_t' and 'int64_t' as, say, CInt32 and
CInt64 could be nice for portability.
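In the meantime one can approximate the C99 layout by hand: a 'double complex' is stored as two consecutive doubles, so a Complex Double can be marshalled through a pointer and the foreign functions bound in a pointer-passing style. A base-only sketch (the helper names here are made up for illustration):

```haskell
import Data.Complex (Complex ((:+)))
import Foreign (Ptr, allocaArray, peekArray, pokeArray)

-- Write a Complex Double as two consecutive doubles (the C99 storage
-- layout for 'double complex') and hand the pointer to an action.
withComplex :: Complex Double -> (Ptr Double -> IO a) -> IO a
withComplex (re :+ im) act = allocaArray 2 $ \p -> do
  pokeArray p [re, im]
  act p

-- Read a Complex Double back from such a pointer.
peekComplex :: Ptr Double -> IO (Complex Double)
peekComplex p = do
  [re, im] <- peekArray 2 p
  return (re :+ im)

main :: IO ()
main = withComplex (3 :+ 4) peekComplex >>= print
```

A foreign import would then take a `Ptr Double` (with the C side declared as `double complex *`), which is the pointer-based workaround such bindings commonly use until an unboxed CComplex exists.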

Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Conal Elliott
Hi Antony,

My primary claim for success is that the
 representation of Picture in Haven type checks and doesn't appeal to
 IO; IO only creeps in when we attempt to render a Picture.


You did something much more meaningful to me than what you say here.

It is easy to define a type that satisfies these conditions but is as
semantically intractable as IO.  As an absurd demonstration, clone some
large chunk of the current set of IO primitive interfaces (return, (>>=),
getChar, forkIO, various FFI-imported things, ...) into a GADT called
NotIO.  Then write a 'render :: NotIO a -> IO a' that interprets NotIO as
IO.  One could call NotIO a "purely functional wrapper".  Or we could just
use IO itself.

In the words of Lewis Carroll:

 "That's another thing we've learned from your Nation," said Mein Herr,
 "map-making. But we've carried it much further than you. What do you
 consider the largest map that would be really useful?"

 *"About six inches to the mile."*

 "Only six inches!" exclaimed Mein Herr. "We very soon got to six yards to
 the mile. Then we tried a hundred yards to the mile. And then came the
 grandest idea of all! We actually made a map of the country, on the scale of
 a mile to the mile!"

 *"Have you used it much?" I enquired.*

 "It has never been spread out, yet," said Mein Herr: "the farmers objected:
 they said it would cover the whole country, and shut out the sunlight! So we
 now use the country itself, as its own map, and I assure you it does nearly
 as well."

While my example and Lewis Carroll's are intentionally absurd, I'm concerned
that "purely functional wrappers" can be just as meaningless but less
apparently so.

I think what you did in Haven (based on memories of our conversations at the
time and looking at your slides just now) is substantively different.  You
gave a precise, complete, and tractably simple *denotation* to your types.
Complete enough to define the correctness of the rendering process.

Does the Haven API live up to your goal of semantic purity for a
 vector graphics library?  If not, where specifically does it fall short?


Yes, if my understanding about denotational precision and completeness is
correct.  Is it?

Regards,  - Conal


On Sun, Feb 1, 2009 at 2:37 PM, Antony Courtney
antony.court...@gmail.comwrote:

 Hi Conal,

 On Sat, Jan 31, 2009 at 12:10 AM, Conal Elliott co...@conal.net wrote:
  Hopefully some enterprising Haskell hacker will wrap Cairo in a nice
  purely functional API.
 
  Jefferson Heard is working on such a thing, called Hieroglyph. [...]
 
  In the process, I realized more clearly that the *very goal* of making a
  purely functional wrapper around an imperative library leads to muddled
  thinking.  It's easy to hide the IO without really eliminating it from
 the
  semantics, especially if the goal is defined in terms of an IO-based
  library.  Much harder, and I think much more rewarding, is to design
  semantically, from the ground up, and then figure out how to implement
 the
  elegant semantics with the odds & ends at hand (like Cairo, OpenGL, GPU
  architectures, ...).

 Exciting!

 I was very much trying to achieve this with Haven back in 2002:

   http://www.haskell.org/haven

 As the slides on that page state pretty explicitly, I tried to focus
 on the semantics and to design a purely functional representation of a
 vector graphics scene that was not tied to any particular
 implementation.  My primary claim for success is that the
 representation of Picture in Haven type checks and doesn't appeal to
 IO; IO only creeps in when we attempt to render a Picture.

 Does the Haven API live up to your goal of semantic purity for a
 vector graphics library?  If not, where specifically does it fall
 short?

 I look forward to seeing and reading more about Hieroglyph.  The
 typography and visual presentation of Jefferson's online booklet looks
 fantastic.  A high quality, purely functional vector graphics API for
 Haskell with portable and robust implementations will be a great thing
 for the Haskell world.

 Regards,

-Antony

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] UDP

2009-02-01 Thread Vimal
2009/2/1 Andrew Coppin andrewcop...@btinternet.com:


 Yeah, I just assumed that the bind step was only necessary for
 connection-oriented protocols. (Interestingly enough, the matching send
 program doesn't bind at all, yet seems to work fine...)


socket() system call creates a socket (a descriptor) that you can
identify. bind() creates an identity for the socket so that
applications outside can refer to it (using ip:port); it also enables
the kernel to pass the received data to your application. sendto()
doesn't require that identity.
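Both points can be seen in a single program: the receiver must bind before it can be addressed, while the sender never binds and is given an ephemeral port on its first send. This is a sketch against the network package's Network.Socket.ByteString API (names such as tupleToHostAddress are assumed as in network >= 2.7; older versions differ):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Network.Socket
import Network.Socket.ByteString (recvFrom, sendTo)

main :: IO ()
main = withSocketsDo $ do
  -- Receiver: bind to loopback; port 0 asks the OS for a free port.
  recvSock <- socket AF_INET Datagram defaultProtocol
  bind recvSock (SockAddrInet 0 (tupleToHostAddress (127, 0, 0, 1)))
  addr <- getSocketName recvSock  -- the address the OS actually assigned

  -- Sender: no bind; the kernel picks an ephemeral port on sendTo.
  sendSock <- socket AF_INET Datagram defaultProtocol
  _ <- sendTo sendSock "ping" addr

  (msg, _from) <- recvFrom recvSock 1000
  print msg
```

Running this prints the datagram payload received over loopback, without the sender ever calling bind.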

-- 
Vimal
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX

2009-02-01 Thread Duncan Coutts
On Sun, 2009-02-01 at 17:10 -0800, Judah Jacobson wrote:

  Someone said that it is just the macports version of iconv that has this
  problem but I don't understand that at all. If we're using default
  ghc/gcc then we should not be looking in any non-standard include
  directories at all.
 
 The pcre library isn't installed by default, so an OS X users might
 get it from MacPorts (which installs it in /opt/local/lib).  And when
 building the Haskell pcre-light package, they'd do something like
 
 cabal install pcre-light --extra-lib-dirs=/opt/local/lib
 
 But then any other package that depends on pcre-light will also get
 the same linker option.

Yes. Sigh. A limitation of the C linker search path model.

  The other thing that makes no sense is that
  the /usr/lib/libiconv.dywhatever file apparently contains both
  _iconv_open and _libiconv_open so why can't we link to the ordinary
  _iconv_open one?
 
 The problem is that with -L/opt/local/lib (which is now passed to any
 package depending on pcre-light), the linker uses
 /opt/local/lib/libiconv.*  and ignores /usr/lib/libiconv.* altogether.
 
 Hope that helps explain it better,

Yes, thanks.

I wonder if it wouldn't be better to search the standard lib dirs first.
I'm sure the whole issue is a can of worms.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Antony Courtney
 My primary claim for success is that the
 representation of Picture in Haven type checks and doesn't appeal to
 IO; IO only creeps in when we attempt to render a Picture.

 You did something much more meaningful to me than what you say here.


Thanks. ;-)

 [...]

 I think what you did in Haven (based on memories of our conversations at the
 time and looking at your slides just now) is substantively different.  You
 gave a precise, complete, and tractably simple *denotation* to your types.
 Complete enough to define the correctness of the rendering process.

 Does the Haven API live up to your goal of semantic purity for a
 vector graphics library?  If not, where specifically does it fall short?

 Yes, if my understanding about denotational precision and completeness is
 correct.  Is it?


Yes and no.

Yes in the sense that every type in Haven had a simple definition
using Haskell's type system and I used these types to specify
signatures of a set of functions for 2D geometry that was relatively
complete.

No in the sense that I never bothered to implement the various
geometric functions directly in Haskell; I depended on the underlying
implementation to do so.  For simple things like points, lines and
affine transforms I don't think this should be too controversial, but
it's a bit less clear for clipping and constructive area geometry on
complicated Bezier paths.  From a library user's point of view there
isn't much distinction between what I did and a pure implementation,
but I can't really claim it's a rigorous or complete semantics without
a pure reference implementation, and there's obviously no way to prove
claims such as "Shapes form a monoid" without giving a direct
definition of the composition and clipping operators.
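To make the flavour of such a direct definition concrete, here is a toy sketch (mine, not Haven's actual types): take the denotation of a shape to be its characteristic function. Shapes then form a monoid under union, and the claim is provable from the definition alone rather than delegated to an underlying renderer.

```haskell
-- A shape denoted by its characteristic function: which points are inside?
newtype Shape = Shape { inside :: (Double, Double) -> Bool }

-- Union of shapes; associativity follows from associativity of (||).
instance Semigroup Shape where
  Shape f <> Shape g = Shape (\p -> f p || g p)

-- The empty shape is the identity for union.
instance Monoid Shape where
  mempty = Shape (const False)

-- A disc of radius r centred at the origin.
disc :: Double -> Shape
disc r = Shape (\(x, y) -> x * x + y * y <= r * r)

main :: IO ()
main = print (inside (disc 1 <> disc 2) (1.5, 0))  -- True: inside the union
```

A pure reference implementation of clipping would intersect characteristic functions the same way; the hard part, as noted above, is doing this for Bezier paths rather than point predicates.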

-Antony
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Newtype deriving with functional dependencies

2009-02-01 Thread Louis Wasserman
Is there any sensible way to make

newtype FooT m e = FooT (StateT Bar m e) deriving (MonadState)

work to give instance MonadState Bar (FooT m e)?

That is, I'm asking if there would be a semantically sensible way of
modifying GeneralizedNewtypeDeriving to handle multi-parameter type classes
when there is a functional dependency involved, assuming by default that the
newtype is the more general of the types, perhaps?

Louis Wasserman
wasserman.lo...@gmail.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] circular dependencies in cabal

2009-02-01 Thread Valentyn Kamyshenko
So, in practical terms, you suggest that no new version of any package
that the ghc package depends on (directly or indirectly) should ever be
installed?
For example, as soon as process-1.0.1.1 is installed on my computer,  
I'll have this problem with every package that depends on process?
Another question: wouldn't cabal-install automatically fetch the most
recent version of the process package as soon as I try to install
a package that depends on it (such as, for example, plugins)?

-- Valentyn.

On Feb 1, 2009, at 6:53 AM, Duncan Coutts wrote:


On Sun, 2009-02-01 at 01:33 -0800, Valentyn Kamyshenko wrote:

Hello all,

when I tried to install plugins package with cabal, I've got the
following error:

# sudo cabal install plugins --global
Resolving dependencies...
cabal: dependencies conflict: ghc-6.10.1 requires process ==1.0.1.1
however
process-1.0.1.1 was excluded because ghc-6.10.1 requires process
==1.0.1.0


For the most part I refer you to:

http://haskell.org/pipermail/haskell-cafe/2009-January/054523.html

However the difference is that you've got this problem only within the
global package db rather than due to overlap in the global and user
package db.

It looks like both versions of process package are currently  
required:


It looks like you installed process-1.0.1.1 and then rebuilt almost
every other package against it. Of course you cannot rebuild the ghc
package but you did rebuild some of its dependencies which is why it  
now

depends on multiple versions of the process package.

Generally rebuilding a package without also rebuilding the packages  
that
depend on it is a bit dodgy (it can lead to linker errors or  
segfaults).
Unfortunately cabal-install does not prevent you from shooting  
yourself

in the foot in these circumstances.


Any suggestions?


Aim for a situation where you only have one version of the various  
core

packages. If you do not need to install packages globally then
installing them per-user means you at least cannot break the global
packages.

Duncan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Newtype deriving with functional dependencies

2009-02-01 Thread Daniel Gorín

On Feb 2, 2009, at 1:06 AM, Louis Wasserman wrote:


Is there any sensible way to make

newtype FooT m e = FooT (StateT Bar m e) deriving (MonadState)

work to give instance MonadState Bar (FooT m e)?

That is, I'm asking whether there would be a semantically sensible way of
modifying GeneralizedNewtypeDeriving to handle multi-parameter type
classes when a functional dependency is involved, perhaps assuming by
default that the newtype is the more general of the types?


Louis Wasserman
wasserman.lo...@gmail.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



did you try this?

newtype FooT m e = FooT (StateT Bar m e) deriving (Monad, MonadState Bar)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-02-01 Thread Jeff Heard
I will happily check it on Linux.  I'm only vaguely familiar with
OpenVG... In theory it's a good API, and would support exactly what
I'd need for a backend to Hieroglyph that isn't Cairo based, but we'd
still need a good image API and probably to bind to Pango to get text
and layout support.

For Image APIs, by the way, I suggest that someone, maybe me, but
someone, look at the VIPS toolkit, as it's probably already the most
Haskell-like toolkit, as it's lazy and concurrent all the way down
past the C layer and supports fully composable operators.  The authors
haven't formalized it as far as functional programming goes, but it
was definitely in the back of their brains when they were coming up
with it.  The other advantage is that the V stands for Very Large.
VIPS can handle images of unlimited size.

-- Jeff

On Sun, Feb 1, 2009 at 7:32 PM, Stephen Tetley stephen.tet...@gmail.com wrote:
 Hello

 I've written a Haskell binding to the Shiva-VG OpenVG implementation.

 Hopefully it should appear on Hackage in the next couple of days - but
 for the moment it is available here:


 http://slackwise.org/spt/files/OpenVG-0.1.tar.gz

 I've tested it on MacOSX leopard and Windows with MinGW / MSys, if
 anyone could check it on Linux that would be handy.
 Thanks.


 Best regards

 Stephen
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parallel term reduction

2009-02-01 Thread John D. Ramsdell
I have a reduction system in which a rule takes a term and returns a
set of terms.
The reduction system creates a tree that originates at a starting
value called the root.
For most problems, the reduction system terminates, but a step count
limit protects
from non-termination.  Rule application is expensive, so it is
essential that a rule is
never applied to the same problem twice.  This check makes my program
sequential,
in that parallel annotations don't improve performance on SMPs.  There
isn't even an
obvious place to add them in my program, at least not to me.  How do
people write
parallel reduction systems that avoid redundant rule application?

John

> module Main (main) where

> import System.Time (ClockTime(..), getClockTime)

> data Eq a => Item a
>     = Item { item :: a,
>              parent :: Maybe (Item a) }

> instance Eq a => Eq (Item a) where
>     x == y = item x == item y

The reduction system takes a rule, a step count, and an initial value,
and computes a tree of reductions.  The order of the items in the returned
list is irrelevant, because the tree is assembled as a post processing step.

> reduce :: (Eq a, Monad m) => (a -> [a]) -> Int -> a -> m [Item a]
> reduce rule limit root =
>     step rule limit [top] [top]
>     where
>       top = Item { item = root, parent = Nothing }

In step rule limit seen todo, seen is the items already seen, and todo
is the items on the queue.

> step :: (Eq a, Monad m) => (a -> [a]) -> Int ->
>         [Item a] -> [Item a] -> m [Item a]
> step _ limit _ _
>     | limit <= 0 = fail "Step limit exceeded"
> step _ _ seen [] = return seen
> step rule limit seen (next : rest) =
>     loop seen rest children
>     where
>       children = map child (rule (item next))
>       child i = Item { item = i, parent = Just next }
>       loop seen rest [] =
>           step rule (limit - 1) seen rest
>       loop seen rest (kid : kids) =
>           if elem kid seen then
>              loop seen rest kids
>           else
>              loop (kid : seen) (rest ++ [kid]) kids

A silly rule

> rule :: Int -> [Int]
> rule n = filter (>= 0) [n - 1, n - 2, n - 3]
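As an aside for readers of the archive: before parallelising, it often pays to make the seen-check itself cheap. Below is a sketch (not John's program; it drops the parent links and strengthens the Eq constraint to Ord) that uses Data.Set, so membership tests are O(log n) instead of the linear elem over a list:

```haskell
import qualified Data.Set as Set

-- Breadth-first reduction that applies the rule to each term at most once.
reduceSet :: Ord a => (a -> [a]) -> Int -> a -> Either String [a]
reduceSet rule limit root = go limit (Set.singleton root) [root]
  where
    go n _ _ | n <= 0 = Left "Step limit exceeded"
    go _ seen []      = Right (Set.toList seen)
    go n seen (next : rest) = go (n - 1) seen' (rest ++ fresh)
      where
        -- record each previously unseen child exactly once
        (seen', fresh) = foldl visit (seen, []) (rule next)
        visit (s, acc) kid
          | kid `Set.member` s = (s, acc)
          | otherwise          = (Set.insert kid s, acc ++ [kid])
```

With the silly rule above, reduceSet rule 6000 5000 should yield Right with the 5001 distinct terms 0 through 5000.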

> secDiff :: ClockTime -> ClockTime -> Float
> secDiff (TOD secs1 psecs1) (TOD secs2 psecs2)
>     = fromInteger (psecs2 - psecs1) / 1e12 + fromInteger (secs2 - secs1)

> main :: IO ()
> main =
>     do
>       t0 <- getClockTime
>       ns <- reduce rule 2 5000
>       t1 <- getClockTime
>       putStrLn $ "length: " ++ show (length ns)
>       putStrLn $ "time: " ++ show (secDiff t0 t1) ++ " seconds"

The makefile

PROG	= reduce
GHCFLAGS = -Wall -fno-warn-name-shadowing -O

%:	%.lhs
	ghc $(GHCFLAGS) -o $@ $<

all:	$(PROG)

clean:
	-rm *.o *.hi $(PROG)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: 1,000 packages, so let's build a few!

2009-02-01 Thread Benjamin L . Russell
On Sun, 01 Feb 2009 15:01:28 +, Duncan Coutts
duncan.cou...@worc.ox.ac.uk wrote:

On Sat, 2009-01-31 at 16:50 -0800, Don Stewart wrote:

 Windows people need to set up a wind...@haskell.org to sort out their
 packaging issues, like we have for debian, arch, gentoo, freebsd and
 other distros.
 
 Unless people take action to get things working well on their platform,
 it will be slow going.

Actually instead of going off into another mailing list I would
encourage them to volunteer on the cabal-devel mailing list to help out.
There is lots we could do to improve the experience on Windows and half
the problem is we do not have enough people working on it or testing
things.

That sounds like a great idea, but what specifically should Windows
users do to help out?  If we try to install a package on Windows and
encounter a bug that we can't figure out, would it be sufficient to
subscribe at http://www.haskell.org/mailman/listinfo/cabal-devel and
to submit a bug report to cabal-de...@haskell.org ?

-- Benjamin L. Russell
-- 
Benjamin L. Russell  /   DekuDekuplex at Yahoo dot com
http://dekudekuplex.wordpress.com/
Translator/Interpreter / Mobile:  +011 81 80-3603-6725
"Furuike ya, kawazu tobikomu mizu no oto."
("An old pond; a frog jumps in -- the sound of water.")
 -- Matsuo Bashō

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Network.UrlDisp

2009-02-01 Thread Artyom Shalkhakov
Hello,

2009/2/2 Pieter Laeremans pie...@laeremans.org:
 Has anyone some example usages of Network.UrlDisp?

I'll write it up in a few days. Right now, you can read the blog posts
of Sterling Clover; the topics covered there still apply.

I would be grateful if anybody told me how to upload Haddock
documentation to Hackage. The code itself contains documentation,
but I couldn't get it to show up on Hackage.

Cheers,
Artyom Shalkhakov.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Network.UrlDisp

2009-02-01 Thread Brent Yorgey
On Mon, Feb 02, 2009 at 10:55:52AM +0600, Artyom Shalkhakov wrote:
 Hello,
 
 2009/2/2 Pieter Laeremans pie...@laeremans.org:
  Has anyone some example usages of Network.UrlDisp?
 
 I'll write it up in a few days. Right now, you can read the blog posts
 of Sterling Clover, topics covered there still apply.
 
 I would be grateful if anybody told me how to upload Haddock
 documentation to Hackage. The code itself contains documentation,
 but I couldn't get it to show up on Hackage.

Haddock documentation is automatically built for packages on Hackage,
but you might have to wait a while (up to a day?) for it to get built.
If there are still no links to Haddock documentation, check the build
log.

-Brent
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Network.UrlDisp

2009-02-01 Thread Artyom Shalkhakov
Hello,

2009/2/2 Brent Yorgey byor...@seas.upenn.edu:
 I would be grateful if anybody told me how to upload Haddock
 documentation to Hackage. The code itself contains documentation,
 but I couldn't get it to show up on Hackage.

 Haddock documentation is automatically built for packages on Hackage,
 but you might have to wait a while (up to a day?) for it to get built.
 If there are still no links to Haddock documentation, check the build
 log.

Everything seems to be fine. Thanks!

Cheers,
Artyom Shalkhakov.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe