[Haskell-cafe] GSoC and Machine learning

2010-03-30 Thread Ketil Malde

Hi,

Once upon a time, I proposed a GSoC project for a machine learning
library. 

I still get some email from prospective students about this, whom I
discourage as best I can by saying I don't have the time or interest to
pursue it, and that chances aren't so great since you guys tend to
prefer language-related stuff instead of application-related stuff.

But if anybody disagrees with my sentiments and is willing to mentor
this, there are some smart students looking for an opportunity.  I'd be
happy to forward any requests.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Where are the haskell elders?

2010-03-30 Thread Ketil Malde
Don Stewart d...@galois.com writes:

 I notice that posts from the Haskell elders are pretty rare now. Only
 every now and then we hear from them.

I'm not sure who the 'elders' are, but generally grown-ups with a day
job (professorships, say) tend to be busy people, without much time
for chatting.

 Because there is too much noise on this list, Günther

This is -café; I think some off-topic discussion should be allowed. For
the future, though, we might do well to avoid political/religious
themes.  Like the advantages of OCaml over Haskell, for instance.

I think the off-topic threads are often fun, and there are always
thoughtful messages with interesting links, and they always outnumber
the obnoxious or offensive¹ ones.  But in general, some issues just seem
to upset a lot of people, and are better avoided.

-k

¹ IMO, YMMV.  Clearly a lot of people are more easily offended, or have
higher standards of what is interesting, than I.
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Shootout update

2010-03-30 Thread Simon Marlow
The shootout (sorry, "Computer Language Benchmarks Game") recently updated 
to GHC 6.12.1, and many of the results got worse.  Isaac Gouy has added 
the +RTS -qg flag to partially fix it, but that turns off the parallel 
GC completely and we know that in most cases better results can be had 
by leaving it on.  We really need to tune the flags for these benchmarks 
properly.


http://shootout.alioth.debian.org/u64q/haskell.php

It may be that we have to back off to +RTS -N3 in some cases to avoid 
the last-core problem (http://hackage.haskell.org/trac/ghc/ticket/3553), 
at least until 6.12.2.
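For concreteness, the kind of tuning involved looks something like this
(program name and flag values here are illustrative, not the benchmarks'
actual settings):

```sh
# Build with the threaded runtime and optimisations.
ghc -O2 -threaded --make binarytrees.hs -o binarytrees

# What the shootout currently does: disable the parallel GC entirely.
./binarytrees 16 +RTS -N4 -qg

# Often better: keep the parallel GC but back off to three
# capabilities, avoiding the last-core problem on a quad-core.
./binarytrees 16 +RTS -N3
```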


Any volunteers with a quad-core to take a look at these programs and 
optimise them for 6.12.1?


Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-friendly Linux Distribution

2010-03-30 Thread Ketil Malde

Jason Dagit:

 The reason I started telling everyone to avoid GHC in apt was the way
 it was packaged. [..]
 If they are lucky they figure out which apt package to install. 

I think people who are too lazy to bother to find out how their
distribution works should avoid any distribution.

  % apt-cache search foo
  % sudo apt-get install libghc6-foo\*

Erik de Castro Lopo mle...@mega-nerd.com writes:

 Debian doesn't have 'The Haskell Platform', it has a package named
 haskell-platform

Ubuntu (10.04) doesn't seem to?  Is this an omission?

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-friendly Linux Distribution

2010-03-30 Thread Ivan Lazar Miljenovic
Ketil Malde ke...@malde.org writes:
 I think people who are too lazy to bother to find out how their
 distribution works, should avoid any distribution.

   % apt-cache search foo
   % sudo apt-get install libghc6-foo\*

Agreed (to the extent that someone who can't be bothered figuring out an
advanced distribution like Gentoo or LFS should try a simpler one
first like Ubuntu before completely giving up).

 Erik de Castro Lopo mle...@mega-nerd.com writes:

 Debian doesn't have 'The Haskell Platform', it has a package named
 haskell-platform

 Ubuntu (10.4) doesn't seem to?  Is this an omission?

Hasn't been ported yet IIRC.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Are there any female Haskellers?

2010-03-30 Thread Achim Schneider
Richard O'Keefe o...@cs.otago.ac.nz wrote:

 I grant you that driving cars is recent (:-) (:-)!

And shoes! Never leave home with them. Well, at least spring till fall.

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: building encoding on Windows?

2010-03-30 Thread Tillmann Rendel

Ivan Miljenovic wrote:

The Haskell Platform is supposed to be a development environment...


No-one ever said it was a _complete_ development environment and that
you'd never need any other libraries, tools, etc.


On http://hackage.haskell.org/platform/contents.html, someone wrote:

The Haskell Platform is a comprehensive, robust development
environment for programming in Haskell. For new users the platform
makes it trivial to get up and running with a full Haskell
development environment


Given

(1) The Haskell platform aims to provide a complete Haskell development
environment.

(2) A complete Haskell development environment contains a C development
environment.

I conclude

(3) The Haskell platform aims to provide a C development
environment.

I have been told that every non-Windows OS comes with a C development 
environment anyway, so this may be only true on Windows. But on Windows, 
I think it is true, and the Haskell Platform *for Windows* should 
contain a C development environment. So how can I help?


Joachim Breitner wrote in an otherwise unrelated thread:

(with his Debian-Haskell-Group member hat on)


Is there a Windows-Haskell-Group promoting and facilitating Haskell on 
Windows?


  Tillmann
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: building encoding on Windows?

2010-03-30 Thread Ivan Lazar Miljenovic
Tillmann Rendel ren...@informatik.uni-marburg.de writes:
 On http://hackage.haskell.org/platform/contents.html, someone wrote:
 The Haskell Platform is a comprehensive, robust development
 environment for programming in Haskell. For new users the platform
 makes it trivial to get up and running with a full Haskell
 development environment

 Given

 (1) The Haskell platform aims to provide a complete Haskell development
 environment.

Define "development environment".


 (2) A complete Haskell development environment contains a C development
 environment.

I don't think this is a logical premise.

 Joachim Breitner wrote in an otherwise unrelated thread:
 (with his Debian-Haskell-Group member hat on)

 Is there a Windows-Haskell-Group promoting and facilitating Haskell on
 Windows?

In a sense, I wish there were: too often it seems that Windows users
complain about X either not working or not being available on Windows
(where X is some Haskell library/application/etc.).  Maybe if there were
a semi-official Windows-Haskell group of package maintainers, this
kind of thing could be alleviated (through testing, creating
installers, etc.).  The same goes for Mac OS X.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Fwd: [Haskell-cafe] Are there any female Haskellers?

2010-03-30 Thread Alberto G. Corona
.

2010/3/29 Jason Dusek jason.du...@gmail.com

2010/03/29 Alberto G. Corona agocor...@gmail.com:
  [...] What we evolved with is a general ability: to play with
  things to achieve what we need from them (besides other
  abilities). The pleasure of achieving ends by using available
  means. [...]  A tool is something used to solve a class of
  problems. It does not matter whether it is physical or
  conceptual. [...] The more general a tool is, the more pleasure
  we feel playing with it

  So the adaptation you are saying men have in greater degree
  than women is pleasure in tool using, broadly defined to
  include taming animals, debate, programming, sword play,
  carpentry and more? What are you attributing to men is not
  so much superiority of ability but greater motivation?

 --
 Jason Dusek


In terms of natural selection, greater motivation for a task and greater
innate ability at it are both positively correlated responses to an
evolutionary pressure (in beings that have learning capabilities). For
example, cats are better at catching mice, and they enjoy playing at
catching them. A living being ends up developing better innate abilities
(and more motivation) for whatever it practises more. This is called the
Baldwin effect (some commonly learned behaviour for the task ends up
fixed innately). Motivation matches ability and vice versa. This is
evolutionarily stable.

 It makes no evolutionary sense that women and men have the same
abilities and tendencies, because they have had different activities
since before they were even human. The brain has limited computational
resources. The optimal behaviours and strategies are in many cases
different for each sex. This happens across almost all of the animal
kingdom; why would humans be different? No matter that they are very
similar in some aspects, they are different, and very different in
others (fortunately). Nothing that your grandparents didn't know.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Two Google Summer of Code project proposals

2010-03-30 Thread Johan Tibell
I'm not able to log in to Trac to update these proposals at the moment
so I'll add some notes here for now.

On 3/15/10, Johan Tibell johan.tib...@gmail.com wrote:
   A high-performance HTML combinator library using Data.Text

 http://hackage.haskell.org/trac/summer-of-code/ticket/1580

   Being both fast and safe, Haskell would make a great
   replacement for e.g. Python and Ruby for server
   applications. However, good library support for web
   applications is sorely missing. To write web applications you
   need at least three components: a web application server, a
   data storage layer, and an HTML generation library. The goal of
   this project is to address the last of the three, as the two
   are already getting some attention from other Haskell
   developers.

Several students have expressed interest in working on this project
and several people hacked on a possible implementation of such a
library, BlazeHtml, during ZuriHac. (I'm not sure what the current
status of BlazeHtml is, I'm sure some of the people hacking on it can
fill me in.)

Getting character encodings and other tricky parts of the HTML
standard right is tricky and so is creating a high-performance library
with a good API. I encourage any students who wish to apply for this
project (or any other Summer of Code project for that matter) to show
that you understand the problem by submitting a design draft together
with your SoC application. A design draft could for example contain
the important parts of the API and some notes on tricky issues and how
you plan to deal with them.

For an example of a tricky issue read up on the interaction between
Unicode and HTML

  http://en.wikipedia.org/wiki/Unicode_and_HTML
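To make the escaping side of this concrete, here is a minimal sketch of
a Text-based combinator (my own illustration, not BlazeHtml's actual
API, and it assumes the text package is installed):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)
import qualified Data.Text as T

newtype Html = Html { renderHtml :: Text }

-- Escape the characters that are significant in HTML content.
escape :: Text -> Html
escape = Html . T.concatMap esc
  where
    esc '<' = "&lt;"
    esc '>' = "&gt;"
    esc '&' = "&amp;"
    esc c   = T.singleton c

-- Wrap already-escaped content in a tag.
tag :: Text -> Html -> Html
tag name (Html body) =
  Html (T.concat ["<", name, ">", body, "</", name, ">"])

example :: Text
example = renderHtml (tag "p" (escape "1 < 2 & 2 > 1"))
-- "<p>1 &lt; 2 &amp; 2 &gt; 1</p>"
```

The newtype is exactly the API question a design draft should address:
it distinguishes escaped from unescaped Text, so user data cannot reach
the output unescaped.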

Cheers,

Johan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-friendly Linux Distribution

2010-03-30 Thread Marco Túlio Gontijo e Silva
Hi Ivan.

Excerpts from Ivan Miljenovic's message of Ter Mar 30 00:01:19 -0300 2010:
 On 30 March 2010 13:55, Jason Dagit da...@codersbase.com wrote:
(...)
  [..] now trying to profile something, oh wait, some problem again.
 
 Agreed, if Debian didn't include the profiling libraries with GHC
 (though is this due to how Debian does packages?).

The profiling libraries included in ghc6 are available in the ghc6-prof
package.

Greetings.
(...)
-- 
marcot
http://wiki.debian.org/MarcoSilva
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Programming with categories

2010-03-30 Thread Sjoerd Visscher
And Dan Piponi has a nice collection of blogposts about this topic, for which 
he has just created an overview:

http://blog.sigfpe.com/2010/03/partial-ordering-of-some-category.html
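As a tiny concrete starting point (my own sketch, independent of the
packages and posts above), you can program against Control.Category
with your own arrow-like type:

```haskell
import Prelude hiding ((.), id)
import Control.Category

-- Partial functions form a category: composition threads the Maybe.
newtype Partial a b = Partial { runPartial :: a -> Maybe b }

instance Category Partial where
  id = Partial Just
  Partial g . Partial f = Partial (\a -> f a >>= g)

safeHead :: Partial [a] a
safeHead = Partial (\xs -> case xs of { []    -> Nothing
                                      ; (x:_) -> Just x })

safeRecip :: Partial Double Double
safeRecip = Partial (\x -> if x == 0 then Nothing else Just (1 / x))

-- Failures propagate through (.) automatically:
--   runPartial (safeRecip . safeHead) [4.0]  gives  Just 0.25
--   runPartial (safeRecip . safeHead) []     gives  Nothing
```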

greetings,
Sjoerd

On Mar 29, 2010, at 8:29 PM, Edward Kmett wrote:

 One place to start might be category-extras on hackage, which covers a wide 
 array of category theoretic constructs at least as they pertain to the 
 category of Haskell types.
 
 http://hackage.haskell.org/package/category-extras
 
 There is also Sjoerd Visscher's data-category:
 
 http://hackage.haskell.org/package/data-category
 
 Beyond that feel free to ask questions.
 
 -Edward Kmett
 
 
 On Mon, Mar 29, 2010 at 6:09 AM, Francisco Vieira de Souza 
 vieira.u...@gmail.com wrote:
 Hi Haskell-cafe.
 I'm trying to use Haskell to program with categories, but I haven't
 seen much work in this area, nor references about it. Can someone
 help me with this subject?
 Thanks in advance,
 Vieira
 
 -- 
 To find out how many friends you have, throw a party.
 To find out their quality, get sick!
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

--
Sjoerd Visscher
http://w3future.com




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building encoding on Windows?

2010-03-30 Thread Han Joosten

The Haskell Platform should take care of a lot of installation pain,
especially for non-technical users. A new version is due to be released
pretty soon (sometime in early April). It has MinGW and MSYS included, and
also some pre-built binaries like cabal and haddock. For a lot of packages
it should be possible to say 'cabal install package' at the command prompt
to get the package up and running. I think that this is pretty cool, and
most non-technical users should be able to get this to work without a lot
of pain. 
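For example (package name chosen to match this thread), getting
something from Hackage should then come down to:

```sh
# Refresh the local Hackage index, then build and install the
# package plus its dependencies.
cabal update
cabal install encoding
```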

Cheers,

Han Joosten
-- 
View this message in context: 
http://old.nabble.com/building-%22encoding%22-on-Windows--tp28072747p28081652.html
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Are there any female Haskellers?

2010-03-30 Thread David Leimbach
On Tue, Mar 30, 2010 at 4:13 AM, Alberto G. Corona agocor...@gmail.comwrote:



 .

 2010/3/29 Jason Dusek jason.du...@gmail.com

 2010/03/29 Alberto G. Corona agocor...@gmail.com:
  [...] What we evolved with is a general ability: to play with
  things to achieve what we need from them (besides other
  abilities). The pleasure of achieving ends by using available
  means. [...]  A tool is something used to solve a class of
  problems. It does not matter whether it is physical or
  conceptual. [...] The more general a tool is, the more pleasure
  we feel playing with it

  So the adaptation you are saying men have in greater degree
  than women is pleasure in tool using, broadly defined to
  include taming animals, debate, programming, sword play,
  carpentry and more? What are you attributing to men is not
  so much superiority of ability but greater motivation?

 --
 Jason Dusek


 In terms of natural selection, greater motivation for a task and greater
 innate ability at it are both positively correlated responses to an
 evolutionary pressure (in beings that have learning capabilities). For
 example, cats are better at catching mice, and they enjoy playing at
 catching them. A living being ends up developing better innate abilities
 (and more motivation) for whatever it practises more. This is called the
 Baldwin effect (some commonly learned behaviour for the task ends up
 fixed innately). Motivation matches ability and vice versa. This is
 evolutionarily stable.

  It makes no evolutionary sense that women and men have the same
 abilities and tendencies, because they have had different activities
 since before they were even human. The brain has limited computational
 resources. The optimal behaviours and strategies are in many cases
 different for each sex. This happens across almost all of the animal
 kingdom; why would humans be different? No matter that they are very
 similar in some aspects, they are different, and very different in
 others (fortunately). Nothing that your grandparents didn't know.


What does any of this have to do with Haskell?  Please move this off list.


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell-friendly Linux Distribution

2010-03-30 Thread Joachim Breitner
Hi Jason and other,

thanks for the suggestions, the Debian Haskell Team is eager to learn
why people do or don’t use the packaged libraries.

On Tuesday, 30.03.2010, at 14:01 +1100, Ivan Miljenovic wrote:
 On 30 March 2010 13:55, Jason Dagit da...@codersbase.com wrote:
  [..] now trying to profile something, oh wait, some problem again.
 
 Agreed, if Debian didn't include the profiling libraries with GHC
 (though is this due to how Debian does packages?).

The profiling data is put in -prof packages, i.e. ghc-prof,
libghc6-network-prof etc. Indeed, there is no easy way to tell the
package system: "Whenever I install a Haskell -dev package, please
install the -prof package as well."

It has been proposed to just drop the -prof packages and include it in
the -dev package, as disk space is cheap. But ghc6-prof does weigh 
254M, and not everybody who wants to modify his xmonad config wants to
install that.

 Unless it still doesn't provide profiling libraries, the extralibs
 problem is no more.  There is, however, the Haskell Platform (which
 Debian seems to have almost had complete support for until the new one
 came out; now they've got to start again...).

No big deal this time, only minor version bumps and then rebuilding all
depending libraries. Maybe we will do this with ghc6-6.12.2, maybe
before.

Greetings,
Joachim

-- 
Joachim nomeata Breitner
Debian Developer
  nome...@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nome...@joachim-breitner.de | http://people.debian.org/~nomeata


signature.asc
Description: This is a digitally signed message part
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell at Indian Universities?

2010-03-30 Thread Martin DeMello
On Tue, Mar 30, 2010 at 2:07 AM, Joachim Breitner
m...@joachim-breitner.de wrote:
 I’m a computer science student in Germany and I’d like to spend one
 semester as an exchange student in India. I have no specific plans yet
 where I want to go, and I’m wondering if there are universities in India
 where Haskell is basis for research or at least used as a tool. Haskell
 and functional programming is quite underrepresented at my university,
 so maybe this might be a good opportunity for me to combine Haskell and
 academia.

Here's a prof at IIT-Bombay who's into Haskell; you could write to him
for further suggestions.

http://www.cse.iitb.ac.in/~as

martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-friendly Linux Distribution

2010-03-30 Thread Ketil Malde
Joachim Breitner nome...@debian.org writes:

 The profiling data is put in -prof packages, i.e. ghc-prof,
 libghc6-network-prof etc. Indeed, there is no easy way to tell the
 package system: Whenever I install a Haskell -dev package, please
 install the -prof package as well.

One option might be to add a fourth package: a virtual package that
pulls in all the others.  (E.g. a libghc6-network that would pull in
libghc6-network-dev, -prof and -doc.)  I generally just add a wildcard
(apt-get install libghc6-network-\*), though, which isn't a lot harder.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shootout update

2010-03-30 Thread Graham Klyne

Simon Marlow wrote:

We really need to tune the flags for these benchmarks properly.


Do I sense the hidden hand of Goodhart's law? :)

-- http://en.wikipedia.org/wiki/Goodhart's_law

#g





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Serguey Zefirov
I tried to devise a C preprocessor, but then I figured out that I
could write something like that:
---
#define A(arg) A_start (arg) A_end

#define A_start "this is A_start definition."
#define A_end "this is A_end definition."

A (
#undef A_start
#define A_start A_end
)
---

gcc preprocesses it into the following:
---
# 1 "a.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "a.c"




"this is A_end definition." () "this is A_end definition."
---

Another woe is filenames in angle brackets for #include. They
require a special case in the tokenizer.

So I gave it (a fully compliant C preprocessor) up. ;)

Other than that, C preprocessor looks simple.

I hardly qualify as a student, though.
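To illustrate the angle-bracket point: the special case is small but
context-sensitive, since '<' only delimits a filename directly after an
#include. A sketch of my own (not from any existing library):

```haskell
import Data.Char (isSpace)

-- '<' starts a filename token only in this position; elsewhere it is
-- the less-than operator, so the tokenizer needs a special mode here.
data IncludeTok = SystemHeader String | LocalHeader String
  deriving (Show, Eq)

-- Lex the argument of an #include line.
lexIncludeArg :: String -> Maybe IncludeTok
lexIncludeArg s = case dropWhile isSpace s of
  ('<':rest) -> case break (== '>') rest of
                  (name, '>':_) -> Just (SystemHeader name)
                  _             -> Nothing
  ('"':rest) -> case break (== '"') rest of
                  (name, '"':_) -> Just (LocalHeader name)
                  _             -> Nothing
  _          -> Nothing

-- lexIncludeArg " <stdio.h>"    gives Just (SystemHeader "stdio.h")
-- lexIncludeArg "\"config.h\""  gives Just (LocalHeader "config.h")
-- lexIncludeArg "<unterminated" gives Nothing
```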

2010/3/30 Aaron Tomb at...@galois.com:
 The first is to integrate preprocessing into the library. Currently, the
 library calls out to GCC to preprocess source files before parsing them.
 This has some unfortunate consequences, however, because comments and macro
 information are lost. A number of program analyses could benefit from
 metadata encoded in comments, because C doesn't have any sort of formal
 annotation mechanism, but in the current state we have to resort to ugly
 hacks (at best) to get at the contents of comments. Also, effective
 diagnostic messages need to be closely tied to original source code. In the
 presence of pre-processed macros, column number information is unreliable,
 so it can be difficult to describe to a user exactly what portion of a
 program a particular analysis refers to. An integrated preprocessor could
 retain comments and remember information about macros, eliminating both of
 these problems.

 The second possible project is to create a nicer interface for traversals
 over Language.C ASTs. Currently, the symbol table is built to include only
 information about global declarations and those other declarations currently
 in scope. Therefore, when performing multiple traversals over an AST, each
 traversal must re-analyze all global declarations and the entire AST of the
 function of interest. A better solution might be to build a traversal that
 creates a single symbol table describing all declarations in a translation
 unit (including function- and block-scoped variables), for easy reference
 during further traversals. It may also be valuable to have this traversal
 produce a slightly-simplified AST in the process. I'm not thinking of
 anything as radical as the simplifications performed by something like CIL,
 however. It might simply be enough to transform variable references into a
 form suitable for easy lookup in a complete symbol table like I've just
 described. Other simple transformations such as making all implicit casts
 explicit, or normalizing compound initializers, could also be good.

 A third possibility, which would probably depend on the integrated
 preprocessor, would be to create an exact pretty-printer. That is, a
 pretty-printing function such that pretty . parse is the identity.
 Currently, parse . pretty should be the identity, but it's not true the
 other way around. An exact pretty-printer would be very useful in creating
 rich presentations of C source code --- think LXR on steroids.

 If you're interested in any combination of these, or anything similar, let
 me know. The deadline is approaching quickly, but I'd be happy to work
 together with a student to flesh any of these out into a full proposal.

 Thanks,
 Aaron

 --
 Aaron Tomb
 Galois, Inc. (http://www.galois.com)
 at...@galois.com
 Phone: (503) 808-7206
 Fax: (503) 350-0833

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Announce: Haskell Platform 2010.1.0.0 (beta) release

2010-03-30 Thread Benjamin L. Russell
Don Stewart d...@galois.com writes:

 Live from (post-) Zurihac, I'm pleased to announce the 2010.1.0.0 (beta 
 branch)
 release of the Haskell Platform, supporting GHC 6.12.

 http://hackage.haskell.org/platform/

 The Haskell Platform is a comprehensive, robust development environment for
 programming in Haskell. For new users the platform makes it trivial to get up
 and running with a full Haskell development environment. For experienced
 developers, the platform provides a comprehensive, standard base for 
 commercial
 and open source Haskell development that maximises interoperability and
 stability of your code.

 The 2010.1.0.0 release is a beta release for the GHC 6.12 series of 
 compilers. 
 It currently doesn't provide a Windows installer (which defaults to GHC 6.10.4
 for now). We expect to make the stable branch with GHC 6.12.2 in soon.

 This release includes a binary installer for Mac OS X Snow Leopard, as well as
 source bundles for any Unix system, and a new design.

 The Haskell Platform would not have been possible without the hard work
 of the Cabal development team, the Hackage developers and maintainers,
 the individual compiler, tool and library authors who contributed to the
 suite, and the distro maintainers who build and distribute the Haskell
 Platform.

Sorry for the late response, but just out of curiosity, are there any
plans to provide a binary installer for either the Haskell Platform or
GHC 6.12.1 for Mac OS X Leopard for the PowerPC CPU (as opposed to for
the Intel x86 CPU)?  I just checked the download-related Web sites for
both the Haskell Platform for the Mac (see
http://hackage.haskell.org/platform/mac.html) and for GHC 6.12.1 (see
http://www.haskell.org/ghc/download_ghc_6_12_1.html), but could find no
relevant information.

Currently, I'm using GHC 6.8.2, but this is an outdated version.

-- Benjamin L. Russell


 Thanks!

 -- Don  (for The Platform Infrastructure Team)

-- 
Benjamin L. Russell  /   DekuDekuplex at Yahoo dot com
http://dekudekuplex.wordpress.com/
Translator/Interpreter / Mobile:  +011 81 80-3603-6725
Furuike ya, kawazu tobikomu mizu no oto. -- Matsuo Basho^ 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GSOC idea: Haskell JVM bytecode library

2010-03-30 Thread Ashley Yakeley

Alexandru Scvortov wrote:
 I'm thinking of writing a library for analyzing/generating/manipulating
 JVM bytecode.  To be clear, this library would allow one to load and
 work with JVM classfiles; it wouldn't be a compiler, interpreter or a
 GHC backend.


You might be interested in http://semantic.org/jvm-bridge/. It's a bit 
bit-rotted, though.


-- Ashley
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Stephen Tetley
On 30 March 2010 18:55, Serguey Zefirov sergu...@gmail.com wrote:


 Other than that, C preprocessor looks simple.



Ah no - apparently anything but simple.

You might want to see Jean-Marie Favre's (very readable, amusing)
papers on the subject. Much of the behaviour of CPP is not defined and
is often inaccurately described; certainly it wouldn't appear to make
an ideal one-summer student project.


http://megaplanet.org/jean-marie-favre/papers/CPPDenotationalSemantics.pdf

There are some others as well from his home page.

Best wishes

Stephen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hackage - Machine Learning

2010-03-30 Thread Jeremy Ong
On a suggestion from Ketil Malde, the listed mentor for the Google
Summer of Code Machine Learning Project on Hackage, I am sending this
email as an interested student. The listed mentor is busy
unfortunately :D.

Are there people on the list interested in mentoring this project?
Apparently, the project was close to getting funded last time. I am a
senior currently studying math and physics with a phd in math being
the ultimate (short term?) goal. Contact me if you would like to see
more credentials or what have you.

cheers,
Jeremy

--
Jeremy Ong
Mathematics
Applied & Engineering Physics
Cornell University '10
http://www.jeremyong.com

Education is the process of casting false pearls before real swine.
Prof. Irwin Edman (1896–1954)



On Tue, Mar 30, 2010 at 3:35 AM, Ketil Malde ke...@malde.org wrote:
 Jeremy Ong jc...@cornell.edu writes:

 haskell-fu although I would love to improve that as well. Let me know
 if any involvement from me is a possibility.

 From you, yes, from me, no.

 I no longer have sufficient time nor interest to pursue this project,
 but it did come close to getting funded the last time, so if you're
 interested, I suggest sending a message to the haskell-cafe list and see
 if you can get somebody to mentor.

 That said, most of the projects that get funded tend to be language
 related ones, i.e. development tools or infrastructure or that sort of
 thing.

 Good luck!

 -k
 --
 If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: More Language.C work for Google's Summer of Code

2010-03-30 Thread Achim Schneider
Stephen Tetley stephen.tet...@gmail.com wrote:

 Much of the behaviour of CPP is not defined and  often inaccurately
 described, certainly it wouldn't appear to make an ideal one summer,
 student project.
 
If you get

http://ldeniau.web.cern.ch/ldeniau/cos.html

to work, virtually everything else should work, too.

Macro languages haven't been in fashion in the last few decades, so you
will have to locate a veritable fan to work on this.

There are, after all, still people writing TeX macros. There have got to
be some CPP zealots out there.


-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread austin seipp
(sorry for the dupe aaron! forgot to add haskell-cafe to senders list!)

Perhaps the best course of action would be to try to extend cpphs to
do things like this? From the looks of the interface, it can already
do some of these things, e.g. not stripping comments from a file:

http://hackage.haskell.org/packages/archive/cpphs/1.11/doc/html/Language-Preprocessor-Cpphs.html#t%3ABoolOptions

Malcolm would have to attest to how complete it is w.r.t., say, gcc's
preprocessor, but if this were to be a SOC project, extending cpphs to
include needed functionality would probably be much more realistic
than writing a new one.

On Tue, Mar 30, 2010 at 12:30 PM, Aaron Tomb at...@galois.com wrote:
 Hello,

 I'm wondering whether there's anyone on the list with an interest in
 doing additional work on the Language.C library for the Summer of
 Code. There are a few enhancements that I'd be very interested in
 seeing, and I'd love to be a mentor for such a project if there's a
 student interested in working on them.

 The first is to integrate preprocessing into the library. Currently, the
 library calls out to GCC to preprocess source files before parsing them.
 This has some unfortunate consequences, however, because comments and macro
 information are lost. A number of program analyses could benefit from
 metadata encoded in comments, because C doesn't have any sort of formal
 annotation mechanism, but in the current state we have to resort to ugly
 hacks (at best) to get at the contents of comments. Also, effective
 diagnostic messages need to be closely tied to original source code. In the
 presence of pre-processed macros, column number information is unreliable,
 so it can be difficult to describe to a user exactly what portion of a
 program a particular analysis refers to. An integrated preprocessor could
 retain comments and remember information about macros, eliminating both of
 these problems.

 The second possible project is to create a nicer interface for traversals
 over Language.C ASTs. Currently, the symbol table is built to include only
 information about global declarations and those other declarations currently
 in scope. Therefore, when performing multiple traversals over an AST, each
 traversal must re-analyze all global declarations and the entire AST of the
 function of interest. A better solution might be to build a traversal that
 creates a single symbol table describing all declarations in a translation
 unit (including function- and block-scoped variables), for easy reference
 during further traversals. It may also be valuable to have this traversal
 produce a slightly-simplified AST in the process. I'm not thinking of
 anything as radical as the simplifications performed by something like CIL,
 however. It might simply be enough to transform variable references into a
 form suitable for easy lookup in a complete symbol table like I've just
 described. Other simple transformations such as making all implicit casts
 explicit, or normalizing compound initializers, could also be good.

 A third possibility, which would probably depend on the integrated
 preprocessor, would be to create an exact pretty-printer. That is, a
 pretty-printing function such that pretty . parse is the identity.
 Currently, parse . pretty should be the identity, but it's not true the
 other way around. An exact pretty-printer would be very useful in creating
 rich presentations of C source code --- think LXR on steroids.
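As an aside, the two round-trip directions mentioned here can be phrased as testable properties. The sketch below is only a toy model: parseC and prettyExact are stand-in names over a fake token-list AST, not the actual Language.C API. It just illustrates why pretty . parse = id is the stronger, spacing-preserving requirement.

```haskell
-- Toy model of the two round trips. 'parseC' and 'prettyExact' are
-- placeholder names, not from Language.C.
type Ast = [String]  -- stand-in for a real C AST

parseC :: String -> Ast
parseC = words       -- toy "parser": tokenize on whitespace

prettyExact :: Ast -> String
prettyExact = unwords  -- toy "printer"

-- parse . pretty = id holds (for tokens without internal whitespace):
prop_parsePretty :: Ast -> Bool
prop_parsePretty ast = parseC (prettyExact ast) == ast

-- pretty . parse = id is the stronger goal: it fails here because the
-- original spacing is lost, which is exactly what an exact
-- pretty-printer would have to preserve.
prop_prettyParse :: String -> Bool
prop_prettyParse src = prettyExact (parseC src) == src

main :: IO ()
main = do
  print (prop_parsePretty ["int", "x", ";"])  -- True
  print (prop_prettyParse "int  x ;")         -- False: double space lost
```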

 If you're interested in any combination of these, or anything similar, let
 me know. The deadline is approaching quickly, but I'd be happy to work
 together with a student to flesh any of these out into a full proposal.

 Thanks,
 Aaron

 --
 Aaron Tomb
 Galois, Inc. (http://www.galois.com)
 at...@galois.com
 Phone: (503) 808-7206
 Fax: (503) 350-0833





-- 
- Austin


[Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Ashley Yakeley

Edward Kmett wrote:
Of course, you can argue that we already look at products and coproducts 
through fuzzy lenses that don't see the extra bottom, and that it is 
close enough to view () as Unit and Unit as Void, or go so far as to 
unify Unit and Void, even though one is always inhabited and the other 
should never be.


The alternative is to use _consistently_ fuzzy lenses and not consider 
bottom to be a value. I call this the "bottomless interpretation". I 
prefer it because it's easier to reason about.


In the bottomless interpretation, laws for Functor, Monad etc. work. 
Many widely-accepted instances of these classes fail these laws when 
bottom is considered a value. Even reflexivity of Eq fails.


Bear in mind bottom includes non-termination. For instance:

  x :: Integer
  x = x + 1

  data Nothing
  n :: Nothing
  n = seq x undefined

x is bottom, since the calculation of it doesn't terminate, but one 
cannot write a program, even in IO, to determine that x is bottom. And 
if Nothing is inhabited with a value, does n have that value? Or does 
the calculation to find which value n is fail to terminate, so that n 
never gets a value?


I avoid explicit undefined in my programs, and hopefully also 
non-termination. Then the bottomless interpretation becomes useful, for 
instance, to consider Nothing as an initial object of Hask, 
particularly when using GADTs.


I also dislike Void for a type declared empty, since it reminds me of 
the C/C++/C#/Java return type void. In those languages, a function of 
return type void may either terminate or not, exactly like Haskell's ().


--
Ashley Yakeley


Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread wagnerdm

Quoting Ashley Yakeley ash...@semantic.org:


  data Nothing

I avoid explicit undefined in my programs, and also hopefully  
non-termination. Then the bottomless interpretation becomes useful,  
for instance, to consider Nothing as an initial object of Hask  
particularly when using GADTs.


Forgive me if this is stupid--I'm something of a category theory  
newbie--but I don't see that Hask necessarily has an initial object in  
the bottomless interpretation. Suppose I write


data Nothing2

Then if I understand this correctly, for Nothing to be an initial  
object, there would have to be a function f :: Nothing -> Nothing2,  
which seems hard without bottom. This is a difference between Hask and  
Set, I guess: we can't write down the empty function. (I suppose  
unsafeCoerce might have that type, but surely if you're throwing out  
undefined you're throwing out the more frightening things, too...)


~d


[Haskell-cafe] GSOC Haskell Project

2010-03-30 Thread Mihai Maruseac
Hi,

I'd like to introduce my idea for the Haskell GSOC of this year. In
fact, you already know about it, since I've talked about it here on
the haskell-cafe, on my blog and on reddit (even on #haskell one day).

Basically, what I'm trying to do is a new debugger for Haskell, one
that would be very intuitive for beginners, a graphical one. I've
given some examples and more details on my blog [0], [1], also linked
on reditt and other places.

This is not the application itself; I'm posting this only to get some
feedback before writing it. I know it may seem a little too ambitious,
but I do think I can divide the work into stages and finish what I
start this summer over the next year and beyond.

[0]: http://pgraycode.wordpress.com/2010/03/20/haskell-project-idea/
[1]: http://pgraycode.wordpress.com/2010/03/24/visual-haskell-debugger-part-2/

Thanks for your attention,

-- 
Mihai Maruseac


[Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Ashley Yakeley

Ashley Yakeley wrote:

[...]


Worse than that, if bottom is a value, then Hask is not a category! Note 
that while undefined is bottom, (id . undefined) and (undefined . id) 
are not.


That's a fuzzy lens...

--
Ashley Yakeley


[Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Ashley Yakeley

wagne...@seas.upenn.edu wrote:
Forgive me if this is stupid--I'm something of a category theory 
newbie--but I don't see that Hask necessarily has an initial object in 
the bottomless interpretation. Suppose I write


data Nothing2

Then if I understand this correctly, for Nothing to be an initial 
object, there would have to be a function f :: Nothing -> Nothing2, 
which seems hard without bottom.


 This is a difference between Hask and
 Set, I guess: we can't write down the empty function.

Right. It's an unfortunate limitation of the Haskell language that one 
cannot AFAIK write this:


 f :: Nothing -> Nothing2;
 f n = case n of
 {
 };

However, one can work around it with this function:

 never :: Nothing -> a
 never n = seq n undefined;

Of course, this workaround uses undefined, but at least never has the 
property that it doesn't return bottom unless it is passed bottom.
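As a sketch of how never recovers the missing maps out of an empty type: the EmptyDataDecls pragma and the Either example below are my additions, not part of the original message.

```haskell
{-# LANGUAGE EmptyDataDecls #-}
-- 'never' as a universal eliminator for an empty type.
data Nothing
data Nothing2

never :: Nothing -> a
never n = seq n undefined

-- The map asked about earlier in the thread: it only "returns bottom"
-- when handed bottom, since there are no other inhabitants.
f :: Nothing -> Nothing2
f = never

-- Either with an empty alternative collapses to the other side:
fromRight :: Either Nothing a -> a
fromRight (Left n)  = never n
fromRight (Right a) = a

main :: IO ()
main = print (fromRight (Right (42 :: Int)))  -- prints 42
```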


--
Ashley Yakeley


Re: [Haskell-cafe] Re: Announce: Haskell Platform 2010.1.0.0 (beta) release

2010-03-30 Thread Gregory Collins
dekudekup...@yahoo.com (Benjamin L. Russell) writes:

 Sorry for the late response, but just out of curiosity, are there any
 plans to provide a binary installer for either the Haskell Platform or
 GHC 6.12.1 for Mac OS X Leopard for the PowerPC CPU (as opposed to for
 the Intel x86 CPU)?  I just checked the download-related Web sites for
 both the Haskell Platform for the Mac (see
 http://hackage.haskell.org/platform/mac.html) and for GHC 6.12.1 (see
 http://www.haskell.org/ghc/download_ghc_6_12_1.html), but could find
 no relevant information.

Short answer: no. Someone with the appropriate hardware would have to
volunteer to take charge of building both the platform libs (easier) and
the GHC installer (harder).

G
-- 
Gregory Collins g...@gregorycollins.net


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Edward Amsden
I'd be very much interested in working on this library for GSoC. I'm
currently working on an idea for another project, but I'm not certain
how widely beneficial it would be. The preprocessor and
pretty-printing projects sound especially intriguing.

On Tue, Mar 30, 2010 at 1:30 PM, Aaron Tomb at...@galois.com wrote:
 [...]



Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Aaron Tomb
Yes, that would definitely be one productive way forward. One concern  
is that Language.C is BSD-licensed (and it would be nice to keep it  
that way), and cpphs is LGPL. However, if cpphs remained a separate  
program, producing C + extra stuff as output, and the Language.C  
parser understood the extra stuff, this could accomplish what I'm  
interested in. It would be interesting, even, to just extend the  
Language.C parser to support comments, and to tell cpphs to leave them  
in.


There's also another pre-processor, mcpp [1], that is quite featureful  
and robust, and which supports an output mode with special syntax  
describing the origin of the code resulting from macro expansion.


Aaron

[1] http://mcpp.sourceforge.net/

On Mar 30, 2010, at 12:14 PM, austin seipp wrote:

[...]



Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Edward Kmett
The uniqueness of the definition of Nothing only holds up to isomorphism.

This holds for many "unique" types: products, sums, etc. are all
subject to this multiplicity of definition when looked at through the
concrete-minded eye of the computer scientist.

The mathematician on the other hand can put on his fuzzy goggles and just
say that they are all the same up to isomorphism. =)

-Edward Kmett

On Tue, Mar 30, 2010 at 3:45 PM, wagne...@seas.upenn.edu wrote:

 [...]



Re: [Haskell-cafe] ANN: data-category, restricted categories

2010-03-30 Thread Edward Kmett
Very true. I oversimplified matters by mistake.

One question, I suppose, is whether seq distinguishes the arrows or
the exponential objects in the category, since you are using the
function as an object in order to apply seq; and does that distinction
matter? I'd hazard not, but it's curious to me.

2010/3/26 David Menendez d...@zednenem.com

 On Fri, Mar 26, 2010 at 11:07 AM, Edward Kmett ekm...@gmail.com wrote:
 
  On Fri, Mar 26, 2010 at 11:04 AM, Edward Kmett ekm...@gmail.com wrote:
 
  -- as long as you're ignoring 'seq'
  terminateSeq :: a -> Unit
  terminateSeq a = a `seq` unit
 
 
  Er, ignore that language about seq. a `seq` unit is either another
  bottom or undefined, so there remains one canonical morphism even in
  the presence of seq (ignoring unsafePerformIO) =)

 It all depends on how you define equality for functions. If you mean
 indistinguishable in contexts which may involve seq, then there are at
 least two values of type Unit -> ().

 foo :: (Unit -> ()) -> ()
 foo x = x `seq` ()

 foo terminate = ()
 foo undefined = undefined

 Even this uses the convention that undefined = error "whatever" =
 loop, which isn't technically true, since you can use exception
 handling to write code which treats them differently.

 --
 Dave Menendez d...@zednenem.com
 http://www.eyrie.org/~zednenem/



Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Nick Bowler
On 19:54 Tue 30 Mar , Stephen Tetley wrote:
 On 30 March 2010 18:55, Serguey Zefirov sergu...@gmail.com wrote:
  Other than that, the C preprocessor looks simple.
 
 Ah no - apparently anything but simple.

I would describe it as simple but somewhat annoying.  This means that
guessing at its specification will not result in anything resembling a
correct implementation, but reading the specification and implementing
accordingly is straightforward.

Probably the hardest part is expression evaluation.
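To illustrate just the evaluation step (parsing aside), here is a toy evaluator over an already-parsed #if controlling expression. The constructors are illustrative names of my own; a real preprocessor needs the full C integer-constant-expression grammar.

```haskell
-- Toy #if expression evaluator, using C's convention that 0 is false
-- and any non-zero value is true.
data Expr
  = Lit Integer
  | Not Expr
  | And Expr Expr
  | Or  Expr Expr
  | Leq Expr Expr

eval :: Expr -> Integer
eval (Lit n)   = n
eval (Not e)   = if eval e == 0 then 1 else 0
eval (And a b) = if eval a /= 0 && eval b /= 0 then 1 else 0  -- (&&) short-circuits, as in C
eval (Or  a b) = if eval a /= 0 || eval b /= 0 then 1 else 0
eval (Leq a b) = if eval a <= eval b then 1 else 0

-- #if !(VERSION <= 2), with VERSION expanded to 3:
main :: IO ()
main = print (eval (Not (Leq (Lit 3) (Lit 2))))  -- prints 1: branch taken
```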

 You might want to see Jean-Marie Favre's (very readable, amusing)
 papers on the subject. Much of the behaviour of CPP is not defined and
 often inaccurately described; it certainly wouldn't appear to make an
 ideal one-summer student project.

The only specification of the C preprocessor that matters is the one
contained in the specification of the C programming language.  The
accuracy of any other description of it is not relevant.  C is quite
possibly the language with the greatest quantity of inaccurate
descriptions in existence (scratch that, C++ is likely worse).

As with most of the C programming language, a lot of the behaviour is
implementation-defined or even undefined, as you suggest.  For example:

/* implementation-defined */
#pragma launch_missiles

/* undefined */
#define explosion defined
#if explosion
# pragma launch_missiles
#endif

This makes a preprocessor /easier/ to implement, because in these cases
the implementer can do /whatever she wants/, including doing nothing or
starting the missile launch procedure.  In the implementation-defined
case, the implementor must additionally write the decision down
somewhere, e.g. "Upon execution of a #pragma launch_missiles directive,
all missiles are launched."

 http://megaplanet.org/jean-marie-favre/papers/CPPDenotationalSemantics.pdf

If this paper had criticised the actual C standard as opposed to a
working draft, it would have been easier to take it seriously.  I find
the published standard quite clear about the requirements of a C
preprocessor.

Nevertheless, assuming that the complaints of the paper remain valid, it
appears to boil down to "the C preprocessor is weird, and one must
read its whole specification to understand all of it."  It also seems to
contain a bit of "the C standard does not precisely describe the GNU C
preprocessor."

This work is certainly within the scope of a summer project.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Aaron Tomb

That's very good to hear!

When it comes to preprocessing and exact printing, I think that there  
are various stages of completeness that we could support.


  1) Add support for parsing comments to the Language.C parser. Keep
using an external pre-processor, but tell it to leave comments in the
source code. The cpphs pre-processor can do this. The trickiest bit
here has to do with where to record the comments in the AST: which AST
node is a given comment associated with? We could probably come up
with some general rules, and perhaps certain comments in weird
locations would still be ignored.


  2) Support correct column numbers for source locations. This falls
short of complete macro support, but covers one of the key problems
that macros introduce. The mcpp preprocessor [1] has a special
diagnostic mode where it adds special comments describing the origin
of code that resulted from macro expansion. If the parser retained
comments, we could use this information to help with exact
pretty-printing.


  3) Modify the pretty-printer to take position information into  
account when pretty-printing (at least optionally). As long as macro  
definitions themselves (as well as #ifdef, etc.) are not in the AST,  
the output will still not be exactly the same as the input, but it'll  
come closer.


  4) Add full support for parsing and expanding macros internally, so  
that both macro definitions and expansions appear in the Language.C  
AST. This is probably a huge project, partly because macros do not  
have to obey the tree structure of the C language in any way. This is  
perhaps beyond the scope of a summer project, but the other steps  
could help prepare for it in the future, and still fully address some  
of the problems caused by the preprocessor along the way.


Do you think you'd be interested in some subset or variation of 1, 2,  
and 3? Are there other ideas you have? Things I've missed? Things  
you'd do differently?


Thanks,
Aaron


[1] http://mcpp.sourceforge.net/


On Mar 30, 2010, at 1:46 PM, Edward Amsden wrote:


I'd be very much interested in working on this library for GSoC. I'm
currently working on an idea for another project, but I'm not certain
how widely beneficial it would be. The preprocessor and
pretty-printing projects sound especially intriguing.

On Tue, Mar 30, 2010 at 1:30 PM, Aaron Tomb at...@galois.com wrote:

[...]

[Haskell-cafe] GSOC Idea: Simple audio interface

2010-03-30 Thread Edward Amsden
I would like to write an audio interface for Haskell for GSoC. The
idea is to have a simple frontend (possibly an analogue of the IO
'interact' function) for writing simple functions to process audio.
The interface would wrap various audio APIs on various platforms,
making it possible to write portable audio programs in Haskell.
Initial ideas include interfaces to PortAudio, JACK, ALSA, CoreAudio,
and/or SDL. The goal is to lower the barrier to writing audio
functions in Haskell and to permit quick experimentation with
audio-generating and -processing functions.
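A sketch of what such an 'interact'-style entry point could look like. Everything here is hypothetical: audioInteract and its backend do not exist in any current package, and the stand-in backend below just counts samples instead of playing them.

```haskell
-- Hypothetical frontend: a pure stream transformer over samples, with
-- the backend (PortAudio, JACK, ALSA, ...) hidden behind the interface.
type Sample = Double
type SampleRate = Int

audioInteract :: SampleRate -> ([Sample] -> [Sample]) -> IO ()
audioInteract rate f =
  -- stand-in backend: feed one second of silence through f and report
  -- the output length instead of playing it
  print (length (take rate (f (replicate rate 0))))

-- Example user code: a simple gain stage.
gain :: Sample -> [Sample] -> [Sample]
gain g = map (* g)

main :: IO ()
main = audioInteract 44100 (gain 0.5)  -- prints 44100
```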

I'd love to hear feedback on this idea and to know of anyone would be
interested in mentoring.


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Tom Hawkins
On Tue, Mar 30, 2010 at 7:30 PM, Aaron Tomb at...@galois.com wrote:
 Hello,

 I'm wondering whether there's anyone on the list with an interest in
 doing additional work on the Language.C library for the Summer of
 Code. There are a few enhancements that I'd be very interested in
 seeing, and I'd love to be a mentor for such a project if there's a
 student interested in working on them.

Here's another suggestion: A transformer to convert Language.C's AST
to RTL, thus hiding a lot of tedious details like structures, case
statements, variable declarations, typedefs, etc.

I started writing a model checker [1] based on Language.C, but got so
bogged down in all the details of C I lost interest.

-Tom

[1] http://hackage.haskell.org/package/afv


[Haskell-cafe] Release: pqueue-1.0.1

2010-03-30 Thread Louis Wasserman
I've finally released my priority queue package, pqueue, on Hackage.  This
is the direct result of my efforts to design a priority queue for
containers, but instead, I decided to put together this package and submit
it for addition to the Haskell Platform.  There's already been a huge
discussion about which data structure to use, but I'm definitely sticking
with this structure for this major version release.

I'm not entirely sure of the process to submit to HP, so if there's
something in particular I should do, tell me plz.

The hackage link is http://hackage.haskell.org/package/pqueue, the darcs
repo is http://code.haskell.org/containers-pqueue/

Louis Wasserman
wasserman.lo...@gmail.com
http://profiles.google.com/wasserman.louis


Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Conor McBride
Getting back to the question, whatever happened to empty case
expressions? We should not need bottom to write total functions from
empty types.

Correspondingly, we should have that the map from an empty type to
another given type is unique extensionally, although it may have many
implementations. Wouldn't that make any empty type initial? Of course,
one does need one's isogoggles on to see the uniqueness of the initial
object.

An empty type is remarkably useful, e.g. as the type of free variables
in closed terms, or as the value component of the monadic type of a
server process. If we need bottom to achieve vacuous satisfaction,
something is a touch amiss.


Cheers

Conor

On 30 Mar 2010, at 22:02, Edward Kmett ekm...@gmail.com wrote:

The uniqueness of the definition of Nothing only holds up to
isomorphism.

This holds for many unique types; products, sums, etc. are all
subject to this multiplicity of definition when looked at through
the concrete-minded eye of the computer scientist.

The mathematician on the other hand can put on his fuzzy goggles and
just say that they are all the same up to isomorphism. =)


-Edward Kmett

On Tue, Mar 30, 2010 at 3:45 PM, wagne...@seas.upenn.edu wrote:
Quoting Ashley Yakeley ash...@semantic.org:

 data Nothing


I avoid explicit undefined in my programs, and also hopefully
non-termination. Then the bottomless interpretation becomes useful, for
instance, to consider Nothing as an initial object of Hask, particularly
when using GADTs.


Forgive me if this is stupid--I'm something of a category theory
newbie--but I don't see that Hask necessarily has an initial object
in the bottomless interpretation. Suppose I write

data Nothing2

Then if I understand this correctly, for Nothing to be an initial
object, there would have to be a function f :: Nothing -> Nothing2,
which seems hard without bottom. This is a difference between Hask
and Set, I guess: we can't write down the empty function. (I
suppose unsafeCoerce might have that type, but surely if you're
throwing out undefined you're throwing out the more frightening
things, too...)


~d



Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Aaron Tomb

On Mar 30, 2010, at 3:16 PM, Tom Hawkins wrote:


On Tue, Mar 30, 2010 at 7:30 PM, Aaron Tomb at...@galois.com wrote:

Hello,

I'm wondering whether there's anyone on the list with an interest in doing
additional work on the Language.C library for the Summer of Code. There are
a few enhancements that I'd be very interested in seeing, and I'd love to be a
mentor for such a project if there's a student interested in working on
them.


Here's another suggestion: A transformer to convert Language.C's AST
to RTL, thus hiding a lot of tedious details like structures, case
statements, variable declarations, typedefs, etc.

I started writing a model checker [1] based on Language.C, but got so
bogged down in all the details of C I lost interest.


I would also love to have something along these lines, and would be  
happy to mentor such a project.


On a related note, I have some code sitting around that converts  
Language.C ASTs into a variant of Guarded Commands, and I expect I'll  
release that at some point. For the moment, it's a little too  
intimately tied to the program it's part of, though.


Aaron


Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread wagnerdm
I believe I was claiming that, in the absence of undefined, Nothing
and Nothing2 *aren't* isomorphic (in the CT sense).

But this is straying dangerously far from Ashley's point, which I
think is a perfectly good one: Hask without bottom is friendlier than
Hask with bottom.


~d

Quoting Edward Kmett ekm...@gmail.com:


The uniqueness of the definition of Nothing only holds up to isomorphism.

This holds for many unique types; products, sums, etc. are all subject to
this multiplicity of definition when looked at through the concrete-minded
eye of the computer scientist.

The mathematician on the other hand can put on his fuzzy goggles and just
say that they are all the same up to isomorphism. =)

-Edward Kmett

On Tue, Mar 30, 2010 at 3:45 PM, wagne...@seas.upenn.edu wrote:


Quoting Ashley Yakeley ash...@semantic.org:

  data Nothing



I avoid explicit undefined in my programs, and also hopefully
non-termination. Then the bottomless interpretation becomes useful, for
instance, to consider Nothing as an initial object of Hask, particularly
when using GADTs.



Forgive me if this is stupid--I'm something of a category theory
newbie--but I don't see that Hask necessarily has an initial object in the
bottomless interpretation. Suppose I write

data Nothing2

Then if I understand this correctly, for Nothing to be an initial object,
there would have to be a function f :: Nothing -> Nothing2, which seems hard
without bottom. This is a difference between Hask and Set, I guess: we can't
write down the empty function. (I suppose unsafeCoerce might have that
type, but surely if you're throwing out undefined you're throwing out the
more frightening things, too...)

~d



[Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Ashley Yakeley

wagne...@seas.upenn.edu wrote:
I believe I was claiming that, in the absence of undefined, Nothing and 
Nothing2 *aren't* isomorphic (in the CT sense).


Well, this is only due to Haskell's difficulty with empty case 
expressions. If that were fixed, they would be isomorphic even without 
undefined.
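For the record, GHC later gained exactly this via the EmptyCase extension (well after this thread was written). With it, the isomorphism can be written without ever mentioning undefined; a sketch under that assumption:

```haskell
{-# LANGUAGE EmptyCase, EmptyDataDecls #-}

data Nothing1
data Nothing2

-- Total maps between two empty types, no bottom required: an empty
-- case expression is vacuously exhaustive.
to :: Nothing1 -> Nothing2
to x = case x of {}

from :: Nothing2 -> Nothing1
from x = case x of {}

-- 'to . from' and 'from . to' are (vacuously) identities, so the two
-- types are isomorphic, and either can serve as an initial object.
```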


--
Ashley Yakeley


Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Ross Paterson
On Tue, Mar 30, 2010 at 11:26:39PM +0100, Conor McBride wrote:
 Getting back to the question, whatever happened to empty case expressions? We
 should not need bottom to write total functions from empty types.

Empty types?  Toto, I've a feeling we're not in Haskell anymore.


[Haskell-cafe] How to use unsafePerformIO properly (safely?)

2010-03-30 Thread Ivan Miljenovic
I use the dreaded unsafePerformIO for a few functions in my graphviz
library ( 
http://hackage.haskell.org/packages/archive/graphviz/2999.8.0.0/doc/html/src/Data-GraphViz.html
).  However, a few months ago someone informed me that the
documentation for unsafePerformIO had some steps that should be
followed whenever it's used:
http://www.haskell.org/ghc/docs/latest/html/libraries/base-4.2.0.0/System-IO-Unsafe.html
.

Looking through this documentation, I'm unsure on how to deal with the
last two bullet points (adding NOINLINE pragmas is easy).  The code
doesn't combine IO actions, etc. and I don't deal with mutable
variables, so do I have to worry about them?
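For the NOINLINE part, the usual shape is a top-level constant; a sketch of the idiom (the name, the environment variable, and lookupEnv-based lookup are all illustrative, not graphviz's actual code):

```haskell
import Data.Maybe (fromMaybe)
import System.Environment (lookupEnv)
import System.IO.Unsafe (unsafePerformIO)

-- A top-level "constant" read once from the environment.  NOINLINE
-- ensures the unsafePerformIO action is shared, rather than inlined
-- and possibly re-run at every use site.  Name and variable are
-- hypothetical.
{-# NOINLINE dotCommand #-}
dotCommand :: String
dotCommand = unsafePerformIO $
  fmap (fromMaybe "dot") (lookupEnv "DOT_COMMAND")
```

The remaining bullet points in the documentation concern what the wrapped action does (mutation, ordering), not the pragma itself.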

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Edward Amsden
On Tue, Mar 30, 2010 at 5:14 PM, Aaron Tomb at...@galois.com wrote:
 That's very good to hear!

 When it comes to preprocessing and exact printing, I think that there are
 various stages of completeness that we could support.

  1) Add support for parsing comments to the Language.C parser. Keep using an
 external pre-processor but tell it to leave comments in the source code. The
 cpphs pre-processor can do this. The trickiest bit here would have to do
 with where to record the comments in the AST. What AST node is a given
 comment associated with? We could probably come up with some general rules,
 and perhaps certain comments, in weird locations, would still be ignored.


  2) Support correct column numbers for source locations. This falls short of
 complete macro support, but covers one of the key problems that macros
 introduce. The mcpp preprocessor [1] has a special diagnostic mode where it
 adds special comments describing the origin of code that resulted from macro
 expansion. If the parser retained comments, we could use this information to
 help with exact pretty-printing.

  3) Modify the pretty-printer to take position information into account when
 pretty-printing (at least optionally). As long as macro definitions
 themselves (as well as #ifdef, etc.) are not in the AST, the output will
 still not be exactly the same as the input, but it'll come closer.

  4) Add full support for parsing and expanding macros internally, so that
 both macro definitions and expansions appear in the Language.C AST. This is
 probably a huge project, partly because macros do not have to obey the tree
 structure of the C language in any way. This is perhaps beyond the scope of
 a summer project, but the other steps could help prepare for it in the
 future, and still fully address some of the problems caused by the
 preprocessor along the way.
I haven't looked at the C spec on macros, but I'm pretty motivated and
would like to shoot for a big project.


 Do you think you'd be interested in some subset or variation of 1, 2, and 3?
 Are there other ideas you have? Things I've missed? Things you'd do
 differently?

I'm very interested in all 3 of them, and actually somewhat in #4,
though I'll have to do some reading to understand why you're saying
it's such a big undertaking.


 Thanks,
 Aaron


 [1] http://mcpp.sourceforge.net/


 On Mar 30, 2010, at 1:46 PM, Edward Amsden wrote:

 I'd be very much interested in working on this library for GSoC. I'm
 currently working on an idea for another project, but I'm not certain
 how widely beneficial it would be. The preprocessor and
 pretty-printing projects sound especially intriguing.

 On Tue, Mar 30, 2010 at 1:30 PM, Aaron Tomb at...@galois.com wrote:

 Hello,

 I'm wondering whether there's anyone on the list with an interest in
 doing
 additional work on the Language.C library for the Summer of Code. There
 are
 a few enhancements that I'd be very interested in seeing, and I'd love to be a
 mentor for such a project if there's a student interested in working on
 them.

 The first is to integrate preprocessing into the library. Currently, the
 library calls out to GCC to preprocess source files before parsing them.
 This has some unfortunate consequences, however, because comments and
 macro
 information are lost. A number of program analyses could benefit from
 metadata encoded in comments, because C doesn't have any sort of formal
 annotation mechanism, but in the current state we have to resort to ugly
 hacks (at best) to get at the contents of comments. Also, effective
 diagnostic messages need to be closely tied to original source code. In
 the
 presence of pre-processed macros, column number information is
 unreliable,
 so it can be difficult to describe to a user exactly what portion of a
 program a particular analysis refers to. An integrated preprocessor could
 retain comments and remember information about macros, eliminating both
 of
 these problems.

 The second possible project is to create a nicer interface for traversals
 over Language.C ASTs. Currently, the symbol table is built to include
 only
 information about global declarations and those other declarations
 currently
 in scope. Therefore, when performing multiple traversals over an AST,
 each
 traversal must re-analyze all global declarations and the entire AST of
 the
 function of interest. A better solution might be to build a traversal
 that
 creates a single symbol table describing all declarations in a
 translation
 unit (including function- and block-scoped variables), for easy reference
 during further traversals. It may also be valuable to have this traversal
 produce a slightly-simplified AST in the process. I'm not thinking of
 anything as radical as the simplifications performed by something like
 CIL,
 however. It might simply be enough to transform variable references into
 a
 form suitable for easy lookup in a complete symbol table like I've just
 described. Other simple transformations, such as making all implicit casts
 explicit, or normalizing compound initializers, could also be good.

Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-30 Thread Lennart Augustsson
Of course Haskell' should have an empty case.  As soon as empty data
declarations are allowed then empty case must be allowed just by using
common sense.

On Tue, Mar 30, 2010 at 11:03 PM, Ashley Yakeley ash...@semantic.org wrote:
 wagne...@seas.upenn.edu wrote:

 I believe I was claiming that, in the absence of undefined, Nothing and
 Nothing2 *aren't* isomorphic (in the CT sense).

 Well, this is only due to Haskell's difficulty with empty case expressions.
 If that were fixed, they would be isomorphic even without undefined.

 --
 Ashley Yakeley



[Haskell-cafe] Re: [Haskell] Release: pqueue-1.0.1

2010-03-30 Thread Johan Tibell
See http://trac.haskell.org/haskell-platform/wiki/AddingPackages

Cheers,
Johan

On Mar 31, 2010 5:21 AM, Louis Wasserman wasserman.lo...@gmail.com
wrote:

I've finally released my priority queue package, pqueue, on Hackage.  This
is the direct result of my efforts to design a priority queue for
containers, but instead, I decided to put together this package and submit
it for addition to the Haskell Platform.  There's already been a huge
discussion about which data structure to use, but I'm definitely sticking
with this structure for this major version release.

I'm not entirely sure of the process to submit to HP, so if there's
something in particular I should do, tell me plz.

The hackage link is http://hackage.haskell.org/package/pqueue, the darcs
repo is http://code.haskell.org/containers-pqueue/

Louis Wasserman
wasserman.lo...@gmail.com
http://profiles.google.com/wasserman.louis

___
Haskell mailing list
hask...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Data.Graph?

2010-03-30 Thread Lee Pike
Apologies if this request isn't 'appropriate' for this venue (perhaps  
it's a Haskell' request and/or has been discussed before)...


I'd like it if there were a Data.Graph in the base libraries with  
basic graph-theoretic operations.  Is this something that's been  
discussed?


For now, it appears that Graphalyze on Hackage is the most complete  
library for graph analysis; is that right?  (I actually usually just  
want a pretty small subset of its functionality.)
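For what it's worth, a small Data.Graph already ships in the containers package (distributed with GHC, though not strictly "base"). A sketch of the kind of basic graph-theoretic operation it covers, here strongly connected components:

```haskell
import Data.Graph (flattenSCC, stronglyConnComp)

-- Strongly connected components of a tiny dependency graph, using
-- containers' Data.Graph.  Each entry is (node, key, neighbour keys).
components :: [[String]]
components = map flattenSCC (stronglyConnComp
  [ ("a", 1 :: Int, [2])  -- a depends on b
  , ("b", 2,        [1])  -- b depends on a (a cycle)
  , ("c", 3,        [1])  -- c depends on a
  ])
-- Components come out reverse topologically sorted (dependencies
-- first): the {a,b} cycle, then ["c"].
```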


Thanks,
Lee


Re: [Haskell-cafe] Data.Graph?

2010-03-30 Thread Ivan Miljenovic
Sorry for the duplicate email Lee, but I somehow forgot to CC the
mailing list :s

On 31 March 2010 13:12, Lee Pike leep...@gmail.com wrote:
 I'd like it if there were a Data.Graph in the base libraries with basic
 graph-theoretic operations.  Is this something that's been discussed?

I'm kinda working on a replacement to Data.Graph that will provide
graph-theoretic operations to a variety of graph types.

 For now, it appears that Graphalyze on Hackage is the most complete library
 for graph analysis; is that right?  (I actually usually just want a pretty
 small subset of its functionality.)

Yay, someone likes my code! :p

I've been thinking about splitting off the algorithms section of
Graphalyze for a while; maybe I should do so now... (though I was
going to merge it into the above-mentioned so-far-mainly-vapourware
library...).

There are a few other alternatives:

* FGL has a variety of graph operations (but I ended up
re-implementing a lot of the ones I wanted in Graphalyze because FGL
returns lists of nodes and I wanted the resulting graphs for things
like connected components, etc.).
* The dom-lt library
* GraphSCC
* hgal (which is a really atrocious port of nauty that is extremely
inefficient; I've started work on a replacement)
* astar (which is generic for all graph types since you provide
functions on the graph as arguments)

With the exception of FGL, all of these are basically libraries that
implement one particular algorithm/operation.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread wren ng thornton

Stephen Tetley wrote:

Much of the behaviour of CPP is not defined and is often inaccurately
described; certainly it wouldn't appear to make an ideal one-summer
student project.


But to give Language.C integrated support for preprocessing, one needn't
implement CPP. They only need to implement the right API for a 
preprocessor to communicate with the parser/analyzer.


Considering all the folks outside of C who use the CPP
*cough*Haskell*cough* having a stand-alone CPP would be good in its own
right. In fact, I seem to recall there's already one of those floating
around somewhere... ;)

I think it'd be far cooler and more useful to give Language.C integrated
preprocessor support without hard-wiring it to the CPP. Especially given
that there are divergent semantics for different CPP implementations, and
given that we could easily imagine wanting to use another preprocessor
(e.g., for annotations, documentation, etc.).

--
Live well,
~wren


[Haskell-cafe] Seeking advice about monadic traversal functions

2010-03-30 Thread Darryn Reid

Hi, I hope that this is an appropriate place to post this question; I
initially posted to Haskell-beginners since that seemed appropriate to
my knowledge level. Brent Yorgey kindly suggested I should post here
instead.

Just for a little background: I'm an experienced computer scientist and
mathematician but am very new to Haskell (I love it, I might add, but
have much to learn).

I've coded a (fairly) general rewriting traversal - I suspect the
approach might be generalisable to all tree-like types, but this doesn't
concern me much right now. My purpose is for building theorem provers
for a number of logics, separating the overall control mechanism from
the specific rules for each logic.

The beauty of Haskell here is in being able to abstract the traversal
away from the specific reduction rules and languages of the different
logics. I have pared down the code here to numbers rather than modal
formulas for the sake of clarity and simplicity. My question is
two-fold:
1. Does my representation of the traversal seem good or would it be
better expressed differently? If so, I'd appreciate the thoughts of more
experienced Haskell users.
2. I cannot really see how to easily extend this to a queue-based
breadth-first traversal, which would give me fairness. I'm sure others
must have a good idea of how to do what I'm doing here except in
breadth-first order; I'd appreciate it very much if someone could show
me how to make a second breadth-first version.

Thanks in advance for any help!

Darryn.

start of code.
-
import Control.Monad
import Control.Monad.Identity

-- Tableau is a tree containing formulas of type a, in 
-- Fork (disjunctive) nodes and Single (conjunctive) nodes.
-- Paths from root to Nil each represent an interpretation.
data Tableau a =   Nil   -- End of a path
 | Single a (Tableau a)  -- Conjunction
 | Fork (Tableau a) (Tableau a)  -- Disjunction
 deriving (Eq, Show)

-- 'Cut' directs the traversal to move back up the tree rather than into
-- the current subtree, while 'Next' directs the traversal to proceed
-- into the subtree.
data Action = Cut | Next 
  deriving (Eq, Show)

-- The type of the rewrite function to perform at each node in
-- the traversal of the tree. The Rewriter returns the new
-- subtree to replace the existing subtree rooted at the
-- traversal node, and a direction controlling the traversal
-- to the next node.
type Rewriter m a = Tableau a -> m (Tableau a, Action)

-- The traversal function, depth-first order.
rewrite :: (Monad m) => Rewriter m a -> Tableau a -> m (Tableau a)
rewrite rf t = do (t', d) <- rf t
                  rewrite' rf d t'

-- Worker function for traversal, really its main body.
rewrite' :: (Monad m) => Rewriter m a -> Action -> Tableau a -> m (Tableau a)
rewrite' rf Cut  t = return t
rewrite' rf Next Nil   = return Nil
rewrite' rf Next (Single x t1) = do t1' <- rewrite rf t1
                                    return (Single x t1')
rewrite' rf Next (Fork t1 t2)  = do t1' <- rewrite rf t1
                                    t2' <- rewrite rf t2
                                    return (Fork t1' t2')

-- Some examples to test the code with:
-- ghci> let t0 = (Single 2 (Single 3 Nil))
-- ghci> let t1 = (Single 1 (Fork t0 (Fork (Single 4 Nil) (Single 5 Nil))))
-- ghci> let t2 = (Fork (Single 10 Nil) (Single 11 Nil))
-- ghci> let t3 = (Fork (Single 3 Nil) (Single 3 Nil))

-- Running test4 t1 t2 produces the expected result; running 
-- test4 t1 t3 does not terminate, because this endlessly 
-- generates new patterns that match the substitution condition:
test4 :: (Eq t, Num t) => Tableau t -> Tableau t -> Tableau t
test4 tab tab' = runIdentity (rewrite rf tab)
    where rf (Single 3 Nil) = return (tab', Next)
          rf t              = return (t, Next)

-- Running test5 t1 t3, however, does terminate because the Cut
-- prevents the traversal from entering the newly spliced subtrees
-- that match the pattern.
test5 :: (Eq t, Num t) => Tableau t -> Tableau t -> Tableau t
test5 tab tab' = runIdentity (rewrite rf tab)
    where rf (Single 3 Nil) = return (tab', Cut)
          rf t              = return (t, Next)
-
end of code.
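On question 2, one way to get a breadth-first (level-order) version without threading an explicit queue through the monad is to rewrite one whole level at a time, recurse on all the admitted subtrees at once, and then plug the rewritten children back in. A lightly-tested sketch (types repeated from the message above so the fragment stands alone):

```haskell
-- Types repeated from the message above, so this sketch is self-contained.
data Tableau a = Nil
               | Single a (Tableau a)
               | Fork (Tableau a) (Tableau a)
               deriving (Eq, Show)

data Action = Cut | Next deriving (Eq, Show)

type Rewriter m a = Tableau a -> m (Tableau a, Action)

-- Level-order (breadth-first) rewriting: rewrite every tree in the
-- current level, gather the subtrees the rewriter lets us enter,
-- rewrite that next level as a whole, then reattach the results.
rewriteBF :: Monad m => Rewriter m a -> Tableau a -> m (Tableau a)
rewriteBF rf t = do ts <- go [t]
                    return (head ts)
  where
    go []  = return []
    go lvl = do
      rs    <- mapM rf lvl
      kids' <- go (concatMap children rs)
      return (rebuild rs kids')

    -- Subtrees to visit at the next level; Cut admits none.
    children (t', Next) = case t' of
      Single _ t1 -> [t1]
      Fork t1 t2  -> [t1, t2]
      Nil         -> []
    children (_, Cut)   = []

    -- Reattach rewritten children, consuming them in the same
    -- left-to-right order in which 'children' produced them.
    rebuild [] _ = []
    rebuild ((t', act) : rs) ks = case (act, t') of
      (Next, Single x _) -> let (k1 : ks') = ks
                            in  Single x k1 : rebuild rs ks'
      (Next, Fork _ _)   -> let (k1 : k2 : ks') = ks
                            in  Fork k1 k2 : rebuild rs ks'
      _                  -> t' : rebuild rs ks
```

Because each level is handled before any of its children, splices like the one in test4 are visited fairly rather than letting one branch starve the others.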




[Haskell-cafe] why doesn't time allow diff of localtimes ?

2010-03-30 Thread briand

which is a variation of the question: why can't I compare LocalTimes?

or am I missing something in Time (yet again).

Thanks,

Brian


Re: [Haskell-cafe] why doesn't time allow diff of localtimes ?

2010-03-30 Thread wagnerdm
Two values of LocalTime may well be computed with respect to different
timezones, which makes the operation you ask for dangerous. First
convert to UTCTime (with localTimeToUTC), then compare.
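Concretely, a sketch of the safe version using the time package (localTimeToUTC and diffUTCTime are the real API; the helper name is made up):

```haskell
import Data.Time

-- Difference between two LocalTimes, once each is anchored to an
-- explicit TimeZone.  'diffLocalTimes' is a hypothetical helper name.
diffLocalTimes :: TimeZone -> LocalTime -> TimeZone -> LocalTime
               -> NominalDiffTime
diffLocalTimes tzA a tzB b =
  diffUTCTime (localTimeToUTC tzA a) (localTimeToUTC tzB b)

-- e.g. midnight on 30 March 2010:
example :: LocalTime
example = LocalTime (fromGregorian 2010 3 30) midnight
```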


Cheers,
~d

Quoting bri...@aracnet.com:


which is a variation of the question: why can't I compare LocalTimes?
or am I missing something in Time (yet again).

Thanks,
Brian
