Re: [Haskell-cafe] Proposal: new function for lifting

2013-09-27 Thread Nick Vanderweit
Sorry for sending this twice; I didn't reply to the list initially.

I thought people [1] were generally talking about lift from
Control.Monad.Trans:

class MonadTrans t where
lift :: Monad m => m a -> t m a

The idea being that lifting through a monad stack feels tedious. The
proposed solution is to use instances to do the lifting for you, like in
mtl. So we've got instances like:

MonadState s m => MonadState s (ReaderT r m)

These let you automatically lift get/put/modify up a stack, without doing
any work.
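To make that concrete, here is a minimal sketch (assuming the mtl package, which ships with GHC) of `get` being lifted through a `ReaderT` layer with no explicit `lift`:

```haskell
import Control.Monad.Reader (ReaderT, ask, runReaderT)
import Control.Monad.State (State, get, runState)

-- 'get' is lifted through ReaderT automatically by the
-- MonadState s m => MonadState s (ReaderT r m) instance from mtl:
example :: ReaderT Int (State Int) Int
example = do
  r <- ask   -- ReaderT's own operation
  s <- get   -- State's operation, no explicit lift needed
  return (r + s)
```

Running it as `runState (runReaderT example 1) 41` yields `(42, 41)`.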

This is different from liftM*, which are about applying a pure function
to monadic arguments. That can be done quite nicely with (<$>) and (<*>)
from Data.Functor and Control.Applicative, respectively. Your first
example can be written:


(+) <$> Just 42 <*> Nothing

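For comparison, here is a small self-contained sketch showing that the applicative spelling and liftM2 compute the same thing (liftM2 is from Control.Monad in base):

```haskell
import Control.Monad (liftM2)

ex1, ex2, ex3 :: Maybe Int
ex1 = (+) <$> Just 42 <*> Nothing    -- Nothing: one argument is missing
ex2 = (+) <$> Just 42 <*> Just 1     -- Just 43
ex3 = liftM2 (+) (Just 42) (Just 1)  -- same result as ex2, via liftM2
```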

Nick

[1]
http://blog.ezyang.com/2013/09/if-youre-using-lift-youre-doing-it-wrong-probably/



signature.asc
Description: OpenPGP digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Looking for numbers to support using haskell

2013-09-23 Thread Nick Vanderweit
I'd be interested in more studies in this space. Does anyone know of
empirical studies on program robustness vs. other languages?


Nick

On 09/23/2013 11:31 AM, MigMit wrote:
 The classical reference is, I think, the paper “Haskell vs. Ada vs. C++ vs. 
 Awk vs. ... An Experiment in Software Prototyping Productivity”
 
 On Sep 23, 2013, at 9:20 PM, Mike Meyer m...@mired.org wrote:
 
 Hi all,

 I'm looking for articles that provide some technical support for why Haskell 
 rocks. Not just cheerleading, but something with a bit of real information 
 in it - a comparison of code snippets in multiple languages, or the results 
 of a study on programmer productivity (given all the noise and heat on the 
 topic of type checking, surely there's a study somewhere that actually 
 provides light as well), etc.

 Basically, I'd love things that would turn into an elevator pitch of "I can 
 show you how to be X times more productive than you are using Y", and then 
 the article provides the evidence to support that claim.

 Thanks,
 Mike
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 



signature.asc
Description: OpenPGP digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Substantial (1:10??) system dependencies of runtime performance??

2013-02-02 Thread Nick Rudnick
Dear all,

for quite a while now, I have experienced this issue with some curiosity;
yesterday I had it again, when a program that had previously taken well over
an hour needed only about ten minutes, after a system reboot (recent Ubuntu)
and with no browser started -- which finally made me decide to post this.

I still can't reproduce these effects, but there is some indication that it is
connected with browser use (mostly Google Chrome, usually with tens of
windows and ~100 folders open) and especially with use of video players;
closing or killing them doesn't seem to free the resources -- a reboot, or at
least suspend to disk, seems to be necessary (suspend to RAM doesn't seem to
be enough).

Roughly, I would say the differences in runtime can often reach a factor of as
much as 1:10 -- so I am curious whether this has already been observed, or
better yet discussed, elsewhere. I have spoken to somebody about it, and our
only plausible conclusion was that software like web browsers can rather
aggressively claim system resources high in the privilege hierarchy (cache??
registers??), so that they are no longer available to other programs.

I hope this is interesting to others too; I guess it is an issue for anybody
writing computation-intensive code to be run on standard systems alongside
other applications, who then has to predict the estimated runtime for the
client.

Maybe I have overlooked some libraries which are already able to scan the
system state in this regard, or even tell Haskell to behave 'less nice' when
other applications are known to be of lower priority??

Thanks a lot in advance, Nick
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Substantial (1:10??) system dependencies of runtime performance??

2013-02-02 Thread Nick Rudnick
Hi Gwern,

thanks for the interesting info. I quite often process CSV file data of
about 100M-1G.

Thanks a lot, Nick

2013/2/2 Gwern Branwen gwe...@gmail.com

 On Sat, Feb 2, 2013 at 3:19 PM, Nick Rudnick nick.rudn...@gmail.com
 wrote:
  Roughly, I would say the differences in runtime can reach a factor as
 much
  as 1:10 at many times -- and so I am curious whether this subject has
  already been observed or even better discussed elsewhere. I have spoken
 to
  somebody, and our only plausible conclusion was that software like web
  browsers is able to somewhat aggressively claim system resources higher
 in
  the privilege hierarchy (cache?? register??), so that they are not
 available
  to other programs any more.

 Maybe the Haskell program requires a lot of disk IO? That could easily
 lead to a big performance change since disk is so slow compared to
 everything else these days. You could try looking with 'lsof' to see
 if the browser has a ton of files open or try running the Haskell
 program with higher or lower disk IO priority via 'ionice'.

 --
 gwern
 http://www.gwern.net

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] foldr (.) id

2012-10-26 Thread Nick Vanderweit
Funny, I was thinking this morning about using something like this to convert 
to/from Church numerals:

church n = foldl (.) id . replicate n
unchurch f = f succ 0


I think it's a nice pattern.
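Made self-contained (with a monomorphic type for unchurch so it can be run directly), the snippet above looks like this:

```haskell
-- church n builds \f -> f . f . ... . f (n copies of f);
-- unchurch recovers n by applying the numeral to succ and 0.
church :: Int -> ((a -> a) -> (a -> a))
church n f = foldl (.) id (replicate n f)

unchurch :: ((Int -> Int) -> Int -> Int) -> Int
unchurch f = f succ 0
```

For example, `unchurch (church 5)` gives back `5`.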

Nick

On Friday, October 26, 2012 11:41:18 AM Greg Fitzgerald wrote:
 Hi Haskellers,
 
 I've recently found myself using the expression: foldr (.) id to compose
 a list (or Foldable) of functions.  It's especially useful when I need to
 map a function over the list before composing.  Is this function, or the
 more general foldr fmap id, defined in a library anywhere?  I googled and
 hoogled, but no luck so far.
 
 Thanks,
 Greg

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Over general types are too easy to make.

2012-08-31 Thread Nick Vanderweit
It is often the case that using GADTs with phantom types can allow you to 
constrain which functions can operate on the results of which constructors. I 
believe this is common practice now in such situations.
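A minimal sketch of the technique (type and constructor names here are illustrative, not from the original thread):

```haskell
{-# LANGUAGE GADTs #-}

-- Empty types used only as phantom tags:
data BarT
data FrogT

-- The tag records which constructor built the value:
data Foo t where
  Bar  :: Int           -> Foo BarT
  Frog :: String -> Int -> Foo FrogT

-- Total function: only Frog values can ever be passed here.
deFrog :: Foo FrogT -> String
deFrog (Frog s _) = s
-- deFrog (Bar 1) is now a *type* error, not a runtime one.
```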


Nick

On Friday, August 31, 2012 09:32:37 PM Paolino wrote:
 Hello Timothy
 
 GADTs let you catch more errors at compile time. With them you can give
 different types to constructors of the same datatype.
 
 regards
 paolino
 2012/8/31 timothyho...@seznam.cz
  
  Sure, but that's relying on the promise that you're passing it a valid
  BadFrog...  Consider then:
  
  
  deBadFrog $ BadFrogType (BadBar { badFoo = 1})
  
  
  -- Original message --
  From: John Wiegley jo...@newartisans.com
  Date: 31. 8. 2012
  Subject: Re: [Haskell-cafe] Over general types are too easy to make.
  
   timothyho...@seznam.cz writes:
   data BadFoo =
       BadBar  { badFoo :: Int }
     | BadFrog { badFrog :: String, badChicken :: Int }
   
   This is fine, until we want to write a function that acts on Frogs but
  
  not
  
   on Bars. The best we can do is throw a runtime error when passed a Bar
  
  and
  
   not a Foo:
  You can use wrapper types to solve this:
  
  data BadBarType = BadBarType BadFoo
  data BadFrogType = BadFrogType BadFoo
  
  Now you can have:
  
  deBadFrog :: BadFrogType -> String
  
  And call it as:
  
  deBadFrog $ BadFrogType (BadFrog { badFrog = "Hey", badChicken = 1})
  
  Needless to say, you will have to create helper functions for creating
  Bars
  and Frogs, and not allow your BadBar or BadFrog value constructors to be
  visible outside your module.
  
  John
  
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
  
  
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] parsec: parserFail multiple error messages

2012-08-08 Thread Nick Vanderweit
I found a similar question asked in June 2009 on the haskell-beginners 
archives, titled "Clearing Parsec error messages". A hack that was proposed 
(http://www.haskell.org/pipermail/beginners/2009-June/001809.html) was to 
insert a dummy character into the stream, consume it, and then fail. Still, 
I'd like to see if there is a cleaner way to modify the error state in the 
Parsec monad.


Nick

On Wednesday, August 08, 2012 03:24:31 PM silly wrote:
 I am trying to create a parsec parser that parses an integer and then
 checks if that integer has the right size. If not, it generates an
 error.
 I tried the following:
 
 8<---
 import Text.Parsec
 import Text.Parsec.String
 
 integer :: Parser Int
 integer  = do s <- many1 digit
               let n = read s
               if n > 65535 then
                   parserFail "integer overflow"
               else
                   return n
 8<---
 
 The problem is that when I try this
 
 parse integer "" "70000"
 
 I get the following error:
 
 Left (line 1, column 6):
 unexpected end of input
 expecting digit
 integer overflow
 
 ie there are three error messages but I only want the last one. Is
 there something I can do about this?
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] IMAGE_FILE_LARGE_ADDRESS_AWARE (4GB on Win64) ... any best practices??

2012-07-31 Thread Nick Rudnick
Dear Haskellers,

did any of you stumble over a surprising 2GB memory limit on Win64? I admit
I didn't notice it at once (just about to finish a complete memcheck... ;-)
-- but of course there already is a discussion of this:


http://stackoverflow.com/questions/10743041/making-use-of-all-available-ram-in-a-haskell-program

Unfortunately, this left me rather unsure about how to actually get a
program running with IMAGE_FILE_LARGE_ADDRESS_AWARE -- did anybody try this
successfully (or even unsuccessfully...) and have experience to share?

Usually a Linux user, I am just beginning to explore the possibilities of
Haskell on Windows, so please forgive me.

Thanks a lot in advance, Nick
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Efficient temporary file storage??

2012-01-23 Thread Nick Rudnick
Dear all,

if you want to temporarily store Haskell data in a file – do you have a
special way to get it done efficiently?

In an offline, standalone app, I am continuously reusing data volumes of
about 200MB, representing Map-like tables of a rather simple structure,

key: (Int,Int,Int)
value: [((Int,Int),LinkId)]


which take quite a good deal of time to produce.

Is there a recommendation about how to 'park' such data tables most
efficiently in files – any format acceptable, quick loading time is the
most desirable thing.
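For later readers: one common approach is binary serialisation via the binary package (a GHC boot library), which has fast instances for Map, tuples and lists. A hedged sketch -- LinkId is not defined in the message, so the sketch substitutes Int for it:

```haskell
import qualified Data.Binary as B
import qualified Data.Map as M

type LinkId = Int  -- stand-in; the real LinkId type isn't shown in the post
type Table  = M.Map (Int, Int, Int) [((Int, Int), LinkId)]

-- 'Park' a table in a file and load it back later:
saveTable :: FilePath -> Table -> IO ()
saveTable = B.encodeFile

loadTable :: FilePath -> IO Table
loadTable = B.decodeFile
```

Loading is a straight decode of the on-disk bytes, so it avoids re-running the expensive computation that produced the table.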

Thanks a lot in advance, Nick
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage feature request: E-mail author when a package breaks

2011-11-01 Thread Nick Bowler
On 2011-11-01 12:59 +0100, Daniel Díaz Casanueva wrote:
 How about a new optional Cabal field like mail-report? (don't bother
 about this name, I chose it randomly)
 
 If a build failure happens, or there is some relevant information about
 your package, Hackage will send a mail to the direction specified in that
 field. A field which content will NOT appear in the package page, so
 internet bots can't record so easily your mail direction to send you real
 spam. This is the reason because I write my direction in the name at
 domine dot com form (since a while ago), in spite of I would really like
 to receive mails about fails in those packages I maintain.
 
 Furthermore, since the field would be optional, you could still choose
 not to receive these mails.

Doing anything like this in the .cabal file is a mistake, since there is
no way to change it after uploading.

If your mail address changes, or if you don't want to maintain a package
any more, or if you simply change your mind about receiving status
updates by email, then if this gets hardcoded in the .cabal file you
have no recourse.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Categorized Weaknesses from the State of Haskell 2011 Survey

2011-09-15 Thread Nick Knowlson
I think a few examples can go a long way.

I remembered seeing a lot of requests for examples in the results, so I went
back and skimmed the spreadsheet. I found that 11 of the 34 responses under
Library Documentation explicitly called out examples as desirable.

Combined with Heinrich's experience, this sounds pretty promising to me.

Cheers,
Nick

On 15 September 2011 05:24, Heinrich Apfelmus apfel...@quantentunnel.de wrote:

 Malcolm Wallace wrote:

 In fact, my wish as a library author would be: please tell me what
 you, as a beginner to this library, would like to do with it when you
 first pick it up?  Then perhaps I could write a tutorial that answers
 the questions people actually ask, and tells them how to get the
 stuff done that they want to do.  I have tried writing documentation,
 but it seems that people do not know how to find, or use it.
 Navigating an API you do not know is hard.  I'd like to signpost it
 better.


 From my experience, people are very good at learning patterns from
 examples, so a list of simple examples with increasing difficulty or a
 cookbook-style tutorial work very well. In comparison, learning from general
 descriptions is much harder and usually done by learning from examples
 anyway.


 A case in point might by my own reactive-banana library.

  
 http://haskell.org/haskellwiki/Reactive-banana

 I have extensive haddocks and many examples ranging from simple to
 complicated

  
 http://haskell.org/haskellwiki/Reactive-banana/Examples

 but so far, I never wrote a tutorial or introductory documentation.
 Curiously, instead of sending complaints, people send me suggestions and
 code. I interpret this as a sign that my library is easy to understand (if
 you know Applicative Functors, that is) even though a key part of the
 documentation is still missing.



 Best regards,
 Heinrich Apfelmus

 --
 http://apfelmus.nfshost.com


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
http://nickknowlson.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Categorized Weaknesses from the State of Haskell 2011 Survey

2011-09-12 Thread Nick Knowlson
Hello all,

I did a followup analysis of the free-form responses to What is Haskell's
most glaring weakness / blind spot / problem in the State of Haskell 2011
Survey. The article is up at:
http://nickknowlson.com/blog/2011/09/12/haskell-survey-categorized-weaknesses/

I think it has a lot of interesting information in it - I hope some others
will find it useful now too.

Cheers,
Nick
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comment Syntax

2011-06-06 Thread Nick Bowler
On 2011-06-06 13:08 -0400, Albert Y. C. Lai wrote:
 Recall that the problem is not with isolated characters, but whole strings.
[...]
 in LaTeX, %%@#$^* is a comment.

This example probably does not help your position.

Since (La)TeX allows the comment character to be changed at any time,
the above is not necessarily a comment.  Furthermore, even with the
default character classifications, \% does not introduce a comment.
\% not introducing a comment in (La)TeX doesn't seem a whole lot
different from --- not introducing a comment in Haskell.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comment Syntax

2011-06-06 Thread Nick Bowler
On 2011-06-06 13:39 -0400, Nick Bowler wrote:
 On 2011-06-06 13:08 -0400, Albert Y. C. Lai wrote:
  Recall that the problem is not with isolated characters, but whole strings.
 [...]
  in LaTeX, %%@#$^* is a comment.
 
 This example probably does not help your position.
 
 Since (La)TeX allows the comment character to be changed at any time,
 the above is not necessarily a comment.  Furthermore, even with the
 default character classifications, \% does not introduce a comment.
 \% not introducing a comment in (La)TeX doesn't seem a whole lot
 different from --- not introducing a comment in Haskell.

And as was pointed out elsethread, --- /does/ in fact introduce a
comment in Haskell.  So the above should read:

  \% ... doesn't seem a whole lot different from --| ...

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Calling a C function that returns a struct by value

2011-05-25 Thread Nick Bowler
On 2011-05-25 16:19 -0400, Matthew Steele wrote:
  From Haskell, I want to call a C function that returns a struct by  
 value (rather than, say, returning a pointer).  For example:
 
typedef struct { double x; double y; } point_t;
point_t polar(double theta);
 
 I can create a Haskell type Point and make it an instance of Storable  
 easily enough:
 
data Point = Point CDouble CDouble
instance Storable Point where -- insert obvious code here

Note that there may be an arbitrary amount of padding bytes between the
two members of your struct, as well as after the last member, so
defining this Storable instance portably is somewhat tricky and
non-obvious.
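One way to fill in the "obvious code" from the original post, under the assumption named in the caveat above -- that the C struct has no internal padding, which happens to hold for two doubles on mainstream ABIs but is not guaranteed by the C standard:

```haskell
import Foreign
import Foreign.C.Types (CDouble)

data Point = Point CDouble CDouble
  deriving (Eq, Show)

-- Sketch of a Storable instance; it assumes the struct layout is
-- exactly two consecutive CDoubles with no padding (see caveat above).
instance Storable Point where
  sizeOf    _ = 2 * sizeOf (undefined :: CDouble)
  alignment _ = alignment (undefined :: CDouble)
  peek p = Point <$> peekElemOff (castPtr p) 0
                 <*> peekElemOff (castPtr p) 1
  poke p (Point x y) = do
    pokeElemOff (castPtr p) 0 x
    pokeElemOff (castPtr p) 1 y
```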

 And now I want to do something like this:
 
foreign import ccall unsafe polar :: CDouble -> IO Point
 
 ...but that doesn't appear to be legal (at least in GHC 6.12).  Is  
 there any way to import this C function into Haskell _without_ having  
 any additional wrapper C code?

No, the Haskell FFI simply cannot describe C functions with struct
parameters or return values.  You will need to call this function with a
C wrapper.

Assuming you got the Storable instance correct, something like

  void wrap_polar(double theta, point_t *out)
  {
 *out = polar(theta);
  }

should do nicely.  Alternately, you can avoid the tricky business of
defining a Storable instance for your struct entirely with something
like:

  void wrap_polar(double theta, double *x, double *y)
  {
 point_t tmp = polar(theta);
 *x = tmp.x;
 *y = tmp.y;
  }

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Exception for NaN

2011-05-12 Thread Nick Bowler
On 2011-05-12 21:14 +0400, Grigory Sarnitskiy wrote:
 How do I make my program stop whenever it gets somewhere NaN as a
 result during a calculation? If there is no appropriate flag for ghc
 maybe there exist flags for C to use in optc.

Under IEEE 754 floating point arithmetic, any operation which produces
a quiet NaN from non-NaN inputs will also raise the invalid floating
point exception.  Unfortunately, GHC provides no support whatsoever for
IEEE 754 floating point exceptions.

You may neverthelesss be able to hack it, assuming appropriate hardware
and software support.  The result will not be portable.  For example,
the GNU C library provides the feenableexcept function.  Completely
untested, but on such systems, if you call feenableexcept(FE_INVALID)
from C at the start of your program, your program should receive SIGFPE
whenever the invalid floating point exception is raised by a floating
point operation.  This will likely cause the program to terminate.
Other systems may provide a similar mechanism.

Whether this hack actually works properly with GHC's runtime system
is another issue entirely.  Furthermore, GHC may perform program
transformations that affect the generation of these exceptions in
a negative way.

 I don't want NaN to propagate, it is merely stupid, it should be terminated.

NaN propagation is not stupid.  Frequently, components of a computation
that end up being NaN turn out to be irrelevant at a later point, in
which case the NaNs can be discarded.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A small Darcs anomoly

2011-04-28 Thread Nick Bowler
On 2011-04-28 08:21 -0600, Chris Smith wrote:
 It seems to me the same problems could be solved without the necessary
 increase in complexity by:
 
 (a) Keeping repositories in sibling directories with names.
 
 (b) Keeping a working directory that you build in as one of these, and
 switching it to match various other named repositories as needed.  Then
 your build files are still there.

Unfortunately, sharing a build directory between separate repositories
does not work.  After a build from one repository, all the outputs from
that build will have modification times more recent than all the files
in the other repository.

When switching branches, git (and other systems) update the mtimes on
all files that changed, which will cause build systems to notice that
the outputs are out of date.  'cd' does not do this.  Thus, if you have
separate repo directories (call them A and B) with different versions of
some file, and you share a build directory between them, it is very
likely that after building A, a subsequent build of B will fail to work
correctly.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A small Darcs anomoly

2011-04-28 Thread Nick Bowler
On 2011-04-28 15:23 +, malcolm.wallace wrote:
 Then I suggest that your build tools are broken.  Rebuilding should
 not depend on an _ordering_ between modification times of source and
 object, merely on whether the timestamp of the source file is
 different to its timestamp the last time we looked.  (This requires
 your build tools to keep a journal/log, yes, but it is the only safe
 way to do it.)

Right.  The /order/ of the timestamps is wrong when a build directory is
shared between repositories (isn't that what I said?).  Try it yourself
with cabal: it will fail.

Consider two repos, A and B, each with different versions of foo.x,
that (when compiled) produces the output foo.y.  We store the build in
the directory C.

Initially, say A/foo.x has a mtime of 1, and B/foo.x has an mtime of 2.

We do a build of A, producing the output file C/foo.y.  say C/foo.y now
has a mtime of 3.

Now we do a build in B.  The build system sees that C/foo.y has a
mtime of 3, which is newer than B/foo.x's mtime of 2.  The build
system therefore does not rebuild C/foo.y.

 It is relatively common to change source files to have an older
 timestamp rather than a newer one.  This should not cause your build
 system to ignore them.  It can happen for any number of reasons:
 restoring from backup, switching repository, bisecting the history of
 a repo, clock skew on different machines, 

All of these operations update the mtimes on the files...

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A small Darcs anomoly

2011-04-26 Thread Nick Bowler
On 2011-04-26 15:51 +0200, Daniel Fischer wrote:
 On Tuesday 26 April 2011 15:35:42, Ivan Lazar Miljenovic wrote:
  How do you see how git branches are related to each other?
 
 To some extent, you can see such a relation in gitk. For mercurial, hg glog 
 also shows a bit. I suppose there's also something to visualise branches in 
 bazaar, but I've never used that, so I don't know.
 
 So, with gitk/glog, you can see that foo branched off bar after commit 
 0de8793fa1bc..., then checkout/update to that commit [or bar's head], 
 checkout/update to foo's head/tip and compare.

No need to do a checkout; gitk can visualize any or all branches of the
repository simultaneously.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] QuickCheck, (Ord a) => [a] -> Property problem

2011-04-21 Thread Nick Smallbone
larry.liuxinyu liuxiny...@gmail.com writes:

 Somebody told me that:
 Eduard Sergeev • BTW, more recent QuickCheck (from Haskell Platform
 2011.2.0.X - contains QuickCheck-2.4.0.1) seems to identifies the
 problem correctly:

 *** Failed! Falsifiable (after 3 tests and 2 shrinks):
 [0,1]
 False

I don't think this can be true: the problem occurs in GHCi and there's
no way for QuickCheck to detect it. And when I tested it I got the same
problem. There must be some difference between the properties you both
tested...

Nick


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] QuickCheck, (Ord a) => [a] -> Property problem

2011-04-20 Thread Nick Smallbone
larry.liuxinyu liuxiny...@gmail.com writes:

 prop_foo :: (Ord a) => [a] -> Property
 prop_foo xs = not (null xs) ==> maximum xs == minimum xs

 This is an extreme case that the property is always wrong.

 However, QuickCheck produces:
 *Main test prop_foo
 OK, passed 100 tests.

 Why this happen? If I use verboseCheck, I can find the sample test
 data are as the following:
 *MainverboseCheck prop_foo
 ...
 97:
 [(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),()]
 98:
 [(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),
 (),(),(),(),(),(),()]
 99:
 [(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),(),
 (),(),()]
 OK, passed 100 tests.

This is an unfortunate feature of GHCi: if the thing you want to
evaluate has a polymorphic type then all the type variables default to
(), see:
  
http://www.haskell.org/ghc/docs/7.0.3/html/users_guide/interactive-evaluation.html#extended-default-rules
So prop_foo is only tested for lists of (). Nasty.

The usual way to work around it is to declare all your properties
monomorphic, so write:
  prop_foo :: [Integer] -> Property

 This works at least, However, since 'a''b', they are order-able, what
 if I want to test prop_foo works for char?

Testing with Integers should always[*] be enough because of
parametricity.

Nick

[*] For certain values of always :)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fast conversion between Vector Double and Vector CDouble

2011-04-04 Thread Nick Bowler
On 2011-04-04 13:54 +0200, Bas van Dijk wrote:
 However, I noticed I had a lot of conversions from Vector Double to
 Vector CDouble and vice versa in my code:
 
 import Data.Vector.Storable ( Vector )
 
 mapRealToFrac ∷ (Storable α, Storable β, Real α, Fractional β)
   ⇒ Vector α → Vector β
 mapRealToFrac = VS.map realToFrac
 
 When I replace this with:
 
 mapRealToFrac = unsafeCoerce
 
 My application computes the same result but does it 28 times faster!

Note that even if Double and CDouble have identical representations,
unsafeCoerce does not perform the same conversion as realToFrac -- the
latter does conversion to/from Rational and thus munges all values not
representable therein.  This also happens to be why it is slow.
Some real examples, in GHCi:

   > realToFrac (0/0 :: Double) :: CDouble
  -Infinity
 
   > unsafeCoerce (0/0 :: Double) :: CDouble
  NaN
 
   > realToFrac (-0 :: Double) :: CDouble
  0.0
 
   > unsafeCoerce (-0 :: Double) :: CDouble
  -0.0

Using realToFrac to convert between different floating types is even
more fun:

   > realToFrac (1/0 :: Float) :: Double
  3.402823669209385e38

Nice!

 My question are:
 
 1) Is this always safe? In other words: are the runtime
 representations of Double and CDouble always equivalent or do they
 vary between platforms?

Probably not, but realToFrac isn't really safe, either (as above).

 2) Can the same improvement be accomplished using RULE pragma's?

No, because the two methods do not compute the same function.
However, there are (or were) broken RULE pragmas in GHC which
do this sort of transformation.  Such RULEs make realToFrac
really fun because your program's correctness will depend on
whether or not GHC decides to inline things.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bug in Prelude.words?

2011-03-28 Thread Nick Bowler
On 2011-03-28 16:20 +, malcolm.wallace wrote:
 But what about the author?  Surely there is no reason to use a
 non-breaking space unless they intend it to mean that the characters
 before and after it belong to the same logical unit-of-comprehension?

The non-breaking part of non-breaking space refers to breaking text
into lines.  In other words, if two words are separated by a
non-breaking space, then a line break will not be put between those
words.  A non-breaking space does *not* make two words into one word.
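This is easy to check in GHC, where base classifies U+00A0 as white space, so Prelude.words does split on it (the "non-breaking" property only concerns line breaking in rendered text, not tokenization):

```haskell
import Data.Char (isSpace)

-- base's isSpace returns True for the non-breaking space U+00A0,
-- and Prelude.words splits its input on isSpace characters.
-- (Note the \& separator, so the hex escape doesn't swallow 'ba'.)
splitOnNbsp :: [String]
splitOnNbsp = words "foo\xA0\&bar"
```

Here `splitOnNbsp` is `["foo","bar"]`: two words, as Nick describes.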

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] I think I've found a regression in Haskell Platform, but what do I do next?

2011-03-08 Thread Nick Frolov
On Tue, 2011-02-22 at 07:12 -0800, Johan Tibell wrote:
 On Tue, Feb 22, 2011 at 7:08 AM, Nick Frolov n...@mkmks.org wrote:
  Something has definitely changed between these two ghc versions. Since
  I've spent considerable amount of time on finding a workaround for the
  problem, I'd like to file a real bug report, but I'm not completely sure
  where to do it because there are several pieces of software involved
  (ghc itself, 'process', 'satchmo' and 'minisat2' SAT solver). Could
  somebody guide me please?
 
 There were a few I/O manager related bugs, one related to forking
 processes. These should be fixed in GHC 7.0.2 (and the next HP
 release), due soon.

I've just reproduced the bug with GHC 7.0.2. It seems that it still has
to be reported, but, again, I have no idea if it's caused by a possible
regression in GHC. It could be that one of the libraries relies on the
wrong behaviour of GHC 6.12.*, right?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] I think I've found a regression in Haskell Platform, but what do I do next?

2011-02-22 Thread Nick Frolov
Hi,

Recently I had to switch from ghc 6.12.3 to ghc 7.0.1 because of the bug
#4235. This switch has introduced a problem with one of third-party
libraries I use ('satchmo', an interface for SAT solvers).

The problem is that writing to the stdin of a process started by
System.Process.runInteractiveCommand silently fails if its stderr
is not read. If the stderr of the spawned process is redirected to /dev/null
or fully read by the parent process, the problem does not come up. The
troublesome code is available at http://bit.ly/dNwrIV (lines 43-45).
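For illustration, here is a minimal sketch (not the satchmo code; "tr" is just a stand-in child process) of the usual workaround for this class of problem: drain the child's stderr concurrently, so a full stderr pipe buffer cannot stall the child and, in turn, our writes to its stdin.

```haskell
import System.IO
import System.Process (runInteractiveCommand, waitForProcess)
import Control.Concurrent (forkIO)

main :: IO ()
main = do
  (inp, out, err, ph) <- runInteractiveCommand "tr a-z A-Z"
  -- Drain stderr in the background so the child can never block on it.
  _ <- forkIO (hGetContents err >>= putStr)
  hPutStrLn inp "hello"
  hClose inp                   -- send EOF so the child terminates
  hGetContents out >>= putStr  -- the child's stdout: "HELLO"
  _ <- waitForProcess ph
  return ()
```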

Changes between versions of the 'process' library (1.0.1.3 -> 1.0.1.4)
bundled with ghc 6.12.3 and 7.0.1 only include one bug fix, which is not
the source of the problem described, as far as I can understand. The
problem is not present with ghc 6.12.* at all. 

Something has definitely changed between these two ghc versions. Since
I've spent considerable amount of time on finding a workaround for the
problem, I'd like to file a real bug report, but I'm not completely sure
where to do it because there are several pieces of software involved
(ghc itself, 'process', 'satchmo' and 'minisat2' SAT solver). Could
somebody guide me please?





Re: [Haskell-cafe] linear logic

2011-02-22 Thread Nick Rudnick

Hi Vasili,

I don't clearly understand «in a categorical logic sense» -- but I am 
sure you have already checked out coherent spaces, which might be 
regarded as underlying Girard's original work in this sense? I have a 
faint idea about improvements, but I don't have them at hand at the moment.


Curiously -- is it allowed to ask about the motivation?

Cheers, Nick

On 02/22/2011 09:13 PM, Vasili I. Galchin wrote:

Hello,

What is the category that is used to interpret linear logic in
a categorical logic sense?

Thank you,


Vasili







[Haskell-cafe] Possibility to implant Haskell GC into PostgreSQL interesting?

2011-02-22 Thread Nick Rudnick

Dear all,

recently, in an email conversation with pgsql hackers, I took a quick 
shot and asked about their position on somebody replacing their palloc GC 
-- having roughly in mind that either here or on a Mercury mailing list 
(where there's a similar case, with a pure declarative language and a 
Boehm GC), the conclusion was that a non-pure GC would be a major 
hindrance to deeper interaction.


Ok, I found the answer worth a discussion here; as far as I understood, 
they don't oppose the idea that the PostgreSQL GC might be a candidate 
for an update. I see three issues:


(a) The most open question to me is the gain from the Haskell 
perspective; most critically: would a Haskell GC inside PostgreSQL mean a 
significant change, or rather a drop in the bucket? Once this is 
answered optimistically, there comes the question of possible 
applications -- i.e., what can be done with such a DBMS system. Knowing 
about efforts like (http://groups.inf.ed.ac.uk/links/) I would like to 
leave this open for discussion.


Let me also drop in a quote here: I believe their object-relational 
efforts got stuck at PostgreSQL due to the conceptual clash of OO with 
the relational algebra underlying PostgreSQL -- which in turn seems to 
harmonize much better with Hindley-Milner & Co. (System F?)


(b) The question I personally can say the least about is the effort to 
be expected for such a project. I would be very interested in some 
statements. I have limited knowledge of the PostgreSQL GC and would 
assume it is much simpler than, e.g., the GHC GC.


(c) Gain from the PostgreSQL perspective: This IMO should be easiest to 
answer, hoping the Haskell GC experts can easily say how much overhead 
has to be paid for pure declarativity, and what the chances are 
(e.g. parallelism, multi-core?).


Besides, it might be interesting to see to what extent a considerable 
overhead problem could be alleviated by a 'plugin' architecture allowing 
future PostgreSQL users to switch between a set of GCs.


I would be very interested about any comments, Cheers, Nick



Re: [Haskell-cafe] H98, OOHaskell - getting started with objects in Haskell

2011-01-14 Thread Nick Rudnick

Hi Philipp,

depending on what engineering calculations you are interested in, you 
might like http://timber-lang.org/ , a direct descendant of O'Haskell, 
targeted at embedded real-time systems.


If you are just stepping out of the OO programming world, it might be 
helpful to imagine OO as a rather narrow specialization of a concept 
called type, so that from the FP perspective it is just one of many 
alternatives and gets lost a little -- which may be a useful 
translation to use.


So the short answer to where's OO? in Haskell might just be data, 
while the expressive freedom of type classes / families might surprise 
you. There have been some people playing with cellular automata, Google 
helps, e.g.:


http://mjsottile.wordpress.com/
http://trac.haskell.org/gloss/

Both cases might give you an impression how it's done with Haskell types.

If you really are interested in using the OO class concept together with 
the Haskell type system by a more than practical motivation, an expert 
in the field who is interested in the subject for a long time is Peter 
Padawitz (http://fldit-www.cs.uni-dortmund.de/~peter/ 
http://fldit-www.cs.uni-dortmund.de/%7Epeter/); he has presented a 
beautiful synthesis based on category theory, swinging types 
(http://fldit-www.cs.uni-dortmund.de/~peter/Swinging.html 
http://fldit-www.cs.uni-dortmund.de/%7Epeter/Swinging.html). Of 
course, he did also use O'Haskell for his programming works in the past.


Cheers,

Nick

On 01/14/2011 12:23 AM, gutti wrote:

Hi,

thanks for all Your answers (and again I'm amazed how active and good this
forum is).

I expected OOHaskell to be on the somewhat extended side, but I didn't
expect it to be so uncommon.
This very strong and clear feedback is indeed very valuable.

I think I see the complexities of OO programming in larger programs
(unforeseen interactions), but I'm still doubtful that functional
programming can address all tasks equally efficiently.

I'm especially interested in engineering calculation tasks where cellular
automata could be used. In that case all you have to do is give the class
the right properties and then let it grow.

Such a localised intelligence approach seems an ideal OO task. I don't
know whether something functional could achieve the same.

Sounds like a nice challenge. -- I'll chew on a small example.

Cheers Phil





Re: [Haskell-cafe] Re: Re: Reply-To: Header in Mailinglists

2010-11-22 Thread Nick Bowler
On 2010-11-21 08:24 +, Malcolm Wallace wrote:
 If the list were to add a Reply-To: header, but only in the case  
 where one was not already present, that would seem to me to be ideal.   
 (None of the internet polemics against Reply-To that I have seen, have  
 considered this modest suggestion.)

This still breaks the reply-to-author feature.

 In the past, I have carefully used the Reply-To header to direct  
 responses to a particular mailing list of many (e.g. when cross- 
 posting an announcement).  Yet because there is a culture of Reply- 
 To: is bad, and most MUAs do not have a ReplyToList option, most  
 respondents end up pushing Reply to all, which ignores my setting of  
 Reply-To:, and spams more people than necessary.

MUAs will honour the Reply-To header when using the reply-to-all
function: the problem is that Reply-To does not mean what you think it
means.  The header indicates where *you* want to receive replies.  So
the reply-to-all function will reply to *you* (by using the value in
Reply-To), and to everyone else by copying the To and Cc lists.

There is another header, Mail-Followup-To, which tells MUAs to also drop
the To and CC lists.  I know several posters to this very list use it.
However, it needs to be used with care because it can fragment cross-
list discussions and/or prevent non-subscribers from receiving messages.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Finding the contents of haskell platform?

2010-11-05 Thread Nick Bowler
On 2010-11-05 21:05 +, Stephen Tetley wrote:
 On 5 November 2010 20:08, Andrew Coppin andrewcop...@btinternet.com wrote:
 Would it be hard to replace -> with a real Unicode arrow character?
 
 It should be quite easy - whether a given font has an arrow readily
 available is a different matter. It might be simpler to drop into
 the Symbol font (should be present in all browsers) and use
 arrowright - code 0o256.

Except that the Symbol font family is not available in all browsers.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Red links in the new haskell theme

2010-11-01 Thread Nick Bowler
On 2010-10-30 08:11 -0700, Mark Lentczner wrote:
 1) HTML supports the concept of alternate style sheets. If present,
 then the idea was that browsers would give the user the choice,
 somewhere, to choose among them. While Firefox does this (View  Page
 Style),

The implementation in Firefox is such that the style sheet resets to the
default if you reload the page or follow any link.  This makes the
feature completely useless in practice.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Generating random tuples

2010-11-01 Thread Nick Bowler
On 2010-11-01 19:18 +0100, Jacek Generowicz wrote:
 I'm toying with generating random objects (for example tuples) and  
 started wondering what pearls of wisdom Cafe might have on the matter.  
 Two obvious points (relating to my toy code, shown below) are
 
 1) The meaning of the limits required by randomR is not obvious for  
 types such as tuples (you could come up with some definition, but it  
 wouldn't be unique: how would you allow for different ones?[*]; you  
 might decide that having such limits is nonsensical and not want to  
 provide a randomR: would you then leave it undefinded?).

Indeed, the Random class has a fairly narrow "everything fits on the
real line" view of the world: not only is the talk about closed
intervals ambiguous in general, but so is the talk about uniform
distributions on those intervals.  That being said, there is an Ord
instance for tuples (a lexicographic ordering) and for this case I think
it would make the most sense to use that: select an element from the set
{ x : lo <= x <= hi }

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Generating random tuples

2010-11-01 Thread Nick Bowler
On 2010-11-01 20:09 +0100, Daniel Fischer wrote:
 On Monday 01 November 2010 19:55:22, Nick Bowler wrote:
  That being said, there is an Ord instance for tuples (a
  lexicographic ordering) and for this case I think it would make the
  most sense to use that: select an element from the set
   { x : lo <= x <= hi }
 
 Really bad for
 
 lo, hi :: (Int,Integer)
 lo = (0,0)
 hi = (3,4)

Good point, that's not so hot.

 the product (partial) order seems much better to me.

Indeed it does.
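For illustration, a sketch of what a randomR for pairs based on the product (component-wise) order could look like -- randomRPair is a hypothetical name for this example, not part of System.Random:

```haskell
import System.Random

-- Select each component independently within its own bounds, threading
-- the generator through, so the result lies in the "box" [lo, hi]
-- under the product order rather than the lexicographic Ord instance.
randomRPair :: (Random a, Random b, RandomGen g)
            => ((a, b), (a, b)) -> g -> ((a, b), g)
randomRPair ((la, lb), (ha, hb)) g0 =
  let (x, g1) = randomR (la, ha) g0
      (y, g2) = randomR (lb, hb) g1
  in ((x, y), g2)

main :: IO ()
main = do
  let ((x, y), _) = randomRPair ((0, 0), (3, 4)) (mkStdGen 42)
  print (x :: Int, y :: Integer)  -- each component within its own bounds
```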

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: OpenGL Speed!

2010-09-20 Thread Nick Bowler
On 2010-09-18 13:57 +1000, Lie Ryan wrote:
 On 09/18/10 07:49, Mats Klingberg wrote:
  On Friday September 17 2010 19.53.01, Lie Ryan wrote:
  It depends. Updating 800x600 screen at 24-bit color 30 times per second
  requires 800*600*24*30 = 345600000 bytes/s = 329 MB/s which is larger
  
  Shouldn't that be bits/s, or 800*600*3*30 = 41 MB/s?
  
 
 yep, blame that on lack of sleep, I guess...

Nevertheless, 4x AGP (circa 2000) can easily sustain the significantly
exaggerated rate of 329 MB/s.
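For reference, the corrected arithmetic from the thread (3 bytes per pixel for 24-bit colour, not 24):

```haskell
-- 800x600 pixels, 3 bytes per pixel (24-bit colour), 30 frames/second.
bytesPerSecond :: Integer
bytesPerSecond = 800 * 600 * 3 * 30  -- 43,200,000 bytes/s

megabytesPerSecond :: Integer
megabytesPerSecond = bytesPerSecond `div` (1024 * 1024)  -- ~41 MB/s

main :: IO ()
main = print megabytesPerSecond  -- 41
```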

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Why do unsafe foreign calls block other threads?

2010-08-05 Thread Nick Bowler
On 2010-08-03 15:23 -0700, John Meacham wrote:
 It is more an accident of ghc's design than anything, the same mechanism
 that allowed threads to call back into the runtime also allowed them to
 be non blocking so the previously used 'safe' and 'unsafe' terms got
 re-used. personally, I really don't like those terms, they are
 non-descriptive in terms of what they actually mean and presuppose a RTS
 similar to ghcs current design. 'reentrant' and 'blocking' which could
 be specified independently would be better and would be more
 future-proof against changes in the RTS or between compilers.

I thought safe meant the foreign function is allowed to call Haskell
functions, which seems to not have anything to do with whether the
function is re-entrant (a very strong condition).
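As a concrete illustration of the two flavours (a sketch: both imports bind the same libm function, chosen only because it is universally available; the comments describe GHC's behaviour):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)

-- 'unsafe': the call must not call back into Haskell; in GHC it also
-- blocks the capability it runs on, so it should be short-running.
foreign import ccall unsafe "math.h sin"
  c_sin_unsafe :: CDouble -> CDouble

-- 'safe': the call may re-enter Haskell, and other Haskell threads keep
-- running while it executes, at the cost of a more expensive call.
foreign import ccall safe "math.h sin"
  c_sin_safe :: CDouble -> CDouble

main :: IO ()
main = print (c_sin_unsafe 0, c_sin_safe 0)  -- (0.0,0.0)
```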

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] OpenGL Speed!

2010-07-29 Thread Nick Bowler
On 2010-07-29 11:30 -0600, Luke Palmer wrote:
 If you are trying to redraw in realtime, eg. 30 FPS or so, I don't
 think you're going to be able to.  There is just not enough GPU
 bandwidth (and probably not enough CPU).

Updating an 800x600 texture at 30fps on a somewhat modern system is
absolutely *not* a problem.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: Haskell Forum

2010-07-27 Thread Nick Bowler
On 2010-07-27 19:59 +0100, Andrew Coppin wrote:
 Darrin Chandler wrote:
  IOW, if people use the proper and well known features of NNTP it would
  be a better world than the one we have where people do not use proper and
  well known features of SMTP.
 
 SMTP is designed for delivering messages point-to-point. If your email 
 provider incorrectly marks half the list traffic as spam, you can't read 
 it.

This has nothing to do with SMTP, and everything to do with your email
provider being worthless.

 If your PC dies and you lose all your email, you cannot get it back 
 again.

Assuming you've never heard of list archives or backups, sure.

 If you hit reply, it only replies to the one person who wrote the
 message, not to the list.

Every mail client worth its salt has a 'reply to group' function, which
performs as advertised.  In fact, I can't even name a single one that
does not have this function.

 And every person has to download every single message ever sent.
 Because, let's face it, all a list server does is receive emails and
 then re-send them to everybody.

This point is valid, but not really relevant since the advent of DSL.  A
week's traffic on linux-kernel is about 30 megabytes.  Haskell-cafe is
about 4.

 If your mail system isn't operational at the moment when the email is
 sent, you'll never receive it and cannot ever get it afterwards.

This is not an accurate reflection of reality.

 I constantly have trouble with this mailing list. Even gmane can't seem 
 to thread it properly. But I've never had any trouble with threading in 
 any NNTP group, ever.

Mutt seems to have no trouble threading it properly.  I haven't
encountered an issue with gmane and this list, although admittedly I
don't use it often.

 [Well, apart from that stupid Thunderbird bug they still haven't fixed 
 yet. But that's a client bug. Use a different client and it goes away.]

The same can be said about email threading.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Haskell Forum

2010-07-26 Thread Nick Bowler
On 10:37 Mon 26 Jul , Job Vranish wrote:
 I agree. A web forum would be more friendly to newcomers, easier to browse,
 and better organized, than the mailing list.

I don't understand this sentiment at all.  How are web forums easier to
browse than list archives?  Especially given that there are usually
multiple archives for each ML, with a variety of ways to use them (e.g.,
I tend to use gmane with my newsreader for this purpose).

 Some people will still prefer the mailing list of course, but I think there
 will be enough demand to justify a forum :)

Wine has a web forum that is directly connected to their mailing lists:
each post on the forum is sent to the corresponding list and vice versa.
The web forum interface doesn't support proper threading, but it
otherwise seems to work OK.  Perhaps something like that would be
useful?

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: Haskell Forum

2010-07-26 Thread Nick Bowler
On 08:15 Mon 26 Jul , Kevin Jardine wrote:
 Other topics I am interested in are served by both a web forum and a
 mailing list, usually with different content and participants in both.
 In my experience, routing one kind of content to another does not work
 very well because of issues of spam control, moderation, topic
 subdivisions, the ability to correct posts, and threading (usually web
 forums have these things and mailing lists do not).

Since when do mailing lists not have threading?  Web forums with proper
support for threading seem to be few and far between.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Haskell Forum

2010-07-26 Thread Nick Bowler
On 20:56 Mon 26 Jul , Andrew Coppin wrote:
 My personal preference would be for NNTP. It seems to handle threading 
 much better. You can easily kill threads you're not interested in, and 
 thereafter not bother downloading them. You can use several different 
 client programs. And so on. However, last time I voiced this opinion, 
 people started talking about something called usenet, which I've never 
 heard of...

Conveniently, all of the haskell mailing lists have an NNTP interface
available.  Add news.gmane.org as a server in your newsreader and
subscribe to gmane.comp.lang.haskell.cafe.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: Haskell Forum

2010-07-26 Thread Nick Bowler
On 13:28 Mon 26 Jul , Kevin Jardine wrote:
 On Jul 26, 10:10 pm, Evan Laforge qdun...@gmail.com wrote:
 
  Interesting, I've never figured out why some people prefer forums, but
  you're proof that they exist :)  
 
 This debate is eerily similar to several others I've seen (for
 example, on the interactive fiction mailing list).
 
 In every case I've seen, a web forum vs. mailing list debate has been
 pointless at best and sometimes turned into a flame war. I think that
 it's best for people who prefer a web forum to establish one and use
 it, and for those who prefer the mailing list approach to continue to
 use that.

It seems to me, then, that a wine-like web forum <-> mailing list
gateway would satisfy everyone without fragmenting the community?

See http://forum.winehq.org/viewforum.php?f=2.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: Haskell Forum

2010-07-26 Thread Nick Bowler
On 13:58 Mon 26 Jul , John Meacham wrote:
 There already is an NNTP <-> mailing list gateway via gmane that gives a
 nice forumy and threaded web interface for those with insufficient email
 readers. Adding a completely different interface seems unnecessary and
 fragmentary.
 
 http://news.gmane.org/gmane.comp.lang.haskell.cafe

Ah, I didn't realise the gmane web interface supported followups (I knew
the NNTP interface did, and mentioned this elsewhere in this thread).
Looks like we've already got a web forum, then, so I guess there's
nothing to do! :)

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Heavy lift-ing

2010-07-23 Thread Nick Bowler
On 11:43 Fri 23 Jul , michael rice wrote:
 Hi,
 
 I don't understand what's taking place here.
 
 From Hoogle:
 
 =
 
 liftM2 :: Monad m => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
 
 Promote a function to a monad, scanning the monadic arguments from left to 
 right. For example,
 
     liftM2 (+) [0,1] [0,2] = [0,2,1,3]
     liftM2 (+) (Just 1) Nothing = Nothing
 
 =
 
 What does it mean to promote a function to a monad?

Consider fmap, which 'promotes a function to a functor':

  fmap :: Functor f => (a - b) -> f a -> f b

This might be easier to understand if you fully parenthesise this:

  fmap :: Functor f => (a -> b) -> (f a -> f b)

In other words, fmap takes a function on ordinary values as input, and
outputs a function on a particular Functor.

Now consider liftM, which 'promotes a function to a monad':

  liftM :: Monad m => (a -> b) -> m a -> m b

Hey, this looks almost the same as fmap (it is)!  Now, monads have
additional structure which allows us to promote more complicated
functions, for example:

  liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c

which, when fully parenthesised, looks like

  liftM2 :: Monad m => (a -> b -> c) -> (m a -> m b -> m c)

What we have now is that we can promote a 'two argument' function to
Monads (this is not possible on mere Functors, hence there's no fmap2).

 It would seem that the monad values must understand the function
 that's being promoted, like Ints understand (+).

Yes, liftM2 (+) gives you a new function with type
 
  (Num a, Monad m) => m a -> m a -> m a

 But how does one add [0,1] and [0,2] to get [0,2,1,3]?

liftM2 (+) [0,1] [0,2] gives the list

  [0+0, 0+2, 1+0, 1+2]

(recall that (>>=) in the list monad is concatMap).
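The examples above can be checked directly:

```haskell
import Control.Monad (liftM, liftM2)

main :: IO ()
main = do
  -- the list monad pairs every element of the first list with every
  -- element of the second: [0+0, 0+2, 1+0, 1+2]
  print (liftM2 (+) [0,1] [0,2])                      -- [0,2,1,3]
  -- in Maybe, any Nothing argument makes the whole result Nothing
  print (liftM2 (+) (Just 1) (Nothing :: Maybe Int))  -- Nothing
  -- liftM is just fmap for monads
  print (liftM (* 2) (Just 21))                       -- Just 42
```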

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Small flexible projects a possible niche for Haskell - your statement, please...

2010-07-18 Thread Nick Rudnick

Paul, this is what we are interested in... :-)

Given that Haskell has lots of combinator constructs on various levels, 
as you said -- might I ask what your personal favourites among them are...?


Your mention of an early coding initiative, taken by domain experts and 
programmers in one person to meet early demand, strongly reminds me of our 
concepts of knowledge techniques -- it is my hope that this is possible. 


Thanks a lot,

   Nick

Paul Johnson wrote:

On 16/07/10 05:41, Nick Rudnick wrote:



In consequence, an 8-student-project with two B.Sc. theses is raised 
as a pilot to examine the possibilities of using Haskell in the 
combination small team with limited resources and experience in a 
startup setting - we want to find out whether Haskell can be an offer 
competitive with languages like Ruby & Co. in such a setting.




I'm not sure exactly what you are asking, but I'm going to try to 
answer the question Does Haskell have a niche in small, flexible 
projects?


I think the answer is a definite yes.  I also think that Haskell can 
do great things in bigger projects as well, but successful 
technologies often start out with a niche that was previously poorly 
served, and then move out from there.


Haskell developers generally start by writing down an axiomatic 
definition of the problem domain.  To a developer raised in 
traditional top down development this looks like a jump into coding, 
and furthermore coding at the lowest level.  In fact it is a 
foundation step in the architecture, because Haskell works well with a 
bottom up approach.  The property that makes this work is 
composability, which says that you can take primitive elements and 
integrate them into bigger units without having to worry about mutual 
compatibility.  A Haskell library will typically define a data type 
Foo and then have functions with types along the lines of mungFoo 
:: Something -> Foo -> Foo.  This combinator style of library gives 
you the
basic building blocks for manipulating Foos, along with a guarantee 
that the output will always be a valid Foo.  So you can build up your 
own applications that work at the Foo level rather than down in the 
coding level of flow control and updated variables like conventional 
programs.  This lets domain experts read and comment on the code, 
which reduces defect rates a lot.
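A tiny hypothetical sketch of this style -- Foo, Something and mungFoo are the placeholder names from the paragraph above, not a real library:

```haskell
-- A hypothetical combinator-style library in the spirit of the text.
newtype Foo = Foo [String] deriving Show

type Something = String

emptyFoo :: Foo
emptyFoo = Foo []

-- Every combinator consumes and produces a valid Foo, so arbitrary
-- compositions of combinators are valid by construction.
mungFoo :: Something -> Foo -> Foo
mungFoo s (Foo xs) = Foo (s : xs)

main :: IO ()
main = print (mungFoo "b" (mungFoo "a" emptyFoo))  -- Foo ["b","a"]
```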


But these combinator libraries are also highly reusable because they 
describe an entire domain rather than just being designed to fit a 
single application.  So the best bet is to analyse a domain, write a 
combinator library that models the domain, and then produce a series 
of related programs for specific applications within that domain.  
That will let a small team be amazingly productive.


Paul.





[Haskell-cafe] Small flexible projects a possible niche for Haskell - your statement, please...

2010-07-15 Thread Nick Rudnick

Dear all,

besides good ambitions in many other areas, it is interesting to see 
that a great number of present Haskell projects are run by a very small 
number of persons, and even some parts of the usual developer's toolkit, 
like e.g. Haddock, seem to contribute to this.


Has the Haskell culture produced an environment which is especially apt 
for such development in small groups, possibly with a low degree of 
division of labor?


In the last three years at Duisburg-Essen university, very small but 
application-oriented introductions for up to 100 rather non-CS-centric 
students raised the question of whether there might be such a niche for 
Haskell application -- as there seems to be some evidence that certain 
perceptions of a steep learning curve for Haskell are in significant 
correlation with an already existing imperative language culture.


In consequence, an 8-student-project with two B.Sc. theses is raised as 
a pilot to examine the possibilities of using Haskell in the combination 
small team with limited resources and experience in a startup setting - 
we want to find out whether Haskell can be an offer competitive with 
languages like Ruby & Co. in such a setting.


An additional focus is the question of to what extent Haskell might be an 
enabler in allowing a greater extent of change in the organization, like 
people coming and going, or choosing new roles -- here we allow ourselves 
to *disregard* the problem of teaching Haskell to innocents, to prevent 
such questions from dominating the whole of the discussion: this might be 
another project. Our premise is the availability of a sufficient number of 
people at a mediocre to intermediate level in the environment.


We hope this might be interesting to the Haskell community, as Haskell 
seems to be underrepresented in this regard, and there seem to be active 
prejudices in the imperative community -- which, unfortunately being in 
positive correlation with general programming experience, might give an 
observing third party the impression that such a rejection of Haskell is 
a matter of computing competence.


Now we -- especially the two students working on their B.Sc. theses, 
Markus Dönges and Lukas Fisch -- are very interested in your input, 
possibly

o   aspects of Haskell technology you perceive as relevant or helpful,

o   examples in the Haskell culture / community which might be relevant,

o   experiences of your own and around you, and *especially*,

o   language properties, constructs and extensions you would see as 
enablers in this regard.



Thank you very much in advance... :-)

   Nick




Re: [Haskell-cafe] Comments on Haskell 2010 Report

2010-07-13 Thread Nick Bowler
On 16:21 Fri 09 Jul , John Meacham wrote:
 I would think it is a typo in the report. Every language out there seems
 to think 0**0 is 1 and 0**y | y /= 0 is 0. I am not sure whether it is
 mandated by the IEEE standard but a quick review doesn't say they should
 be undefined (and the report mentions all the operations with undefined
 results)

IEEE 754 has three different power operations.  They are recommended
operations, which means that supporting them is optional.

pown only allows integral exponents, and the standard says the following:

pown (x, 0) is 1 for any x (even a zero, quiet NaN, or infinity)

pow handles integral exponents as a special case, and is similar:

pow (x, ±0) is 1 for any x (even a zero, quiet NaN, or infinity)

powr is defined as exp(y*log(x)).

powr (±0, ±0) signals the invalid operation exception
[NB: this means that the operation returns a quiet NaN].

In C, the pow function corresponds to the pow operation here,
assuming the implementation conforms to annex F of the standard (an
optional feature).
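These cases can be checked from Haskell: GHC's (**) on Double goes through C's pow on typical platforms, so the results below reflect pow's semantics (this is compiler- and libm-dependent, so treat it as an illustration rather than a guarantee):

```haskell
main :: IO ()
main = do
  print ((0 :: Double) ** 0)            -- 1.0: pow (x, +/-0) is 1 for any x
  print ((0 :: Double) ** 2)            -- 0.0
  -- even a quiet NaN base yields 1 under pow's zero-exponent rule
  print (isNaN ((0 / 0 :: Double) ** 0))  -- False
```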


Re: [Haskell-cafe] Re: Float instance of Data.Bits

2010-07-12 Thread Nick Bowler
On 22:02 Sat 10 Jul , Sam Martin wrote:
  Note that the Haskell report does not require IEEE 754 binary encodings.
  In fact, it permits 'Float' to be a decimal floating point type.
 
 True. Although I don't really understand why? Or rather, I don't
 understand why it can't be at least slightly more specific and at
 least state that Float is a 32-bit floating point value and Double is
 a 64-bit floating point value. The exact handling of various
 exceptions and denormals tends to vary across hardware, but this at
 least allows you to get at the representation.

Because this precludes efficient Haskell implementations on platforms
which do not support these formats in hardware (but might support other
formats).  Also, there are other problems if the Float and Double types
do not match the corresponding types of the system's C implementation.

 I realise it'll be platform-specific (assuming isIEEE returns false),
 but then so is the behaviour of your code if you don't require IEEE
 support.

IEEE 754 defines five basic floating point formats:

   binary32:   24  bits precision and a maximum exponent of 127.
   binary64:   53  bits precision and a maximum exponent of 1023.
   binary128:  113 bits precision and a maximum exponent of 16383.
   decimal64:  16 digits precision and a maximum exponent of 384.
   decimal128: 34 digits precision and a maximum exponent of 6144.

Each of these formats has a specific binary representation (with the
lovely storage hacks we all know and love).  Additionally, there are two
more interchange formats which are not intended for use in arithmetic:

   binary16:   11 bits precision and a maximum exponent of 15.
   decimal32:  7 digits precision and a maximum exponent of 96.

Furthermore, there are so-called Extended and Extensible formats,
which satisfy certain requirements of the standard, but are permitted
to have any encoding whatsoever.

A Haskell implementation tuned for development of financial applications
would likely define Float as the decimal64 format, and Double as the
decimal128 format. isIEEE would return True since these types and
related operations conform to the IEEE floating point standard.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: Float instance of Data.Bits

2010-07-09 Thread Nick Bowler
On 15:32 Fri 09 Jul , Sam Martin wrote:
 There are plenty of other examples of bit twiddling floats. Floats have
 a well defined bit representation (if a slightly complex one) so it's
 perfectly reasonable to be able to manipulate it.

Note that the Haskell report does not require IEEE 754 binary encodings.
In fact, it permits 'Float' to be a decimal floating point type.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Merge hsql and HDBC -- there can only be one!

2010-07-07 Thread Nick Rudnick

Hi Chris,


these are good questions -- actually, you might have mentioned Takusen, too.

Clearly, HDBC is the largest of these projects, and there are lots of 
things well done there.


Takusen has an interesting approach, and I would like to see a 
discussion here about the practical outcomes, as I have done no testing yet.


I myself quite a time ago had an opportunity to do a Haskell job with a 
PostgreSQL backend for a client, where I tried out all three and got 
hsql running easiest. A maintainer was vacant, so I stepped in happily 
-- doing refactorings, fixing problems at request, giving advice to people.


I can say that I am quite a little PostgreSQL centric and that I have a 
GIS project in sight, for which I want to try to adapt hsql.


Cheers,

   Nick


Christopher Done wrote:

One thing that would be nice is a unification of the general database
libraries hsql and HDBC. What is the difference between them? Why are
there two, and why are there sets of drivers for both (duplication of
effort?)? I've used both in the past but I can't discern a real big
difference (I used the hsql-sqlite library and the HDBC-postgresql
library, whichever worked...). It seems the best thing to do is either
actively merge them together and encourage the community to move from
one to the other -- judging from what I've read HDBC is more up to
date and newer than hsql -- or have some documentation with damn good
reasons to choose one or the other, because currently this is a
needless source of confusion and possible duplication of effort for
Haskell's database libraries.

I wasn't going to post until I'd actually researched the difference
myself properly but I didn't get chance to have a look over the
weekend, but I thought I'd pose the question. Do people actually care
about this?


Re: [Haskell-cafe] Functional programming techniques in small projects

2010-06-21 Thread Nick Rudnick

Hi Markus,

I am afraid your questions are formulated quite narrowly, so that people 
you might like to reach might not feel addressed -- it might be 
helpful to ask yourself how your subject might look from the perspective 
of an average Haskeller, if such a person exists at all.


At first, please explain what you understand by "post-mass-production" 
and how you expect this to relate to Haskell.


Then, agile software development is used in projects of various sizes -- 
but I guess you want to use this term to emphasize *small* projects -- 
to what extent do you actually require such projects to follow agile 
practices, and what about small and one-man projects which do not follow 
agile at all?


You are speaking about »student-driven software development«... it might 
be hard for some people to imagine what you mean by this and -- again -- 
how this relates to Haskell.


Could you please be a little more explicit?


All the best,

   Nick



Markus Dönges wrote:

Hello Community,

I am a student from the University of Duisburg-Essen, Germany, and at 
the moment I am writing my bachelor thesis about Post-Mass-Prodcution 
Softwaresupply/ -development in case of an University administration. 
This approach deals with student-driven software development in which 
functional programming techniques (ie, Haskell) and agile development 
approaches matter.


I am looking for opinions and statements on how small (agile) software 
development projects may benefit from functional programming 
techniques (perhaps in contrast to imperative programming techniques).


Since I am at the very start, I would appreciate further literature 
advices. In addition, does anybody know particular people who are 
familiar with this topic?


Any answer is appreciated :-)

Regards, Markus


Re: [Haskell-cafe] Proper Handling of Exceptional IEEE Floating Point Numbers

2010-04-22 Thread Nick Bowler
On 16:34 Thu 22 Apr , Barak A. Pearlmutter wrote:
 Comparison of exceptional IEEE floating point numbers, like NaN, seems
 to have some bugs in ghci (version 6.12.1).
 
 These are correct, according to the IEEE floating point standards:
 
 Prelude> 0 < (0/0)
 False
...
 But these are inconsistent with the above, and arguably incorrect:
...
 Prelude> compare 0 (0/0)
 GT
...
 I'd suggest that compare involving a NaN should yield
 
 error "violation of the law of the excluded middle"

The problem stems from the fact that Float and Double are instances of a
class for totally ordered data types (namely Ord), which they are not.

While it might be worthwhile to make compare error in this case, the
consequences of this instance are much, much worse.  For example, max
is not commutative (as you have observed).  Data.Map.insert with Double
keys can cause elements to disappear from the map (at least as far as
Data.Map.lookup is concerned).  Using sort on a list of doubles
exposes the underlying sorting algorithm used.
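The max problem mentioned above falls straight out of Ord's default method (max x y = if x <= y then y else x), assuming Double inherits that default, as GHC's does:

```haskell
-- NaN breaks commutativity of max: every comparison against NaN is
-- False, so the result depends on argument order.
nan :: Double
nan = 0 / 0

main :: IO ()
main = do
  print (isNaN (max nan 0))  -- True:  nan <= 0 is False, so nan is kept
  print (isNaN (max 0 nan))  -- False: 0 <= nan is False, so 0 is kept
```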

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Proper Handling of Exceptional IEEE Floating Point Numbers

2010-04-22 Thread Nick Bowler
On 13:30 Thu 22 Apr , Casey McCann wrote:
 On Thu, Apr 22, 2010 at 11:34 AM, Barak A. Pearlmutter ba...@cs.nuim.ie 
 wrote:
  Comparison of exceptional IEEE floating point numbers, like NaN, seems
  to have some bugs in ghci (version 6.12.1).
 
 Arguably, the bug in question is the mere existence of Eq and Ord
 instances for IEEE floats. They don't, can't, and never will work
 correctly. A similar topic was discussed here not too long ago; IEEE
 floating point so-called numbers lack reflexive equality and
 associativity of addition and multiplication, among other properties
 one might take for granted in anything calling itself a number.

Lack of reflexivity in the Eq instance is, in my opinion, an extremely
minor detail.  I can't think of any library functions off-hand that both

 (a) Might reasonably be used in the context of floating point
 computation.
 (b) In the presence of NaNs, depend on reflexivity of (==) for correct
 behaviour.

Now, lack of totality of the Ord instance is actually a severe problem,
because I can immediately think of a function that is both useful and
depends on this: sort.  If we define "list is sorted" as "every element
except the last is less than or equal to its successor", sort does not
necessarily produce a sorted list!  In fact, as I posted elsewhere, the
result of sort in this case depends on the particular algorithm used.

For all intents and purposes, a class for partial orders would be
totally fine for floating point.  Sure, it's not reflexive in the
presence of NaNs.  Sure, it's not antisymmetric in the presence of
negative zeros.  On the other hand, it does satisfy a restricted form
of reflexivity and antisymmetry:

  * x == y implies x <= y
  * x <= y and y <= x implies x == y
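Both caveats are directly observable; a minimal check of the NaN and negative-zero behaviour:

```haskell
-- (==) on Double is not reflexive (NaN), and not fine-grained enough
-- to separate the two zeros, which is why only the restricted forms
-- of reflexivity and antisymmetry hold.
main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  print (nan == nan)                       -- False
  print ((-0.0 :: Double) == 0.0)          -- True, and yet...
  print (isNegativeZero (-0.0 :: Double))  -- ...the two zeros differ
```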

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: instance Eq (a - b)

2010-04-15 Thread Nick Bowler
On 03:53 Thu 15 Apr , rocon...@theorem.ca wrote:
 On Wed, 14 Apr 2010, Ashley Yakeley wrote:
 
  On 2010-04-14 14:58, Ashley Yakeley wrote:
  On 2010-04-14 13:59, rocon...@theorem.ca wrote:
  
  There is some notion of value, let's call it proper value, such that
  bottom is not one.
  
  In other words bottom is not a proper value.
  
  Define a proper value to be a value x such that x == x.
  
  So neither undefined nor (0.0/0.0) are proper values
  
  In fact proper values are not just subsets of values but are also
  quotients.
  
  thus (-0.0) and 0.0 denote the same proper value even though they are
  represented by different Haskell values.
  
  The trouble is, there are functions that can distinguish -0.0 and 0.0.
  Do we call them bad functions, or are the Eq instances for Float and
  Double broken?
 
 I'd call them disrespectful functions, or maybe nowadays I might call them
 improper functions.  The good functions are respectful functions or
 proper functions.

snip from other post
 Try using the (x == y) ==> (f x == g y) test yourself.

Your definitions seem very strange, because according to this, the
functions

  f :: Double -> Double
  f x = 1/x

and 

  g :: Double -> Double
  g x = 1/x

are not equal, since (-0.0 == 0.0) yet f (-0.0) /= g (0.0).

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: GSoC: Hackage 2.0

2010-04-09 Thread Nick Bowler
On 10:21 Fri 09 Apr , Job Vranish wrote:
 On Fri, Apr 9, 2010 at 9:46 AM, Ivan Lazar Miljenovic 
 ivan.miljeno...@gmail.com wrote:
 
  Job Vranish job.vran...@gmail.com writes:
   I vote for adding a feature that would let people post comments/code
   snippets to the documentation of other peoples packages :)
 
  You mean turn every hackage project page into a mini wiki?
 
  Yep.

My worry with this is that users will fill carefully written
documentation with irrelevant nonsense or, worse, factual errors.
Moderation seems necessary.

   It would be even nicer if you could post comments to individual haskell
   definitions on the haddock page, and then hide most of them by default
   under an expander of some sort.
 
  Rather than, you know, providing the maintainer with a patch with some
  improved documentation?
 
 This is often more difficult than it sounds. The biggest obstacle to this
 approach is that a new hackage version of the package must to be uploaded to
 update the documentation and the authors (me included) tend to prefer to
 push new packages only when there are significant changes.

It seems to me that the solution to this particular problem is to allow
package maintainers to publish updated documentation separately from new
packages.

 Steps involved currently:
 0. pull down package source to build manually
 1. add documentation/code snippet to source
 2. build haddock documentation
 3. debug bad/ugly syntax / missing line breaks that break haddock
 4. generate a patch
 5. email patch to author
 6. wait a week for the author to actually get around to applying the patch
 to whatever repository the source resides
 7. wait several weeks for the author to release the next version of the
 package

I suspect that most maintainers are amenable to simple emails containing
change requests where documentation is concerned ("please change the
first sentence of bazify's documentation to ..."), which means you can
skip steps 0 through 4.

 Steps involved with mini wiki:
 0. add [code] [/code] tags (or whatever)
 1. copy
 2. paste
 3. submit
 
 I think making this process easier would greatly increase the community
 involvement in the generation of documentation and improve the quality of
 the documentation as a whole.
 
 I would imaging that this would not be a trivial task, but I think even
 something super simple (like what they have for the php documentation) would
 be much better than nothing.

PHP's comments are a fine example of what I *don't* want to see
polluting my documentation.  There is very little signal to be found
amongst that noise.

 On Fri, Apr 9, 2010 at 10:46 AM, Malcolm Wallace 
 malcolm.wall...@cs.york.ac.uk wrote:
  How much cooler it would be, if the wiki-like comment on Hackage could
  automatically be converted into a darcs/git/whatever patch, and mailed to
  the package author/maintainer by Hackage itself.
 
  This would indeed be awesome :)
 
 Though I think I would prefer to select, from a list of comments, which ones
 I would like to include, and then click the "Download darcs/git/whatever
 patch" button (rather than get hit by emails).

The resulting patches will be patently useless, because the only sort of
changes they can make is to append bullet points to existing documentation.
Unless you are proposing that users can perform any change whatsoever on
hackage?

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-30 Thread Nick Bowler
On 19:54 Tue 30 Mar , Stephen Tetley wrote:
 On 30 March 2010 18:55, Serguey Zefirov sergu...@gmail.com wrote:
  Other than that, C preprocessor looks simple.
 
 Ah no - apparently anything but simple.

I would describe it as simple but somewhat annoying.  This means that
guessing at its specification will not result in anything resembling a
correct implementation, but reading the specification and implementing
accordingly is straightforward.

Probably the hardest part is expression evaluation.

 You might want to see Jean-Marie Favre's (very readable, amusing)
 papers on subject. Much of the behaviour of CPP is not defined and
 often inaccurately described, certainly it wouldn't appear to make an
 ideal one summer, student project.

The only specification of the C preprocessor that matters is the one
contained in the specification of the C programming language.  The
accuracy of any other description of it is not relevant.  C is quite
possibly the language with the greatest quantity of inaccurate
descriptions in existence (scratch that, C++ is likely worse).

As with most of the C programming language, a lot of the behaviour is
implementation-defined or even undefined, as you suggest.  For example:

/* implementation-defined */
#pragma launch_missiles

/* undefined */
#define explosion defined
#if explosion
# pragma launch_missiles
#endif

This makes a preprocessor /easier/ to implement, because in these cases
the implementer can do /whatever she wants/, including doing nothing or
starting the missile launch procedure.  In the implementation-defined
case, the implementer must additionally write the decision down
somewhere, i.e. "Upon execution of a #pragma launch_missiles directive,
all missiles are launched."

 http://megaplanet.org/jean-marie-favre/papers/CPPDenotationalSemantics.pdf

If this paper had criticised the actual C standard as opposed to a
working draft, it would have been easier to take it seriously.  I find
the published standard quite clear about the requirements of a C
preprocessor.

Nevertheless, assuming that the complaints of the paper remain valid, it
appears to boil down to "The C preprocessor is weird, and one must
read its whole specification to understand all of it."  It also seems to
contain a bit of "The C standard does not precisely describe the GNU C
preprocessor."

This work is certainly within the scope of a summer project.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Bytestrings and [Char]

2010-03-23 Thread Nick Bowler
On 18:11 Tue 23 Mar , Iustin Pop wrote:
 I agree with the principle of correctness, but let's be honest - it's
 (many) orders of magnitude between ByteString and String and Text, not
 just a few percentage points…
 
 I've been struggling with this problem too and it's not nice. Every time
 one uses the system readFile & friends (anything that doesn't read via
 ByteStrings), it's hellishly slow.
 
 Test: read a file and compute its size in chars. Input text file is
 ~40MB in size, has one non-ASCII char. The test might seem stupid but it
 is a simple one. ghc 6.12.1.
 
 Data.ByteString.Lazy (bytestring readFile + length) - < 10 milliseconds,
 incorrect length (as expected).
 
 Data.ByteString.Lazy.UTF8 (system readFile + fromString + length) - 11
 seconds, correct length.
 
 Data.Text.Lazy (system readFile + pack + length) - 26s, correct length.
 
 String (system readfile + length) - ~1 second, correct length.

Is this a mistake?  Your own report shows String & readFile being an
order of magnitude faster than everything else, contrary to your earlier
claim.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Bytestrings and [Char]

2010-03-23 Thread Nick Bowler
On 18:25 Tue 23 Mar , Iustin Pop wrote:
 On Tue, Mar 23, 2010 at 01:21:49PM -0400, Nick Bowler wrote:
  On 18:11 Tue 23 Mar , Iustin Pop wrote:
   I agree with the principle of correctness, but let's be honest - it's
   (many) orders of magnitude between ByteString and String and Text, not
   just a few percentage points…
   
   I've been struggling with this problem too and it's not nice. Every time
   one uses the system readFile & friends (anything that doesn't read via
   ByteStrings), it's hellishly slow.
   
   Test: read a file and compute its size in chars. Input text file is
   ~40MB in size, has one non-ASCII char. The test might seem stupid but it
   is a simple one. ghc 6.12.1.
   
   Data.ByteString.Lazy (bytestring readFile + length) - < 10 milliseconds,
   incorrect length (as expected).
   
   Data.ByteString.Lazy.UTF8 (system readFile + fromString + length) - 11
   seconds, correct length.
   
   Data.Text.Lazy (system readFile + pack + length) - 26s, correct length.
   
   String (system readfile + length) - ~1 second, correct length.
  
  Is this a mistake?  Your own report shows String & readFile being an
  order of magnitude faster than everything else, contrary to your earlier
  claim.
 
 No, it's not a mistake. String is faster than pack to Text and length, but 
 it's
 100 times slower than ByteString.

Only if you don't care about obtaining the correct answer, in which case
you may as well just say const 42 or somesuch, which is even faster.

 My whole point is that difference between byte processing and char processing
 in Haskell is not a few percentage points, but order of magnitude. I would
 really like to have only the 6x penalty that Python shows, for example.

Hang on a second... less than 10 milliseconds to read 40 megabytes from
disk?  Something's fishy here.

I ran my own tests with a 400M file (419430400 bytes) consisting almost
exclusively of the letter 'a' with two Japanese characters placed at
every multiple of 40 megabytes (UTF-8 encoded).

With Prelude.readFile/length and 5 runs, I see

  10145ms, 10087ms, 10223ms, 10321ms, 10216ms.

with approximately 10% of that time spent performing GC each run.

With Data.Bytestring.Lazy.readFile/length and 5 runs, I see

  8223ms, 8192ms, 8077ms, 8091ms, 8174ms.

with approximately 20% of that time spent performing GC each run.
Maybe there's some magic command line options to tune the GC for our
purposes, but I only managed to make things slower.  Thus, I'll handwave
a bit and just shave off the GC time from each result.

Prelude: 9178ms mean with a standard deviation of 159ms.
Data.ByteString.Lazy: 6521ms mean with a standard deviation of 103ms.

Therefore, we managed a throughput of 43 MB/s with the Prelude (and got
the right answer), while we managed 61 MB/s with lazy ByteStrings (and
got the wrong answer).  My disk won't go much, if at all, faster than
the second result, so that's good.

So that's a 30% reduction in throughput.  I'd say that's a lot worse
than a few percentage points, but certainly not orders of magnitude.

On the other hand, using Data.ByteString.Lazy.readFile and
Data.ByteString.Lazy.UTF8.length, we get results of around 12000ms with
approximately 5% of that time spent in GC, which is rather worse than
the Prelude.  Data.Text.Lazy.IO.readFile and Data.Text.Lazy.length are
even worse, with results of around 25 *seconds* (!!) and 2% of that time
spent in GC.

GNU wc computes the correct answer as quickly as lazy bytestrings
compute the wrong answer.  With perl 5.8, slurping the entire file as
UTF-8 computes the correct answer just as slowly as Prelude.  In my
first ever Python program (with python 2.6), I tried to read the entire
file as a unicode string and it quickly crashes due to running out of
memory (yikes!), so it earns a DNF.

So, for computing the right answer with this simple test, it looks like
the Prelude is the best option.  We tie with Perl and lose only to GNU
wc (which is written in C).  Really, though, it would be nice to close
that gap.
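The gap between the fast wrong answer and the right answer comes down to bytes versus decoded characters; a tiny illustration (assuming the bytestring package that ships with GHC):

```haskell
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  -- "a" followed by U+3042 (Hiragana A): 1 + 3 bytes in UTF-8.
  let bytes = BL.pack [0x61, 0xE3, 0x81, 0x82]
      chars = "a\x3042"
  print (BL.length bytes)  -- 4: byte count, the fast but wrong answer
  print (length chars)     -- 2: character count, what the test asks for
```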

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] ptys and hGetContent problem

2010-03-08 Thread Nick Bowler
On 20:38 Mon 08 Mar , Mathijs Kwik wrote:
 Hi all,
 
 I found this blogpost from Bryan O'Sullivan
 http://www.serpentine.com/blog/2008/09/30/unix-hacking-in-haskell-better-pseudoterminal-support/
 and I wanted to try it out.
 
 Before moving to an interactive command (which needs pty), I just did
 a small test for ls -l / to see if it worked.
 I got it to compile, but when running, it throws an exception when
 reaching the end of the output (in this case because I evaluate the
 length to force reading all).
 Main: /dev/ptmx: hGetContents: hardware fault (Input/output error)

You have just stumbled into the wonderful world of pseudo-terminals,
where their behaviour is subtly different on every bloody platform.  It
appears that on your platform, after the last user closes the slave port
(i.e. after your child process terminates), subsequent reads from the
master port return EIO.

One would normally detect this condition with the poll system call, by
looking for POLLHUP on the master port.

On some platforms (but evidently not yours), the last close of the slave
port causes the behaviour you seem to have expected, where a subsequent
read returns 0.

 What's wrong? :)

Presumably the problem is that handle-based I/O is not suitable for
pseudo-terminal masters.  Definitely not lazy I/O.

 And further...
 If I do want to use an interactive program which needs input, how do I
 send ctrl-d or ctrl-c?
 tail -f needs ctrl-c (or I need to kill the process)

These so-called control characters are normally configured by termios.
If suitably configured, the appropriate action will be performed when
the control characters are written to the master port.
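Concretely, the control characters are ordinary bytes (assuming the default termios bindings of VEOF to Ctrl-D and VINTR to Ctrl-C), so sending them is just a write to the master handle; sendEOF and sendIntr are hypothetical helper names:

```haskell
import System.IO (Handle, hPutChar)

-- With default termios settings, EOF is Ctrl-D (0x04) and the
-- interrupt character is Ctrl-C (0x03); writing them to the pty
-- master triggers the corresponding action on the slave side.
sendEOF, sendIntr :: Handle -> IO ()
sendEOF  h = hPutChar h '\EOT'  -- Ctrl-D, byte 0x04
sendIntr h = hPutChar h '\ETX'  -- Ctrl-C, byte 0x03

main :: IO ()
main = print (fromEnum '\EOT', fromEnum '\ETX')  -- (4,3)
```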

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] Re: New OpenGL package: efficient way to convert datatypes?

2010-03-05 Thread Nick Bowler
On 14:30 Fri 05 Mar , Achim Schneider wrote:
 Nick Bowler nbow...@elliptictech.com wrote:
  I meant to say that fromRational . toRational is not appropriate for
  converting values from one floating point type to another floating
  point type.
 
 It gets even worse: My GPU doesn't know about doubles and its floats
 aren't IEEE, at all (not that Haskell Doubles are guaranteed to be IEEE
 iirc)

AFAIK, GLDouble is a newtype wrapper around CDouble, though, and doesn't
correspond to a GPU-internal type.  Even if it did, if we are converting
to a type that doesn't support infinities, then is is reasonable for the
conversion to not support them, either.  I'd want to see a call to error
in this case, but perhaps allowing unsafe optimisations (see below).

 I think the situation calls for a split interface: One to satisfy the
 numericists / scientific IEEE users, and one to satisfy performance.

I think this is a job for the compiler rather than the interface.  For
example, GCC has -ffinite-math-only, -fno-signed-zeros, etc., which
allow the compiler to make assumptions about the program that would not
normally be valid.

Nevertheless, for the issue at hand (Double <-> CDouble <-> GLDouble), there
is a conversion interface that should satisfy everyone (both fast and
correct): the one that compiles to nothing at all.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Nick Bowler
On 16:20 Thu 04 Mar , Daniel Fischer wrote:
 Yes, without rules, realToFrac = fromRational . toRational.

snip

 I think one would have to add {-# RULES #-} pragmas to 
 Graphics.Rendering.OpenGL.Raw.Core31.TypesInternal, along the lines of
 
 {-# RULES
 "realToFrac/CDouble->GLdouble"  realToFrac x = GLdouble x
 "realToFrac/GLdouble->CDouble"  realToFrac (GLdouble x) = x
   #-}

These rules are, alas, *not* equivalent to fromRational . toRational.

Unfortunately, realToFrac is quite broken with respect to floating point
conversions, because fromRational . toRational is entirely the wrong
thing to do.  I've tried to start some discussion on the haskell-prime
mailing list about fixing this wart.  In the interim, the OpenGL package
could probably provide its own CDouble <-> GLDouble conversions, but sadly
the only way to correctly perform Double <-> CDouble is unsafeCoerce.
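The loss is easy to demonstrate: Rational has no NaN, infinities, or negative zero, so a round trip through it silently rewrites such values:

```haskell
-- fromRational . toRational is lossy for IEEE special values:
-- toRational maps -0.0 to the rational 0, so the sign bit is gone.
roundTrip :: Double -> Double
roundTrip = fromRational . toRational

main :: IO ()
main = do
  print (isNegativeZero (-0.0 :: Double))    -- True
  print (isNegativeZero (roundTrip (-0.0)))  -- False: the sign was lost
```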

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] New OpenGL package: efficient way to convert datatypes?

2010-03-04 Thread Nick Bowler
On 17:45 Thu 04 Mar , Daniel Fischer wrote:
 Am Donnerstag 04 März 2010 16:45:03 schrieb Nick Bowler:
  On 16:20 Thu 04 Mar , Daniel Fischer wrote:
   Yes, without rules, realToFrac = fromRational . toRational.
 
  snip
 
   I think one would have to add {-# RULES #-} pragmas to
   Graphics.Rendering.OpenGL.Raw.Core31.TypesInternal, along the lines of
  
   {-# RULES
   "realToFrac/CDouble->GLdouble"  realToFrac x = GLdouble x
   "realToFrac/GLdouble->CDouble"  realToFrac (GLdouble x) = x
 #-}
 
  These rules are, alas, *not* equivalent to fromRational . toRational.
 
 But these rules are probably what one really wants for a [C]Double - 
 GLdouble conversion.

I agree that the conversions described by the rules are precisely what
one really wants.  However, this doesn't make them valid rules for
realToFrac, because they do not do the same thing as realToFrac does.
They break referential transparency by allowing to write functions whose
behaviour depends on whether or not realToFrac was inlined by the ghc
(see below for an example).

  Unfortunately, realToFrac is quite broken with respect to floating point
  conversions, because fromRational . toRational is entirely the wrong
  thing to do.
 
 entirely? For
 
 realToFrac :: (Real a, Fractional b) => a -> b
 
 I think you can't do much else that gives something more or less 
 reasonable. For (almost?) any concrete conversion, you can do something 
 much better (regarding performance and often values), but I don't think 
 there's a generic solution.

Sorry, I guess I wasn't very clear.  I didn't mean to say that
fromRational . toRational is a bad implementation of realToFrac.  I
meant to say that fromRational . toRational is not appropriate for
converting values from one floating point type to another floating point
type.  Corollary: realToFrac is not appropriate for converting values
from one floating point type to another floating point type.

The existence of floating point values which are not representable in a
rational causes problems when you use toRational in a conversion.  See
the recent discussion on the haskell-prime mailing list

  http://thread.gmane.org/gmane.comp.lang.haskell.prime/3146

or the trac ticket on the issue

  http://hackage.haskell.org/trac/ghc/ticket/3676

for further details.

  I've tried to start some discussion on the haskell-prime
  mailing list about fixing this wart.  In the interim, the OpenGL package
  could probably provide its own CDouble <-> GLDouble conversions, but sadly
 
 s/could/should/, IMO.
 
  the only way to correctly perform Double <-> CDouble is unsafeCoerce.
 
 Are you sure? In Foreign.C.Types, I find
 
 {-# RULES
 "realToFrac/a->CFloat"    realToFrac = \x -> CFloat   (realToFrac x)
 "realToFrac/a->CDouble"   realToFrac = \x -> CDouble  (realToFrac x)
 
 "realToFrac/CFloat->a"    realToFrac = \(CFloat   x) -> realToFrac x
 "realToFrac/CDouble->a"   realToFrac = \(CDouble  x) -> realToFrac x
  #-}

Even though these are the conversions we actually want to do, these
rules are also invalid.  I'm not at all surprised to see this, since we
have the following:

 {-# RULES
 "realToFrac/Double->Double" realToFrac = id :: Double -> Double
   #-}
 
 (why isn't that in GHC.Real, anyway?), it should do the correct thing - not 
 that it's prettier than unsafeCoerce.

This rule does exist, in GHC.Float (at least with 6.12.1), and is
another bug.  It does the wrong thing because fromRational . toRational
:: Double - Double is *not* the identity function on Doubles.  As
mentioned before, the result is that we can write programs which behave
differently when realToFrac gets inlined.

Try using GHC to compile the following program with and without -O:

  compiledWithOptimisation :: Bool
  compiledWithOptimisation = isNegativeZero . realToFrac $ -0.0

  main :: IO ()
  main = putStrLn $ if compiledWithOptimisation
      then "Optimised :)"
      else "Not optimised :("

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Re: [Haskell-cafe] hdbc-mysql fails to compile

2010-02-25 Thread Nick Rudnick

Hi Thomas,

Up to 3/3/2010 I am looking after nearly 100 Haskell newbies in their 
project end phase -- but Marc Weber promised to kick my ass in time so I 
can look after the hsql-XXX repos.


Anyway, I just uploaded 1.8.1, since it seems to work.

Cheers,

   Nick

Thomas Girod wrote:

replying to myself. there is only one line to fix in Setup.lhs


  (mysqlConfigProg, _) <- requireProgram verbosity
  mysqlConfigProgram AnyVersion (withPrograms lbi)


becomes :


  (mysqlConfigProg, _) <- requireProgram verbosity
  mysqlConfigProgram (withPrograms lbi)


Obviously the new Cabal version distinguishes between
requireProgram and requireProgramVersion.


On Thu, Feb 25, 2010 at 08:01:34PM +0100, Thomas Girod wrote:
  

Hi there. Looks like hdbc-mysql cannot compile against GHC-6.12 --
confirmed by the hackage build logs. [1]

Anyone know a way to dodge the problem?

cheers,

Tom

[1]
http://hackage.haskell.org/packages/archive/HDBC-mysql/0.6/logs/failure/ghc-6.12







Re: [Haskell-cafe] Problems linking hsql-mysql

2010-02-23 Thread Nick Rudnick

Hi Maciej,

I will try to reproduce the error -- could you send me both your *.cabal 
files (for hsql & hsql-mysql), if you have changed anything, and your 
system configuration?


Is this extremely urgent? I ask because these days fall exactly in the end 
phase of the projects of about 100 beginner Haskell students, whom I am 
looking after. So I am in a bit of slow motion regarding 
other things... ;-)


But in case you are in emergency, please let me know... At first sight, 
I would say it looks like a configuration problem... ;-)


Cheers,

   Nick

Maciej Podgurski wrote:

Hi,

I have problems linking a simple test program that imports 
Database.HSQL.MySQL via ghc --make Test.hs. The error only occurs when 
importing this module of hsql-mysql-1.7.1. I pasted the build 
output here: http://hpaste.org/fastcgi/hpaste.fcgi/view?id=22911 
(actually it was even longer, but hpaste only takes 30k).


When I run ghc -e main Test.hs or start it from GHCi, everything 
works fine and no errors occur. So I guess all the needed .lib files are 
there, but GHC can't find them when linking (adding the flag -L<path to 
mysql/lib/opt> doesn't help). Anyone have an idea?



Best wishes,

Maciej





Re: [Haskell-cafe] Category Theory woes

2010-02-20 Thread Nick Rudnick

A place in the hall of fame and thank you for mentioning clopen... ;-)

Just wanting to present open/closed as an example of improvable maths 
terminology, I overlooked this even more evident defect in it and even 
copied it into my improvement proposal, bordered/unbordered:


It is questionable style to name two properties, if they can occur 
combined, as an antagonistic pair...!


Accordingly, it is more elegant to draw such terms from independent 
domains.


This subject seems to drive me crazy... I actually pondered on 
improvement, and came to:


«faceless» in replacement of «open»

Rough explanation: The «limit» of a closed set can be the limit of 
another closed set that may even share only this limit -- a faceless set 
has -- under the given perspective -- no such part to «face» to beyond. 
Any comments?


But the big question is now: What (non antagonistic) name can be found 
for the other property??


Any ideas...??

Cheers,

   Nick



Ergonomic terminology does not come for free; giving a quick answer here 
would be «maths style».

Michael Matsko wrote:

Nick,

Actually, clopen is a set that is both closed and open.  Not one 
that is neither.  Except in the case of half-open intervals, I can't 
remember talking much in topology about sets with a partial boundary.





Alexander Solla wrote:


Clopen means a set is both closed and open, not that it's partially 
bordered.



Daniel Fischer wrote:


And we'd be very wrong. There are sets which are simultaneously open and 
closed. It is bad enough with the terminology as is, throwing in the 
boundary (which is an even more difficult concept than open/closed) would 
only make things worse.
  




Re: [Haskell-cafe] Category Theory woes

2010-02-20 Thread Nick Rudnick

Richard O'Keefe wrote:


On Feb 19, 2010, at 2:48 PM, Nick Rudnick wrote:
Please tell me the aspect you feel uneasy with, and please give me 
your opinion, whether (in case of accepting this) you would rather 
choose to consider Human as referrer and Int as referee of the 
opposite -- for I think this is a deep question.

I've read enough philosophy to be wary of treating reference
as a simple concept.  And linguistically, referees are people
you find telling rugby players naughty naughty.  Don't you
mean referrer and referent?
Yes, thanks. I am not a native English speaker, and in my mother tongue, 
a referent is somebody who refers, so I missed the guess... Such 
statements are exactly what I was looking for... So, as a reference is 
directed, it is possible to distinguish


referrer ::= the one which does refer to s.th.

referent ::= one which is referred to by s.th.

Of course a basic point about language is that the association
between sounds and meanings is (for the most part) arbitrary.
I would rather like to say it is not strictly determined, as an 
evolutionary tendency towards, say, ergonomics cannot be overlooked, can it?



Why should the terminology of mathematics be any different?

;-) Realizing an evolutionary tendency towards ergonomics is my subject...

Why is a small dark floating cloud, indicating rain, called
a water-dog?  Water, yes, but dog?  Why are the brackets at
each end of a fire-place called fire-dogs?  Why are unusually
attractive women called foxes (the females of that species
being vixens, and both sexes smelly)?  
:-)) The shape of the genitals, which might come into associative 
imagination of the hopeful observer?? (The same with cats, bears, etc.) 
[... desperately afraid of getting kicked out of this mailing list ;-))]


Thanks for this beautiful example and, honestly, I ask again 
whether we may regard this as «just noise»: On the contrary, aren't such 
usages paradigmatic examples of memes which, as products of 
memetic evolution, should be studied for their motivational value?


Let me guess: Our cerebral language system is highly coupled with our 
intentional system, so that it helps learning to have motivating 
«animation» enclosed... Isn't this in use in contemporary learning 
environments...?


The problem I see is that common maths claims an exception in claiming 
that, in its domain, namings are no more than noise -- possibly 
motivated by an extreme rejection of anything between «strictly formally 
determined» and «noise». This standpoint again does not acknowledge the 
developments in foundations of mathematics of at least a century ago 
-- put roughly, this comes close to Hilbert's programme...


To my mind, any of the breakthroughs of the last decades -- like 
incompleteness, strange attractors, algorithmic information theory, 
CCCs, and not the least computing science itself with metaprogramming, 
soft computing, its linear types/modes and monads (!) -- have to do with 
constructs which emancipate such claims of ex ante predetermination. 
Isn't category theory pretty much a part of all this?



What's the logic in
doggedness being a term of praise but bitchiness of opprobrium?

Sexism...??


We can hope for mathematical terms to be used consistently,
but asking for them to be transparent is probably too much to
hope for.  (We can and should use intention-revealing names
in a program, but doing it across the totality of all programs
is something never achieved and probably never achievable.)
We have jokers: Evolutionary media like markdown or even stylesheets may 
allow us to switch and translate in a moment, plus many more useful 
gimmicks... Online collaboration platforms...


And we can stay pragmatic: if we can reach a (broad, by my 
estimate...) public which originally would have to say «the book has 
really left me dumbfounded» (so the originator of this thread) and offer 
them an entertaining, intuitive way -- why not even a 
self-configurable one? -- category theory could be introduced to 
contemporary culture.


Personally, I can't accept statements like (in another posting) «You 
need a lot of training in abstraction to learn very abstract concepts. 
Joe Sixpack's common sense isn't prepared for that.»


Instead, I think that there is good evidence to believe that there are 
lots of isomorphisms to be found between everyday life and the 
terminology and concepts of category theory -- *not* to be confused with 
its *applications to maths*...


And, to close in your figurative style:

Which woman gets hurt by a change of clothes?

Cheers,

Nick






Re: [Haskell-cafe] Restricted categories

2010-02-20 Thread Nick Rudnick

Alexander Solla wrote:
You specifically ask withConstraintsOf to accept only Suitable2's when 
you say
withConstraintsOf :: Suitable2 m a b => m a b -> (Constraints m a b 
-> k) -> k


But you aren't saying that the argument of withConstraintsOf IS a 
Suitable2, when you say:

instance (RCategory c1, RCategory c2) => RCategory (c1 :***: c2) where
id = withResConstraints $ \ProdConstraints -> id :***: id
-- f@(f1 :***: f2) . g@(g1 :***: g2) =
-- withResConstraints $ \ProdConstraints ->
-- withConstraintsOf f $ \ProdConstraints ->
-- withConstraintsOf g $ \ProdConstraints ->
-- (f1 . g1) :***: (f2 . g2) 
As I understand, Sjoerd expects this to be done at the definition of (.) 
in the type class RCategory, so that an instance method can relay on the 
constraints collected by it:

class RCategory (~>) where
  id :: Suitable2 (~>) a a => a ~> a
  (.) :: (Suitable2 (~>) b c, Suitable2 (~>) a b, Suitable2 (~>) a c) => b ~> c 
-> a ~> b -> a ~> c
  
A simple example: 


class Show el => ExceptionNote el where
    comment :: Show exception => exception -> el -> String

instance ExceptionNote Int where
    comment exception refId = show refId ++ ": " ++ show exception

Here you don't need to constrain «exception» to be of «Show» at the 
instance declaration. So it does not appear wrong for Sjoerd to expect f 
and g to already be of Suitable2...


This is exciting stuff, I am really a little astonished about the giant 
leap Haskell has made since my efforts to translate the examples of 
Rydeheard & Burstall, which actually was my intro to categories, from ML 
to Haskell. This looks very elegant... Maybe it's time for a second 
edition of the unique approach of Rydeheard & Burstall on the basis of 
Haskell? Wow, really cool stuff... :-)


Cheers,

Nick


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick
IM(H??)O, a really introductory book on category theory still is to be 
written -- if category theory is really that fundamental (which I 
believe, due to its lifting of restrictions usually implicit in 
'orthodox maths'), then it should find a reflection in our every day's 
common sense, shouldn't it?


In this case, I would regard it as desirable -- in best refactoring 
manner -- to identify a wording in this language instead of the abuse of 
terminology quite common in maths, e.g.


* the definition of open/closed sets in topology with the boundary 
elements of a closed set to considerable extent regardable as facing to 
an «outside» (so that reversing these terms could even appear more 
intuitive, or «bordered» instead of closed and «unbordered» instead of 
open), or
* the abuse of abandoning imaginary notions in favour of persons' last 
names in tribute to successful mathematicians... Actually, that pupils 
get to know a certain lemma as «Zorn's lemma» does not raise public 
consciousness of Mr. Zorn (even among mathematicians, I am afraid) very 
much, does it?
* 'folkloristic' dropping of terminology -- even in Germany, where the 
term «ring» seems to originate from, for at least a century nobody 
has had the least idea it once had an alternative meaning «gang, band, group», 
which still seems unsatisfactory...


Here computing science has explored ways to do much better than this, 
and it might be time category theory is claimed by computer scientists 
in this regard. Once such a project has succeeded, I bet, mathematicians 
will themselves pick up this work to get into category theory... ;-)


As an example, let's play a little:

Arrows: Arrows are more fundamental than objects, in fact, categories 
may be defined with arrows only. Although I like the term arrow (more 
than 'morphism'), I intuitively would find the term «reference» less 
contradictive with the actual intention, as this term

* is very general,
* reflects well dual asymmetry,
* does harmoniously transcend the atomary/structured object perspective 
-- an object may be in reference to another *by* substructure (in 
the beginning, I was quite confused by the lack of explicit explication in 
this regard, as «arrow/morphism» at least to me implied object mapping 
XOR collection mapping).


Categories: In every day's language, a category is a completely 
different thing, without the least association with a reference system 
that has a composition which is reflexive and associative. To identify 
a more intuitive term, we can ponder its properties,


* reflexivity: This I would interpret as «the references of a category 
may be regarded as a certain generalization of id», saying that 
references inside a category represent some kind of similarity (which in 
the most restrictive cases is equality).


* associativity: This I would interpret as «you can *fold* it», i.e. the 
behaviour is invariant to the order of composing references to composite 
references -- leading to «the behaviour is completely determined by the 
lower level reference structure» and therefore «derivations from lower 
level are possible»
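Incidentally, the two properties pondered above are exactly the laws of the Category class in Haskell's base library; a small sketch using function composition, the canonical instance (the names f, g, h and associates are invented here for illustration):

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- Reflexivity: id is a reference from every object to itself.
-- Associativity: composite references may be regrouped ("folded") freely.
f, g, h :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3

-- (f . g) . h and f . (g . h) denote the same composite reference.
associates :: Bool
associates = ((f . g) . h) 10 == (f . (g . h)) 10
```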


Here, finding an appropriate term seems more delicate; maybe a neologism 
would do good work. Here one proposal:


* consequence/?consequentiality? : Pro: Reflects well reflexivity, 
associativity and duality; describing categories as «structures of 
(inner) consequence» seems to fit exceptionally well. The pictorial 
meaning of a «con-sequence» may well reflect the graphical structure. 
Gives a fine picture of the «intermediating forces» in observation and 
the «psychologism» becoming possible (-> cf. CCCs, Toposes). Con: 
Personalized meaning has an association with somewhat unfriendly behaviour.


Anybody to drop a comment on this?

Cheers,

   Nick


Sean Leather wrote:

On Thu, Feb 18, 2010 at 04:27, Nick Rudnick wrote:

I haven't seen anybody mentioning «Joy of Cats» by Adámek,
Herrlich & Strecker:

It is available online, and is very well-equipped with thorough
explanations, examples, exercises  funny illustrations, I would
say best of university lecture style:
http://katmat.math.uni-bremen.de/acc/. (Actually, the name of the
book is a joke on the set theorists' book «Joy of Set», which
again is a joke on «Joy of Sex», which I once found in my parents'
bookshelf... ;-))


This book reads quite nicely! I love the illustrations that pervade 
the technical description, providing comedic relief. I might have to 
go back and re-learn CT... again. Excellent recommendation!


For those looking for resources on category theory, here are my 
collected references: 
http://www.citeulike.org/user/spl/tag/category-theory


Sean



Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi Daniel,

;-)) agreed, but is the word «Ring» itself in use? The same about the 
English language...  de.wikipedia says:


«The name /ring/ does not refer to anything visually ring-shaped, but 
to an organized union of elements into a whole. This meaning of the word 
has otherwise largely been lost in the German language. Some older 
association names (such as Deutscher Ring or Weißer Ring) or expressions 
like „Verbrecherring" (crime ring) still point to this meaning. The 
concept of the ring goes back to Richard Dedekind; the name /ring/, 
however, was introduced by David Hilbert.» 
(http://de.wikipedia.org/wiki/Ringtheorie)


How many students every year worldwide wonder, confused, what «the 
hollow» in a ring is, since Hilbert made this unreflected wording 
by just picking another term around «collection»? Although not a 
mathematician, I've visited several maths lectures, for interest, having 
the same problem. Then I began asking everybody I could reach -- and 
even maths professors could not tell me why this thing is called a «ring».


Thanks for your examples: A «gang» {of smugglers|car thieves} shows that 
even the original meaning -- once known -- does not reflect the 
characteristics of the mathematical structure.


Cheers,

   Nick

Daniel Fischer wrote:

On Thursday, 18 February 2010, 14:48:08, Nick Rudnick wrote:
  

even in Germany, where the
term «ring» seems to originate from, for at least a century nobody
has had the least idea it once had an alternative meaning «gang, band, group»,



Wrong. The term Ring is still in use with that meaning in composites like 
Schmugglerring, Autoschieberring, ...


  




Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi Hans,

agreed, but, in my eyes, you directly point to the problem:

* doesn't this just delegate the problem to the topic of limit 
operations, i.e., to what extent is the term «closed» here more perspicuous?


* that's (for a very simple concept) the way that maths prescribes:
+ historical background: «I take closed as coming from being closed 
under limit operations - the origin from analysis.»
+ definition backtracking: «A closure operation c is defined by the 
property c(c(x)) = c(x). If one takes c(X) = the set of limit points of 
X, then it is the smallest closed set under this operation. The closed 
sets X are those that satisfy c(X) = X. Naming the complements of the 
closed sets open might have been introduced as an opposite of closed.»


418 bytes in my file system... how many in my brain...? Is it efficient, 
inevitable? The most fundamentalist justification I heard in this regard 
is: «It keeps people from thinking they could go without the 
definition...» Meanwhile, we backtrack definition trees filling books, 
no, even more... In my eyes, this comes equal to claiming: «You have 
nothing to understand this beyond the provided authoritative definitions 
-- your understanding is done by strictly following these.»
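The closure-operation definition backtracked above can at least be made concrete cheaply. A toy sketch of the idempotency law c(c(x)) = c(x) -- the close function below is invented for illustration (sorted Int lists standing in for sets, with 0 as a designated «limit point»), not the real topological operator:

```haskell
import Data.List (nub, sort)

-- Toy closure operator: close X adjoins the designated limit point 0
-- to any non-empty "set" (represented as a duplicate-free sorted list).
close :: [Int] -> [Int]
close [] = []
close xs = sort (nub (0 : xs))

-- Idempotency, c(c(x)) = c(x): closing an already-closed set changes nothing.
idempotent :: [Int] -> Bool
idempotent xs = close (close xs) == close xs
```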


Back to the case of open/closed: given we have an idea about sets -- we 
in most cases are able to derive the concept of two disjoint sets facing 
each other ourselves, don't we? The only lore missing is just a Bool: 
which term fits which idea? With a reliable terminology using 
«bordered/unbordered», there is no ambiguity, and we can pass on 
reading, without any additional effort.


Picking such an opportunity thus may save a lot of time and even error 
-- allowing you to utilize your individual knowledge and experience. I 
have hope that this approach would be of great help in learning category 
theory.


All the best,

   Nick


Hans Aberg wrote:

On 18 Feb 2010, at 14:48, Nick Rudnick wrote:

* the definition of open/closed sets in topology with the boundary 
elements of a closed set to considerable extent regardable as facing 
to an «outside» (so that reversing these terms could even appear more 
intuitive, or «bordered» instead of closed and «unbordered» instead 
of open),


I take closed as coming from being closed under limit operations - 
the origin from analysis. A closure operation c is defined by the 
property c(c(x)) = c(x). If one takes c(X) = the set of limit points 
of X, then it is the smallest closed set under this operation. The 
closed sets X are those that satisfy c(X) = X. Naming the complements 
of the closed sets open might have been introduced as an opposite of 
closed.


  Hans







Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Gregg Reynolds wrote:
On Thu, Feb 18, 2010 at 7:48 AM, Nick Rudnick 
joerg.rudn...@t-online.de wrote:


IM(H??)O, a really introductive book on category theory still is
to be written -- if category theory is really that fundamental
(what I believe, due to its lifting of restrictions usually
implicit at 'orthodox maths'), than it should find a reflection in
our every day's common sense, shouldn't it?


Goldblatt works for me.
Accidentally, I have Goldblatt here, although I didn't read it before 
-- you agree with me it's far away from every day's common sense, even 
for a hobby coder?? I mean, this is not «Head First Categories», is it? 
;-)) With «every day's common sense» I did not mean «a mathematician's 
every day's common sense», but that of, e.g., a housewife or a child...


But I have become curious now about Goldblatt...
 



* the definition of open/closed sets in topology with the boundary
elements of a closed set to considerable extent regardable as
facing to an «outside» (so that reversing these terms could even
appear more intuitive, or «bordered» instead of closed and
«unbordered» instead of open),


Both have a border, just in different places.

Which elements form the border of an open set??



As an example, let's play a little:

Arrows: Arrows are more fundamental than objects, in fact,
categories may be defined with arrows only. Although I like the
term arrow (more than 'morphism'), I intuitively would find the
term «reference» less contradictive with the actual intention, as
this term

Arrows don't refer. 
A *referrer* (object) refers to a *referee* (object) by a *reference* 
(arrow).
 


Categories: In every day's language, a category is a completely
different thing, without the least


Not necessarily (for Kantians, Aristotelians?)
Are you sure...?? See 
http://en.wikipedia.org/wiki/Categories_(Aristotle) ...
  If memory serves, MacLane says somewhere that he and Eilenberg 
picked the term category as an explicit play on the same term in 
philosophy.
In general I find mathematical terminology well-chosen and revealing, 
if one takes the trouble to do a little digging.  If you want to know 
what terminological chaos really looks like try linguistics.
;-) For linguistics, granted... In regard of «a little digging», don't 
you think terminology work takes a great share, especially in 
interdisciplinary efforts? Wouldn't it be great to be able to drop, say, 
20% or even more, of such efforts and be able to progress more fluidly?


-g





Re: Fwd: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi Mike,

so an open set does not contain elements constituting a border/boundary 
of it, does it?


But a closed set does, doesn't it?

Cheers,

   Nick

Michael Matsko wrote:


- Forwarded Message -
From: Michael Matsko msmat...@comcast.net
To: Nick Rudnick joerg.rudn...@t-online.de
Sent: Thursday, February 18, 2010 2:16:18 PM GMT -05:00 US/Canada Eastern
Subject: Re: [Haskell-cafe] Category Theory woes

Gregg,

 

   Topologically speaking, the border of an open set is called the 
boundary of the set.  The boundary is defined as the closure of the 
set minus the set itself.  As an example consider the open interval 
(0,1) on the real line.  The closure of the set is [0,1], the closed 
interval on 0, 1.  The boundary would be the points 0 and 1.
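For reference, the general topological definition goes through the interior; for an open set A (where int(A) = A) it reduces to the closure-minus-set form just given:

```latex
% Boundary of an arbitrary set A in a topological space:
\partial A \;=\; \overline{A} \setminus \operatorname{int}(A)
% For open A, \operatorname{int}(A) = A, so \partial A = \overline{A} \setminus A.
% Example from above, A = (0,1) \subset \mathbb{R}:
\overline{A} = [0,1], \qquad \partial A = \{0,\,1\}
```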


 


Mike Matsko




Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi Mike,


of course... But in the same spirit, one could introduce a 
straightforward extension, «partially bordered», which would be at least 
as good as «clopen»... ;-)


I must admit we've come a little off topic -- how to introduce 
category theory. The intent was to present some examples showing that 
mathematical terminology culture is not as exemplary as one should 
expect, and to motivate an open discussion about how one might «rename 
refactor» category theory (of 2:48 PM).


I would be very interested in other people's proposals... :-)

Michael Matsko wrote:


Nick,

 

   That is correct.  An open set contains no point on its boundary. 

 

   A closed set contains its boundary, i.e. for a closed set c, 
Closure(c) = c. 

 

   Note that a general set, which is neither closed nor open (say 
the half-closed interval (0,1]), may contain points on its boundary.  
Every set contains its interior, which is the part of the set without 
its boundary and is contained in its closure -- for a given set x, 
Interior(x) is a subset of x, which is a subset of Closure(x). 

 


Mike

 

Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hans Aberg wrote:

On 18 Feb 2010, at 19:19, Nick Rudnick wrote:


agreed, but, in my eyes, you directly point to the problem:

* doesn't this just delegate the problem to the topic of limit 
operations, i.e., to what extent is the term «closed» here more perspicuous?


* that's (for a very simple concept) the way that maths prescribes:
+ historical background: «I take closed as coming from being closed 
under limit operations - the origin from analysis.»
+ definition backtracking: «A closure operation c is defined by the 
property c(c(x)) = c(x). If one takes c(X) = the set of limit points 
of X, then it is the smallest closed set under this operation. The 
closed sets X are those that satisfy c(X) = X. Naming the complements 
of the closed sets open might have been introduced as an opposite of 
closed.»


418 bytes in my file system... how many in my brain...? Is it 
efficient, inevitable?


Yes, it is efficient conceptually. The idea of closed sets led to 
topology, and in combination with abstractions of differential 
geometry led to cohomology theory which needed category theory solving 
problems in number theory, used in a computer language called Haskell 
using a feature called Currying, named after a logician and 
mathematician, though only one person.

It is SUCCESSFUL, NO MATTER... :-)

But I spoke about efficiency, in the Pareto sense 
(http://en.wikipedia.org/wiki/Pareto_efficiency)... Can we say that the 
way in which things are done now cannot be improved??


Hans, you were the most specific response to my actual intention -- 
could I clear up the reference thing for you?


All the best,

   Nick




  Hans





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi Alexander,

my actual posting was about a rename refactoring of category theory; 
closed/open was just presented as an example of suboptimal terminology 
in maths. But of course, bordered/unbordered would be extended by e.g. 
«partially bordered», and the same holds.


Cheers,

   Nick

Alexander Solla wrote:


On Feb 18, 2010, at 10:19 AM, Nick Rudnick wrote:

Back to the case of open/closed, given we have an idea about sets -- 
we in most cases are able to derive the concept of two disjunct sets 
facing each other ourselves, don't we? The only lore missing is just 
a Bool: Which term fits which idea? With a reliable terminology using 
«bordered/unbordered», there is no ambiguity, and we can pass on 
reading, without any additional effort.



There are sets that only partially contain their boundary.  They are 
neither open nor closed, in the usual topology.  Consider (0,1] in the 
Real number line.  It contains 1, a boundary point.  It does not 
contain 0.  It is not an open set OR a closed set in the usual 
topology for R.




Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Gregg Reynolds wrote:
On Thu, Feb 18, 2010 at 1:31 PM, Daniel Fischer 
daniel.is.fisc...@web.de wrote:


On Thursday 18 February 2010 19:55:31, Nick Rudnick wrote:
 Gregg Reynolds wrote:

 


 -- you agree with me it's far away from every day's common
sense, even
 for a hobby coder?? I mean, this is not «Head first categories»,
is it?
 ;-)) With «every day's common sense» I did not mean «a
mathematician's
 every day's common sense», but that of, e.g., a housewife or a
child...

Doesn't work. You need a lot of training in abstraction to learn very
abstract concepts. Joe Sixpack's common sense isn't prepared for that.


True enough, but I also tend to think that with a little imagination 
even many of the most abstract concepts can be illustrated with 
intuitive, concrete examples, and it's a fun (to me) challenge to try 
come up with them.  For example, associativity can be nicely 
illustrated in terms of donning socks and shoes - it's not hard to 
imagine putting socks into shoes before putting feet into socks.  A 
little weird, but easily understandable.  My guess is that with a 
little effort one could find good concrete examples of at least 
category, functor, and natural transformation.  Hmm, how is a 
cake-mixer like a cement-mixer?  They're structurally and functionally 
isomorphic.  Objects in the category Mixer?
:-) This comes close to what I mean -- the beauty of category theory 
does not end at the borders of mathematical subjects...
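Gregg's socks-and-shoes picture is just the associativity law for composition, and it can be checked directly in Haskell's category of types and functions (f, g and h below are arbitrary stand-in arrows, not anything from the thread):

```haskell
-- Associativity of composition: how the arrows are grouped doesn't matter,
-- just as socks-into-shoes-first gives the same dressed foot.
f, g, h :: Int -> Int
f = (+ 1)        -- "shoe"
g = (* 2)        -- "sock"
h = subtract 3   -- an arbitrary third arrow

main :: IO ()
main = print (((f . g) . h) 10 == (f . (g . h)) 10)  -- prints True
```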


IMHO we are just beginning the discovery of the categorical world beyond 
mathematics, and I think many findings original to computer science, though 
less so to maths, may be of value there.


And I am definitely more optimistic about «Joe Sixpack's common sense», 
which still surpasses a good lot of things possible with AI -- no 
categories at all there?? I can't believe...
 


  Both have a border, just in different places.

 Which elements form the border of an open set??

The boundary of an open set is the boundary of its complement.
The boundary may be empty (happens if and only if the set is
simultaneously
open and closed, clopen, as some say).

Right, that was what I meant; the point being that boundary (or 
border, or periphery or whatever) is not sufficient to capture the 
idea of closed v. open.
;-)) I did not claim «bordered» is the best choice, I just said 
closed/open is NOT... IMHO this also does not affect what I understand 
as a refactoring -- just imagine Coq had a refactoring browser; all 
combinations of terms are possible as before, aren't they? But it was 
not my aim to begin enumerating all variations of «bordered», 
«unbordered», «partially bordered» and STOP...


Should I come QUICKLY with a pendant to «clopen» now? This would be 
«MATHS STYLE»...!


I neither say that finding an appropriate word here is a quick shot, nor do I 
claim that trying is ridiculous because it is impossible.


I think it is WORK, which is to be done in OPEN DISCUSSION -- and that, 
at the long end, the result might be rewarding, just as the effort 
put into a rename refactoring will prove rewarding. ;-))


Trying a refactored category theory (with a dictionary in the 
appendix...) might open access to many interesting people and subjects 
otherwise out of reach. And deeply contemplating terminology cannot 
hurt, at the least...



All the best,

   Nick


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hans Aberg wrote:

On 18 Feb 2010, at 23:02, Nick Rudnick wrote:

418 bytes in my file system... how many in my brain...? Is it 
efficient, inevitable?


Yes, it is efficient conceptually. The idea of closed sets led to 
topology, and in combination with abstractions of differential 
geometry led to cohomology theory which needed category theory 
solving problems in number theory, used in a computer language 
called Haskell using a feature called Currying, named after a 
logician and mathematician, though only one person.

It is SUCCESSFUL, NO MATTER... :-)

But I spoke about efficiency, in the Pareto sense 
(http://en.wikipedia.org/wiki/Pareto_efficiency)... Can we say that 
the way in which things are done now cannot be improved??


Hans, you were the most specific response to my actual intention -- 
could I clear up the reference thing for you?


That seems to be an economic theory version of utilitarianism - the 
problem is that when dealing with concepts there may be no optimizing 
function to agree upon. There is an Occam's razor one may try to apply 
in the case of axiomatic systems, but one then finds it may be more 
practical to use one that is not minimal.
Exactly. By this I justify my questioning of the inviolability of the 
current state of maths terminology -- an open discussion should be allowed at 
any time...


As for the naming problem, it is more of a linguistic problem: the 
names were somehow handed by tradition, and it may be difficult to 
change them. For example, there is a rumor that kangaroo means I do 
not understand in a native language; assuming this to be true, it 
might be difficult to change it.
Completely agreed. This is indeed a hard problem, and I fully agree 
if you say that, for maths, trying this is for people with a fondness for 
Speakers' Corner... :-)) But for category theory, a subject (too!) many 
people are complaining about, blind to its beauty, such a book -- 
ideally in children's language and illustrations, of course with a 
dictionary to the original terminology in the appendix! -- could be of great 
positive influence on category theory itself. And the deep contemplation 
encompassing the *collective* creation should be most rewarding in 
itself -- a journey without haste into the depths of the subject.


Mathematicians though stick to their own concepts and definitions 
individually. For example, I had conversations with one who calls 
monads triads, and then one has to cope with that.

Yes. But isn't it also an enrichment in some way?

All the best,

Nick


Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Alexander Solla wrote:


On Feb 18, 2010, at 2:08 PM, Nick Rudnick wrote:

my actual posting was about a rename refactoring of category theory; 
closed/open was just presented as an example of suboptimal 
terminology in maths. But of course, bordered/unbordered would be 
extended by e.g. «partially bordered», and the same holds.


And my point was that your terminology was suboptimal for just the 
same reasons.  The difficulty of mathematics is hardly the funny names.

:-) Criticism... Criticism is good at this place... Opens up things...


Perhaps you're not familiar with the development of Category theory.  
Hans Aberg gave a brief development.  Basically, Category theory is 
the RESULT of the refactoring you're asking about.  Category theory's 
beginnings are found in work on differential topology (where functors 
and higher order constructs took on a life of their own), and the 
unification of topology, lattice theory, and universal algebra (in 
order to ground that higher order stuff).  Distinct models and notions 
of computation were unified, using arrows and objects. 

Now, you could have a legitimate gripe about current category theory 
terminology.  But I am not so sure.  We can simplify lots of 
things.  Morphisms can become arrows or functions.  Auto- can become 
self-.  Homo- can become same-.  Functors can become Category 
arrows.  Does it help?  You tell me.

I think I understand what you mean and completely agree...

The project in my imagination is different, I read on...


But if we're ever going to do anything interesting with Category 
theory, we're going to have to go into the realm of dealing with SOME 
kind of algebra.  We need examples, and the mathematically tractable 
ones have names like group, monoid, ring, field, 
sigma-algebras, lattices, logics, topologies, geometries.  
They are arbitrary names, grounded in history.  Any other choice is 
just as arbitrary, if not more so.  The closest thing algebras have to 
a unique name is their signature -- basically their axiomatization -- 
or a long descriptive name in terms of arbitrary names and adjectives 
(the Cartesian product of a Cartesian closed category and a groupoid 
with groupoid addition induced by).  The case for Pareto 
efficiency is here:  is changing the name of these kinds of structures 
wholesale a win for efficiency?  The answer is no.  Everybody would 
have to learn the new, arbitrary names, instead of just some people 
having to learn the old arbitrary names.

Ok...


Let's compare this to the monad fallacy.  It is said every beginner 
Haskell programmer writes a monad tutorial, and often falls into the 
monad fallacy of thinking that there is only one interpretation for 
monadism.  Monads are relatively straightforward.  Their power comes 
from the fact that many different kinds of things are monadic -- 
sequencing, state, function application.  What name should we use for 
monads instead?  Which interpretation must we favor, despite the fact 
that others will find it counter-intuitive?  Or should we choose to 
not favor one, and just pick a new arbitrary name?
The short answer: if the work I imagine were done by exchanging a word here 
and there on the quick -- it would again be maths style, the only difference 
being that it is justified with naivety instead of resignation.


The idea I have is different: DEEP CONTEMPLATION stands in the 
beginning, gathering the constructive criticism of the sharpest minds 
possible, hard discussions and debates full of temperament -- all of 
this already rewarding in itself. The participants are united in the 
spirit to create a masterpiece, and to explore details in depths for 
which time was missing before. It could be great fun for everybody to 
improve one's deep intuition of category theory.


This book might be comparable to a programming language, hypertext like 
a wikibook and maybe in development forever. It will have an appendix 
(or later a special mode) with a translation of all new terms into the 
original ones.


I do believe deeply that this is possible. For all the criticism of Bourbaki 
-- I was among the generation of pupils taught set theory in elementary 
school; looking back, I regard it as a rewarding effort. Why should 
category theory not be able to achieve the same, maybe with other means 
than plastic chips?


All the best,

   Nick






Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Daniel Fischer wrote:

On Thursday 18 February 2010 19:19:36, Nick Rudnick wrote:
  

Hi Hans,

agreed, but, in my eyes, you directly point to the problem:

* doesn't this just delegate the problem to the topic of limit
operations, i.e., to what extent is the term «closed» here more perspicuous?



It's fairly natural in German, abgeschlossen: closed, finished, complete; 
offen: open, ongoing.


  

* that's (for a very simple concept)



That concept (open and closed sets, topology more generally) is *not* very 
simple. It has many surprising aspects.
  
«concept» is a word of many meanings; to become more specific: Its 
*definition* is...
  

the way that maths prescribes:
+ historical background: «I take closed as coming from being closed
under limit operations - the origin from analysis.»
+ definition backtracking: «A closure operation c is defined by the
property c(c(x)) = c(x).



Actually, that's incomplete; missing are
- c(x) contains x
- c(x) is minimal among the sets y with y = c(y) that contain x.
  
Even more workload to master... This strengthens the thesis that 
definition recognition requires a considerable amount of one's effort...
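Daniel's completed laws are easy to play with concretely. A toy sketch (my own example, not from the thread): closing a finite set of Ints under negation, then checking extensivity (x ⊆ c(x)) and idempotence (c(c(x)) = c(x)):

```haskell
import Data.List (nub, sort)

-- A toy closure operation: close a finite set of Ints under negation.
close :: [Int] -> [Int]
close xs = sort (nub (xs ++ map negate xs))

main :: IO ()
main = do
  print (close [1, 2])                          -- [-2,-1,1,2]
  print (all (`elem` close [1, 2]) [1, 2])      -- extensivity: True
  print (close (close [1, 2]) == close [1, 2])  -- idempotence: True
```

The closed sets in this toy setting are exactly those with c(X) = X, i.e., the sets already symmetric under negation.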

If one takes c(X) = the set of limit points of



Not limit points, Berührpunkte (touching points).

  

X, then it is the smallest closed set under this operation. The closed
sets X are those that satisfy c(X) = X. Naming the complements of the
closed sets open might have been introduced as an opposite of closed.»

418 bytes in my file system... how many in my brain...? Is it efficient,
inevitable? The most fundamentalist justification I heard in this regard
is: «It keeps people from thinking they could go without the 
definition...» Meanwhile, we backtrack definition trees filling books, 
no, even more... In my eyes, this comes equal to claiming: «You have
nothing to understand this beyond the provided authoritative definitions
-- your understanding is done by strictly following these.»



But you can't understand it except by familiarising yourself with the 
definitions and investigating their consequences.
The name of a concept can only help you remembering what the definition 
was. Choosing obvious names tends to be misleading, because there usually 
are things satisfying the definition which do not behave like the obvious 
name implies.
  
So if you state that the used definitions are completely unpredictable 
so that they have to be studied completely -- which already ignores that 
the human brain is an analogue «machine» --, you, by information theory, 
imply that these definitions are somewhat arbitrary, don't you? This in 
my eyes would contradict the concept such definition systems have about 
themselves.


To my best knowledge it is one of the best known characteristics of 
category theory that it revealed in how many cases maths is a repetition 
of certain patterns. Speaking categorically, it is good practice to 
transfer knowledge from one domain to another, once the required 
isomorphisms could be established. This way, category theory itself has 
successfully torn down borders between several subdisciplines of maths 
and beyond.


I just propose to expand the same to common sense matters...

Back to the case of open/closed, given we have an idea about sets -- we
in most cases are able to derive the concept of two disjunct sets facing
each other ourselves, don't we? The only lore missing is just a Bool:
Which term fits which idea? With a reliable terminology using
«bordered/unbordered», there is no ambiguity, and we can pass on
reading, without any additional effort.



And we'd be very wrong. There are sets which are simultaneously open and 
closed. It is bad enough with the terminology as is, throwing in the 
boundary (which is an even more difficult concept than open/closed) would 
only make things worse.
  
Really? As «open == not closed» can similarly be implied, 
bordered/unbordered even in this concern remains at least equal...

Picking such an opportunity thus may save a lot of time and even error
-- allowing you to utilize your individual knowledge and experience. I



When learning a formal theory, individual knowledge and experience (except 
coming from similar enough disciplines) tend to be misleading more than 
helpful.
  

Why does the opposite work well for computing science?

All the best,

   Nick



Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi Alexander,

please be more specific -- what is your proposal?

Seems as if you had more to say...

   Nick

Alexander Solla wrote:


On Feb 18, 2010, at 4:49 PM, Nick Rudnick wrote:


Why does the opposite work well for computing science?


Does it?  I remember a peer trying to convince me to use the factory 
pattern in a language that supports functors.  I told him I would do 
my task my way, and he could change it later if he wanted.  He told me 
an hour later he tried a trivial implementation, and found that the 
source was twice as long as my REAL implementation, split across 
multiple files in an unattractive way, all while losing conceptual 
clarity.  He immediately switched to using functors too.  He didn't 
even know he wanted a functor, because the name factory clouded his 
interpretation.


Software development is full of people inventing creative new ways to 
use the wrong tool for the job.






Re: [Haskell-cafe] Category Theory woes

2010-02-18 Thread Nick Rudnick

Hi,

wow, a topic specific response, at last... But I wish you would be more 
specific... ;-)



A *referrer* (object) refers to a *referee* (object) by a *reference*
(arrow).



Doesn't work for me. Not in Ens (sets, maps), Grp (groups, homomorphisms), 
Top (topological spaces, continuous mappings), Diff (differential 
manifolds, smooth mappings), ... .
  

Why not begin with SET and functions...

Every human has a certain age, so that there is a function, ageOf :: 
Human -> Int, which can be regarded as a certain kind of reference 
relationship between Human and Int, in that, by ageOf,


* Int reflects a certain aspect of Human, and, on the other hand,
* the structure of Human can be traced to Int.

Please tell me the aspect you feel uneasy with, and please give me your 
opinion on whether (in case you accept this) you would rather choose to 
consider Human as referrer and Int as referee, or the opposite -- for I 
think this is a deep question.
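For concreteness, the ageOf example can be written out (Human here is a hypothetical record type; the point is only that ageOf is an arrow in the category of Haskell types and functions, and that arrows compose):

```haskell
-- Hypothetical types for the ageOf discussion: ageOf is an arrow
-- Human -> Int, and arrows compose into new arrows.
data Human = Human { name :: String, age :: Int }

ageOf :: Human -> Int
ageOf = age

isAdult :: Int -> Bool
isAdult n = n >= 18

-- (isAdult . ageOf) :: Human -> Bool is itself an arrow.
main :: IO ()
main = print (map (isAdult . ageOf) [Human "ann" 30, Human "bob" 7])
-- prints [True,False]
```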


Thank you in advance,

   Nick





Re: [Haskell-cafe] Category Theory woes

2010-02-17 Thread Nick Rudnick
I haven't seen anybody mentioning «Joy of Cats» by Adámek, Herrlich & 
Strecker:


It is available online, and is very well-equipped with thorough 
explanations, examples, exercises & funny illustrations, I would say 
best of university lecture style: http://katmat.math.uni-bremen.de/acc/. 
(Actually, the name of the book is a joke on the set theorists' book 
«Joy of Set», which again is a joke on «Joy of Sex», which I once found 
in my parents' bookshelf... ;-))


Another alternative: Personally, I had difficulties with the somewhat 
arbitrary terminology, at times a hindrance to intuitive understanding -- 
and found intuitive access through programming examples; the book was 
«Computational Category Theory» by Rydeheard & Burstall, also now 
available online at http://www.cs.man.ac.uk/~david/categories/book/, 
done with the functional language ML. Later I translated parts of it to 
Haskell, which was great fun, and the book's content is more beginner 
level than any other book I've seen yet.


There is also a programming language project dedicated to category theory, 
«Charity», at the University of Calgary: 
http://pll.cpsc.ucalgary.ca/charity1/www/home.html.


Any volunteers in doing a RENAME REFACTORING of category theory together 
with me?? ;-))


Cheers,

  Nick


Mark Spezzano wrote:

Hi all,

I'm trying to learn Haskell and have come across Monads. I kind of understand monads now, but I would really like to understand where they come from. So I got a copy of Barr and Well's Category Theory for Computing Science Third Edition, but the book has really left me dumbfounded. It's a good book. But I'm just having trouble with the proofs in Chapter 1--let alone reading the rest of the text. 

Are there any references to things like Hom Sets and Hom Functions in the literature somewhere and how to use them? The only book I know that uses them is this one. 


Has anyone else found it frustratingly difficult to find 
easy-to-digest material on category theory? The chapter I'm stuck on is 
actually labelled Preliminaries, and so I reason that if I can't do this, then 
there's not much hope for me understanding the rest of the book...

Maybe there are books on Discrete maths or Algebra or Set Theory that deal more 
with Hom Sets and Hom Functions?

Thanks,

Mark Spezzano.





[Haskell-cafe] Re: OT: Literature on translation of lambda calculus to combinators

2010-01-29 Thread Nick Smallbone
Job Vranish jvran...@gmail.com writes:

> Ideally we'd like the type of convert to be something like:
>   convert :: LambdaExpr -> SKIExpr
> but this breaks in several places, such as the nested converts in the RHS of
> the rule:
>   convert (Lambda x (Lambda y e)) | occursFree x e =
>     convert (Lambda x (convert (Lambda y e)))
>
> A while ago I tried modifying the algorithm to be pure top-down so that it
> wouldn't have this problem, but I didn't have much luck.
>
> Anybody know of a way to fix this?

The way to do it is, when you see an expression Lambda x e, first
convert e to a combinatory expression (which will have x as a free
variable, and will obviously have no lambdas). Then you don't need
nested converts at all.

Not-really-tested code follows.

Nick

data Lambda = Var String
            | Apply Lambda Lambda
            | Lambda String Lambda deriving Show

data Combinatory = VarC String
                 | ApplyC Combinatory Combinatory
                 | S
                 | K
                 | I deriving Show

compile :: Lambda -> Combinatory
compile (Var x) = VarC x
compile (Apply t u) = ApplyC (compile t) (compile u)
compile (Lambda x t) = lambda x (compile t)

lambda :: String -> Combinatory -> Combinatory
lambda x t | x `notElem` vars t = ApplyC K t
lambda x (VarC y) | x == y = I
lambda x (ApplyC t u) = ApplyC (ApplyC S (lambda x t)) (lambda x u)

vars :: Combinatory -> [String]
vars (VarC x) = [x]
vars (ApplyC t u) = vars t ++ vars u
vars _ = []
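Assembled into a self-contained module (adding a derived Eq instance so results can be compared, which the original didn't need), the bracket abstraction above can be sanity-checked: λx.x compiles to I, and λx.λy.x to S(KK)I:

```haskell
data Lambda = Var String
            | Apply Lambda Lambda
            | Lambda String Lambda deriving Show

data Combinatory = VarC String
                 | ApplyC Combinatory Combinatory
                 | S | K | I deriving (Show, Eq)

compile :: Lambda -> Combinatory
compile (Var x)      = VarC x
compile (Apply t u)  = ApplyC (compile t) (compile u)
compile (Lambda x t) = lambda x (compile t)

-- Bracket abstraction: eliminate the variable x from an already
-- lambda-free combinatory term.
lambda :: String -> Combinatory -> Combinatory
lambda x t | x `notElem` vars t = ApplyC K t
lambda x (VarC y) | x == y = I
lambda x (ApplyC t u) = ApplyC (ApplyC S (lambda x t)) (lambda x u)

vars :: Combinatory -> [String]
vars (VarC x)     = [x]
vars (ApplyC t u) = vars t ++ vars u
vars _            = []

main :: IO ()
main = do
  print (compile (Lambda "x" (Var "x")))              -- I
  print (compile (Lambda "x" (Lambda "y" (Var "x")))) -- S(KK)I
```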



[Haskell-cafe] Any way to catch segmentation fault?

2009-03-18 Thread Nick Rudnick
Hi all,

doing some work with a regex library I stumbled over some segmentation
faults due to illegal byte combinations.

Looking for a way to get this caught in some way, I failed to find any place
in GHC (6.10.1) or the Hackage libraries where this is covered -- a quick check
with HUnit revealed it to be crashing from this phenomenon.

So I would like to ask about the state of this issue,

o   Is there in principle no way to (generally) catch segmentation faults??
(This would be hard to believe, as the described kind of noise is to be
expected in production systems, especially with user-generated content.)

o   Are segmentation faults «prohibited» in Haskell, so the advice is just to
change the library used and forget the whole thing??

o   Did I overlook a generic mechanism for catching segmentation
faults?? (If yes, could you please give me a hint??)

o   If no, is there a workaround which might be practicable??
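For what it's worth, a genuine SIGSEGV kills the whole process, so it cannot be caught as an ordinary Haskell exception. One workaround sketch (my suggestion, not something from GHC's documentation): run the crash-prone call in a child process and inspect the exit code. The worker executable would be hypothetical; POSIX `true` stands in for it below:

```haskell
import System.Exit (ExitCode (..))
import System.Process (readProcessWithExitCode)

-- Run a crash-prone computation in a separate process: a segfault in the
-- child then surfaces here as a non-success exit code instead of killing us.
survives :: FilePath -> [String] -> String -> IO Bool
survives exe args input = do
  (code, _out, _err) <- readProcessWithExitCode exe args input
  return (code == ExitSuccess)

main :: IO ()
main = do
  ok <- survives "true" [] ""  -- stand-in for a hypothetical regex worker
  print ok                     -- True on a POSIX system
```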


Thanks a lot in advance,

Dorinho


Re: [Haskell-cafe] Tim Sweeney (the gamer)

2008-01-10 Thread Nick Rolfe
On 10/01/2008, Galchin Vasili [EMAIL PROTECTED] wrote:
 Hello,

  I have been reading with great interested Tim Sweeney's slides on the
 Next Generation Programming Language. Does anybody know his email address?

Vasili is referring to these slides, which will probably interest many
people on this list:

http://morpheus.cs.ucdavis.edu/papers/sweeny.pdf

He refers to Haskell and its strengths (and some of its weaknesses) quite a bit.

For those who don't know him, Tim Sweeney is the main programmer
behind Epic Games's popular Unreal Engine. When he talks, many game
developers will listen. Perhaps more importantly, anything he does
will affect a large number of game developers.

Apologies if this has been posted before.


[Haskell-cafe] Re: Tetris

2007-11-20 Thread nick ralabate
Speaking of Tetris and Space Invaders, you might be interested in this project:

http://www.geocities.jp/takascience/haskell/monadius_en.html

It's a clone of Gradius written in Haskell.


-Nick


Re: [Haskell-cafe] Code layout in Emacs' haskell-mode

2007-05-14 Thread Nick Meyer

Hi Christopher,

I have also noticed that haskell-mode (and indeed Haskell) can be finicky
sometimes.  I usually put module [Name] where all on the same line and
leave imports on the left margin, so I hadn't experienced the first
problem you mentioned.  However, I do notice that if I re-arrange your
second example so that do and the first putStrLn are on the same line,
emacs offers the following indentation:

module Num where
import IO

main = do putStrLn "Enter a number: "
          inp <- getLine
          let n = read inp
          if n == 0
            then putStrLn "Zero"
            else putStrLn "NotZero"

(that's with all the expressions in the do block lining up vertically, in
case that doesn't show up in a fixed-width font), and it works!  I would think
that your original indentation gave an error in that GHC would see "then" and
"else" and assume they were new expressions, but then I would expect that
this version would have the same problem.  If anyone can shed some light on
this, that would be nice.

Thanks,
Nick Meyer
[EMAIL PROTECTED]

On 5/14/07, Christopher L Conway [EMAIL PROTECTED] wrote:

I am new to Haskell -- and also to languages with the off-side
rule -- and working my way through Hal Daume's tutorial. I'm a little
confused by the support for code layout in Emacs' haskell-mode. Is it
buggy, or am I doing something wrong?

For example, here's the "Hello, world" example from the tutorial, with
the indentation induced by pounding Tab in haskell-mode.

test.hs:
module Test
where

  import IO

main = do
  putStrLn "Hello, world"

Prelude> :l test
[1 of 1] Compiling Test ( test.hs, interpreted )

test.hs:12:0: parse error on input `main'

In Emacs, every line but the one with "where" reports "Sole
indentation". With "where", I have the option of having it flush left
or indented four spaces; "import" wants to be two spaces in from
"where". Moving "where" doesn't change the error. But if I manually move
"import" flush left (which is the way it's shown in the tutorial, BTW):

module Test
where

import IO

main = do
  putStrLn "Hello, world"

Prelude> :l test
[1 of 1] Compiling Test ( test.hs, interpreted )
Ok, modules loaded: Test.

I have a similar problem with the layout of if-then-else...

num.hs:
module Num
where

import IO

main = do
  putStrLn "Enter a number: "
  inp <- getLine
  let n = read inp
  if n == 0
  then putStrLn "Zero"
  else putStrLn "NotZero"

Prelude> :l num
[1 of 1] Compiling Num  ( num.hs, interpreted )

num.hs:11:2: parse error (possibly incorrect indentation)

Again, if I hit tab on the "then" or "else" lines, emacs reports "Sole
indentation". But if I manually change the indentation, it works.

module Num
where

import IO

main = do
  putStrLn "Enter a number: "
  inp <- getLine
  let n = read inp
  if n == 0
     then putStrLn "Zero"
     else putStrLn "NotZero"

Prelude> :l num
[1 of 1] Compiling Num  ( num.hs, interpreted )
Ok, modules loaded: Num.

This is particularly weird because if-then-else doesn't always act this way:


exp.hs:
module Exp
where

my_exponent a n =
if n == 0
then 1
else a * my_exponent a (n-1)

Prelude> :l exp
[1 of 1] Compiling Exp  ( exp.hs, interpreted )
Ok, modules loaded: Exp.

I suppose this might have something to do with the do-notation...

Does haskell-mode support code layout? Are there conventions I need to
know about to make it behave properly? I have haskell-mode version
2.1-1 installed from the Ubuntu feisty repository.

Thanks,
Chris


Re: [Haskell-cafe] Re: Does laziness make big difference?

2007-02-19 Thread Nick

apfelmus,

Cool! I really like your examples! Thank you.

Nick.


Re: [Haskell-cafe] Re: Does laziness make big difference?

2007-02-18 Thread Nick




Peter,
Roughly, I'd say you can fudge laziness in data structures
in a strict language without too much bother. (I don't have much
experience with this, but the existence of a streams library for OCaml
is the sort of thing I mean. There are plenty of papers on co-iterative
streams and suchlike that show the general pattern.)

Yes, agree. And this was my initial point.

If you wish to add control structures you would need to use the lazy
keyword a lot, e.g.:
  
  
if cond then *lazy* S1 else *lazy* S2
  
  
and for more complicated structures it's not going to be always clear
what needs to be suspended. Laziness is a conservative default here.
(If you want to write an EDSL in a non-lazy language, you'll need to
use some kind of preprocessor / macros / ... - in other words, a
two-level language - or do thunking by hand, as above, or live with
doing too much evaluation.)
One way to gauge how useful laziness really is might be to look through
big ML projects and see how often they introduce thunks manually. A
thunk there is usually something like "fn () => ..." IIRC. Also
IIRC, Concurrent ML is full of them.
Probably dealing with macros is not so scary; Paul Graham and Peter
Seibel show that it is quite easy. :-)

Ok, let's approach it from another side:
I have searched through the Darcs source and found 17 data structures with
strict members (for example, data Patch = NamedP !PatchInfo ![PatchInfo]
!Patch) and 36 occurrences of the dreaded seq. And Darcs is not even a
speed-critical application.

And if one tries to write a cipher decoder in Haskell, I guess he has to
make his program full of '!' and 'seq' (or FFI).
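To make the '!' and seq point concrete, here is a minimal sketch of both idioms, assuming nothing from Darcs itself:

```haskell
module Main where

import Data.List (foldl')

-- A strict field: the '!' forces the payload as soon as the
-- constructor is applied, so no thunk accumulates inside the value.
data Counter = Counter !Int

bump :: Counter -> Counter
bump (Counter n) = Counter (n + 1)

-- Forcing an accumulator by hand with seq; foldl' does the same trick.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in acc' `seq` go acc' xs

main :: IO ()
main = do
  let Counter n = bump (Counter 0)
  print n                              -- 1
  print (sumStrict [1..100000])        -- 5000050000
  print (foldl' (+) 0 [1..100000 :: Int])
```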

Dare I say the tradeoff is between a relatively simple
operational model (so you can judge space and time usage easily) and
semantic simplicity (e.g. the beta rule is unrestricted, easing program
transformation).

Cool! Thank you.

Best regards,
Nick.




Re: [Haskell-cafe] Does laziness make big difference?

2007-02-18 Thread Nick

Jerzy Karczmarczuk,


You have strict languages as Scheme or ML, with some possibilities to do
lazy programming. Go ahead! (But you will pay a price. The laziness in
Scheme introduced by the delay macro can produce a lot of inefficient
code, much worse than coded at the base level).

Maybe I am not clear enough, but this is the price I try to measure. :-)

The question is NOT open. The question has been answered a long time ago
in a liberal manner. You have both. You *choose* your programming 
approach.

You choose your language, if you don't like it, you go elsewhere, or you
produce another one, of your own.
Yes, I agree, the world of programming is very rich. But as you probably
know, there are quite a few curious people (at least in the Russian
community) who begin to be interested in other (than mainstream)
languages. Of course, at some moment they meet Haskell, get excited by
its excellent expressive capabilities, but finally ask the same question:


   What advantages does lazy language have?

And you see, it is incorrect to answer "relax, no advantages at all,
take a look at ML or Scheme", because it is just not true. But in order
to invite new members to the community, we have to answer this question
(plus 100 other boring questions) over and over again. It is especially
hard to avoid another holy war, because on the other side there are
languages with advanced expressiveness features and macro systems.
Haskell chose a particular schema, one that implied a *very concrete*
decision concerning the underlying abstract machine model, and the
implementation. It is a bit frustrating reading over and over the
complaints of people who never needed laziness, so they don't appreciate
it, and who want to revert Haskell to strict. As if we were really
obliged to live inside of a specific Iron Curtain, where only one
paradigm is legal.
You misunderstood me, I am not trying to revert Haskell to strict. I like
Haskell as is. My motivation is different.


Best regards,
Nick.


[Haskell-cafe] Is lazyness make big difference?

2007-02-15 Thread Nick

Hi all,

(Another topic stolen from a Russian forum discussion).

As everyone knows, there are lots of strict languages that have the
possibility to switch on lazy evaluation when needed.

But in the Haskell examples I have seen, there was not much use of lazy
evaluation; often there were just several lazy points, and the rest
could be done strictly without loss of generality. For example, in
finding primes:


   main= print primes
   primes  = 2:filter is_prime [3,5..]
   is_prime n  = all (\p -> n `mod` p /= 0) (takeWhile (\p -> p*p <= n) primes)

We can rewrite this in a strict language with lazy constructs. For
example, in Scala (of course, Stream is not the only lazily evaluated
thing there):


   def main(args: Array[String]): Unit = {
   val n = Integer.parseInt(args(0))
   System.out.println(primes(ints(2)) take n toList)
   }

   def primes(nums: Stream[Int]): Stream[Int] =
   Stream.cons(nums.head,
   primes ((nums tail) filter (x => x % nums.head != 0)) )

   def ints(n: Int): Stream[Int] =
   Stream.cons(n, ints(n+1))

I think the Haskell solution is more compact due to syntactic sugar,
currying and parentheses-free-ness, *not* lazy evaluation.


According to one guy's analogy: the Real World is strict - in order to 
drink tea, you have to put the cattle on the fire, wait until water 
boils, brew tea and then drink. Not the cattle is put on the fire, water 
boils and the tea is brewed when you take the empty cup to start 
drinking. :-)


The question is the following: how big is the gap between strict languages
with lazy constructs and Haskell? Does default laziness have an
irrefutable advantage over default strictness?


Thank you for the attention.

With best regards,
Nick.


Re: [Haskell-cafe] Re: Is lazyness make big difference?

2007-02-15 Thread Nick




Gleb,

It seems you miss the point here: not only the logger should be lazy,
but all calls to the logger's methods:

logger.debug(formatLongMessage(args)); // formatLongMessage should not
                                       // waste CPU cycles if debug
                                       // logging is off

Hmm, nope. Let me change the debug signature from

debug(message : String) : Unit

to

debug(message : => String) : Unit

or (which is substantially the same)

debug(@lazy message : String) : Unit  // annotations and macros will be in Scala very soon :-P

and all the calls remain the same. So the example shows that default
laziness saves me from typing the word "lazy" a whole two times! That
doesn't look like an "irrefutable advantage" to me, sorry ;-)
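For comparison, the Haskell side of the same example, a minimal sketch of my own (not from the thread): with default laziness the message argument is simply never forced when debugging is off, with no annotation at either the definition or the call site:

```haskell
module Main where

debugOn :: Bool
debugOn = False

-- No 'lazy' keyword anywhere: msg is an ordinary (lazy) argument.
debug :: String -> IO ()
debug msg
  | debugOn   = putStrLn msg
  | otherwise = return ()          -- msg is never evaluated here

-- A message whose evaluation would never terminate if it were forced.
expensiveMessage :: String
expensiveMessage = "count: " ++ show (length [1 :: Integer ..])

main :: IO ()
main = do
  debug expensiveMessage           -- fine: the thunk is dropped unevaluated
  putStrLn "done"
```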

Best regards,
Nick.




Re: [Haskell-cafe] Does laziness make big difference?

2007-02-15 Thread Nick




Dougal,


  
According to one guy's analogy: the Real World is strict - in order to 
drink tea, you have to put the cattle on the fire, wait until water 
boils, brew tea and then drink. Not the cattle is put on the fire, water 
boils and the tea is brewed when you take the empty cup to start 
drinking. :-)

  
  
I think the word you meant there is "kettle", since "cattle" are what
get turned into burgers ;-) Still, the idea of water-boil-tea-brew
happening by demand would probably save electricity in our
energy-conscious world. Don't boil a full kettle for a single cuppa!
  

Of course, I meant the "kettle" that should be put on the fire. I never
intended to singe poor cows for a cup of tea. :-D
And to continue this analogy: when we decide to drink tea, we run the
algorithm (water-boil-tea-brew) from beginning to end, strictly and
imperatively. We resolve the dependencies water -> ... -> brew
only once, at the stage of "designing" the algorithm (by means
of reasoning and common sense), but after finishing the "design" we can
just run it any number of times for any amount of tea. :-)


  
The question is the following: how big the gap between strict languages 
with lazy constructs and Haskell? Does the default lazyness have 
irrefutable advantage over default strictness?

  
  
That kinda leads into thoughts of the Turing tar-pit, where everything
is possible but hopelessly obfuscated by the constraints of the
language.

I think default laziness, to answer your actual question, has advantage
in terms of thought process. It helps me consider things in terms of
dependencies. To go back to the analogy: in the strict style it's very
easy to boil the kettle and then let the water go cold. This is a waste
of energy (CPU time, or whatever).
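To make the "boiled water going cold" point concrete, a minimal sketch with a stand-in cost function of my own: under lazy evaluation only the demanded elements are ever "boiled":

```haskell
module Main where

-- Stand-in for some expensive computation.
boil :: Int -> Int
boil n = n * n

main :: IO ()
main =
  -- take 3 demands only three elements, so only three 'boil's happen,
  -- even though the list [1..] is conceptually infinite.
  print (take 3 (map boil [1 ..]))   -- prints [1,4,9]
```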

So whether it's *computationally* more valuable, I don't know. But I
find that it helps me to order my thoughts about what a problem *needs*.

Well, as I understand it, a lazy language is more suitable for your way
of thinking. I see, this is a good motive for you (or people like you; I
guess you have a solid mathematical background or something like that) to
choose Haskell or another language with default laziness, but the
question in general remains open...

Thank you for the answer.

Best regards,
Nick.




Re: [Haskell-cafe] Does laziness make big difference?

2007-02-15 Thread Nick




Nicolas,
Someone already mentioned John Hughes' paper. Another resource is SPJ's
"hair shirt" slides (which also discuss type classes and monads):

http://research.microsoft.com/~simonpj/papers/haskell-retrospective/HaskellRetrospective.pdf

Laziness is addressed beginning on slide 19.

I've even read the "History of Haskell" paper, and according to it,
default laziness was chosen because it was rather a uniting factor for
scientists, not because it had some big advantage:

Then, in the late 70s and early 80s, something new happened. A
series of seminal publications ignited an explosion of interest in the
idea of lazy (or non-strict, or call-by-need) functional languages as
a vehicle for writing serious programs.

So, as I understand it, choosing default laziness was just an experimental
design decision, made in order to answer the question "how good can a lazy
language be?". I am practically convinced that lazy evaluation,
included in the _right_ places, can be extremely useful. I didn't state
the question as "strict vs lazy"; my question is different: "default
laziness with optional strictness -vs- default strictness with optional
laziness". And sorry, but the question remains open.

Best regards,
Nick.




Re: [Haskell-cafe] Connected!

2007-02-04 Thread Nick

Bulat Ziganshin wrote:

Hello haskell-cafe,

i've just got ADSL connection here! it's slow (64k) and not cheap, but
at least it is completely different from dial-up i've used before

That's great! I'm using GPRS, so you can imagine how painful it is :-)

ps Ru = Добро пожаловать в Декларативное программирование: 
rsdn.ru/forum/?group=decl
ps En = Welcome to Declarative programming (forum on Russian software 
developer network) ;-)


Best regards,
Nick.


Re: [Haskell-cafe] Interoperability with other languages and haskell in industry

2004-09-17 Thread Vincenzo aka Nick Name
On Thursday 16 September 2004 20:27, Andy Moran wrote:
 I'd like to say that this approach has worked for us time and time
 again, but, to date, we've never had to rewrite a slow component in C
 :-)  For us, C interoperability has always been a case of linking to
 third party software, or for writing test harnesses to test generated
 C.


The point is that perhaps we will not have a prototype but a single
implementation (not that I think it's a good idea in the general case,
but we will write a relatively simple bookkeeping application). However,
I realize that one can write a great part of the software in a single
language. The point is providing an escape to Java, C++, C#, Python or
other in-vogue languages in case we find that it's difficult to
interface with legacy systems, or we don't find a coder to hire in the
future. So the point is not to rewrite something in C for efficiency,
but rather to be able to say "ok, this component is written in haskell
and will stay this way, but the rest of the system won't be haskell
anymore". However:

 Things are different if your application is multi-process and/or
 distributed, and you're not going to be using an established protocol
 (like HTTP, for instance).  In that case, you might want to look at
 HDirect (giving access to CORBA, COM, DCOM), if you need to talk to
 CORBA/COM/DCOM objects.  There are many simple solutions to RPC
 available too, if that's all you need.

I see that there is, for example, xmlrpc, which should fit my little
interoperability needs, and I would have liked to hear some experience on
that route. Your reply is encouraging, though, since you didn't need
any other language at all. That's my hope, too.

Bye, and waiting for that other famous haskell-using company that I didn't
mention to attend this discussion :)

Vincenzo


[Haskell-cafe] Interoperability with other languages and haskell in industry

2004-09-16 Thread Vincenzo aka Nick Name
Again, I will try to take advantage of the thread on the senior list to
ask a question of everybody who uses haskell in industry (so you people
at Galois Connections can't avoid answering, I know you are there :D ):
are your solutions entirely written in haskell, or are there parts
written in other languages? If so, how do you make all the parts
interoperate? Do you use some form of RPC, or CORBA? Do you just use a
database to store common data? Do you use custom protocols (e.g.
command-line arguments), or what? Do you have experience with wrong ways
to achieve this goal?

I ask this because it might be that in the next few years our sleeping
company will produce some software, and I can easily convince other
people to use new languages if I can assure them that, in case it
proves difficult for any reason, we can stop with a certain module
and implement the rest of the system using more conventional
technologies.

Thanks

Vincenzo


Re: [Haskell-cafe] Re: Haskell and sound

2004-05-09 Thread Vincenzo aka Nick Name
On Saturday 08 May 2004 13:16, Sven Panne wrote:
 Apart from that, having a binding for SDL would be nice, too, and
 somebody is already working on it, IIRC.

I would like to try these bindings.

V.


[Haskell-cafe] Is hdirect just broken?

2004-05-06 Thread Vincenzo aka Nick Name
On Thursday 06 May 2004 13:36, Vincenzo aka Nick Name wrote:
 Can greencard support callbacks? If yes, can someone provide a simple
 example?

Ok, I finally found Alastair Reid's tutorial, which I had forgotten to read
again, and, well, I see that greencard does not support callbacks. My
alternatives are plain FFI and hdirect. I have just given up on the
[unique] tag and on having optionally-null fields in a record, but now I
have another problem:

===
typedef int (*readlinkT)([in,string] const char *, 
[size_is(size),string,out] char * buf, [hide] unsigned int size);

ghc  -package hdirect -fglasgow-exts HSFuse.hs -c

HSFuse.hs:190: Variable not in scope: `buf'
===

The offending line in the following fragment of code is the one
beginning with "let size ="; of course there is no "buf" if the
argument is called "out_buf"!

===
wrap_ReadlinkT readlinkT_meth anon1 out_buf size =
  do
anon1 - HDirect.unmarshallString anon1
(res__buf, res__o_ReadlinkT) - readlinkT_meth anon1 size
let size = (Prelude.fromIntegral (Prelude.length buf) ::  
 Data.Word.Word32)
buf - HDirect.marshallString buf
HDirect.writePtr (Foreign.Ptr.castPtr out_buf) res__buf
Prelude.return (res__buf, res__o_ReadlinkT)
===

Now, if I may ask the final question: is it me not understanding how I 
should do things, or is hdirect completely broken and I am the only one 
not knowing this? What tool should I use to write a libfuse binding? Is 
plain FFI my only hope?
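For the record, plain FFI can express callbacks through so-called "wrapper" imports, which turn a Haskell function into a C function pointer. A minimal sketch with made-up names (not libfuse's actual API):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C.Types (CInt (..))
import Foreign.Ptr (FunPtr, freeHaskellFunPtr)

-- A "wrapper" import produces a C-callable function pointer from a
-- Haskell closure; the pointer can then be handed to a C callback table.
foreign import ccall "wrapper"
  mkIntCallback :: (CInt -> IO CInt) -> IO (FunPtr (CInt -> IO CInt))

main :: IO ()
main = do
  cb <- mkIntCallback (\n -> return (n + 1))  -- hypothetical callback
  -- ... here one would pass 'cb' to the C library ...
  freeHaskellFunPtr cb                        -- must be freed afterwards
```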

Thanks

Vincenzo


Re: [Haskell-cafe] Is hdirect just broken?

2004-05-06 Thread Vincenzo aka Nick Name
On Thursday 06 May 2004 16:10, Vincenzo aka Nick Name wrote:
 [hide] unsigned

oh yes, I know, [hide] does not exist in hdirect but this does not 
change things :)

V.

-- 
Non so chi colpire perciò non posso agire
(I don't know whom to strike, so I cannot act)
[Afterhours]



Re: [Haskell-cafe] Toy application advice wanted

2004-05-05 Thread Vincenzo aka Nick Name
On Wednesday 05 May 2004 04:46, Ben Lippmeier wrote:
 http://www.haskell.org/libraries and look at how many seperate GUI
 libraries there are - I counted 16 - then ask what made the developer
 for the 16th one choose to start over.

The fact that the 16th one is a wxwindows binding justifies this quite 
well :)

V.

-- 
Si puo' vincere una guerra in due e forse anche da solo
si puo' estrarre il cuore anche al piu' nero assassino
ma e' piu' difficile cambiare un' idea
(You can win a war as two, and maybe even alone; you can tear the heart
out of even the blackest assassin; but it is harder to change an idea)
[Litfiba]
