Re: [Haskell-cafe] Can subclass override its super-class' default implementation of a function?

2009-04-27 Thread Claus Reinke

Basically, I have a bunch of instances that have a common functionality but
I'd like to be able to group those instances and give each group a different
default implementation of that functionality. It's so easy to do this in
Java, for example, but I have no idea how to do it in Haskell. The above
example will not compile, btw.


Before going into language extensions, is there anything wrong with:

class C a where foo :: a -> Double

fooA a = 5.0 -- default A
fooB a = 7.0 -- default B

data Blah = Blah
data Bar = Bar

instance C Blah where foo = fooA
instance C Bar  where foo = fooB

blah = Blah
bar = Bar

main = do
 print $ foo blah -- should print 5.0
 print $ foo bar  -- should print 7.0

It seems to match your spec.
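As a small sketch on top of that (primed names, to avoid clashing with
the definitions above): if one of the groups is large, its default can
also live in the class itself as an ordinary Haskell'98 default method,
so only the other group's instances need to override anything:

    class C' a where
      foo' :: a -> Double
      foo' _ = 5.0                      -- group-A behaviour as the class default

    instance C' Blah                    -- picks up the default
    instance C' Bar where foo' = fooB   -- group B overrides

That still gives only one anonymous default per class, though; the named
group functions above scale to any number of groups.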

Claus




Re: [Haskell-cafe] breaking too long lines

2009-04-25 Thread Claus Reinke

* with practically every modern IDE (say, Eclipse for Java), indentation is a
non-issue. 


How so? In future IDEs, source code might just be a view on an internal
representation, but we've had that kind of IDE in the past, and some users
developed a definite dislike of tools that wouldn't let them adjust layout to
whatever they thought best suited to the task at hand.

Unless you mean semantics-based editing, where the IDE should be
able to compensate for unwanted side-effects of, say, renaming, or adding/
removing parameters, without resorting to just pretty-printing everything.

But as long as programmers need to resort to text editing, IDEs can at
best guess their intentions, and some layout styles will be less prone than
others to needing adjustments after typical editing tasks.
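For illustration (a made-up snippet with placeholder names): compare a
layout that aligns continuation lines under an identifier's arguments
with one that uses a fixed increment - renaming 'process' invalidates
the first, but not the second:

    result1 = process input1
                      input2    -- aligned under the first argument

    result2 = process
        input1                  -- fixed four-space increment
        input2

    process :: Int -> Int -> Int
    process = (+)

    input1, input2 :: Int
    input1 = 1
    input2 = 2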


* indentation should be by fixed amounts (e.g. 4 spaces for each level)
and not depend on lengths of identifiers (because you might later change
them)
(well, actually you won't because we don't have Refactor->Rename
that would be aware of namespaces, modules etc.)


It hasn't been maintained for quite a while, and was only Haskell'98,
but Renaming was the first refactoring implemented in HaRe, and
HaRe was module-aware from about version 0.2:


http://haskell.org/pipermail/haskell/2004-January/013508.html

HaRe also adjusted indentation if Rename changed identifier lengths.
According to its homepage, support for hierarchical modules was added
in version 0.3 (the language concept, that is - not the hierarchical libraries
that code tended to depend on even then, let alone the cabal/hackage of today).


Claus




Re: [Haskell-cafe] Overriding a Prelude function?

2009-04-24 Thread Claus Reinke

As far as referential transparency is concerned, you'd need to keep
your reference levels straight, which can seem confusing if you can
only communicate via representations(*:-) 


That came out more confusing than it needs to be, so let me try that
again, from scratch:

Reduction systems take programs that are representations of problems
and produce programs that are representations of solutions. One level
of representation, so users can pretend there is no representation: for
them, reduction systems reduce programs to simpler programs.


From the implementation perspective, representations of programs
are translated downwards into executable code, the code is executed, 
the system states resulting at the end (or when the reduction count runs 
out) are translated upwards into representations of programs again.


Slightly more intricate, but users still see only programs reducing to
other programs. It doesn't matter whether the end programs are values,
functions, or partially reduced applications - they are always represented
in the same high-level form users started out with, without user programs
having to deal explicitly in representations. Referential transparency is
never in question.

In current Haskell implementations, representations of programs
are translated downwards into executable code, the code is executed.

There is something missing here: the resulting system states are never
translated back into language-level representations, so users have to
deal with the gap between high-level and low-level representations.
To do that, we use 'show' and 'IO' in our programs - we write 
programs that talk about representations of a subset of programs, 
namely 'Show'able values, and about printing those representations.


So we end up with representations of programs-that-print-out-
representations-of-values; the programs are translated downwards
into executable code, the code is executed, doing so prints out
representations of values. What GHCi and Hugs do is to wrap
a 'putStrLn . show' around our input programs, so there is some
chance of getting a representation of result values back. GHCi
also allows us to make and inspect (show values or types of) 
bindings at the prompt, but it doesn't give us a representation 
of the program context we are constructing.
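Roughly (a simplification - the actual prompt machinery also deals with
IO actions and bindings), entering an expression at the prompt behaves
like compiling and running:

    main :: IO ()
    main = (putStrLn . show) (take 3 [1 ..] :: [Int])   -- prints [1,2,3]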


When using a reduction system, I was able to focus on programs.
When using a Haskell implementation, I have to think about
representations of results, and how I get them out of a running
program. If the intended result is something for which programs
cannot produce or print a representation (functions, partially
reduced applications), I am stuck. I might ask for a debugger,
if only to get some kind of insight into what my program is
doing when it isn't printing representations of values for me,
but I won't get a language-level view of program execution
via representations of intermediate programs.

Hope that is a less confusing view of the differences,
Claus




Re: [Haskell-cafe] Overriding a Prelude function?

2009-04-23 Thread Claus Reinke

Well, than, what would you expect from this:

let { f x = g x;
      g 0 = 0;
      g n = f (n-1) }
in show f


Well, not show, because any show instance for functions breaks r.t.  But the
interactive interpreter, if it is not subject to r.t., might show:

let { f x = g x;
      g 0 = 0;
      g n = f (n-1) } in f


Actually, most of those reduction systems were strongly, but
dynamically typed, so the original example would give something
like this (no reductions possible):

   let { f x = g x;
         g 0 = 0;
         g n = f (n-1) }
   in show f

And if you applied something like 'show' to something it can work
with (let's wrap it in a function to make things interesting),

   \x->show [1,2]

you might get, unsurprisingly (but watch your representation levels!)

   \x->"[1,2]"

As far as referential transparency is concerned, you'd need to keep
your reference levels straight, which can seem confusing if you can
only communicate via representations(*:-) The easy part is semantics
and inverse

   eval :: representation -> meaning
   reify :: meaning -> representation

Since several program representations can have the same meaning,
the inverse is not usually a function, but a relation, so Haskell picks
a partial implementation of 'reify', named 'show', applicable only to
things that have a uniquely determined representation. And since
Haskell implementations can't display things by themselves, they
use 'show' to show at least some things.
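As a toy model of those two signatures (with an illustrative
representation type), note how 'reify' has to pick one of several
representations with the same meaning:

    data Expr = Lit Int | Add Expr Expr  deriving Show

    eval :: Expr -> Int            -- representation -> meaning
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b

    reify :: Int -> Expr           -- meaning -> representation: one choice
    reify = Lit                    -- among many, since Lit 3 and
                                   -- Add (Lit 1) (Lit 2) mean the same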

Now for the tricky part: as we write down or implement a semantics,
we're using a secondary level of representation, so we have representations
of program representations and representations of program meanings
(we might say that the 'semantics of 3', '[| 3 |]',  is the number '3'). 

In other words, we have a representation of 'eval' that maps 
representations of representations to representations of meanings.


So when we write down a program, it is a program to us, but just a string
to be interpreted for the implementation. And the implementation can
give back a representation of the meaning to us, without ever passing
such representation to other parts of the program (no 'show' involved).

Concretely, we type in '(\x->\y->x+2) 1', and the implementation 
gives back '\y->3' without having to support any kind of reification 
operation in the program itself - no problem with referential 
transparency there.


Claus

(*:-) ".. The name of the song is called `Haddocks' Eyes'"
   "Oh, that's the name of the song, is it?" Alice said, trying to
   feel interested. "No, you don't understand," the Knight said,
   looking a little vexed. "That's what the name is called. The
   name really is `The Aged Aged Man.'"
   "Then I ought to have said `That's what the song is called'?"
   Alice corrected herself. "No, you oughtn't: that's quite another
   thing! The song is called `Ways And Means': but that is only
   what it's called, you know!"
   "Well, what is the song, then?" said Alice, who was by this
   time completely bewildered. "I was coming to that," the Knight
   said. "The song really is `A-sitting On A Gate'.."
   (Lewis Carroll, Through The Looking-Glass)




Re: [Haskell-cafe] Overriding a Prelude function?

2009-04-23 Thread Claus Reinke

*Main> :t rollDie ~>> (rollDie ~>> rollDie)
rollDie ~>> (rollDie ~>> rollDie) :: Seed -> (Int, Seed)

This is a function. How exactly do you want ghci to show it? When you 
figure that out, feel free to make an instance of Show for it.


Just because user programs can't show function internals (they can
show parts of in/out tables, and with sufficient overloading they can
even show approximations of representation), that does not mean
that the language implementation can't show function internals.
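(For reference, the folklore instance that shows part of a function's
in/out table, usable only at bounded, enumerable argument types:

    instance (Bounded a, Enum a, Show a, Show b) => Show (a -> b) where
      show f = unlines [ show x ++ " -> " ++ show (f x)
                       | x <- [minBound .. maxBound] ]

    -- e.g. show not  gives  "False -> True\nTrue -> False\n"

Internals, of course, remain out of reach this way.)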

That is a feature that I've always missed in Haskell implementations.

In the reduction systems I used before Haskell, one could enter a
function, reduce it (stepwise, or to whnf, or even to hnf) and get
the resulting function back in the editor, ready for inspection, editing,
application, and further reduction. That kind of feature needs to be
designed into the implementation early on, but it gives a whole new
dimension to "programming with functions" (I recall how difficult it
was for the designers and implementers of those reduction systems
to explain that difference to reviewers).

Claus

PS. and, yes, before anyone suspects otherwise, those reduction
   systems did compiled graph reduction - they just didn't expose
   users to all those low-level details. 





Re: [Haskell-cafe] Re: ANNOUNCE: Utrecht Haskell Compiler (UHC) -- first release

2009-04-23 Thread Claus Reinke
Let's turn this around. You invest 4 months of your life coming out 
with your own experimental Haskell compiler designed to easily test 
new language features. Then a bunch of ungrateful wretches on Haskell 
Cafe demand that you stop distributing your compiler until you have 
full support for Haskell 98. :-)


Do you think that's fair?


|Well, if it doesn't implement the full standard, perhaps it should rather be called 
|UVNABNQHC (Utrecht very nearly, almost but not quite Haskell compiler)?


uHC: unsafeCompileHaskell ?-)

joking and bikeshedding aside:

- Haskell'98 is a fixed standard. Haskell'98 (revised) is a revised version of
   the same standard. The discussion on what is in either is over. Unless
   someone wants to start and edit a new revision of Haskell'98. Or someone
   wants to write about experience with and criticism of the existing standards.
   None of which seems to relate to this thread's subject, though either would
   fit into other threads on this mailing list.

- the UHC announcement states (emphasis added): "UHC supports 
   _almost all_ Haskell98 features plus many experimental extensions".

   Once they start claiming to have a full Haskell'98 implementation,
   everybody can start filing bug reports. Actually, you can start doing
   that now as they explicitly relate UHC to Haskell'98, not Haskell,
   not Haskell'. But once you've filed a bug report about a deviation
   from the version of the standard being referred to, it is up to them.

- there are one or two more interesting things to discuss about UHC.
   That would require some actual work to find out more about it.

- implementing a Haskell compiler requires a lot of work. So does
   detailing language extensions, to say nothing about providing 
   supporting evidence for suggested language extensions by 
   actually implementing them side-by-side with Haskell's other
   features. 


- anyone who gets through the work of implementing something,
   let alone a Haskell compiler, to the stage of releasing/announcing
   it, is likely looking forward to getting feedback on their work.

   In reality, the only feedback most projects get is from bug reports
   (and not always those), web access logs, and rumours on blogs
   or irc. One really, really, does not need one's project name to be 
   used for other unrelated entertainment as well.


May I respectfully suggest that further postings under _this_ subject 
give something back to the UHC implementers, in the form of investing 
some actual work and time to find out about the fruits of their work?


Claus

PS. Sorry for going meta about this. Just one reader's and fellow
   programmer's over-sensitive opinion. Feel free to colour bikesheds
   or have interesting discussions on non-UHC-specific Haskell
   standard issues, in non-UHC-specific threads. Or to ignore this
   suggestion altogether.


PPS. Keep this up, though, if you want to see future announcements
   of real software appear on the haskell@ list only, excluding the
   haskell-bikeshed@ list. That is assuming that people will still be
   motivated to implement such software, if vapourware could trigger
   the same responses.




Re: [Haskell-cafe] Non-atomic "atoms" for type-level programming

2009-04-22 Thread Claus Reinke

in case anyone stumbles over my ad-hoc notations, that should have been:


module A[type label] where x = undefined :: label
module B[type label] where x = undefined :: label

module C[type label] where
import A[label]
import B[label]
ok = [A.x,B.x]


assuming that:

- 'module X[types]' means a module parameterized by 'types'
- 'import X[types]' means a module import with parameters 'types'.

Claus




Re: [Haskell-cafe] Non-atomic "atoms" for type-level programming

2009-04-22 Thread Claus Reinke
thanks for your elaborations. I'm still not convinced that a common name 
(e.g. TT :. Tr :. Tu :. Te) is a better interface than a common import 
(e.g. TypeLevel.Bool.True). In both cases, the authors of all modules 
have to actively collaborate, either to define common names, or to 
define common imports.


It is not a solution, it is a workaround. All it does is offer users another
choice, so they can now say whether they are talking about shared or
locally defined labels:

module A where 
import Data.Label

data MyLabel
x = [$l|label|]
y = undefined::MyLabel

module B where 
import Data.Label

data MyLabel
x = [$l|label|]
y = undefined::MyLabel

module C where
import Data.Label
import A
import B
ok = [A.x,B.x]
fails = [A.y,B.y]

It does so by offering a meta-level commonality: A and B do not have
to agree on a common module to declare all their common types (the 
author of Data.Label has no idea what labels its importers might use,
other than the alphabet the labels are constructed from), they only 
need to agree on a common way of declaring all their shareable types.


But I begin to see how type-level atoms could help to, e.g., implement 
more advanced module system as type-level embedded DSLs in Haskell.


Well, atoms make labels, labels form extensible record fields, extensible 
records can be used as first-class modules, but there'd still be a lot missing.


This is really just a small step, and though it is a useful one, it has no such
high aspirations, yet. When I wrote the first-class-labels proposal for
Haskell', I was thinking about possible implementations (outlined in the
haskell prime wiki page I referred to) but they always looked as if they'd
require substantial changes. This workaround suggests that a few
well-placed localised changes to GHC might be sufficient to get first-class
labels - just split the modifications over two, only loosely coupled areas:

- provide label constructors
- provide label usage (preferably hiding the internal structure)

Until someone does that, quasiquoting offers a workaround, so people
can resume playing with things like type-level numbers, extensible record
libraries with and without label ordering, etc. I've filed a feature request
for type-level quasiquoting, in case anyone else has such needs:-)

http://hackage.haskell.org/trac/ghc/ticket/3177


Standard ML's answer to that kind of issue is type sharing.


Does type sharing help with making modules retroactively compatible?


It has been too long since I looked at this in detail, but yes. The way
I recall it (and the early example in [1] seems to confirm this, though
SML has changed after that paper was published) is that modules
have signatures, and type sharing constraints assert that parts of
these signatures have to match up (in Haskell-speak: imagine modules
as records, with two records R1 a and R2 b, then we can use a type
'a~b => R1 a -> R2 b -> T' to assert that both records share the
same type parameter; only that Haskell modules aren't records
and aren't parameterized..).
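In today's Haskell, the records analogy can at least be sketched in a
single file (names are illustrative; the '~' constraint plays the role
of the sharing constraint):

    {-# LANGUAGE GADTs #-}       -- just for the '~' equality constraint

    data R1 a = R1 { x1 :: a }   -- "module" 1, as a record
    data R2 b = R2 { x2 :: b }   -- "module" 2, as a record

    shared :: a ~ b => R1 a -> R2 b -> [a]
    shared r1 r2 = [x1 r1, x2 r2]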

It would be as if one could write modules parameterised by types,
instead of declaring them locally, and being able to share a type
parameter over several imports:

module A(type label) where x = undefined :: label
module B(type label) where x = undefined :: label

module C(type label) where
import A[label]
import B[label]
ok = [A.x,B.x]

Claus

[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.3595
   Robert Harper and Mark Lillibridge,
   A Type-Theoretic Approach to Higher-Order Modules with Sharing




Re: [Haskell-cafe] Re: ANNOUNCE: Utrecht Haskell Compiler (UHC) -- first release

2009-04-22 Thread Claus Reinke

Installing executable(s) in /home/david/.cabal/bin
why the hell would cabal install binaries in a subdirectory of a
hidden directory. Why not /home/david/bin or /home/david/local/bin ?


Yes, this is clearly suboptimal but getting agreement on where to put it
has not proved easy. There are users that will scream and shout if we
install to $HOME/bin by default.


Having learned from experience that user preferences differ wildly,
even on similar platforms, not to mention a variety of platforms or,
even worse, intentionally different forks of the same platform, and
that trying to guess what defaults might be sensible, let alone acceptable,
can be a losing game, I'd like to offer an alternative view:

   if there is no universally acceptable default, do not use a default

Next to not being bothered with configurations they agree with, users
like to be in control, or at least be informed about what is going on,
and precisely how to change it, *before* anything happens that they
do not like.

cabal install could, on its first invocation, point to its configuration
file, explaining precisely the minimum number of changes required
to get it working (with potential defaults being present in the config
file, commented out and explained, the config file could be a config
mini-tutorial).
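Something like this hypothetical fragment, say (field names loosely
follow cabal-install's config file, but treat the details as
illustrative only):

    -- no default location for executables was chosen;
    -- please uncomment exactly one bindir before installing:
    install-dirs user
      -- bindir: $HOME/bin
      -- bindir: $HOME/.cabal/bin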

This would depend on a few things to be acceptable:

- configuration should be straightforward, with explanations of
   possible consequences being clear and close at hand; if there
   is no config file, the tool should be able to generate a partial
   one (with disputed choices commented out) for further editing

- configuration should be persistent (never overwrite old config
   file without user permission; propagate old config to new tool
   version)

That way, nothing would happen until users are satisfied that things
will happen exactly as they like it and, once that is done, they won't
have to think about this again (until cabal changes substantially and
needs to ask for further user advice, which seems better than silently
changing behaviour).

If you want cabal to be installable in settings where no user is
available, you could either generate a full config file before install,
or add an --i-really-don't-care-about-config-settings option.

This road isn't perfect, but it can be less unacceptable than any
arbitrary set of default choices.

Claus




Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast;-)

2009-04-21 Thread Claus Reinke

|Yes, I found that one myself.  My old ftplugin/haskell.vim used to
|have autoindent set, but importing your haskell mode overwrote that
|(slightly irritating behaviour if you ask me ;-).  Putting my old
|config into ftplugin/haskell_mine.vim restored the behaviour.

Yes, one cannot have two "major" haskell modes claiming the same
file, as the file identifies the mode. Fortunately, Vim provides separate
files for separate aspects of filetype-specific functionality (syntax, indent,
plugins), allows splitting filetype plugins over several files as well (haskellmode
currently only sets 'include' and 'includeexpr' in 'ftplugin/haskell.vim'; everything
else is in separate files), and provides several versions of its configuration
directory trees, to allow for standard plugins, system-wide overrides,
user-overrides, ..


For user-specific modifications, the main 'ftplugin/' is probably not
the right place (see ':help runtimepath' for directory layout and paths
searched), although adding a unique postfix to the file name works,
as you've noticed (if we get too many generally available haskell 
plugins, we could move them to their own directory..). If you don't 
want to use your 'vimrc' file, then perhaps one of the 'after' directories 
(also in system-wide and user-specific varieties)? See ':help ftplugin-overrule'.


You might also want to look at ':help vimball' - by default, it installs
files in the first directory tree found on 'runtimepath', but you can
give a different path explicitly (eg, if your ~/.vim is first in the list,
but you don't want the plugins to be installed there, so that 
~/.vim/ftplugin/ isn't touched). There doesn't seem to be a documented 
way of asking for confirmation before overwriting existing files of the
same names, though - perhaps you could suggest this to the vimball 
maintainer?



|2. My browser doesn't urldecode its command line arguments, so
|locations like
"file:///usr/share/doc/ghc6-doc/libraries/base/Prelude.html%23v%253AfromIntegral"
|don't work. I have to manually fix it up in the location bar. Do you
|have any tip for dealing with that?

That is the first I hear about such an issue. As it seems to work with
most browsers (I tend to use Opera most, but -long ago- tested with
IE and FireFox as well), I would need to know about the browser in question
to see what is going on and think about workarounds.


|These are my g:haddock_ setting:
|let g:haddock_browser="/usr/bin/x-www-browser"
|let g:haddock_browser_callformat = "%s %s &"
|
|It's on a Debian system, hence the use of x-www-browser.  I've tried
|using both epiphany and iceweasel directly as well, with exactly the
|same behaviour.

One thing I learn from trying to support haskellmode for Vim is how
different platforms are - sometimes, being different is their whole purpose!-)

So, x-www-browser is a dispatcher for calling the default browser,
and epiphany and iceweasel are browsers?-) And you're sure that the
dispatcher doesn't do an extra encoding? And calling the dispatcher
directly with the url


file:///usr/share/doc/ghc6-doc/libraries/base/Prelude.html#v%3AfromIntegral

works without extra encodings appearing?


> file://localhost/c:/ghc/ghc-6.11.20090320/doc/libraries/base/Prelude.html#v%3AfromIntegral

|That is what I have in mine as well (though it's located in
|/usr/share/doc/ghc6-doc/libraries/). 


Good, so it isn't a Haddock difference.

| I'm not really sure where the extra level of encoding comes from.


The URL comes straight from Haddock's index file; it is copied literally
to g:haddock_index['fromIntegral'], and passed almost literally to the
browser call (with backslash escapes for '#'/'%'). Somewhere along that
line, or later, something different happens on your system, and it seems
to happen outside the plugins (you could confirm this by using 'echo' as
your browser, or by adding a Vim breakpoint for function DocBrowser,
and looking at a:url).


|My Vim-scripting fu is minimal.  Do you have any pointers on how to
|set breakpoints, step, and inspect variables?

Sure, ':help debug';-) In this particular case, using 'echo' as your browser
to echo the url Vim tries to pass would give you the information more directly.
But you could also try to look at it from within Vim (lines starting with " are
comments, not to be typed in):

" this should match the doc-index url
:echo g:haddock_index['fromIntegral']

" set breakpoint
:breakadd func DocBrowser
" now, if you use _? on fromIntegral, you'll end up in the debugger,
" in function DocBrowser, where you can inspect the local environment,
" then quit or continue:
echo a:url
echo printf(g:haddock_browser_callformat,g:haddock_browser,escape(a:url,'#%'))
q
" delete all breakpoints
:breakdel *

Does the second echo show the extra encoding? If not, it is beyond
haskellmode, I'm afraid - that command is passed directly to the shell
(well, as directly as a configurable editor will allow - see ':help :!').

Claus



Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast;-)

2009-04-21 Thread Claus Reinke

|I've finally found the time to take another look at your haskell mode
|for vim.  Two questions so far:

|1. I think earlier my indentation mode would match indentation of the
|previous line by default, after installing your scripts I always end
|up in column 1 after pressing return.  Is there some variable I can
|set to get back the old behaviour?

Matching indentation of previous line is still the behaviour I see here.
A quick look through my vimrc file (nothing to do with the plugins)
shows ':set autoindent' as the only indentation-related setting, and
':help autoindent' confirms that this option has the intended behaviour
(':set noautoindent' gives the alternate behaviour you describe).

I tend not to use the various more clever indentation options Vim
provides, so my plugins do not offer clever indentation either, and
should not interfere with whatever other indentation plugins you use.

':help autoindent' points to other indentation-related settings (c-style,
or language-dependent, or based on an evaluating an expression to
calculate likely indents, or ..). You could also try :scriptnames, and
see whether you are loading any Haskell indent scripts (if you enable
filetype detection to get my plugins loaded for Haskell source, and
you happen to have a Haskell indent file, you might pick that up - you 
can disable filetype indent separately, see :help filetype-indent).


|2. My browser doesn't urldecode its command line arguments, so
|locations like 
"file:///usr/share/doc/ghc6-doc/libraries/base/Prelude.html%23v%253AfromIntegral"
|don't work.  I have to manually fix it up in the location bar.  Do you
|have any tip for dealing with that?

That is the first I hear about such an issue. As it seems to work with
most browsers (I tend to use Opera most, but -long ago- tested with
IE and FireFox as well), I would need to know about the browser in 
question to see what is going on and think about workarounds.


Btw, the URL I see generated looks like this (right from Opera's
location bar):
file://localhost/c:/ghc/ghc-6.11.20090320/doc/libraries/base/Prelude.html#v%3AfromIntegral

so it seems that there is an additional level of encoding in your
setting (the '#'/'%' from the encoding are encoded themselves). 


The haskellmode plugins just get the URLs from Haddock's index files
(the index entry for fromIntegral points into Prelude.html, with the
'#v%3AfromIntegral' anchor already encoded); the URL is copied literally
to g:haddock_index['fromIntegral'], and passed almost literally to the
browser call (with backslash escapes for '#'/'%'). Somewhere along that
line, or later, something different happens on your system, and it seems
to happen outside the plugins (you could confirm this by using 'echo' as
your browser, or by adding a Vim breakpoint for function DocBrowser,
and looking at a:url).

It could be different Vim behaviour, but extra url-encoding is
rather specific to browsing, not editing. 


Thanks for reporting the problems (it is the only way I get to know
about other platforms/configurations than the one I'm using!). The 
plugins have a trac instance as well (I've just made the link more 
prominent on their home page).


Claus


On Tue, Apr 7, 2009 at 10:23 AM, Claus Reinke wrote:

I have become aware that many Haskellers are not aware of the
Haskell mode plugins for Vim, in spite of the >100 downloads
per month I saw when I last checked. Since the plugins have just
completed their move to their new home at

http://projects.haskell.org/haskellmode-vim/

this seems to be a good opportunity to mention them again (now
with 100% more screencasts!-). They do much (though certainly
not all) of the stuff people often wish for here, such as showing or adding
types, looking up source where available locally or looking up documentation
where source isn't available. Mostly, they collect
my attempts to automate tasks when I have to do them often enough.

Here is a section of the quick reference (:help haskellmode-quickref):

|:make|              load into GHCi, show errors (|quickfix| |:copen|)
|_ct|                create |tags| file
|_si|                show info for id under cursor
|_t|                 show type for id under cursor
|_T|                 insert type declaration for id under cursor
|balloon|            show type for id under mouse pointer
|_?|                 browse Haddock entry for id under cursor
|:IDoc| {identifier} browse Haddock entry for unqualified {identifier}
|:MDoc| {module}     browse Haddock entry for {module}
|:FlagReference| {s} browse Users Guide Flag Reference for section {s}
|_.|                 qualify unqualified id under cursor
|_i|                 add 'import <module>(<identifier>)' for id under cursor
|_im|                add 'import <module>' for id under cursor
|_iq|                add 'import qualified <module>(<identifier>)' for id under cursor
|_iqm|               add 'import qualified <module>' for id under cursor
|_ie|                make imports explicit for import statement under cursor
|_opt|               add OPTIONS_GHC pragma
|_lang|              add LANGUAGE pragma
|i_CTRL-X_CTRL-O|    insert-mode completion based on imported ids (|haskellmode-XO|)
|i_CTRL-X_CTRL-U|    insert-mode completion based on documented ids (|haskellmode-XU|)
|i_CTRL-N|           insert-mode completion

Re: [Haskell-cafe] ANNOUNCE: Utrecht Haskell Compiler (UHC) --first release

2009-04-19 Thread Claus Reinke

|data Test = Test { foo :: Int, bar :: Char, baz :: Bool }
|smallPrint t = concatMap (\f -> show $ f t) [foo, bar, baz]

|In this code the list [foo, bar, baz] should have the type
|[exists a. Show a => Test -> a].

{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ExistentialQuantification #-}

data EShow = forall a. Show a => EShow a

smallPrint t = concatMap (\f-> case f t of EShow a -> show a)
                         [EShow . foo, EShow . bar, EShow . baz]

data Test = Test { foo :: Int, bar :: Char, baz :: Bool }

Apart from the extra wrapping, this hardcodes the class. So perhaps
you'd prefer something like

data E t = forall a. E (a->t) a

smallPrint' t = concatMap (\f-> case f t of E show a -> show a)
                          [E show . foo, E show . bar, E show . baz]


GHC does have existentials (Hugs has them, too, and HBC had them as well?),
but is more conservative about their use than UHC seems to be.

Claus

PS there's also the old standby of applying the functions in the interface
   and letting non-strict evaluation take care of the rest (keeping the
   intermediate type implicit, instead of explicitly hidden):

   smallPrint_ t = concatMap (\f-> f t) [show . foo, show . bar, show . baz]
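   and a quick check that all three variants agree (given the
   definitions above):

   main :: IO ()
   main = mapM_ putStrLn [smallPrint t, smallPrint' t, smallPrint_ t]
     where t = Test 1 'a' True    -- each line prints: 1'a'True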




Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-16 Thread Claus Reinke

| But the core part of my suggestion (which this example was meant
| to help explain) remains attractive, at least to me: somewhere during
| type inference, GHC *does* unify the *apparently free* 'd' with an
| internal type variable (lets call it 'd1, as in the type error message)

You are speculating about the *algorithm*.  


There is no need to reason about the algorithm, only about its observable
results ('d1' appeared in the error message, but not in the inferred type). 

But I've finally figured out what had me confused - sadly something
that has come up before; I had just forgotten to apply it here.


First, the code again:

   {-# LANGUAGE NoMonomorphismRestriction #-}
   {-# LANGUAGE TypeFamilies #-}

   class Fun d where
       type Memo d :: * -> *
       abst :: (d -> a) -> Memo d a
       appl :: Memo d a -> (d -> a)

   -- f :: (Fun d) => Memo d a -> Memo d a   -- (1)
   f = abst . appl

(1) is the type inferred by GHCi. If we uncomment it, we get this
error:

   D:\home\Haskell\tmp\desktop\types.hs:11:11:
       Couldn't match expected type `Memo d1'
              against inferred type `Memo d'
       In the second argument of `(.)', namely `appl'
       In the expression: abst . appl
       In the definition of `f': f = abst . appl

My _erroneous_ reasoning was that some unknown 'd1' was being
unified with the 'd' from the signature. So I wanted that 'd1' to be
represented in the signature inferred by GHCi.

Of course, the error message really says something quite different,
and if I had recalled that 'Memo' is a type synonym, I would have
noticed that. If GHC ever provides an option to bracket fully applied
type synonyms, I'd probably switch it on by default.

  (what one actually wants is a way to distinguish type-level general 
   functions from type-level constructors that can be decomposed 
   during unification, similar to what Haskell does with capitalization 
   at the expression level, to distinguish general functions from 
   constructors that can be decomposed during matching). 


With such an option, the inferred type would look like this (er, hi Conor):

   f ::  (Fun d) => {Memo d} a -> {Memo d} a

The first parameter of 'Memo' is not directly available for unification,
so this signature is ambiguous as it stands, as you tried to point out
(with no way to pin down 'd', 'Memo d' can never unify with anything
other than itself, identically).
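The underlying reason, as a standalone sketch (family and instances
are illustrative, not from the code above): type families need not be
injective, so equal applications do not force equal arguments:

    {-# LANGUAGE TypeFamilies #-}

    type family MemoT d :: * -> *

    data P a = P a a
    data D1  = D1
    data D2  = D2

    type instance MemoT D1 = P
    type instance MemoT D2 = P
    -- MemoT D1 a and MemoT D2 a are the same type although D1 and D2
    -- differ, so 'MemoT d ~ MemoT d1' may not be decomposed to 'd ~ d1'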

Apart from bracketing fully applied type synonyms, the error message
could be improved by providing the missing bit of information about
'Memo':

   D:\home\Haskell\tmp\desktop\types.hs:11:11:
       Couldn't match expected type `Memo d1'
              against inferred type `Memo d'
       (type Memo d :: * -> *)
       In the second argument of `(.)', namely `appl'
       In the expression: abst . appl
       In the definition of `f': f = abst . appl

Sorry about letting myself get confused again by this known issue.
Claus

PS. I thought I'd try this alternative signature, to make the ambiguity
explicit:

   f ::  (Memo d~md,Fun d) => md a -> md a  -- (2)

but the error message is even less helpful:

   D:\home\Haskell\tmp\desktop\types.hs:11:11:
       Couldn't match expected type `Memo d' against inferred type `md'
         `md' is a rigid type variable bound by
              the type signature for `f'
              at D:\home\Haskell\tmp\desktop\types.hs:10:14
       In the second argument of `(.)', namely `appl'
       In the expression: abst . appl
       In the definition of `f': f = abst . appl

Shouldn't the inferred type still be 'Memo d1'?  Why is part of the 
_declared_ type "expected" ('Memo d'), the other "inferred" ('md')? 

Could GHC perhaps be more detailed about 'expected', 'inferred', 
and 'declared', and use these terms from a user perspective? Even 
in the original error message, shouldn't 'expected' and 'inferred' be 
the other way round? 


And if we add the missing contexts for the types mentioned in the
message, the error message could really be:

   D:\home\Haskell\tmp\desktop\types.hs:11:11:
   Couldn't match inferred type `Memo d1'
  (f ::  (Fun d1) => {Memo d1} a -> {Memo d1} a)
  against declared type `Memo d'
  (f ::  (Fun d) => {Memo d} a -> {Memo d} a)
   (type {Memo d} :: * -> *)
   In the second argument of `(.)', namely `appl'
   In the expression: abst . appl
   In the definition of `f': f = abst . appl

Perhaps then I would have seen what was going on?-)




Re: [Haskell-cafe] ANN: Elerea, another FRP library

2009-04-15 Thread Claus Reinke

but the fact that the breakout example works is an indication that at
least it's not hopelessly broken.

Well, a breakout game does *not* work (yet) in most other FRP
implementations except Yampa, which do have firm theoretical foundations :-)


While certainly more entertaining, the problem looks similar enough
to the NLift example (a lift serving requests on n floors[0]) in FunWorlds
(the 2002 OpenGL version[1], not the 2001 VRML version[2]), chosen
to test some expressiveness aspects of the language:

- a dynamically updated collection (requests in NLift, bricks in breakout)
- an object moving in response to user input (lift/paddle+ball)
- collection and object reacting to each other's relative positions 
   (lift at floor levels/paddle ball brick collisions)


In NLift, user input (keyboard) adds to the requests collection, and the
lift reacts to the request collection and its own status, while in breakout, 
user input (mouse) directly controls the paddle, to which the ball reacts. 
The lift stopping at a floor directly removes a request there, while breakout 
bricks disappear when hit by the additional ball. In NLift, collisions and 
movement are one-dimensional, while breakout is two-dimensional.


On the other hand, I hadn't got round to cleaning up the interface,
let alone firming up the theoretical foundations, so perhaps this isn't an
exception to your rule?-) But I thought I'd mention it on the topic of
"other FRP libraries", with variations of approach/concepts.

Claus

[0] http://community.haskell.org/~claus/FunWorlds/NLift.hs
[1] http://community.haskell.org/~claus/FunWorlds/
[2] http://community.haskell.org/~claus/FunWorlds/VRML/

FunWorlds/OpenGL in brief:

- a behaviour is a description of an experiment

- a behaviour can be sampled (performing the experiment), yielding a current
value and a residual behaviour (the latter replaces the original behaviour)

- the results of measurements can be broadcast and observed via behavioural
channels (a channel observer simply behaves as the channel source behaviour,
with a slight delay)

That's it! There is no special role for time at all. One can establish local
clocks, one can even broadcast their ticking behaviours. But one cannot take an
arbitrary absolute time and ask for the value of a behaviour at that time
(other than actually running that behaviour forward or backward from "now").

Also, there is a natural distinction between describing and running a
behaviour, with the ability to refer to either the description or to sample
outcomes. And having the same behaviour description on both sides of an event
in a stepper/until does not mean that nothing changes at the step: the second
copy doesn't continue where the first left off, but starts from its own
beginning (with no special tricks to achieve this). There are no separate
events, and delays enter via behavioural channels.

Well, there were lots of negatives as well (eg FunWorlds was an
"engine-exposed" workbench rather than a user-directed library), but I 
thought I'd try to get you interested first!-) I'd love to have funding to

work out the details and clean up/modernize the interface, but without
funding, it'll just have to wait until I get round to it (or one of the newer
FRP libraries renders it superfluous..).

If you try the examples, you'll notice that some of them run too fast on
modern machines (because they weren't tied to an external clock), so
you'd have to slow them down (eg, Surface and Torus, in addition to 
the standard rotate/scale controls, also react to 't'/'T' for scaling time) 
but they are still fun to watch in their naivete (try Boids for simplicity,
Flock2 for chaos - you'll need to scale it 's'/'S').





Re: [Haskell-cafe] Non-atomic "atoms" for type-level programming

2009-04-15 Thread Claus Reinke

- if type-level tags (such as 'data TTrue'/'data TFalse') are declared
   repeatedly in separate modules, they represent separate types,
   preventing shared use (your type-level predicate doesn't return
   my version of 'TTrue'/'TFalse')


How is the need for a common import for 'data TTrue; data TFalse'
different from the need for a common import for 'data Bool = True | False'?


'Bool' is hardcoded at the language level (if/then/else), not just standard
library, not just implicitly imported Prelude, so that seems very stable -
stable enough for a single common standard library import (unlike many
type-level programming concepts, which still require experimentation
and variation).

But even that is considered too inflexible - some users want alternative
Preludes (be it temporarily, to develop a better standard, or permanently,
for personal preferences), and most authors of embedded domain-specific
languages have wanted to replace 'Bool' and associated syntax/classes/
operations with a variant matching their needs.

Now you have several definitions of 'Bool', some of which may be
compatible with each other (say, two variants of FRP libraries that
both simply lift 'Bool' into 'Time->Bool'). How do you, as a library
user, express that two compatible types from different sources are
to be considered equivalent, without forcing the authors of the
compatible definitions to collaborate on a common "standard"
library for both their projects? It is not a question of possible-in-theory,
it is a question of pragmatics.
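To make the pragmatics concrete, a single-file sketch (made-up names
standing in for the two code bases):

    type Time = Double

    newtype Behaviour1 a = Behaviour1 (Time -> a)  -- 'Bool' lifted by library 1
    newtype Behaviour2 a = Behaviour2 (Time -> a)  -- the same lifting, library 2

    -- structurally identical, but the types never unify; without a
    -- sharing mechanism, users write conversion boilerplate per pair:
    convert :: Behaviour1 a -> Behaviour2 a
    convert (Behaviour1 f) = Behaviour2 f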

The need to go beyond common imports, temporarily (as systems
evolve) or permanently (because tree-like hierarchies do not fit
all modularization strategies), exists for 'Bool' as well as for 'TBool'.

Standard ML's answer to that kind of issue is type sharing. Haskell
has no equivalent. Haskell has further needs in going beyond plain
hierarchical import structures, though - think of the proposals for
class aliases, or the question of how to split up package dependencies
without relying on orphan instances, how to depend on packages in
specific configurations, etc. Again, the ML family of advanced module
systems has some answers to offer (and, yes, we can encode much
of those in Haskell's type system, but who does that when writing
cabal packages?).

Haskell'98, by design, had the simplest module system it could get
away with. These days, additional layers have accumulated around
this core, based on libraries and cabal packages. These layers run
into all the problems of advanced module systems, only that these
problems currently aren't acknowledged as language design
problems, but are treated as issues to hack around whenever 
someone is available to do the hacking.


Clearly, the advent of type-level programming necessitates the design of 
a type-level standard library, which provides standard abstractions to 
enable interoperation of custom libraries. But I don't see why the 
module system should not scale to type-level programming.


Haskell's module system is an embarrassment, ignoring decades
of research, its one strong point being its simplicity. There has long
been an implicit assumption that advances in modular programming
would come either via the type class system, or via extensible records,
and that these advances would happen within modules, without having
to evolve the module system beyond simple namespace management.

In practice, cabal and hackage have changed all that, introducing a
de-facto module configuration system on top of the existing modules, 
with an evolving design.


My typed non-atomic atoms don't fix any of that, but they do seem
to offer a workaround for a single issue that has been around for years,
and has led to several trac tickets and type-level library awkwardnesses.
For instance, it isn't necessary to pre-define hundreds of constant literals
for a type-level numeric library if they can be generated in client code,
nor is it necessary to hand-define or template-generate an ordering
relation on constructors (which some type-level libraries depend on)
if it can be defined once and for all.

None of this means that it wouldn't be good to have a standard
library for type-level programming - in fact, I'd expect a revised
Data.Label to become a small part of such a standard!-)


Claus

ps. If you want to know more about my view on module systems
   for functional languages, have a look at chapter 4 of
   http://community.haskell.org/~claus/publications/phd.html ,
   titled "Module Systems for Functional Languages". 

   It is slightly dated by now -lots of things have happened in program
   (de-)composition research since 1997, when that was written-, but
   the basis for Haskell's module system is much more ancient than that,
   so it might be interesting for new Haskellers to see just how old
   some of Haskell's "advanced" ideas really are;-)



[Haskell-cafe] Non-atomic "atoms" for type-level programming

2009-04-14 Thread Claus Reinke

The recent GHC trac ticket revision reminded me of the old open
type-sharing problem with type tags and record labels:

- if type-level tags (such as 'data TTrue'/'data TFalse') are declared
   repeatedly in separate modules, they represent separate types,
   preventing shared use (your type-level predicate doesn't return
   my version of 'TTrue'/'TFalse')

- if record field labels (as used in your run-of-the-mill extensible
   record library) have to be declared, that is not only inconvenient,
   but prevents sharing of field labels over independent code bases

   (see the old Haskell' ticket #92 for discussion
   http://hackage.haskell.org/trac/haskell-prime/wiki/FirstClassLabels ;
   also http://hackage.haskell.org/trac/ghc/ticket/1872 for alternative
   extensible records libs)

Since Haskell has no concept of type sharing, the only way to express
this is to declare types in a common import. But there is no way for
that common import to predict all possible future uses, and we can't
just keep adding more labels/tags to it, forcing all dependencies to
be updated and kept in sync.

Using Template Haskell, and QuasiQuotes in particular, we can now
at least work around this issue, by splitting the atoms:-)

- 'undefined :: Tag' becomes 'undefined :: (TT :. Ta :. Tg)'

- 'label :: Label' becomes
   '(Ll :< La :< Lb :< Le :< Ll) :: (Ll :< La :< Lb :< Le :< Ll)'

- a common import, Data.Label, provides type letters and combinators:

   'data La = La' and 'data a :< b = a :< b'
   'data Ta' and 'data a :. b'   
   ..


- quasiquoting and Show instances, also provided by Data.Label, try to 
   hide the internal structure of labels and tags, at least at expression level. 
   Since there is no quasiquoting at type-level (why?), generating type 
   synonyms seems the best we can do there..


- since record field labels are constructed from type letters, this would
   also provide a basis for
   - type-level numbers (type-level quasiquoting would help, though)
   - label ordering:
   http://hackage.haskell.org/trac/ghc/ticket/2104
   http://hackage.haskell.org/trac/ghc/ticket/1894

In actual use, this is what tags and labels from Data.Label look like:

----------------------------------------------------------------
-- the usual extensible-records-as-nested-pairs
data label := value  = label := value  deriving Show
data field :& record = field :& record deriving Show
infixr :&

-- no quasiquoting for types:-(, so generate type synonyms instead
$(genTypeTag "TTrue")
$(genTypeTag "TFalse")

-- a type-level predicate, using shared tags TTrue/TFalse
class HasField record label tbool | label record -> tbool
instance HasField ()                       label TFalse
instance HasField ((label:=value):&record) label TTrue
instance HasField record label tbool
      => HasField (field:&record) label tbool


-- record field selection, driven by field label types
class Select record label value where (!) :: record -> label -> value
instance ..

-- some example records 


-- no need to declare field labels, and they will be
-- shared with other importers of Data.Label!
a = ([$l|this|] := True)
 :&([$l|that|] := "hi")
 :&()

b = ([$l|that|] := "there")
 :&([$l|x|] := 100)
 :&([$l|y|] := 200)
 :&()

-- we don't even need label values for this,
-- type tags are sufficient, as long as we don't
-- evaluate the (undefined) values of tags
c = ([$t|this|] := 'x')
 :&([$t|y|] := ())
 :&()

-- testing Show and record selection
records = do
 print a
 print b
 print c
 print $ (a ! [$l|this|])
 print $ (c ! [$t|this|])
 print $ (a ! [$l|that|]) ++ ", " ++ (b ! [$l|that|])

----------------------------------------------------------------
*Main> [$l|label|]
label
*Main> :t [$l|label|]
[$l|label|] :: Ll :< (La :< (Lb :< (Le :< Ll)))
*Main> [$l|label|] `seq` ()
()
*Main> [$t|label|]
label
*Main> :t [$t|label|]
[$t|label|] :: Tl :. (Ta :. (Tb :. (Te :. Tl)))
*Main> [$t|label|] `seq` ()
*** Exception: Prelude.undefined

*Main> :t [$l|100|]
[$l|100|] :: L1 :< (L0 :< L0)

*Main> records
(this := True) :& ((that := "hi") :& ())
(that := "there") :& ((x := 100) :& ((y := 200) :& ()))
(this := 'x') :& ((y := ()) :& ())
True
'x'
"hi, there"
----------------------------------------------------------------

For example code, see
   http://community.haskell.org/~claus/misc/Data/Label.hs
   http://community.haskell.org/~claus/misc/Data/Label/TH.hs
   http://community.haskell.org/~claus/misc/labels.hs

Claus




Re: [Haskell-cafe] Re: what does the hidden package error mean?

2009-04-11 Thread Claus Reinke

what a confusing error message, can we change it to say
"package Foo is missing from the MyPackage.cabal build-depends"
would be a lot more obvious how to fix it.


The problem is that ghc doesn't know that, and shouldn't[1]: All it sees
is a -hide-all-packages on its command line, you can witness it by
calling cabal with the -v flag.


Yes, but the user is calling cabal, not ghc - isn't it cabal's task to 
interpret any error messages from tools it uses internally and to 
present them (cabal knows that it has passed -hide-all-packages,
and is therefore able to interpret ghc's error message; the user 
doesn't/shouldn't need to know about these internal details)?


I once suggested an error-handling wrapper for this purpose

http://www.haskell.org/pipermail/cabal-devel/2007-December/001498.html

but in the absence of


OTOH, having some cross-compiler, machine-readable format to describe
errors would enable cabal to do some holding of hands, as well as
save ide developers from having to write all these boring parsers.


Duncan seemed rather unhappy about the idea of doing a simple
regexp-based error-message-pattern-to-cabal-level explanation;-)

Mostly, he wanted to focus cabal hacking resources on less ad-hoc
tasks, but I still think fleshing this out would make a helpful cabal feature..
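For the flavour of it, a minimal sketch of such a wrapper (the matched
phrase and the suggestion text are assumptions, not cabal's actual
wording):

    import Data.List (isInfixOf)
    import System.Process (readProcessWithExitCode)

    main :: IO ()
    main = do
      (_, out, err) <- readProcessWithExitCode "ghc" ["--make", "Main.hs"] ""
      putStr out >> putStr err
      -- rule set: map tool error patterns to cabal-level explanations
      if "hidden package" `isInfixOf` err
        then putStrLn "Hint: is the package missing from build-depends in your .cabal file?"
        else return ()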

Claus

ps. there was a hacked-up sketch of such a script at the end of the
   thread, but it seems to have used the wrong encoding for the
   mailing list archives. I think it was the wrapper code attached to
   this message (which matches for the kind of error message to
   check if there is a known suggestion for it - the rule set could
   have been extended whenever issues became faqs).


[attachment: cabal-wrapper.hs]


Re: [Haskell-cafe] compilation related questions

2009-04-10 Thread Claus Reinke

On a related note, I have another question. Say we have some data
structure, for instance a list, some functions on this data structure
(probably defined with some well known functions, such as map or
fold), and a program using them. Is there any research trying to
rewrite the program, and the data structure, to optimize them ?


Do you mean "computer science"?-) Much of this field could be
rephrased as the search for better representations, to enable
better implementations of algorithms (for some measure of "better"):

do we represent programs as data structures to be interpreted,
or as code to be executed? do we use sets, lists, arrays, ..? how
do we represent sets/lists/arrays/..? do we recompute properties 
of data, or do we store them (and is one better always, or sometimes,
and when? or do we need to optimize for the average or the most 
common case? how do we define "better", anyway?)? which are 
the options, for a particular application domain, or a general class 
of data structure/algorithm, and how do they compare? and so on..


But if you limit your question sufficiently, you might have some
luck looking for papers that reference any of these:

   Automatic data structure selection: an example and overview,
   James R. Low, Communications of the ACM, 1978
   http://portal.acm.org/citation.cfm?id=359488.359498

   Techniques for the automatic selection of data structures
   Low, Rovner, 3rd POPL, 1976
   http://portal.acm.org/citation.cfm?id=811540

   Rovner, P. Automatic representation selection for associative 
   data structures. Ph.D. Th., Harvard U., Cambridge, Mass; 
   Tech. Rep. TR10, U. of Rochester, Rochester, N.Y., Sept. 1976. 


Claus



Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-09 Thread Claus Reinke

|Oh now i see what you mean:  consider
|    f' = abst . (id :: (d->a)->(d->a)) . appl
|which GHC understands to mean
|    f' = abst . (id :: forall d a. (d->a)->(d->a)) . appl
|
|GHC infers the type
|    f' :: (Fun d) => Memo d a -> Memo d a
|Now you are saying that GHC *could* have figured out that if it added the signature
|    f' :: forall d a. (Fun d) => Memo d a -> Memo d a
|thereby *changing* the scoping of d,a in the buried signature for 'id', doing so
|would not change whether f' was typeable or not.  Well maybe, but that is way
|beyond what I have any current plans to do.


Indeed!-) I didn't mean to suggest this as a course of action, it was
just a way of representing the internal type inference intermediates
at source level. Another aspect I would not like about this approach is
that renamings of bound type variables would no longer be
meaning-preserving (because the signature would be interpreted in the
context of the source-code it belongs to) - not good.

But the core part of my suggestion (which this example was meant
to help explain) remains attractive, at least to me: somewhere during
type inference, GHC *does* unify the *apparently free* 'd' with an
internal type variable (let's call it 'd1', as in the type error message)
that has no explicit counterpart in source code or type signature,
so the inferred type should not be

   f' :: forall d. (Fun d) => Memo d a -> Memo d a -- (1)

but rather

   f' :: forall d. (Fun d,d~d1) => Memo d a -> Memo d a -- (2)

That way, the internal dependency would be made explicit in
the printed representation of the inferred type signature, and
the unknown 'd1' would appear explicitly in the inferred type,
not just in the error message (the explicit foralls are needed
here to make it clear that 'd1' and by implication, 'd', *cannot*
be freely generalized - 'd' is qualified by the equation with the
unknown 'd1').

To me, (2) makes more sense as an inferred type for f' than (1),
especially as it represents an obviously unusable type signature
(we don't know what 'd1' is, and we can't just unify it with anything),
whereas (1) suggests a useable type signature, but one that will fail
when used:

   Couldn't match expected type `Memo d1' against inferred
   type `Memo d'.

All I'm suggesting is that the type *printed* by GHCi does not
really represent the type *inferred* by GHCi (or else there should
not be any attempt to match the *free* 'd' against some unknown
'd1', as the error message says), and that there might be ways to 
make the discrepancy explicit, by printing the inferred type differently.


Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A challenge

2009-04-08 Thread Claus Reinke

|No indeed – that's what I meant about the latter being quadratic – it
|runs the action far more times than the other.

'quadratic time' usually refers to complexity, not different results,
so I thought I'd mention it anyway.

|ghci tells me this:
|Prelude Control.Applicative Control.Arrow> let iterateM' = let f `op`
|x = uncurry (<$>) . ((:) &&& ((x=<<) . f)) in (foldr op (const $
|return []) .) . replicate
|
|<interactive>:1:92:
| Ambiguous type variable `m' in the constraints:
|   `Monad m' arising from a use of `return' at <interactive>:1:92-100
|   `Functor m' arising from a use of `op' at <interactive>:1:80-81
| Probable fix: add a type signature that fixes these type
|variable(s)

But surely you have not enabled the monomorphism restriction
while avoiding explicit recursion?-)

I should, of course, have removed those nasty points - sorry about
that, hope no one got hurt - to leave us with:

(foldr (flip (((uncurry (<$>).).).:)&&&).).((.).(=<<) (const $ return 
[]).) . replicate

there, much better, isn't it? So obvious and clear that it doesn't even
need a name anymore - it is fully declarative and self-explanatory.
And it typechecks, so it must be correct!-) And it passes a test, so it
isn't wrong, either!-)

*Main> ((foldr (flip (((uncurry (<$>).).).:)&&&).).((.).(=<<)
   (const $ return []).) . replicate) 3 print ()
()
()
()
[(),(),()]

Sorry, just couldn't resist:-)
Now, how do I get that tongue out of my cheek?-)

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A challenge

2009-04-08 Thread Claus Reinke

|iterateM 0 _ _ = return []
|iterateM n f i = (i:) <$> (iterateM (n-1) f =<< f i)

|iterateM' n f i = sequence . scanl (>>=) (return i) $ replicate n f

These functions are not the same (sequence of scanl? try using print
in f). Also, I seriously hope you are not looking for this line noise:-)

iterateM' = (foldr op (const $ return []) .) . replicate
 where f `op` x = uncurry (<$>) . ((:) &&& ((x =<<) . f))

Because if you do, your penance for using it would involve
demonstrating that this is equivalent (+-1), or not (and do not
mention my name anywhere near it!-)
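
To see the 'print' point concretely - a quick check, assuming the two
definitions quoted above are in scope (plus Control.Applicative for <$>);
the names here are made up for illustration:

demo :: IO ()
demo = do
  let f i = print i >> return (i+1)
  _ <- iterateM  3 f (0::Int)   -- prints 0,1,2: each action runs once
  putStrLn "---"
  _ <- iterateM' 3 f (0::Int)   -- prints 0,0,1,0,1,2: every prefix re-runs
  return ()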

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Claus Reinke

| Here is a variation to make this point clearer:
|
| {-# LANGUAGE NoMonomorphismRestriction #-}
| {-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}
|
| class Fun d where
| type Memo d :: * -> *
| abst :: (d -> a) -> Memo d a
| appl :: Memo d a -> (d -> a)
|
| f = abst . appl
|
| -- f' :: forall d a. (Fun d) => Memo d a -> Memo d a
| f' = abst . (id :: (d->a)->(d->a)) . appl
|
| There is a perfectly valid type signature for f', as given in
| comment, but GHCi gives an incorrect one (the same as for f):
|
| *Main> :browse Main
| class Fun d where
|   abst :: (d -> a) -> Memo d a
|   appl :: Memo d a -> d -> a
| f :: (Fun d) => Memo d a -> Memo d a
| f' :: (Fun d) => Memo d a -> Memo d a


I'm missing something here.  Those types are identical to the one given
in your type signature for f' above, save that the forall is suppressed
(because you are allowed to omit it, and GHC generally does when
printing types).


Not with ScopedTypeVariables, though, where explicit foralls have
been given an additional significance. Uncommenting the f' signature
works, while dropping the 'forall d a' from it fails with the usual
match failure due to ambiguity: "Couldn't match expected type
`Memo d1' against inferred type `Memo d'".


I must be missing the point.


The point was in the part you didn't quote:

|In other words, I'd expect :browse output more like this:
|
|f :: forall a d. (Fun d, d~_d) => Memo d a -> Memo d a
|f' :: forall a d. (Fun d) => Memo d a -> Memo d a
|
|making the first signature obviously ambiguous, and the
|second signature simply valid.

Again, the validity of the second signature depends on
ScopedTypeVariables - without that, both f and f' should
get a signature similar to the first one (or some other notation
that implies that 'd' isn't freely quantified, but must match a
non-accessible '_d').

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast;-)

2009-04-07 Thread Claus Reinke

Matthijs,

thanks for the reports, and analyses and suggested patches are even
better! We should probably take this offlist at some point, though.


I've found two more little bugs. The first is that version comparison is
incorrect. It now requires that all components are greater, so comparing
6.10.1 >= 6.8.2 returns false (since 1 < 2).


quite right! will fix.
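
For the record, what's wanted is plain lexicographic comparison - in
Haskell terms, the standard list ordering already behaves that way.
A throwaway sketch of the intended behaviour (not the actual Vim script):

versionGE :: [Int] -> [Int] -> Bool
versionGE a b = a >= b   -- lists compare lexicographically

-- versionGE [6,10,1] [6,8,2] == True: 10 > 8 decides,
-- so the trailing 1 < 2 never matters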


Also, there is a "ghc-pkg field * haddock-html" call, but here the * will be
expanded by the shell into the files in the current directory. To prevent
this, the * should be escaped.


there is a comment in there suggesting that escaping was the wrong thing
to do on some configurations/platforms (obviously, the current code works
on my platform, so I depend on reports from users on other platforms), 
but I can't recall what the issue was at the moment. Will have to check,
and implement a platform-specific solution, if necessary.


I'm also looking at the Qualify() function, which allows you to select a
qualification using tab completion. However, when there is only a single
choice, it's a bit silly to have to use tab completion. At the very least, the
value should be prefilled, but ideally the qualification should just happen.


Yes, that has been requested before, at least as an option (see the file
haskellmode-files.txt for other open issues and change log). But it should
be done consistently for all menus (all functions, and both GUI and
terminal mode).


Also, I think that a dropdown menu is also available in text mode vim (at
least with vim7), which would be nice for multiple choices (since you can 
see all choices in one glance).


There is a note (in haskellmode-files.txt) on using :emenu instead
of the old home-brewn menu code. I would generally want to clean 
up the script code (some of which is very old), and factor out the 
handling of menus into a single place before making these changes.



I'll have a look at these things as well, expect another patch :-)


Looking forward to it!-)
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange type error with associated type synonyms

2009-04-07 Thread Claus Reinke
Basically, type checking proceeds in one of two modes: inferring or  
checking.  The former is when there is no signature is given; the  
latter, if there is a user-supplied signature.  GHC can infer  
ambiguous signatures, but it cannot check them.  This is of course  
very confusing and we need to fix this (by preventing GHC from  
inferring ambiguous signatures).  The issue is also discussed in the  
mailing list thread I cited in my previous reply.


As the error message demonstrates, the inferred type is not
correctly represented - GHC doesn't really infer an ambiguous
type, it infers a type with a specific idea of what the type variable
should match. Representing that as an unconstrained forall-ed 
type variable just doesn't seem accurate (as the unconstrained

type variable won't match the internal type variable) and it is this
misrepresentation of the inferred type that leads to the ambiguity.

Here is a variation to make this point clearer:

{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}

class Fun d where
   type Memo d :: * -> *
   abst :: (d -> a) -> Memo d a
   appl :: Memo d a -> (d -> a)

f = abst . appl

-- f' :: forall d a. (Fun d) => Memo d a -> Memo d a
f' = abst . (id :: (d->a)->(d->a)) . appl

There is a perfectly valid type signature for f', as given in 
comment, but GHCi gives an incorrect one (the same as for f):


*Main> :browse Main
class Fun d where
 abst :: (d -> a) -> Memo d a
 appl :: Memo d a -> d -> a
f :: (Fun d) => Memo d a -> Memo d a
f' :: (Fun d) => Memo d a -> Memo d a

What I suspect is that GHCi does infer the correct type, with
constrained 'd', but prints it incorrectly (no forall to indicate the
use of scoped variable). Perhaps GHCi should always indicate
in its type output which type variables have been unified with 
type variables that no longer occur in the output type (here the 
local 'd'). 

If ScopedTypeVariables are enabled, that might be done via 
explicit forall, if the internal type variable occurs in the source file 
(as for f' here). Otherwise, one might use type equalities. 


In other words, I'd expect :browse output more like this:

f :: forall a d. (Fun d, d~_d) => Memo d a -> Memo d a
f' :: forall a d. (Fun d) => Memo d a -> Memo d a

making the first signature obviously ambiguous, and the
second signature simply valid.

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast;-)

2009-04-07 Thread Claus Reinke

Hi Matthijs,


I've installed the vimball, and it spit a few errors at me. In particular, it
couldn't find the haddock documentation directory. A quick look at
haskell_doc.vim shows that it should autodetect the directory. However, for
some reason my ghc-pkg command returns the doc directory twice:

 $ ghc-pkg field base haddock-html
 haddock-html: /usr/local/ghc-6.10.1/share/doc/ghc/libraries/base
 haddock-html: /usr/local/ghc-6.10.1/share/doc/ghc/libraries/base


Interesting. The reason for the double listing is that recent GHCs come
with two base packages (since the packages differ in content, having
both point to the same documentation location looks wrong to me, btw).


The haskell_doc.vim contains the following line, which seems to deal with
multiple lines:

 let field = substitute(system(g:ghc_pkg . ' field base haddock-html'),'\n','','')


This just used to remove the final '\n', in the days when multiple versions 
of base were still unthinkable. What I'm really after in that part of the script

is the location of the GHC docs, and the library index (the actual package
docs are processed later). Unfortunately, past GHC versions haven't been 
too helpful (#1226, #1878, #1572), hence all that guesswork in my scripts 
(for a while, there was a ghc --print-docdir, but that didn't quite work and 
disappeared quickly; nowadays, there is the nice ghc-paths package, 
that doesn't give a concrete path for docdir, so I still need to find the http 
top dir for GHC).


I hadn't noticed this change, because (a) the scripts look in "likely suspects" 
for the docs location as well, and (b) the docs location can be configured
(bypassing all that guesswork) by setting 'g:haddock_docdir' before 
loading the scripts (:help g:haddock_docdir, :help haskellmode-settings-fine).


Using g:haddock_docdir to configure the right path for your installation 
is probably the least wrong thing to do for now, and requires no changes
to the scripts, but I'll have a look at how to improve the guesswork code 
for convenience, looking at the first match only or looking for the relevant 
directories in all matches..


Thanks for the report!
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast;-)

2009-04-07 Thread Claus Reinke

http://projects.haskell.org/haskellmode-vim/

The download link on this page seems to use \ instead of /, making it not work.



For anyone eager to download it, just replace \ (or %5C) in your address bar
with / and it should work.


argh, thanks, now fixed (filename completion in Vim, :help i_CTRL-X_CTRL-F,
is useful for making sure one links to the correct file names, but gives
platform-specific paths, which I usually correct before publishing; this
one I missed).

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] UPDATE: haskellmode for Vim now at projects.haskell.org (+screencast;-)

2009-04-07 Thread Claus Reinke

I have become aware that many Haskellers are not aware of the
Haskell mode plugins for Vim, in spite of the >100 downloads
per month I saw when I last checked. Since the plugins have just
completed their move to their new home at

http://projects.haskell.org/haskellmode-vim/

this seems to be a good opportunity to mention them again (now
with 100% more screencasts!-). They do much (though certainly
not all) of the stuff people often wish for here, such as showing or 
adding types, looking up source where available locally or looking 
up documentation where source isn't available. Mostly, they collect
my attempts to automate tasks when I have to do them often enough.

Here is a section of the quick reference (:help haskellmode-quickref):

   |:make|   load into GHCi, show errors (|quickfix| |:copen|)
   |_ct| create |tags| file 
   |_si| show info for id under cursor

   |_t|  show type for id under cursor
   |_T|  insert type declaration for id under cursor
   |balloon| show type for id under mouse pointer
   |_?|  browse Haddock entry for id under cursor
   |:IDoc| {identifier}  browse Haddock entry for unqualified {identifier}
   |:MDoc| {module}  browse Haddock entry for {module}
   |:FlagReference| {s}  browse Users Guide Flag Reference for section {s}
   |_.|  qualify unqualified id under cursor
   |_i|  add 'import <module>(<id>)' for id under cursor
   |_im| add 'import <module>' for id under cursor
   |_iq| add 'import qualified <module>(<id>)' for id under cursor
   |_iqm|add 'import qualified <module>' for id under cursor
   |_ie| make imports explicit for import statement under cursor
   |_opt|add OPTIONS_GHC pragma
   |_lang|   add LANGUAGE pragma
   |i_CTRL-X_CTRL-O| insert-mode completion based on imported ids (|haskellmode-XO|)
   |i_CTRL-X_CTRL-U| insert-mode completion based on documented ids (|haskellmode-XU|)
   |i_CTRL-N|insert-mode completion based on imported sources
   |:GHCi| {command/expr} run GHCi command/expr in current module

For those who have never used these plugins, or haven't used Vim
at all, it has often been difficult to imagine what editing in Vim can
be like. The old quick tour of features available has now been updated
from screenshots to screencasts (my first venture into this area - please 
let me know whether that is useful or a waste of time!-), so you can 
watch them on Youtube before deciding to invest time into learning Vim.


For those who are using Vim, the only reason not to use my 
haskellmode plugins would be if you had your own (not uncommon
among Vim users;-), in which case I hope you make yours available 
as well (feel free to adopt features from my plugins, and let me know
if you have some useful features to contribute), here:

http://www.haskell.org/haskellwiki/Libraries_and_tools/Program_development#Vim

For those who have happily been using these plugins: in the process
of moving the site, I noticed that I hadn't updated the published
version for quite some time - apparently, no one had missed the
fixes, but in case you want to check, the relevant section of my update
log is appended below.

Happy Vimming!-)
Claus

- updates since last published version on old site
04/04/2009
 haskell_doc.vim: when narrowing choices by qualifier for _?, take
lookup index from un-narrowed list (else we could
end up in the docs for the wrong module)

02/04/2009
 ghc.vim: actually, we can try to make a reasonable guess at the parent
   type for constructors in _ie, from their type signature

01/04/2009
 ghc.vim: try a bit harder to escape " and ' in :GHCi
  eliminate duplicates in _ie and mark data constructor imports
as ???(Cons) - we can't reliably figure out the parent type
for the constructor:-(
  handle Prelude as special case in _ie (can't just comment out
import, need to import [qualified] Prelude [as X]() )

 haskell_doc.vim: fix keys (no namespace tags) and urls (modules) for :MDoc

31/03/2009
 all files: new home page at projects.haskell.org

28/03/2009
 haskell_doc.vim: in ProcessHaddockIndexes2, fix a case where making new entries
   could lose old ones (eg, zipWith's base package locations got
   lost when adding its vector package locations) 


07/12/2008
 haskell_doc.vim: since we're now reading from multiple haddock indices in
   DocIndex, we need to extend, not overwrite entries..

03/12/2008
 ghc.vim: do not reset b:ghc_static_options on every reload

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] automatically inserting type declarations

2009-04-07 Thread Claus Reinke

I remember hearing about a Haskell mode for Vim, Emacs, Yi or
VisualHaskell that inserts type declarations automatically (it's
lazier to just check the type than to write it manually), but I can't
remember any details. What editor mode / IDE was it?


As far as I know, my haskellmode plugins for Vim were the first
to do that, in their long-gone predecessor incarnation of hugs.vim.

But I'm pretty sure this feature was adopted by the Emacs folks
as soon as people started saying they liked it. That is, types for
top-level declarations - more precise types are on everyone's 
todo list, I think (by just doing what VisualHaskell used to do:
ask a dedicated GHC API client for details).

Take the identifier under cursor, run something like

   ghc -e ":t <identifier>" <current file>

and either show the result, or insert it. These days, the plugins no
longer call out to GHC every time; instead they update an internal
map whenever you use ':make' in quickfix mode and get no errors
from GHC. Anyway, the plugins have recently moved here:

http://projects.haskell.org/haskellmode-vim


What do most people use with GHC on Linux? I'm more used to Vim than
to Emacs. Yi sounds like something I might like. Is it stable enough
to solve more problems than it would create? (I hate buggy and broken
stuff)


haskellmode-vim isn't free of problems (mostly to do with large
numbers of installed libraries vs naive scripts). The reason it exists
is to solve problems I don't want to have, so it tends to improve
whenever a problem bugs me enough, but whether the result works
for you, you have to try for yourself!-)

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] community.haskell.org unreachable?

2009-04-06 Thread Claus Reinke

Works for me. The admin address referenced there is support [AT]
community.haskell.org


Thanks. Yes, it is working again for me too, now, but had been
quite persistently unreachable before, while www.haskell.org
was reachable, so I'd be surprised if the problem was at
this end of the connection.

If I read the bandwidth graph correctly, there was a peak followed
by several hours of silence, matched by odd memory usage (though
I don't see what the load average is trying to tell us?): 


http://community.haskell.org/mrtg/daily.html

Claus


On Mon, Apr 6, 2009 at 1:32 PM, Claus Reinke wrote:


I seem to be having problems reaching anything on that server.
Does anyone know what is going on, or who to contact? It would
help if the haskellwiki page

http://www.haskell.org/haskellwiki/Haskell.org_domain

would mention the admin address directly, instead of referring to

http://community.haskell.org/admin/
which is also unreachable.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Making videos of your project

2009-04-06 Thread Claus Reinke

Btw, there seem to be many Haskells on YouTube - should we have
some way of marking clips related to our Haskell? I've used haskell.org
as a tag, but no one else has, yet - also, perhaps there should be a Haskell 
channel or something?


And just in case there are others on the same Windows adventure:-)


Is there anyone here with experience in screencasting of text-based
applications, who could offer advice on how to produce screencasts
on windows/xp? The basic screencasting (capture+annotation/editing)
is not the problem, eg, CamStudio seems ok, and Wink gives me
more control for mostly input-driven sessions (where I want
screenshots whenever something useful happens, not long videos of
my mousepointer wavering about the screen;-). Both can generate .swf.

..
I think making .swf is a mistake, but I'm not sure.


I should have believed you there!-) I did actually check that Youtube
listed .swf among its supported file types before making my screencast,
but when I had actually joined and uploaded, I discovered that it would
fail on conversion.. More detailed search in the user forums indicated
that this is a common problem, with no solution other than to convert
to another, video- rather than animation-based, format.

Problem is, .swf is very suited to this particular input-driven kind of
screencast (and renders just fine in my browser), eg, Wink has a 
non-video capture mode that only adds frames when something happens

(the only negative: while one can extend the time frames are shown, I've
not yet found a way to reduce the minimum time, so typing looks rather
unnaturally ssllooww). 

Any attempt to convert my just over 1Mb .swf files (one screencast, 
split and edited into 3 sections of 4 minutes or less) to something 
listed as supported (such as .avi, .mpeg2) resulted in huge file sizes
(60Mb and upwards) that would be impractical to upload with my
old narrowband connection (might just be that I don't have the right
compression codecs for these formats?).

Fortunately, while trying to find some way of contacting YT staff, I
stumbled on other help pages that mentioned .wmv in their version
of supported file types. For that, one of the codecs on my machine
produces files that are only 10x the size of the original .swf, so I'm
slowly uploading them to YT, and the first two parts have now been
published without conversion failure.

I was sorely tempted just to upload the small .swfs to haskell.org, 
instead of the .wnk->.swf(1Mb)->.avi(60Mb)->.wmv(10Mb) plus
tedious upload route I had to follow with the free or preinstalled tools
available to me. But community.haskell.org seems to be having
enough trouble right now, without hosting clips there..


I rendered it in 2 or 3 formats (at 640x480 etc, following the you tube
/ google video recommendations), and uploaded the one that looked best.
You-tube immediately(ish) makes a low quality version available
(320x240?), and a high quality version(480x360?), with more readable
text etc, is available a little later.


Testing with multiple formats/codecs/uploads is recommended,
though the conversion can be somewhat nightmarish (too many options
and tools, many of which look less than trustworthy, too complex, too
expensive, or all of the above; and too few roads to something that 
works). For 640x400 uploads, the near immediately available version 
seems readable enough (search for tag haskell.org or wait for the 
announcement;-).



I would recommend working in a 640x480 screen area. If you can't show
anything in that area, then people won't be able to see anything in your
video (at the size/quality youtube shows it, at least).


Sound advice, which I followed in the end. Just took some more
preparation to find a setup that would fit the screencast, rather than
my usual working habits. I also noticed that I had somehow managed 
to switch off my ClearType support, which explained the initially
low-quality font rendering.

Thanks,
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] community.haskell.org unreachable?

2009-04-06 Thread Claus Reinke

I seem to be having problems reaching anything on that server.
Does anyone know what is going on, or who to contact? It would
help if the haskellwiki page

http://www.haskell.org/haskellwiki/Haskell.org_domain

would mention the admin address directly, instead of referring to

http://community.haskell.org/admin/ 


which is also unreachable.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] high probability of installation problems and quality of the glorious implementation

2009-04-06 Thread Claus Reinke

I want the Zen package:  "Make me one with everything."


But would you find that on hackage?-) If an author had contemplated
the perfect package, they wouldn't have put it on hackage, they wouldn't
have a hackage account, they wouldn't have written the package, they
might not even exist - you would just get the idea.

Claus

(inspired by the ZenBot entry in the list of programs that play on
the computer Go server: http://senseis.xmp.net/?ComputerGoServer )


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] about Haskell code written to be "too smart"

2009-04-04 Thread Claus Reinke

  takeListSt' = evalState . foldr k (return []) . map (State . splitAt)
    where k m m'    = cutNull $ do x<-m; xs<-m'; return (x:xs)
          cutNull m = do s<-get; if null s then return [] else m


|Not only is this not that elegant anymore, 


As I was saying, sequence/mapM with early cutout is common
enough that one might want it in the libraries, which would make
this variant readable again.
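
Something along those lines, say - a minimal sketch with a made-up
name, in the same lazy State setting and with the same imports as above:

sequenceCut :: (s -> Bool) -> [State s a] -> State s [a]
sequenceCut p []     = return []
sequenceCut p (m:ms) = do
  s <- get
  if p s then return []
         else do x <- m; xs <- sequenceCut p ms; return (x:xs)

-- with which the early-cutout version reads as directly as the spec:
takeListC ns xs = evalState (sequenceCut null (map (State . splitAt) ns)) xs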

|I think it *still* has a bug, stack overflow against

That would really surprise me. Not that it is impossible - as I was
also saying, I haven't replayed the derivation for the modified code.
However, the modification was arrived at by taking the original
derivation, seeing where its result deviated from the explicitly
recursive specification, and spotting the aspect of the implicitly
recursive version that was responsible for the deviation. 

Of course, the derivation itself could have an error, but equating 
the functions themselves gives me rather more confidence/coverage 
than any finite number of tests could. If I were to enter the derivation
into a proof checking tool and be successful, that would further raise
the level of confidence/coverage (leaving aside bugs in the proof checker).

Note that I'm not asking whether the original spec did the "right"
thing, only whether or not the variations "correctly" do the same
thing as the original spec.

|testP pf = mapM_ putStrLn  [
|  show $ take 5 $ pf (repeat 0) [1,2,3]
|  , show $ pf ( take 1000 [3,7..] ) [1..100]
|  , show . pf [3,7,11,15] $ ( take (10^6) [1..])
|  , show . head . last $ pf (take 1000 $ [3,3..]) [1..10^6]
|]
|
|where the first test (with take 5) is new.
|whereas the version with explicit recursion and pattern matching
|doesn't suffer from this problem

I get identical results for 'takeListSt'' and the original 'takeList1'
(code repeated below). 

It took me a couple of moments to remember that you had been 
playing with Control.Monad.State.Strict instead of the default 
Control.Monad.State(.Lazy). That would invalidate the original 
derivation (different definition of '>>=', therefore a different end 
result after unfolding '>>='), let alone the modified code based on it. 

If you replay the derivation, taking the strictness variations into 
account, you should arrive at an explicitly recursive version that
differs from the spec. Which might make it easier to see what
the difference is.

|partitions [] xs = []
|partitions (n:parts) xs =
|  let (beg,end) = splitAt n xs
|  in  beg : ( case end of
|   [] -> []
|   xs -> partitions parts xs)

That version cannot be transformed into the original spec, because
it doesn't define the same function. As I mentioned:


(btw, both 'takeListSt'' and 'takeListSc'' pass Thomas' 'testP', as does
his 'partitions', but 'partitions' is not the same function as 'takeList':
consider 'null $ takeList [1] undefined' and 'takeList [0] []' ;-)


With the original spec

takeList1 [] _ =  []
takeList1 _ [] =  []
takeList1 (n : ns) xs  =  h : takeList1 ns t
   where (h, t) = splitAt n xs

and 'takeList4' being your 'partitions', we get:

*Main> null $ takeList1 [1] undefined
*** Exception: Prelude.undefined
*Main> null $ takeList4 [1] undefined
False
*Main> takeList1 [0] []
[]
*Main> takeList4 [0] []
[[]]


I am starting to think that the tricky part in all these functions is
that by using higher order functions from the prelude, you sweep the
failure case under the rug. 


Yes, the reason that more abstract functions are useful is that they
hide irrelevant details, allowing us to spend our limited capacity on
relevant details. If all abstract functions happen to hide details that 
matter, more concrete functions that expose those details can be 
more helpful. 

But even that has to be qualified: for instance, at first I found it easier 
to see the issues with the original 'State' variant in its transformed, 
explicitly recursive version, but after the derivation had convinced 
me that there was no magic going on, I realized that it was just the 
old 'mapM' doesn't stop early issue. So I could have seen the issue 
in the abstract form, but my mind (and apparently other minds, too;-) 
refused to think about the corner cases there until prompted. 

If not for this tendency to ignore details that might be relevant, the 
abstract code would provide an abstract treatment of the failure 
case as well: instead of working out the details by trying to find 
useful tests for the explicit pattern matches, we can just look at
whether the definition uses 'mapM' or 'mapMWithCut', or whether
it uses 'Control.Monad.State' or 'Control.Monad.State.Strict'.

Just exposing all the details all the time isn't going to help, as the
'partition' example demonstrates: we might still ignore the relevant
details, this time not because they are hidden in abstractions, but
because they are hidden in other irrelevant details. There really
isn't a single view of software that is best for all purposes.

Re: [Haskell-cafe] Wishful thinking: a text editor that expands function applications into function definitions

2009-04-03 Thread Claus Reinke

One word says more than a thousand pictures: Vim.
(well, okay, I'm sure Emacs will do just as well, and some of the more 
recent IDEs seem to be catching up;-) plus plugins, of course!-)


- unfolding definitions: if you really want that, it is in the domain of
   program transformation systems and refactorers (HaRe, the Haskell
   refactorer, has been mentioned - it worked on Haskell'98 sources,
   plugging into Vim or Emacs; it would really be great to have funding
   for porting that to a modern GHC/Cabal-based environment, but if 
   you're happy with Haskell'98, and have all the sources, the old HaRe 
   should still do the job once you get it to build with recent GHCs/libraries)


- looking up definitions: that is supported in various ways in Vim/Emacs
   and the like - I'll talk about some Vim examples, as that is what I use.

   - tag files (generated by running tools like 'ghc -e :ctags', hasktags,..
  over the sources) are a simple database linking identifiers to 
  definition sites. Based on these, one can jump from identifiers
  to definitions (keeping a stack of locations, so one can go back
  easily), or open split windows on the definition sites.

  See the "Moving through programs" section in Vim's help, also 
  online at: http://vimdoc.sourceforge.net/htmldoc/usr_29.html .


   - the haskellmode plugins for Vim support documentation lookup
   (opening the haddocs for the identifier under cursor in a browser),
   and the documentation provides source links, if the docs themselves
   aren't sufficient. Useful for all those sourceless package installations.
   
   - the haskellmode plugins also support type tooltips (or, if you
   don't like tooltips, or are working in a terminal without gui, type 
   signatures can be displayed in the status line, or added to the 
   source code). This is currently based on GHCi's :browse!, though, 
   so you can only get the types of toplevel definitions that way. One 
   of the insertmode completions also displays types.


   - if you explain Haskell's import syntax to Vim, you can also search 
   in (local) imported files, using Vim's standard keyword search,
   for instance ([I).

The haskellmode plugins for Vim are currently in the process of moving 
to http://projects.haskell.org/haskellmode-vim/ . Which made me notice
that I hadn't updated the publicly available version in quite some time 
(announcement to follow when that process has settled down somewhat).


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with prepose.lhs and ghc6.10.1

2009-04-03 Thread Claus Reinke
Hugs did not support lexically scoped type variables then (and 
probably doesn't support now).


I may be misremembering, but I think Hugs had them first;-)

http://cvs.haskell.org/Hugs/pages/hugsman/exts.html#sect7.3.3

It is just that Hugs and GHC interpret the language extension 
differently (as usual), so the same code doesn't quite work in
both. That is made worse by differences in other language
features relevant to using this extension. Not to mention that
all of that keeps evolving over time.

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Issue with IsFunction/Vspace in GHC 6.10.1

2009-04-02 Thread Claus Reinke

{-# LANGUAGE ScopedTypeVariables #-}

Without it, the 'f's in the instance are independent.
Claus

- Original Message - 
From: "Jacques Carette" 

To: 
Sent: Thursday, April 02, 2009 10:15 PM
Subject: [Haskell-cafe] Issue with IsFunction/Vspace in GHC 6.10.1


I was playing with some of Oleg's code (at end for convenience).  After 
minor adjustments for ghc 6.10.1, it still didn't work.  The error 
message is quite puzzling too, as it suggests adding exactly the 
constraint which is present...  Any ideas?


Jacques

-- Oleg's definition of a vector space class, based on IsFunction and
-- TypeCast.  See http://okmij.org/ftp/Haskell/isFunction.lhs
-- for the January 2003 message, which works in GHC 6.2.1 and 6.4
-- code below *works* in 6.8.1 AFAIK
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies #-}

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE OverlappingInstances #-}

module Q where

class Vspace a v | v -> a
   where
   (<+>) :: v -> v -> v
   (*>)  :: a -> v -> v

instance (IsFunction v f, Vspace' f a v) => Vspace a v
 where
 (<+>) = doplus (undefined::f)
 (*>)  = dostar (undefined::f)

class Vspace' f a v | f v -> a
   where
   doplus :: f -> v -> v -> v
   dostar :: f -> a -> v -> v

instance Num a => Vspace' HFalse a a where
doplus _ = (+)
dostar _  = (*)
-- etc.  No problem.

instance (IsFunction v f, Vspace' f a v, Vspace a v)
   => Vspace' HTrue a (c->v) where
doplus _ f g = \x -> f x <+> g x
dostar _ a f x = a *> (f x)


test1 = (1::Int) <+> 2
test2 = ((\x -> x <+> (10::Int)) <+> (\x -> x <+> (10::Int))) 1
test3 = ((\x y -> x <+> y) <+> (\x y -> (x <+> y) <+> x)) (1::Int) (2::Int)

test4 = ((\x y -> x <+> y) <+> (\x y -> ((2 *> x) <+> (3 *> y))))
   (1::Int) (2::Int)

data HTrue
data HFalse

class IsFunction a b | a -> b
instance TypeCast f HTrue => IsFunction (x->y) f
instance TypeCast f HFalse => IsFunction a f

-- literally lifted from the HList library
class TypeCast   a b   | a -> b, b->a   where typeCast   :: a -> b
class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t->a->b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t->a->b
instance TypeCast'  () a b => TypeCast a b where typeCast x = typeCast' () x
instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
instance TypeCast'' () a a where typeCast'' _ x  = x


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reverting to any old version using Darcs

2009-04-01 Thread Claus Reinke
Perhaps the rumours refer to non-tagged "versions"? In 
conventional non-distributed version control systems, one
might go back to the version on a specific date, while with
darcs, that only makes sense wrt a specific repo (I think?).

So you can unpull all patches after a date from your local
repo, but that doesn't mean that you get a repo that matches
someone else's repo after they perform the same procedure.
If both parties commit to a central repo, and pull all changes
via that, there is a greater chance of date-based synchronicity.

Claus


Yes. It would be fairly easy to check this in the docs, too :)

bugfact:

Okay, thanks. So the rumors about this must be incorrect?

On Wed, Apr 1, 2009 at 9:57 PM, Ketil Malde  wrote:

Don Stewart  writes:

>> Rumor goes that this is very difficult to do with Darcs. Is this
>> correct?

> darcs unpull

Or just cd to a different directory, and darcs get -t ?

-k
--
If I haven't seen further, it is by standing in the footprints of giants



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ANNOUNCE: vacuum-cairo: a cairo frontend to vacuum for live Haskell data visualization

2009-04-01 Thread Claus Reinke

Did you use hubigraph?

   http://ooxo.org/hubigraph/


Ah, there it is, then. Btw, more interesting than the 3d nature of
the visualizations is that Ubigraph seems to have been designed
for incremental updates of the layout (see the paper available
via their home site). The lack of support for this in standard
graph layout packages was the main reason that I had to give
GHood its own naive layout algorithm.

So I was delighted to see the design criteria for Ubigraph - until
I noticed that it is not only unavailable for Windows, but closed
source as well:-( Let us hope that at least one of these two items
is going to change soon? Then both Hood and Vacuum visual
animations could use the same backend, offering visualizations
of both data and observations.

A platform-independent, open-source, 2d/3d graph layout engine
for incrementally updated graphs (where the graph after the update
has to be similar enough to the one before that one can follow the
animation and make sense of the data displayed) might be a good
project for frp+opengl hackers - force equations between nodes,
influenced by edges, and keeping the structure stable while adding
nodes (parsed from an input stream).

Claus


This cabalized project doesn't appear to be on hackage!

gleb.alexeev:

Don Stewart wrote:

I am pleased to announce the release of vacuum-cairo, a Haskell library
for interactive rendering and display of values on the GHC heap using
Matt Morrow's vacuum library.


Awesome stuff, kudos to you and Matt Morrow!

I thought it'd be fun to visualize data structures in three dimensions.
Attached is quick and dirty hack based on your code and Ubigraph server
(http://ubietylab.net/ubigraph/).

The demo video (apologies for poor quality):
http://www.youtube.com/watch?v=3mMH1cHWB6c

If someone finds it fun enough, I'll cabalize it and upload to Hackage.



module Ubigraph where

import Network.XmlRpc.Client

type Url = String
type VertexId = Int
type EdgeId = Int

defaultServer = "http://127.0.0.1:20738/RPC2";

void :: IO Int -> IO ()
void m = m >> return ()

clear :: Url -> IO ()
clear url = void (remote url "ubigraph.clear")

newVertex :: Url -> IO VertexId
newVertex url = remote url "ubigraph.new_vertex"

newEdge :: Url -> VertexId -> VertexId -> IO EdgeId
newEdge url = remote url "ubigraph.new_edge"

removeVertex :: Url -> VertexId -> IO ()
removeVertex url vid = void (remote url "ubigraph.remove_vertex" vid)

removeEgde :: Url -> EdgeId -> IO ()
removeEgde url eid= void (remote url "ubigraph.remove_edge" eid)


zeroOnSuccess :: IO Int -> IO Bool
zeroOnSuccess = fmap (==0)

newVertexWithId :: Url -> VertexId -> IO Bool
newVertexWithId url vid = zeroOnSuccess (remote url "ubigraph.new_vertex_w_id" vid)

newEdgeWithId :: Url -> EdgeId -> VertexId -> VertexId -> IO Bool
newEdgeWithId url eid x y = zeroOnSuccess (remote url "ubigraph.new_edge_w_id" eid x y)

setVertexAttribute :: Url -> VertexId -> String -> String -> IO Bool
setVertexAttribute url vid attr val = zeroOnSuccess (remote url "ubigraph.set_vertex_attribute" vid attr val)

setEdgeAttribute :: Url -> VertexId -> String -> String -> IO Bool
setEdgeAttribute url eid attr val = zeroOnSuccess (remote url "ubigraph.set_edge_attribute" eid attr val)



module VacuumUbigraph where

import GHC.Vacuum
import Data.Char
import Text.Printf
import Data.List

import qualified Data.IntMap as IntMap
import qualified Data.IntSet as IntSet

import qualified Ubigraph as U

nodeStyle n =
 case nodeName n of
   ":"  -> ("(:)", "cube", "#ff")

   -- atomic stuff is special
   k | k `elem` ["S#" ,"I#" ,"W#"
                ,"I8#" ,"I16#" ,"I32#" ,"I64#"
                ,"W8#" ,"W16#" ,"W32#" ,"W64#"] -> (showLit n, "sphere", "#00ff00")
   -- chars
   "C#" -> (show . chr . fromIntegral . head . nodeLits $ n, "sphere", "#00ff00")
   "D#" -> ("Double", "sphere", "#009900")
   "F#" -> ("Float", "sphere", "#009900")

   -- bytestrings
   "PS"    -> (printf "ByteString[%d,%d]" (nodeLits n !! 1) (nodeLits n !! 2), "cube", "#ff")
   "Chunk" -> (printf "Chunk[%d,%d]" (nodeLits n !! 1) (nodeLits n !! 2), "cube", "#ff")

   -- otherwise just the constructor and local fields
   c | z > 0     -> (c ++ show (take (fromIntegral z) $ nodeLits n), "cube", "#99")
     | otherwise -> (c, "cube", "#99")
     where z = itabLits (nodeInfo n)
 where
   showLit n = show (head $ nodeLits n)

view a = do
  U.clear srv
  mapM_ renderNode nodes
  mapM_ renderEdge edges
where
  g = vacuum a
  alist = toAdjList g
  nodes = nub $ map fst alist ++ concatMap snd alist
  edges = concatMap (\(n, ns) -> map ((,) n) ns) alist

   style nid = maybe ("...", "cube", "#ff") nodeStyle (IntMap.lookup nid g)

  renderNode nid = do
   U.newVertexWithId srv nid
   let (label, shape, color) = style nid
   U.setVertexAttribute srv nid "label" label
   U.s

Re: [Haskell-cafe] Looking for practical examples of Zippers

2009-04-01 Thread Claus Reinke

my quest for data structures continues. Lately I came across "Zippers".
Can anybody point be to some useful examples?


Once upon a time, there was a hardware implementation of a lambda
calculus based functional language (mostly, I was told, to show that it
could be done:-). The program representation was a string of symbols
(graph reduction came later; implementation of graph reduction on
stock hardware came much later) in a constructor syntax (think fully 
applied data constructors in Haskell, each constructor annotated with 
its arity).


The problem: if you were to do beta-reductions somewhere in the
middle of such a string, you'd have to make space or fill gaps, to 
adjust for the differing lengths of redex and reduced, not to mention 
the issue of finding and identifying the redices in the first place.

Working in hardware, you couldn't quite ignore those details.

The solution: use a hardware zipper representation, consisting of a
system of 3 main stacks (there were auxiliary stacks to handle things
like substitution), often likened to a railway shunting yard:

output|_|input
 c
 o
 n
 t
 r
 o
 l

To traverse a program expression:
- the program starts on the input stack, in pre-order form (the 
   constructors before their arguments).

- take a constructor from the front of the input stack, put it on
   the control stack, initialising a counter for the arguments.
- while there are still arguments left, recursively traverse them 
   from input to output stack, decrementing the constructor 
   count.

- after all parameters of the constructor on top of the control
   stack have been traversed to the output stack, move the 
   constructor to the output stack

- the program ends up on the output stack, in post-order form
   (the constructors after their arguments, though still on top
   of them, stack-wise).

That way, all sub-expressions would, at some point of the
traversal, appear on the top of the three stacks, so if there
was a redex on top (application constructor on control, 
function on one stack, argument on the other stack), the 
processor could replace the redex by a reduced expression 
without having to make room or fill gaps, or look anywhere 
but at the top of the stacks.
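
For the curious, the pre-order to post-order move is easy to model in
Haskell - a toy sketch with made-up names, where the recursion (rather
than an explicit control stack) keeps track of pending constructors:

data Sym = Sym String Int deriving Show   -- constructor name and arity

-- move one constructor and its arguments from input to output
traverseSym :: [Sym] -> ([Sym], [Sym])    -- input -> (output, rest)
traverseSym (c@(Sym _ n) : input) = (concat args ++ [c], rest)
  where (args, rest) = goArgs n input
        goArgs 0 i = ([], i)
        goArgs k i = let (o, i')   = traverseSym i
                         (os, i'') = goArgs (k-1) i'
                     in (o:os, i'')
traverseSym [] = ([], [])

-- traverseSym [Sym "Ap" 2, Sym "Lam" 1, Sym "x" 0, Sym "y" 0]
-- ==> ([Sym "x" 0, Sym "Lam" 1, Sym "y" 0, Sym "Ap" 2], [])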


There was even a special graphical notation, to specify the
elaborate mix of control- and data-flow going on, but at the
heart of it all was that traversal scheme over a shunting yard
of stacks, based on a zipper in hardware.

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there a way to see the equation reduction?

2009-04-01 Thread Claus Reinke

But I am more interested in seeing the expansion and reduction that
the execution encounters as it lazily evaluates the function.


Have you tried GHood?


examples: 

   http://www.cs.kent.ac.uk/people/staff/cr3/toolbox/haskell/GHood/


package:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/GHood


It is a bit like the recently-released Vacuum visualiser for data
structures, except that it also animates the reduction sequence.
(Have a look at some of the online examples - you can view the reduction
steps right there in your web browser.)


Ahem. While I do recommend GHood for getting animated visual
insights into what your program is doing, 


   *GHood does not animate reductions*
   for instance, the steps of 'id id id ()' reducing will not be visible
   as such, though by observing the first 'id', you would be able 
   to see all the 'id's being applied to their parameters and yielding

   their results (and you could add further probes)

   *GHood animates observations*
   - when the usage contexts starts looking through an observe
   probe for the data behind the probe (demand goes in)
   - when the data behind a probe has been reduced to produce
   a further level of WHNF (data goes out)

Non-strict evaluation means that things won't be reduced unless
that is forced by observation but, nevertheless, the two are not
the same, they complement each other. Also, GHood/Hood do 
not observe sharing or when data is freed, which might be 
relevant for the example. On the positive side, observations 
can be targeted, so one can put small probes into large projects, 
and observations are very relevant for (relative) strictness issues.
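
To make the 'id' example concrete - a minimal probe, using the
observe/printO interface of Debug.Observe (the same combinators
as in the fold examples below); a sketch, not a GHood feature demo:

import Debug.Observe

-- observing the first 'id': the log shows that probe being applied
-- and yielding results, but no intermediate reduction steps
main :: IO ()
main = printO $ (observe "id" id) id id ()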


It would be nice if an animation visualizer could be built on top 
of Vacuum, just as GHood was built on top of Hood. Of course, 
I'd really like to be able to follow reductions textually as well, 
in a combination of editor/ide/GHCi. It was shown long ago 
that this needn't be in conflict with efficient implementation,
such as compiled graph reduction (as I might have mentioned
before;-). 

I doubt that the ancient code still installs properly, but the old 
pages for pi-RED/KiR are still there(*), so anyone building a 
new Haskell implementation could look at user's guide and 
papers for inspiration, but it would be difficult to add after 
the fact to an implementation as complex as GHC.


Claus

(*) http://www.informatik.uni-kiel.de/~base/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Making videos of your project

2009-03-31 Thread Claus Reinke

Is there anyone here with experience in screencasting of text-based
applications, who could offer advice on how to produce screencasts
on windows/xp? The basic screencasting (capture+annotation/editing)
is not the problem, eg, CamStudio seems ok, and Wink gives me
more control for mostly input-driven sessions (where I want
screenshots whenever something useful happens, not long videos of
my mousepointer wavering about the screen;-). Both can generate .swf.

The problem comes when trying to scale down the size to
what would fit in a browser window (what a viewer would see,
without having to scroll around) - text becomes hard to read (quality,
not size) if I scale from 1280x800 to 640x400, and if I try to work
in a screen area that fits 640x400 in the first place (so no scaling
would be needed), I can't really show anything..

The intended topic is still haskellmode for Vim, updating the
old screenshot tour from

http://www.cs.kent.ac.uk/~cr3/toolbox/haskell/Vim/vim.html

so there'd be a gvim window and a browser window (in real
life, also a GHCi window, and quite possibly a cygwin window,
but let's keep it simple), and the most interesting info is not in
graphics, but in the texts, source code, menus, tooltips, ...,

Claus


What is Screencasting?
http://www.oreillynet.com/pub/a/oreilly/digitalmedia/2005/11/16/what-is-screencasting.html?page=1



As a windows user, I tried playing with CamStudio and that almost seems
to do the job (capture, annotation, replay, conversion
of .avi to compressed .swf) but I don't like the resolution of the .swf
it generates (screen text isn't as readable as I've seen in other
screencasts). Perhaps I'm missing an option to improve the quality, or
can anyone recommend another free tool for windows, from
positive experience (wikipedia has a whole list of tools
http://en.wikipedia.org/wiki/Comparison_of_screencasting_software )?

For the purpose I have in mind, it would be good to have
many small pieces of screencast, one for each feature, or even better,
one continuous screencast with the ability to link directly to sections
dealing with particular topics - a hyperlinked animation. Is that
supported by some (free) tool?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] uvector package appendU: memory leak?

2009-03-31 Thread Claus Reinke

Can I close this ticket as not being to do with uvector?
-- Don


You did notice the suggestion that performance of uvector and bytestring 
could be improved drastically if compile-time fusion were augmented
with runtime fusion?

Claus

http://www.haskell.org/pipermail/haskell-cafe/2009-March/058911.html
http://www.haskell.org/pipermail/haskell-cafe/2009-March/058918.html


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] uvector package appendU: memory leak?

2009-03-31 Thread Claus Reinke

appendU is strict, insertWith just doesn't force it (follow the source link
in the haddocks to see why).


Ok, I see.
But, IMHO, this should be clearly documented.


There seems to be some agreement that strict variant operations should
also be provided, but it needs some more thinking, and whenever I look
at the code, I have the feeling that there are just too many operations
defined in it, so I look away again:-(


However, reading the code now, I prefer my version using alter.


Yes, it is a more direct way to enforce strict evaluation. Btw, if your
update function 'f' is strict in 'old' and 'new', you might just want to 
use 'Just $! f old new' and save all those 'seq's.
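
Spelled out, a strict insertWith in terms of alter might look like this
(a sketch; note that the argument order for 'f' follows the discussion
here, not necessarily IntMap's own insertWith convention):

import qualified Data.IntMap as IM

insertWith' :: (a -> a -> a) -> IM.Key -> a -> IM.IntMap a -> IM.IntMap a
insertWith' f k new = IM.alter upd k
  where upd (Just old) = Just $! f old new   -- force the combined value
        upd Nothing    = Just new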



By the way, about insertWith/alter; from IntMap documentation:

insertWithKey: O(min(n,W))
alter: O(log n)

So, alter is more efficient than insertWithKey?
And what is that `W` ?


'W' is a maximum bound (see comment at top of that haddock page).
'alter' has some shortcuts if the update function doesn't actually do
anything, but for reimplementing 'insertWithKey', the complexity 
should be the same as the code would be the same.



But piling up appendUs is still not a good idea. For a moment,
I thought that the stream representation's (+++) was handling
runtime fusion gracefully, but a simple test case suggests otherwise
at least for the simpler case of consU (the attached appendU.hs
doesn't do any appendUs, as the consU case already demonstrates
the issue; be careful with large numbers here, it'll quickly eat your ram):


I'm not sure to fully understand the code.


Writing ':<' for 'snocU', the code for 'singletonU a1 :< a2 :< .. :< an'
makes a full copy of 'singletonU a1 :< a2 :< .. :< a(j-1)' each time it
adds another 'aj' to the end. That can't be good for performance. The
code I provided delayed the 'consU' operations (essentially keeping a
list of things that need to be 'consU'ed), then did them all at once at
the end (needing only one copy and allocation, for the final array).


But, again, IMHO it does not apply to my original problem.


It is a question of resource constraints, probably. Instead of maintaining
an 'IntMap (UArr Int)' all the way through buildup, you could use an
'IntMap [Int]' while parsing, and then compress into 'IntMap (UArr Int)'
when parsing is finished. If you can afford the larger memory profile
during parsing, this should give better overall performance, with the
same memory need for the final state, but larger memory needs up
to that state. I'd be interested in the differences, if you try it.
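
The compression step itself would be a one-liner - assuming uvector's
toU (from Data.Array.Vector, toU :: UA e => [e] -> UArr e) and lists
built up by consing during parsing:

import qualified Data.IntMap as IM
import Data.Array.Vector

compress :: IM.IntMap [Int] -> IM.IntMap (UArr Int)
compress = IM.map (toU . reverse)   -- reverse undoes the consing order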

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type-level programming support library

2009-03-30 Thread Claus Reinke

I suppose having a good description of what I'd like to do might help: I'd
like to be able to make an N-Tuple an instance of a type class.

class Foo a where
instance Foo (,) where
instance Foo (,,) where
   
The different kindedness of (,) and (,,) prevent this from working.


Not that this is going to help you much, but perhaps you might
want to refine the problem specification:-) 


Claus

{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE EmptyDataDecls #-}
import Data.Char

data Ap0 (f :: *) 
data Ap1 (f :: * -> *) 
data Ap2 (f :: * -> * -> *)

data Ap3 (f :: * -> * -> * -> *)

type family F a b
type instance F (Ap0 b) b = b
type instance F (Ap1 f) b = f b
type instance F (Ap2 f) b = f b b
type instance F (Ap3 f) b = f b b b

class Foo a b where 
 foo   :: a -> b -> F a b

 unfoo :: a -> F a b -> b

instance Foo (Ap0 Bool) Bool where
 foo   _ b = b
 unfoo _ b = b

instance Foo (Ap2 (,)) Bool where 
 foo   _ b = (b,not b)

 unfoo _ (a,b) = a&&b

instance Foo (Ap2 (,)) Char where 
 foo   _ c = (chr (ord c+1), chr (ord c+2))

 unfoo _ (a,b) = maximum [a,b]

instance Foo (Ap3 (,,)) Char where 
 foo   _ c   = (c, chr (ord c+1), chr (ord c+2))

 unfoo _ (a,b,c) = maximum [a,b,c]

f bs | unfoo (undefined::Ap2 (,)) bs = foo (undefined::Ap3 (,,)) 'a'
     | otherwise                     = foo (undefined::Ap3 (,,)) 'b'

g what1 what2 bs | unfoo what1 bs = foo what2 'a'
                 | otherwise      = foo what2 '0'


main = do
 print (f (True,False)::(Char,Char,Char))
 print (g (undefined::Ap0 Bool) (undefined::Ap3 (,,)) False ::(Char,Char,Char))
 print (g (undefined::Ap2 (,))  (undefined::Ap2 (,))  (True,False)::(Char,Char))


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A guess on stack-overflows - thunks build-up and tail recursion

2009-03-30 Thread Claus Reinke
I wonder if I could write some sort of "chunked fold" which basically 
still produces the same amount of thunks but in a way so that they do not 
go on the stack all at once for reduction and thus do not cause a stack 
overflow. Kind of a tree.


Not without re-associating the applications of the operation being folded.
It is the association of everything to one side, for a strict operation, that 
leads to an expression whose evaluation will run out of limited stack,
while storing the arguments not yet used on the other side.

If your operation is associative, you could unroll the recursion a few steps
and reassociate, thereby reducing the depth of the stack of left-overs by 
the factor of unrolling. Or you could try building a tree of applications, 
perhaps from the bottom up:


import Debug.Observe

foldr2 op n (a:b:t) = (a`op`b) `op` foldr2 op n t
foldr2 op n [a] = a `op` n
foldr2 op n []  = n

foldl2 op n (a:b:t) = foldl2 op (n`op`(a`op`b)) t
foldl2 op n [a] = n `op` a
foldl2 op n []  = n

foldlt op n []  = n
foldlt op n [a] = n `op` a
foldlt op n l   = foldlt op n (pairs l)
  where pairs []      = []
        pairs [a]     = [a]
        pairs (a:b:t) = (a `op` b) : pairs t

main = 
 -- printO $ observe "foldl2" foldl2 (+) 0 [1..10::Int]

 -- printO $ observe "foldr2" foldr2 (+) 0 [1..10::Int]
 printO $ observe "foldlt" foldlt (+) 0 [1..10::Int]

That works somewhat better for foldl2 than for foldr2 (why?), and 
foldlt holds out the longest (is it possible to write a foldrt?), but isn't 
all that efficient:


*Main> foldl (+) 0 [1..50::Int]
446198416
*Main> foldl (+) 0 [1..60::Int]
*** Exception: stack overflow
*Main> foldl2 (+) 0 [1..60::Int]
-388326432
*Main> foldl2 (+) 0 [1..100::Int]
1784293664
*Main> foldl2 (+) 0 [1..110::Int]
*** Exception: stack overflow

*Main> foldr (+) 0 [1..50::Int]
446198416
*Main> foldr (+) 0 [1..60::Int]
*** Exception: stack overflow
*Main> foldr2 (+) 0 [1..60::Int]
-388326432
*Main> foldr2 (+) 0 [1..70::Int]
*** Exception: stack overflow

These versions still have their issues, as observation on small lists
shows, so you might want to experiment with your own variations;-)
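
(As for the rhetorical question above: one candidate 'foldrt', pairing
from the left but finishing against the right end - an equally rough sketch:

   foldrt op n []  = n
   foldrt op n [a] = a `op` n
   foldrt op n l   = foldrt op n (pairs l)
     where pairs (a:b:t) = (a `op` b) : pairs t
           pairs l       = l

it shares foldlt's tree shape, and its issues.)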

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] ANN: cmonad 0.1.1

2009-03-30 Thread Claus Reinke

| When I looked at it a year ago or so, it was a return of one
| constructor in a sum.
| Looking at core, you can see several places where a function is called
| and that function always returns the same constructor, so the case
| analysis of the return value is not needed; it should be returned as
| an unboxed tuple instead
| I'll see if I can get a simple example to illustrate the problem.

That would be very helpful, thanks.  Having a Trac ticket with 
a small reproducible test case, and a pointer to a larger example 
that illustrates its importance, would be great. Indeed, if it is pervasive,
can you try the effect of performing the transformation by hand,
and see if it makes a difference?


That doesn't guarantee that I, or anyone else, will get to it, but 
it makes it much much more likely!


And in the meantime, it documents a possible performance trap
for others who might not be able to track down the cause, or
even notice that they are shooting themselves in the foot.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] uvector package appendU: memory leak?

2009-03-29 Thread Claus Reinke
But Claus was right, appendU is lazy; this seems to be the cause of the 
problem.


appendU is strict, insertWith just doesn't force it (follow the source link
in the haddocks to see why).

However now I don't really understand why the two implementations 
differs in lazyness.


Or, to ask a different question, how can I make the version using 
insertWith strict?


deja vu:-(
http://www.haskell.org/pipermail/haskell-cafe/2009-March/057032.html

As you've noticed, alter also allows to enforce strictness.

But piling up appendUs is still not a good idea. For a moment,
I thought that the stream representation's (+++) was handling
runtime fusion gracefully, but a simple test case suggests otherwise,
at least for the simpler case of consU (the attached appendU.hs
doesn't do any appendUs, as the consU case already demonstrates
the issue; be careful with large numbers here, it'll quickly eat your RAM):

The test is a silly loop, using consU a lot (which isn't uvector's main
target), to copy an array. The difference between the plain loop and
the unfolded loop shows that compile-time fusion can improve this,
suggesting that no runtime fusion is taking place. The simulated
runtime fusion version demonstrates what is missed (by staying
within lists till the end, only then switching back to arrays, avoiding
all the intermediate partial array copies). The old bytestring cons
thread I linked to was about integrating real runtime fusion into
things like bytestring/uvector.
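
(For reference, a guess at the shape of that attached test, using only
names that appear elsewhere in this thread plus uvector's 'fromU',
all assumed here:

   import Data.Array.Vector (UArr, toU, fromU, singletonU, appendU)

   -- plain loop: every append copies the array built so far - O(n^2)
   copyNaive :: UArr Int -> UArr Int
   copyNaive = go . fromU
     where go []     = toU []
           go (x:xs) = singletonU x `appendU` go xs

   -- "simulated runtime fusion": stay in lists, convert once - O(n)
   copyFused :: UArr Int -> UArr Int
   copyFused = toU . fromU

the point being that the second version builds no intermediate arrays.)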

Claus


[attachment: appendU.hs]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] uvector package appendU: memory leak?

2009-03-29 Thread Claus Reinke

IntMap (UArr (Word16 :*: Word8))

I was adding elements to the map using something like:

v = singletonU (a :*: b)

insertWith appendU k v m

However doing this eats a *lot* of memory.


Since 'insertWith' doesn't actually do the 'appendU', the appends
will also be compressed in time, at point of use, rather than spread
out in time, over construction. Which might be inconvenient if large
volumes of data are involved.

So the question is: why appending an array of only one element to an  
existing array causes memory problems?


It must copy the entire array.


And doing this repeatedly, with arrays of increasing length, would
not be a good idea. For single-threaded use, one might use fusion
to avoid the construction/recopying of the intermediate arrays. 


As usual, if the single-threading happens in the body of a recursive
loop, compile-time fusion won't get a chance to work unless one
unfolds the recursion a few steps. But one could do similar fusion
dynamically, by partially reifying the construction functions (if we
are appending an array that is itself created via append, then a
single combined append will do). 


Wasn't there once a similar issue with strict bytestrings and cons?
Ah, found it - different mailing list:

http://www.haskell.org/pipermail/glasgow-haskell-users/2006-November/011603.html

Was this actually resolved? From a quick glance, it seems I
suggested runtime fusion, Don answered with compile-time
fusion, Duncan suggested a different hack to achieve runtime
fusion. But was the runtime fusion of nested cons in strict 
bytestring, or nested appends in uvector, ever implemented?

Does the stream-fusion framework handle that implicitly?

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Template Haskell messes up scoping?

2009-03-29 Thread Claus Reinke

It looks like the scope is interrupted just above $( ... ) - but I'd
like to know why and find a more beautiful way than just moving all the
TH calls to the bottom of the module file :)


Top-level splices can introduce bindings. IIRC, the current TH 
implementation uses a simple sequencing approach, preventing

splices to be part of a recursive dependency chain. If you have

   A; $(S); B

then first A is compiled, then S, then B. So, B can refer to what 
S builds, but neither A nor S can refer to B.
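
A hedged illustration of that sequencing (the empty splice is a common
idiom for such declaration barriers):

   {-# LANGUAGE TemplateHaskell #-}
   module SpliceScope where

   a = 42         -- compiled before the splice: visible inside it

   $(return [])   -- an empty top-level splice; everything above is
                  -- available to it, everything below is not

   b = a          -- fine: 'a' was defined before the splice
   -- but a reference to 'b' inside the splice would not have compiled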


Btw, there is a TH mailing list, and a wiki page with links to
tutorials and papers, needed given the sparsity of the Haddocks:

http://www.haskell.org/haskellwiki/Template_Haskell

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] about Haskell code written to be "too smart"

2009-03-27 Thread Claus Reinke

Continuing our adventures into stylistic and semantic differences:-)


Can you write this analysis on the wiki?


Hmm, we tried that in the past, and I haven't seen any indication
that people search for those things, let alone find them (one particular
example I recalled I still haven't been able to find on the wiki..).

So I'll try a different approach this time: instead of copying emails
to the wiki, I've created a page for collecting and documenting 
examples of equational reasoning in practice. Please add your
favourites!-) Hopefully, the description of the examples will 
provide sufficient search keywords to make this page findable,

and the linked examples from there; there are category links as well.

Over to you, wiki style!-)
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Exception handling in numeric computations

2009-03-26 Thread Claus Reinke

safeDiv :: (Exception e, Integral a) =>
   a -> a -> Either e a
safeDiv x y = unsafePerformIO . try . evaluate $ div x y


I just want to know, from a theoretical point of view,
whether this 'safeDiv' in above definition is the same as


safeDiv' :: (Exception e, Integral a) =>
a -> a -> Either e a
safeDiv' _ 0 = Left e
safeDiv' x y = Right $ div x y


No. Firstly, safeDiv' doesn't compile!-) Then, if you replace
'e' by 'DivideByZero' and adjust the types:
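
for instance (a sketch; 'ArithException' and its 'DivideByZero'
constructor come from Control.Exception):

   safeDiv' :: Integral a => a -> a -> Either ArithException a
   safeDiv' _ 0 = Left DivideByZero
   safeDiv' x y = Right (div x y)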

   *Main> safeDiv 1 (throw Overflow)
   Left arithmetic overflow
   *Main> safeDiv' 1 (throw Overflow)
   *** Exception: arithmetic overflow

Try ':info ArithException' for more in the same group. You
could use other functions in Control.Exception to get more
control over which exceptions you want to handle and how,
but so far, there is no indication that 'unsafePerformIO' is
the right hammer to use here..

Claus

-- unsagePerformIO: some things are just not wise to do

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] about Haskell code written to be "too smart"

2009-03-26 Thread Claus Reinke

Continuing our adventures into stylistic and semantic differences:-)

Comparing the 'State' and explicit recursion versions

   takeListSt = evalState . mapM (State . splitAt)

   -- ..with a derivation leading to..

   takeListSt []    s = []
   takeListSt (h:t) s = x : takeListSt t s'
 where (x,s') = splitAt h s

instead of

   takeList [] _ =  []
   takeList _ [] =  []
   takeList (n : ns) xs  =  head : takeList ns tail
   where (head, tail) = splitAt n xs

we can see some differences, leading to different functions:

   *Main> null $ takeListSt [1] undefined
   False
   *Main> null $ takeList [1] undefined
   *** Exception: Prelude.undefined
   *Main> takeList [0] []
   []
   *Main> takeListSt [0] []
   [[]]

and similarly for the 'scanl' version

   takeListSc ns xs = zipWith take ns $ init $ scanl (flip drop) xs ns

Depending on usage, these differences might not matter, but what if
we want these different styles to lead to the same function, with only
stylistic and no semantic differences, taking the explicit recursion as
our spec?

In the 'State' version, the issue is that 'mapM' does not terminate
early, while the specification requires an empty list whenever 'xs'
(the state) is empty. Following the derivation at

http://www.haskell.org/pipermail/haskell-cafe/2009-March/058603.html

the first step where we have a handle on that is after unfolding
'sequence':

   takeListSt = evalState . foldr k (return []) . map (State . splitAt)
 where k m m' = do x<-m; xs<-m'; return (x:xs)

If we change that to

   takeListSt' = evalState . foldr k (return []) . map (State . splitAt)
     where k m m'    = cutNull $ do x<-m; xs<-m'; return (x:xs)
           cutNull m = do s<-get; if null s then return [] else m

and continue with the modified derivation, we should end up with
the right spec (I haven't done this, so you should check!-). This
isn't all that elegant any more, but support for 'mapM' with early
exit isn't all that uncommon a need, either, so one might expect
a 'mapM' variant that takes a 'cut' parameter to make it into the
libraries.
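
One possible shape for such a variant (a sketch only; 'mapMCut' is a
made-up name, using the mtl-1 style 'State' from the rest of this message):

   mapMCut :: (s -> Bool) -> (a -> State s b) -> [a] -> State s [b]
   mapMCut cut f = go
     where go []     = return []
           go (x:xs) = do s <- get
                          if cut s
                            then return []
                            else do y  <- f x
                                    ys <- go xs
                                    return (y:ys)

   takeListCut ns xs = evalState (mapMCut null (State . splitAt) ns) xs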

For the 'scanl' version, we have a more direct handle on the issue:
we can simply drop the offending extras from the 'scanl' result,
replacing 'init' with 'takeWhile (not.null)':

   takeListSc' ns xs = zipWith take ns $ takeWhile (not.null) $ scanl (flip drop) xs ns

A somewhat abbreviated derivation at the end of this message
seems to confirm that this matches the spec (as usual with proofs,
writing them down doesn't mean that they are correct, but that
readers can check whether they are).

(btw, both 'takeListSt'' and 'takeListSc'' pass Thomas' 'testP', as does
his 'partitions', but 'partitions' is not the same function as 'takeList':
consider 'null $ takeList [1] undefined' and 'takeList [0] []' ;-)

Someone suggested using 'mapAccumL' instead of 'State', and
that does indeed work, only that everything is the wrong way round:

   takeListMAL = (snd.) . flip (mapAccumL (((snd&&&fst).).(flip splitAt)))

This is an example where all the "cleverness" is spent on the
irrelevant details, giving them way too much importance. So one
might prefer a version that more clearly says that this is mostly
'mapAccumL splitAt', with some administratory complications
that might be ignored on cursory inspection:

   takeListMAL' = mapAccumL' splitAt'
     where splitAt' l n       = swap $ splitAt n l
           mapAccumL' f l acc = snd $ mapAccumL f acc l
           swap (x,y)         = (y,x)

Of course, this suffers from the "does not terminate early" issue,
but as this thread encourages us to look at functions we might
not otherwise consider, I thought I'd follow the suggestion, and
perhaps someone might want to modify it with a 'mapAccumL'
with cutoff, and demonstrate whether it matches the spec;-)
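
In the same hedged spirit, one such 'mapAccumL' with cutoff might look
like this (an untested sketch, names made up):

   mapAccumLCut :: (acc -> Bool) -> (acc -> x -> (acc,y)) -> acc -> [x] -> (acc,[y])
   mapAccumLCut cut f = go
     where go acc xs | cut acc = (acc, [])
           go acc []           = (acc, [])
           go acc (x:xs)       = let (acc' ,y ) = f acc x
                                     (acc'',ys) = go acc' xs
                                 in  (acc'', y:ys)

   takeListMALCut ns xs = snd $ mapAccumLCut null (\l n -> swap (splitAt n l)) xs ns
     where swap (x,y) = (y,x)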

Claus

-- view transformation: reducing the level of abstraction

takeList ns xs = zipWith take ns $ takeWhile (not.null) $ scanl (flip drop) xs ns

-- fetch definitions of 'zipWith', 'takeWhile', and 'scanl'

takeList ns xs = zipWith take ns $ takeWhile (not.null) $ scanl (flip drop) xs ns
  where scanl f q ls = q : case ls of
                             []   -> []
                             x:xs -> scanl f (f q x) xs
        takeWhile _ []     = []
        takeWhile p (x:xs) | p x       = x : takeWhile p xs
                           | otherwise = []
        zipWith f (a:as) (b:bs) = f a b : zipWith f as bs
        zipWith _ _      _      = []

-- specialize for 'take', 'not.null', and 'flip drop'

takeList ns xs = zipWith ns $ takeWhile $ scanl xs ns
  where scanl q ls = q : case ls of
                           []   -> []
                           x:xs -> scanl (drop x q) xs
        takeWhile []     = []
        takeWhile (x:xs) | not (null x) = x : takeWhile xs
                         | otherwise    = []
        zipWith (a:as) (b:bs) = take a b : zipWith as bs
        zipWith _      _      = []

-- fuse 'takeWhile' and 'scanl' into 

Re: [Haskell-cafe] about Haskell code written to be "too smart"

2009-03-25 Thread Claus Reinke


The beauty of functional programming is that there doesn't have
to be a conflict between those who prefer explicit and those
who prefer implicit recursion. Think of them as different views
on the same functions - just as with graphical visualizations,
pick the view best suited to your purpose and use equational
reasoning to transform one view into another, as needed.

Improving your experience in reasoning about code is going to
help at every level of abstraction, and since you've already
paid the price (using a pure language, to make reasoning easier)
you might as well avail yourself of the facilities;-)

While developing, I might prefer abstraction, as fewer details
mean that I can hold more of the problem in my head at any
point, increasing my chances of seeing all the way to a
solution; if optimizing, or when I haven't found the right
abstractions yet, I might have to resort to less abstract code
until I've sorted out those details or until GHC deals with
the more abstract forms as well as with the less abstract ones.

Fine, you say, but I'd never would have thought of abstract
views like splitAt as a state transformer. Okay, before this
thread, I might not have thought of using that, either. But
after this thread, I'd hope for it to become part of my
thinking about Haskell code. And the way I do that is by
taking the abstract code and unfold it (replacing instances
of left-hand sides with instances of right-hand sides of
definitions - the source links in the Haddock documentation
are very useful for that) until I get to some less abstract
code that I might have come up with.

That doesn't mean that I'd have had the insights to play the
derivation backwards, but by breaking the transformation from
less abstract to more abstract view into smaller steps, starting
from the abstract form that incorporates the additional insights
I was missing, I can increase my understanding of what is going
on, and my chances of noticing the opportunities next time. It
also confirms whether or not the two solutions really are the
same (as has been pointed out, that wasn't the case here).

Paraphrasing and tweaking Sjur Gjøstein Karevoll's remark
a little: clever Perl code is what you hope you understood in
the past, when you wrote it; clever Haskell code is what you
hope you'll understand in the future, when you'll write it yourself!-)

The derivation below is best followed by replaying it yourself
in your editor, but I hope you'll find it helpful anyway.

Claus

-- view transformation: reducing the level of abstraction
-- by turning implicit to explict recursion

takeList = evalState . mapM (State . splitAt)

-- unfold 'mapM'

takeList = evalState . sequence . map (State . splitAt)

-- unfold 'sequence'

takeList = evalState . foldr k (return []) . map (State . splitAt)
  where k m m' = do x<-m; xs<-m'; return (x:xs)
        foldr op n []    = n
        foldr op n (h:t) = h `op` foldr op n t

-- specialize 'foldr' for the call paramenters 'k' and 'return []'

takeList = evalState . foldrkn . map (State . splitAt)
  where k m m' = do x<-m; xs<-m'; return (x:xs)
        foldrkn []    = return []
        foldrkn (h:t) = h `k` foldrkn t

-- unfold 'k'

takeList = evalState . foldrkn . map (State . splitAt)
  where foldrkn []    = return []
        foldrkn (h:t) = do x<-h; xs<-foldrkn t; return (x:xs)

-- foldr op n . map f = foldr (op.f) n

takeList = evalState . foldrkn
  where foldrkn []    = return []
        foldrkn (h:t) = do x<-State (splitAt h); xs<-foldrkn t; return (x:xs)

-- unfold 'return' for 'State', eta-expand 'splitAt h'

takeList = evalState . foldrkn
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s); xs<-foldrkn t; State (\s->(x:xs,s))

-- eta-expand body of 'takeList'

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s); xs<-foldrkn t; State (\s->(x:xs,s))

-- unfold the second '>>=' for 'State'

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x <- State (\s->splitAt h s)
                           State (\s->let (xs,s') = runState (foldrkn t) s
                                      in runState (State (\s->(x:xs,s))) s')

-- runState . State = id

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x <- State (\s->splitAt h s)
                           State (\s->let (xs,s') = runState (foldrkn t) s
                                      in (\s->(x:xs,s)) s')

-- beta-reduce

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x <- State (\s->splitAt h s)
                           State (\s->let (xs,s') = runState (foldrkn t) s
                                      in (x:xs,s'))

-- unfold the remaining '>>=' for 'State'

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn [] = State (\s->([],s))
        foldr

Re: [Haskell-cafe] Making videos of your project

2009-03-24 Thread Claus Reinke

Perhaps the "make a video" slogan doesn't quite explain what is
intended - it didn't to me!-) Reading Jon Udell's short article

What is Screencasting?
http://www.oreillynet.com/pub/a/oreilly/digitalmedia/2005/11/16/what-is-screencasting.html?page=1

gave me a better idea: the screen video part is the modern, animated 
version of manuals with screenshots, now with audio or text caption 
annotations (a canned demo). He also gives some tool references, 
and some suggestions for focussing the bandwidth on useful contents, 
editing, privacy considerations, etc. Almost certainly, this



   2. type 'recordmydesktop'
   3. do something with haskell
   4. hit control-C
   5. upload out.ogv to youtube


is not a useful recipe - screencasts need planning of the steps one
wants to demonstrate, editing out of aimless moving around or
thinking about what to show next, annotations that guide the 
viewer (text labels or audio track that explains what can be seen,
or what keyboard shortcuts are used, or what the plan is), and 
probably several attempts to get one useful result (minimal 
bandwidth/length/.. with maximal "ah, that is how I do it" or 
"ah, that is how it works" or "cool, I want to install that" effect). 

But with a little effort, this could be very useful, more so than 
simple screenshots, lots of text, or combinations thereof, if the

focus is not so much on producing a video to watch, but on
showing potential users what they are going to see, and how
to work with it if they decide to install it. For instance, I'd now like 
to replace my old tour of haskellmode for Vim with a screencast.


As a windows user, I tried playing with CamStudio and that 
almost seems to do the job (capture, annotation, replay, conversion
of .avi to compressed .swf) but I don't like the resolution of the 
.swf it generates (screen text isn't as readable as I've seen in other
screencasts). Perhaps I'm missing an option to improve the quality, 
or can anyone recommend another free tool for windows, from

positive experience (wikipedia has a whole list of tools
http://en.wikipedia.org/wiki/Comparison_of_screencasting_software )?

For the purpose I have in mind, it would be good to have
many small pieces of screencast, one for each feature, or even 
better, one continuous screencast with the ability to link directly 
to sections dealing with particular topics - a hyperlinked animation. 
Is that supported by some (free) tool?


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] about beta NF in lambda calculus

2009-03-22 Thread Claus Reinke

If the above is true, I am confused about why we have to distinguish terms
which have a NF from terms which are in NF. Won't the terms that have a NF
eventually end up in NF? Or is there some way to avoid them reaching NF?


Another way to think about it: what about terms which have no NF?
And given both kinds of terms, how do you distinguish them? By 
normal-order reduction, but that is a semi-decision procedure: if

a term has a NF, normal-order reduction will eventually reach it,
but if a term has no NF, there may not be a way to tell. There is
a trivial decision procedure for whether a term is in NF, but none
for whether a term has a NF. That is enough of a difference to
warrant the distinction, in most cases;-)

Btw, NF (as in: no redices) is rather extreme, so you might want
to look up HNF (head normal form: no head redices), which is
useful for actually assigning meaning to terms, and WHNF (weak
head normal form: no head redices outside of lambda abstraction),
which is what implementations of languages like Haskell use during 
evaluation. Normal-order reduction reaches these forms in order,

if they exist: term->WHNF->HNF->NF.
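
For a quick feel, some illustrative terms (a sketch):

   (\y -> y) (\x -> x)      -- a head redex at the top: not even WHNF
   \x -> (\y -> y) x        -- WHNF: a lambda, but a head redex inside
   \x -> x ((\y -> y) x)    -- HNF: the head is the variable x, though
                            --      an argument still contains a redex
   \x -> x x                -- NF: no redices left anywhere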

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A guess on stack-overflows - thunks build-up and tail recursion

2009-03-22 Thread Claus Reinke
is all going to happen now, as printing the
result forces the values. One can almost see the chains of demand in
the animation, as thunks (red) get inspected (yellow), leading to further
inspection elsewhere, leading to results (black) elsewhere, leading to
results replacing the original thunks. It is this feel for the dynamics of
evaluation that gives GHood its edge over Hood, which does all of
the work of generating the observation log.

Now, compare with the animation for "version2". Here "f" gets called
early on (event 26/130), as the IntMaps cannot be constructed without
evaluating their keys, which we have made dependent on evaluating
their values. Again, the animation reveals the chain of dependencies.

Stepping further (event 119/130), we notice that we seem to have
been only partially successful. The values are indeed evaluated before
the keys can be used, but there are still thunks in those IntMaps! That
is because the results of our strictified "insert"s have not been observed
yet, and GHood represents not-yet-observed as "thunk". From this
point on, printing of the result will observe and force the whole of
the IntMap, as before, but we see that this will not involve any further
calls to the function "f" that computes the values, just observations
of what has already been computed.

Hope these new examples give further ideas about what GHood
can and cannot be used for, and how to use it, including "Observable"
instances for abstract types for which we lack the source code
(also note that, as with all visualizations, careful setup is needed
to reveal the points of interest - other ways of instrumenting the
code or providing input files would have been possible, but less
helpful). As mentioned before (and below), the paper goes into
more detail on GHood itself.

Claus


With that said, I think the canvas+js idea is a wonderful alternative  to 
proprietary Flash.


I've also often wanted browser-based, interactive, profiling views
instead of the somewhat awkward PostScript generation of static
views.


Regards,
Duane Johnson

On Mar 20, 2009, at 5:36 PM, Claus Reinke wrote:


It would be great to have a video of this in action up on youtube.
You can simply 'recordmydesktop' on linux (and likely elsewhere),  then
upload the result.


I'm curious: how would a non-interactive animation running in Flash
in a browser be better than an interactive animation running in Java
in a browser?-) When I wrote GHood (many years ago), I explicitly
looked into the applet option, in the hope that people would use it
to document and discuss observation logs of their favourite Haskell
strictness issues, with animations available on their web pages, right
next to the discussions.
That hasn't happened yet (the only users I was aware of were the
DrHylo/Pointless Haskell project), but I just checked: the old .jar file,
the source of which hasn't been perused for a long time, still worked in
applet mode (in Opera, a browser I didn't know about in 2001, using a
Java Runtime several versions removed from that time - try that in
Haskell.. ;-), straight from that old project page (which also explains
how to set such things up), so anyone could add animations of their
favourite examples on their web-pages. But don't let that keep you or
anyone else from addressing the youtube audience (one could add audio
explanations, I guess).


Claus

PS. Perhaps these days, someone should rewrite the log viewer
  in Canvas+JavaScript as a more lightweight and modern platform.


It also helps the general adoption cause, having Haskell more visible
and accessible.
claus.reinke:
The problem occurs when the result value is needed and thus the
thunks need to be reduced, starting with the outermost, which can't
be reduced without reducing the next one, etc., and it's these
reduction steps that are pushed on the stack until its size causes a
stack overflow.


Yes, that's exactly right, and something that's not often pointed  out.


Btw, this kind of relative strictness (when is one part of my program
needed to answer demands on another part) is the kind of example
for which old GHood can be helpful (once you get used to the display).

If you have Java on your machines, try installing GHood [1] (on  hackage thanks to Hugo 
Pacheco), then things like


ghc -e ':m +Debug.Observe' -e 'printO $ observe "foldr" foldr (+)  0 [1..4] '
ghc -e ':m +Debug.Observe' -e "printO $ observe \"foldl'\"  foldl' (+) 0 [1..4] 
"
ghc -e ':m +Debug.Observe' -e 'printO $ observe "foldl" foldl (+)  0 [1..4] '

This was also among the examples on the GHood home page [2], so you
could try the applet version instead, and in section 4.2 of the paper [3]
(as a "well known strictness problem";-). Page and paper mention

Re: [Haskell-cafe] Re: A guess on stack-overflows - thunks build-up and tail recursion

2009-03-20 Thread Claus Reinke

It would be great to have a video of this in action up on youtube.
You can simply 'recordmydesktop' on linux (and likely elsewhere), then
upload the result.


I'm curious: how would a non-interactive animation running in Flash
in a browser be better than an interactive animation running in Java
in a browser?-) When I wrote GHood (many years ago), I explicitly
looked into the applet option, in the hope that people would use it
to document and discuss observation logs of their favourite Haskell
strictness issues, with animations available on their web pages, right
next to the discussions. 


That hasn't happened yet (the only users I was aware of were the
DrHylo/Pointless Haskell project), but I just checked, the old .jar file,
the source of which hasn't been perused for a long time, still worked 
in applet mode (in Opera, a browser I didn't know about in 2001,

using a Java Runtime several versions removed from that time - try
that in Haskell.. ;-), straight from that old project page (which also 
explains how to set such things up), so anyone could add animations 
of their favourite examples on their web-pages. But don't let that 
keep you or anyone else from addressing the youtube audience 
(one could add audio explanations, I guess).


Claus

PS. Perhaps these days, someone should rewrite the log viewer
   in Canvas+JavaScript as a more lightweight and modern platform.


It also helps the general adoption cause, having Haskell more visible
and accessible.

claus.reinke:
The problem occurs when the result value is needed and thus the
thunks need to be reduced, starting with the outermost, which can't
be reduced without reducing the next one, etc., and it's these
reduction steps that are pushed on the stack until its size causes a
stack overflow.


Yes, that's exactly right, and something that's not often pointed out.


Btw, this kind of relative strictness (when is one part of my program
needed to answer demands on another part) is the kind of example
for which old GHood can be helpful (once you get used to the display).

If you have Java on your machines, try installing GHood [1] (on hackage 
thanks to Hugo Pacheco), then things like


ghc -e ':m +Debug.Observe' -e 'printO $ observe "foldr" foldr (+) 0 [1..4] '
ghc -e ':m +Debug.Observe' -e "printO $ observe \"foldl'\" foldl' (+) 0 [1..4] "
ghc -e ':m +Debug.Observe' -e 'printO $ observe "foldl" foldl (+) 0 [1..4] '

This was also among the examples on the GHood home page [2], so you could 
try the applet version instead, and in section 4.2 of the paper [3] (as a 
"well known strictness problem";-). Page and paper

mention a few other similar examples and discuss some differences
between static (which parts are needed at all) and dynamic strictness
(which parts are needed when, relative to other demands).

Claus

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/GHood
[2] http://www.cs.kent.ac.uk/~cr3/toolbox/haskell/GHood
[3] http://www.cs.kent.ac.uk/~cr3/publications/GHood.html


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A guess on stack-overflows - thunks build-up and tail recursion

2009-03-20 Thread Claus Reinke
The problem occurs when the result value is needed and thus the
thunks need to be reduced, starting with the outermost, which can't
be reduced without reducing the next one, etc., and it's these
reduction steps that are pushed on the stack until its size causes a
stack overflow.


Yes, that's exactly right, and something that's not often pointed out.


Btw, this kind of relative strictness (when is one part of my program
needed to answer demands on another part) is the kind of example
for which old GHood can be helpful (once you get used to the display).

If you have Java on your machines, try installing GHood [1] (on 
hackage thanks to Hugo Pacheco), then things like


ghc -e ':m +Debug.Observe' -e 'printO $ observe "foldr" foldr (+) 0 [1..4] '
ghc -e ':m +Debug.Observe' -e "printO $ observe \"foldl'\" foldl' (+) 0 [1..4] "
ghc -e ':m +Debug.Observe' -e 'printO $ observe "foldl" foldl (+) 0 [1..4] '

This was also among the examples on the GHood home page [2], 
so you could try the applet version instead, and in section 4.2 of the 
paper [3] (as a "well known strictness problem";-). Page and paper

mention a few other similar examples and discuss some differences
between static (which parts are needed at all) and dynamic strictness
(which parts are needed when, relative to other demands).

Claus

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/GHood
[2] http://www.cs.kent.ac.uk/~cr3/toolbox/haskell/GHood
[3] http://www.cs.kent.ac.uk/~cr3/publications/GHood.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What unsafeInterleaveIO is unsafe

2009-03-19 Thread Claus Reinke
couldn't hurt to look at, say, Olivier Danvy and coworker's 
publications at BRICS[4]: they have been rather thorough in documenting 
various semantics folklore, including formal derivations of abstract machines 
from operational semantics and correspondences between various forms 
of operational semantics. Such derivations are usually easier than trying

to prove equivalence of separately developed semantics and implementation.

Gordon Plotkin's notes and references are certainly relevant [5].

Also check out the several treatments of lazy lambda calculus and
call-by-need reduction: the earliest ones relied on a combination of
language-level rules with an abstract heap to model sharing, the later
ones represent sharing entirely at the language level[6].


Oversimplified: they study equivalence classes of semantics,

I thought in a fully-abstract denotational semantics we had
   [| x |] = [| y |]
whenever x and y were operationally equivalent?


And if you have that, doesn't that imply that everything else about
the denotational semantics is irrelvant? In particular, any denotational
semantics that includes things not present in the operational one, or
vice versa, is not fully abstract.

Claus

[0] Tackling the awkward squad: monadic input/output, concurrency, 
   exceptions, and foreign-language calls in Haskell, Simon Peyton Jones

   http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/
[1] Concurrent Haskell, Peyton Jones, Gordon, Finne
   
http://research.microsoft.com/en-us/um/people/simonpj/papers/concurrent-haskell.ps.gz
[2] A semantics for imprecise exceptions, Peyton Jones, Reid,
   Hoare, Marlow, Henderson
   
http://research.microsoft.com/en-us/um/people/simonpj/papers/imprecise-exn.htm
[3] Functions, Frames, and Interactions, Claus Reinke (chapter 3)
   http://www.cs.kent.ac.uk/~cr3/publications/phd.html
[4] Olivier Danvy et al, miscellaneous reports
   http://www.brics.dk/~danvy/
[5] The Origins of Structural Operational Semantics,
   Gordon D. Plotkin, number 48 here:
   http://homepages.inf.ed.ac.uk/gdp/publications/
[6] A call-by-need  lambda calculus,
   John Maraist, Martin Odersky, and Philip Wadler
   http://homepages.inf.ed.ac.uk/wadler/topics/call-by-need.html#need-journal
   A natural semantics for lazy evaluation, John Launchbury
   
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.2016&rep=rep1&type=pdf

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Go Haskell!

2009-03-18 Thread Claus Reinke
I finally got around to making my code for random Go playouts available. 
Here it is:


  http://www.haskell.org/~simonmar/goboard.tar.gz


Cool!-) To reciprocate, I've temporarily put a snapshot of my code here:
http://www.cs.kent.ac.uk/~cr3/tmp/SimpleGo.hs

I've not yet decided whether to be depressed or encouraged by the
timings;-) without mercy rule, your code simulates at about 17k/s runs
here, vs only about 3k/s for mine. There are some obvious aspects, such 
as hardcoding the boardsize isn't quite as straightforward when GTP

allows to change it at runtime, but I don't think that explains the bulk
of the difference. I do hope there are lots of small things to learn (perhaps
you could summarize your findings in a performance tips paper?-), but
at first glance, I suspect the difference in approach: my early experiments
suggested that maintaining chains/strings wasn't going to be more efficient
than following the affected parts of strings when needed - but I didn't
think of representing strings as cyclicly referenced lists (which allows
for string merge in constant instead of linear time). Nice idea!-)
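
For readers wondering about the trick: merging two circular singly-linked
lists is O(1), since it only swaps one successor pointer in each ring.
A hedged sketch, with the rings stored in a mutable array (assuming p
and q lie on distinct rings - applied within one ring, the same swap
splits it instead):

   import Data.Array.IO

   type Point = Int

   mergeRings :: IOUArray Point Point -> Point -> Point -> IO ()
   mergeRings next p q = do
     np <- readArray next p   -- successor of p in its ring
     nq <- readArray next q   -- successor of q in its ring
     writeArray next p nq     -- p now continues into q's ring
     writeArray next q np     -- q now continues into p's ring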

Thanks,
Claus



If someone were to make a nice library API on top of this and upload it to 
hackage, I'd be delighted.  Hack away.


A GTP interface would be useful, to allow playing against other bots.


Cheers,
Simon

Simon Marlow wrote:

Claus Reinke wrote:

Do you have an example of a mutable state/ IO bound application, like,
hmm, a window manager or a revision control system or a file system...?


If you're looking for a challenge, how about this one (there used to
be lots of Haskellers into this game, any of you still around?-):

http://computer-go.org/pipermail/computer-go/2008-October/016680.html


[ catching up with old haskell-cafe email ]

Interestingly, I did this a while ago.  Here's my results:

$ ./Bench 1 10
b: 14840, w: 17143 mercy: 67982
elapsed time: 3.42s
playouts/sec: 29208


so, nearly 30k/sec random playouts on 9x9.  That's using a hack that 
stops the game when the score is heavily in favour of one player, it 
drops to around 20k/sec with that turned off.


Not bad, but probably I'd guess an order of magnitude worse than you can 
do in tightly-coded C.  The Haskell implementation isn't nice, as you 
predicted.  Also the code is derived from some non-free internal MS 
code, so unfortunately I can't share it (but I could perhaps extract the 
free bits if anyone is really interested).


W wins slightly more often I think because komi 5.5 on a 9x9 board is a 
tad high.


It does parallelise too, of course:

$ ./Bench 8 10 +RTS -N8
b: 14872, w: 17488 mercy: 67584
elapsed time: 1.00s
playouts/sec: 99908

though still room for improvement there.

Cheers,
Simon



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What unsafeInterleaveIO is unsafe

2009-03-17 Thread Claus Reinke

So that first step already relies on IO (where the two are equivalent).

Come again?


The first step in your implication chain was (without the return)

 throw (ErrorCall "urk!") <= 1
 ==> evaluate (throw (ErrorCall "urk!")) <= evaluate 1

but, using evaluation only (no context-sensitive IO), we have


throw (ErrorCall "urk") <= evaluate (throw (ErrorCall "urk"))

Sure enough.


meaning that first step replaced a smaller with a bigger item on the
smaller side of the inequation. Unless the reasoning includes context-
sensitive IO rules, in which case the IO rule for evaluate brings the
throw to the top (evaluate (throw ..) -> (throw ..)), making the two
terms equivalent (modulo IO), and hence the step valid (modulo IO).

Unless you just rely on


But throwIO (ErrorCall "urk") /= _|_:
   Control.Exception> throwIO (ErrorCall "urk!") `seq` ()
   ()


in which case that step relies on not invoking IO, so it can't be
mixed with the later step involving IO for catch (I think).


This is very delicate territory. For instance, one might think that
this 'f' seems to define a "negation function" of information content



f x = Control.Exception.catch (evaluate x >> let x = x in x)
                              (\(ErrorCall _)->return 0) >>= print

and hence violates monotonicity

(_|_ <= ()) => (f _|_ <= f ())

since

*Main> f undefined
0
*Main> f ()
Interrupted.

But that is really mixing context-free expression evaluation and
context-sensitive execution of io operations. Most of our favourite
context-free equivalences only hold within the expression evaluation
part, while IO operations are subject to additional, context-sensitive
rules.


Could you elaborate on this?  It sounds suspiciously like you're saying
Haskell's axiomatic semantics is unsound :: IO.


Not really unsound, if the separation is observed. One could probably
construct a non-separated semantics (everything denotational), but at
the cost of mapping everything to computations rather than values.

Then computations like that 'f' above would, eg, take an extra context
argument (representing "the world", or at least aspects of the machine
running the computation), and the missing information needed to take
'f _|_'[world] to '()'[world'] would come from that context parameter
(somewhere in the computational context, there is a representation of
the computation, which allows the context to read certain kinds of '_|_'
as exceptions; the IO rule for 'catch' takes that external information and
injects it back from the computational context into the functional program,
as data structure representations of exceptions).

That price is too high, though, as we'd now have to do all reasoning
in context-sensitive terms which, while more accurate, would bury
us in irrelevant details. Hence we usually try to use context-free
reasoning whenever we can get away with it (the non-IO portions
of Haskell program runs), resorting to context-sensitive reasoning
only when necessary (the IO steps of Haskell program runs).

This gives us convenience when the context is irrelevant as well
as accuracy when the context does matter - we just have to be
careful when combining the two kinds of reasoning.


For instance, without execution

*Main> f () `seq` ()
()
*Main> f undefined `seq` ()
()

but if we include execution (and the context-sensitive equivalence
that implies, lets call it ~),


So

   a ~ b = `The observable effects of a and b are equal'

?


Observational equivalence is one possibility, there are various forms
of equivalences/bi-similarities, with different ratios of convenience and
discriminatory powers (the folks studying concurrent languages and
process calculi have been fighting with this kind of situation for a long
time, and have built up a wealth of experience in terms of reasoning).

The main distinction I wanted to make here was that '=' was
a context-free equivalence (valid in all contexts, involving only
context-free evaluation rules) while '~' was a context-sensitive
equivalence (valid only in IO contexts, involving both context-free
and context-sensitive rules).


we have

f () ~ _|_ <= return 0 ~ f _|_

so 'f' shows that wrapping both sides of an inequality in 'catch' need
not preserve the ordering (modulo ~)


If f _|_ <= f (), then it seems that (<=) is not a (pre-) order w.r.t.
(~).  So taking the quotient of IO Int over (~) gives you a set on which
(<=) is not an ordering (and may not be a relation).


As I said, mixing '=' and '~', without accounting for the special nature of
the latter, is dangerous. If we want to mix the two, we have to shift all
reasoning into the context-sensitive domain, so we'd have something like

   f () [world] ~ _|_ [world''] <= return 0 [world'] ~ f _|_ [world]

(assuming that '<=' is lifted to compare values in contexts). And now the
issue goes away, because 'f' doesn't look at the '_|_', but at the 
representation
of '_|_' in the 'world' (the representation of '_|_' in GHC's runtime system

Re: [Haskell-cafe] What unsafeInterleaveIO is unsafe

2009-03-16 Thread Claus Reinke

> > "exception handling" which allows to "catch" programming errors.
> And which I have a sneaking suspicion actually *is* `unsafe'.  Or, at
> least, incapable of being given a compositional, continuous semantics.
"A semantics for imprecise exceptions"
http://research.microsoft.com/en-us/um/people/simonpj/papers/imprecise-exn.htm
Basically if we can only catch exceptions in IO then it doesn't matter,
it's just a little extra non-determinism and IO has plenty of that
already.


I'm not sure that helps much.  Given the following inequalities (in the
domain ordering) and equations:
 throw "urk"! <= return 1
 ==> evaluate (throw "urk!") <= evaluate 1


   throw (ErrorCall "urk") <= evaluate (throw (ErrorCall "urk"))

as demonstrated here

   *Main> throw (ErrorCall "urk") `seq` ()
   *** Exception: urk
   *Main> evaluate (throw (ErrorCall "urk")) `seq` ()
   ()

So that first step already relies on IO (where the two are equivalent).

This is very delicate territory. For instance, one might think that
this 'f' seems to define a "negation function" of information content

   f x = Control.Exception.catch (evaluate x >> let x = x in x) (\(ErrorCall _)->return 0) >>= 
print


and hence violates monotonicity

   (_|_ <= ()) => (f _|_ <= f ())

since

   *Main> f undefined
   0
   *Main> f ()
   Interrupted.

But that is really mixing context-free expression evaluation and
context-sensitive execution of io operations. Most of our favourite
context-free equivalences only hold within the expression evaluation
part, while IO operations are subject to additional, context-sensitive
rules. For instance, without execution

   *Main> f () `seq` ()
   ()
   *Main> f undefined `seq` ()
   ()

but if we include execution (and the context-sensitive equivalence
that implies, lets call it ~), we have

   f () ~ _|_ <= return 0 ~ f _|_

so 'f' shows that wrapping both sides of an inequality in 'catch' need
not preserve the ordering (modulo ~) - its whole purpose is to recover
from failure, making something more defined (modulo ~) by translating
_|_ to something else. Which affects your second implication.

If the odd properties of 'f' capture the essence of your concerns, I think
the answer is to keep =, <=, and ~ clearly separate, ideally without losing
any of the context-free equivalences while limiting the amount of
context-sensitive reasoning required. If = and ~ are mixed up, however,
monotonicity seems lost.

The semantics in the "imprecise exceptions" paper combines a
denotational approach for the context-free part with an operational
semantics for the context-sensitive part. This tends to use the good
properties of both, with a clear separation between them, but a
systematic treatment of the resulting identities was left for future work.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What unsafeInterleaveIO is unsafe

2009-03-15 Thread Claus Reinke

import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

main = do
  r <- newIORef 0
  v <- unsafeInterleaveIO $ do
         writeIORef r 1
         return 1
  x <- case f v of
         0 -> return 0
         n -> return (n - 1)
  y <- readIORef r
  print y

-- a couple of examples:
f x = 0 -- program prints "0"
-- f x = x -- program prints "1"


"f" is pure.  But if f is nonstrict, this program prints 0, and if
it's strict, it prints 1.  The strictness of a pure function can
change the observable behavior of your program!


Strictness is an effect, even if it isn't recorded in Haskell's types.
Replacing 'v <- ..' by 'let v = undefined' provides similar choices.

'unsafeInterleaveIO' explicitly uses the "implicit" effects of
strictness to drive the "explicit" effects of IO, which is why it
is unsafe (moving IO effects from explicit to implicit). But even 
if 'unsafeInterleaveIO' were eliminated, strictness/evaluation 
would still remain as an implicit effect in Haskell. Here's a 
variation without 'unsafeInterleaveIO':


import Data.IORef
import Control.Exception

main = do
   r <- newIORef 0
   let v = undefined
   handle (\(ErrorCall _)->print "hi">>return 42) $ case f v of
     0 -> return 0
     n -> return (n - 1)
   y <- readIORef r
   print y

-- a couple of examples:
f x = 0 -- program prints '0'
-- f x = x -- program prints '"hi"0'

(sideline: I often interpret '_|_::t' not so much as an element of 't'
but as failure to produce information at type 't'; the type tells us
what information we can have, evaluation not only provides the
details, but also decides whether or not we have any of that info,
and how much of it)


Furthermore, due to the monad laws, if f is total, then reordering the
(x <- ...) and (y <- ...) parts of the program should have no effect.


   case f v of { 0 -> return 0; n -> return (n - 1) }
=
   return $! case f v of { 0 -> 0; n -> (n - 1) }
=/=
   return $ case f v of { 0 -> 0; n -> (n - 1) }

Monad laws apply to the latter, so how is reordering justified?
It doesn't matter if 'f' is total, the context requires to know whether
'f v' is '0' or not. Since the only thing used from the result is its
successful evaluation, the case could be eliminated, but that still
leaves

   return $! f v
=/=
   return $ f v


But if you switch them, the program will *always* print 0.


In my copy of the docs, the only things known about 'unsafeInterleaveIO'
are its module, its type, and that it is "unsafe". If we assume, from the name, 
that evaluation of its parameter will be interleaved unsafely with the main IO 
thread, there is no knowing when or if that evaluation will happen. Unless 
there is a dependency on the result of that evaluation, in which case we have

an upper bound on when the evaluation must happen, but still no lower
bound. From the comments in the source code, it appears that lower
and upper bound are intended to be identical, ie. evaluation is supposed
to happen at the latest possible point permitted by dependencies.
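
A small sketch of those bounds (GHC behaviour; the deferred write runs
only when 'v' is demanded):

   import Data.IORef
   import System.IO.Unsafe (unsafeInterleaveIO)

   main = do
     r <- newIORef 0
     v <- unsafeInterleaveIO (writeIORef r 1 >> return (1::Int))
     readIORef r >>= print  -- prints 0: no demand on 'v' yet (no lower bound)
     print v                -- demanding 'v' runs the deferred write now
     readIORef r >>= print  -- prints 1: the demand above was the upper bound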

Changing dependencies changes the program.


Also, the compiller might notice that x is never used, and that "f" is
total.  So it could just optimize out the evaluation of "f v"
completely, at which point the program always prints 0 again; changing
optimization settings modifies the result of the program.


It doesn't matter whether or not 'x' is used. It matters whether 'f v' 
needs to be evaluated to get at the 'return' action. Even if 'f' is total,

that evaluation cannot be skipped. Eta-expansion changes strictness,
which changes the program (eg, '\p->(fst p,snd p)' =/= 'id::(a,b)->(a,b)',
even though these functions only apply to pairs, so we "know" that
"whatever 'p' is, it ought to be a pair" - only we don't; and neither do
we know that 'f v' is a number, even if 'f' itself is total).

None of this means that lazy IO and monadic IO ought to be mixed
freely. If a program depends on strictness/non-strictness, that needs 
to be taken into account, which can be troublesome, which is why

lazy IO hasn't been the default IO mechanism in Haskell for many
years. It is still available because when it is applicable, it can be
quite elegant and simple. But we need to decide whether or not
that is the case for each use case, even without the explicit 'unsafe'.

Hth?
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to catch error in array index when debugging

2009-03-14 Thread Claus Reinke

   Claus> None of which is satisfactory. You might also want to add
   Claus> yourself to this ticket:

   Claus>"index out of range" error message regression
   Claus> http://hackage.haskell.org/trac/ghc/ticket/2669

How do I do that?


Ghc Trac's idea of voting is by adding yourself to the cc, so that
tickets can be sorted by length of cc list:

   http://hackage.haskell.org/trac/ghc/report/17

That is often subverted by closing tickets as duplicate/related,
without transferring the cc list to the one ticket that is kept;-)

Apart from the immediate bug of not getting any information,
there's also the more general issue of wanting information about
the call site (who called which operation, leading to the exception).

A solution to that issue has been sought for a long time, but there
seem to be so many options that the discussion has slowed down 
to a halt:


   Lexical call site string
   http://hackage.haskell.org/trac/ghc/ticket/960

   Maintaining an explicit call stack
   http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack

Using your own wrappers to give you the missing information
is probably the best short-term workaround, but it is no fun.
Something like this, perhaps:

   import qualified Data.Array.IArray as A
   import Control.Exception

   arr ! index = mapException (addErrorInfo (" ! "++show index)) $ arr A.! index
   arr // idxs = mapException (addErrorInfo (" // "++show idxs)) $ arr A.// idxs

   addErrorInfo info (ErrorCall str) = ErrorCall (str++":"++info)

   test1 i = (A.array (1,5) [(i,i)|i<-[1..5]] :: A.Array Int Int) ! i
   test2 i = (A.array (1,5) [(i,i)|i<-[1..5]] :: A.Array Int Int) // [(i,0)]

   *Main> test1 0
   *** Exception: Error in array index: ! 0
   *Main> test1 3
   3
   *Main> test1 8
   *** Exception: Error in array index: ! 8
   *Main> test2 0
   array *** Exception: Error in array index: // [(0,0)]
   *Main> test2 4
   array (1,5) [(1,1),(2,2),(3,3),(4,0),(5,5)]
   *Main> test2 7
   array *** Exception: Error in array index: // [(7,0)]

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to catch error in array index when debugging

2009-03-14 Thread Claus Reinke

I'm getting a runtime failure "Error in array index". This causes ghci
to exit.



Is there a way to get it to break instead, so I can find out which
function is failing?


i recall two techniques - one is trivially define your own (!) and
print index at least. another is to use ghc profiling with -xc RTS
option


None of which is satisfactory. You might also want to add yourself 
to this ticket:


   "index out of range" error message regression
   http://hackage.haskell.org/trac/ghc/ticket/2669

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Sugestion for a Haskell mascot

2009-03-12 Thread Claus Reinke
I agree that looking for a mascot that is inspired by "laziness" is a bad idea 
from a P.R. perspective (I am tired of people walking out the room when I 
give Haskell talks to general audiences and explain lazy evaluation).


Do they walk out when you mention it or when you explain it?-)

Lazy evaluation -as an optimization of non-strict evaluation- seems
like an unambiguosly good idea. Lazy evaluation -as one efficient
form of non-strict evaluation- has its pros and cons: neither strict
nor non-strict evaluation fit all purposes, and the real trick is to
find the right middle ground. It just so happens that non-strict is
a safe default from which to try and reach that middle ground. In
other words, even in non-PR terms, laziness is a stepping stone,
not the ultimate goal.

Your remark reminded me of some old slides of mine, where I
tried to offer one perspective on the problems of "communicating 
fp ideas to general audiences". In brief, successful communication 
assumes some shared background, and if that doesn't exist,
communication is difficult at best and usually fails. 

Haskellers often resort to formal maths models, which is fine for 
those with a shared background, not so fine for general audiences. 
In that old talk I suggested using a model that general audiences, 
and business folks in particular, are familiar with, and started to 
outline an initial "dictionary of fp terms" - the translation worked

well enough to show that neither strict nor non-strict evaluation
make for good business models, and that we're really looking for
some form of "just in time" evaluation (of course, you have to keep 
in mind that my understanding of business terms is only that of an 
average general audience;-). 

I've temporarily put the slides here (note that the contact info, 
from one of my previous lives, is years out of date):


http://www.cs.kent.ac.uk/~cr3/tmp/slides.pdf

Perhaps you find some of the ideas useful? And now that we 
actually have some more business folks amongst us, perhaps 
some of them would like to comment on the suitability or

otherwise of these ideas?-).

Claus

-- Lazy evaluation: been there, done that.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Looking for literature

2009-03-08 Thread Claus Reinke
I'm trying to catch up with all the wonderful Haskell Types, classes, 
Abstract Data Types, Algebraic Data Types, Types that give peoples 
headaches and all the other, deeper stuff I have been happily putting off.


Hmm, do we need more pragmas?-)

{-# LANGUAGE TypesThatGivePeopleHeadaches #-}
{-# LANGUAGE DeeperStuffIHaveBeenHappilyPuttingOff #-}

Anyway, a good source of headaches are the research paper
links collected at haskell.org:

http://www.haskell.org/haskellwiki/Research_papers

Given your interests, in particular:

http://www.haskell.org/haskellwiki/Research_papers/Type_systems
http://www.haskell.org/haskellwiki/Research_papers/Data_structures

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bytestring vs. uvector

2009-03-08 Thread Claus Reinke

uvector is, if my memory serves me correctly, a fork of the vector library.
It uses modern stream fusion, but is under active development and is a
little scary. I'm a little unclear on the exact difference between uvector
and vector. Both use arrays that are not pinned, so they can't be readily
used with foreign code. If you want to use either library, understand that
you're embarking on a bracing adventure.


vector and uvector are roughly based on the same technology; uvector
is - as far as I remember - a fork of some of the old DPH code which
uses stream fusion which Don cleaned up and worked on (and it's proven
pretty useful, and people are still hacking on it.)

vector however, has the notion of 'recycling arrays' when it does
array operations. The technique is in fact quite similar to stream
fusion. Roman L. built this from scratch I think, so it's quite a bit
more unused and less stable than even uvector is maybe, but I suppose
you could say it's kind of a superset of uvector. Hopefully though
it should mature a little, and the plans are to have the technology
from both of these folded into the Data Parallel Haskell project so we
get fast array operations+automatic parallelisation.

For info, see Roman's paper, 'Recycle your arrays!'

http://www.cse.unsw.edu.au/~rl/publications/recycling.html


Given the close relationship between uvector and vector, it would
be very helpful if both package descriptions on hackage could 
point to a common haskell wiki page, starting out with the text
and link above, plus a link to the stream fusion paper (I hadn't 
been aware that vector incorporates the recycling work, and 
had often wondered about the precise relationship between those
two packages). Apart from saving others from similar confusion,
that would also provide a place to record experience with those two 
alternatives.

Btw, have any of the Haskell array optimization researchers
considered fixpoints yet? Both fusion and recycling are based 
on rewrite rules of the kind "in . out --> id". Now, given a loop 
like this:


   loop a = if c a then loop (out (action (in a))) else a
   loop a

these rules don't apply. Unrolling the loop a fixed number of
times would enable some rule applications, but still some would
remain in the loop body. But with a little rewriting

   loop a = if c a then loop (out (action (in a))) else out (id (in a))
   loop a

   loop a = if c a then loop (out (action (in a))) else out (id (in a))
   (if c a then loop (out (action (in a))) else out (id (in a)))

we can now push the out into the next iteration of the loop or,
if there is no next iteration, into the loop epilogue

   loop a = if c (out a) then loop (action (in (out a))) else id (in (out a))
   out (if c a then loop (action (in a)) else a)

making the rewrite rule applicable

   loop a = if c (out a) then loop (action a) else id a
   out (if c a then loop (action (in a)) else a)

leading (modulo bugs, omissions, and oversights;-) to a fused/
recycled loop body, with potentially substantial benefit.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] possible memory leak in uvector 0.1.0.3

2009-03-03 Thread Claus Reinke

That helps to make things clearer, I think. One issue is
the nature of Maps (strict in keys, non-strict in values).

- neither singleton nor unionWith are strict in the Map values, so
   nothing here forces the evaluation of rate or construction of UArr

But, as I have written, in one of my tests I also tried rnf to force 
evaluation:

rnf v `seq` rnf m `seq` return m
Isn't this sufficient?


It will force the Map that results from the repeated unioning,
but does not ensure that this is done in an efficient way.


A standard trick to keep Map values evaluated by construction
is to make the availability of keys dependent on their values, eg
(singleton key) $! value. That won't help with unionWith and
the appendUs, but it should allow the source string references
to be dropped early, as the singletons are constructed.


Tried; but, even using union instead of unionWith, the memory 
grows fast as before.


Strange. I built myself a small wrapper to make your code
fragment compilable, and just replacing (unionWith appendU)
with (union) makes a drastic difference - as it should.

It is rather annoying that Data.IntMap doesn't provide a strict
form of unionWith or insertWith (Data.Map does at least provide
insertWith'). But we can define our own, at the cost of an extra
lookup. We can then foldl' that insertWith' directly over the ratings 
list, bypassing the non-strict parts of the Data.IntMap API (see
code below).

Claus
  (who still thinks that all Maps should be parameterized
   over their key-value pair type constructor, so that the default
   non-strict Maps would result from using non-strict pairs

       type IntMap = IntMapP (,)

   while the often desired element-strict Maps would result
   from using strict pairs, with no other change in API

       type IntMapStrict = IntMapP (:*:)
   )

---
{-# LANGUAGE TypeOperators #-}
import qualified Data.ByteString.Lazy as L
import Data.Array.Vector
import qualified Data.IntMap as IM
import Data.List
import Data.Word
import qualified Data.ByteString.Lazy.Char8 as L8
import Data.Maybe
import System.IO

-- don't use this for real, test wrapper only
ratings :: L.ByteString -> [(Word32,Word8)]
ratings = map (\[i,r] -> ( fromIntegral $ fst $ fromJust $ L8.readInt i
                         , fromIntegral $ fst $ fromJust $ L8.readInt r ))
        . map L8.words . L8.lines

parse handle = do
 contents <- L.hGetContents handle
 let v =  map singleton' $ ratings contents
 let m = foldl' (\m (kw,v) -> insertWith' appendU (fromIntegral kw, v) m) IM.empty v
 -- let m = foldl1' (IM.unionWith appendU) v
 -- let m = foldl1' (IM.union) v
 return $! m

 where
   -- Build a Map with a single movie rating
   singleton' :: (Word32, Word8) -> (Int,UArr Rating)
   singleton' (id, rate) =
 ((fromIntegral $ id), (singletonU $ pairS (id, rate)))
 -- (IM.singleton (fromIntegral $ id)) $ (singletonU $ pairS (id, rate))

insertWith' op (k,v) m =
 maybe (IM.insert k v m)
   (\old->((IM.insert k) $! (v `op` old)) m)
   (IM.lookup k m)

type Rating = Word32 :*: Word8
type MovieRatings = IM.IntMap (UArr Rating) -- UArr from uvector

-- more test wrapper, some trivial input data
generate = withFile "in.data" WriteMode $ \h->
   mapM_ (\(i,r) -> hPutStrLn h $ show i ++ " " ++ show r)
         $ take 100 $ cycle [ (i,i) | i <- [0..100] ]


main = withFile "in.data" ReadMode parse >>= print

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] possible memory leak in uvector 0.1.0.3

2009-03-03 Thread Claus Reinke
I split the string in lines, then map some functions on each line to 
parse the data, and finally calling toU, for converting to an UArr.


Just to make sure (code fragments or, better, reduced examples
would make it easier to see what the discussion is about): are you 
forcing the UArr to be constructed before putting it into the Map?


parse handle = do
  contents <- S.hGetContents handle
  let v =  map singleton' $ ratings contents
  let m = foldl1' (unionWith appendU) v
  v `seq` return $! m

  where
-- Build a Map with a single movie rating
singleton' :: (Word32, Word8) -> MovieRatings
singleton' (id, rate) =
  singleton (fromIntegral $ id) (singletonU $ pairS (id, rate))


That helps to make things clearer, I think. One issue is
the nature of Maps (strict in keys, non-strict in values).

- neither singleton nor unionWith are strict in the Map values, so
   nothing here forces the evaluation of rate or construction 
   of UArr


    Prelude Data.IntMap> (unionWith (++) (singleton 1 undefined) (singleton 2 undefined)) `seq` ()
    ()

- singletonU is strict, but that only means that it will evaluate its
   parameter if it is evaluated itself (which it isn't, because 
   singleton isn't strict)


- seq on a list only forces the first node of the list ((:),[],_|_),
   so (v `seq`) isn't likely to help much (see the small GHCi example
   after this list). Also, you probably do not want to force the whole
   list of singletons before building the Map, you want the singletons
   to be constructed and consumed incrementally.


- forcing a Map doesn't force any of the values, nor does
   it force more than the top-level node of whatever the
   internal Map representation is, so (return $! m) isn't much
   help, either (by nature of unionWith and foldl1', it'll force
   all keys before it can say anything much about the Map,
   but the values remain untouched, buried further under
   unevaluated (++)s)
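
(A quick GHCi check of the seq-on-a-list point - a cons cell is
already in weak head normal form, so seq has nothing left to force:)

    Prelude> (undefined : undefined) `seq` ()
    ()
    Prelude> (undefined :: [()]) `seq` ()
    *** Exception: Prelude.undefined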


type Rating = Word32 :*: Word8
type MovieRatings = IntMap (UArr Rating) -- UArr from uvector


A standard trick to keep Map values evaluated by construction
is to make the availability of keys dependent on their values, eg
(singleton key) $! value. That won't help with unionWith and
the appendUs, but it should allow the source string references
to be dropped early, as the singletons are constructed.
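
(A minimal sketch of that trick - the name strictSingleton is
invented here:)

import qualified Data.IntMap as IM

-- (IM.singleton k) $! v : the value is forced before the Map node is
-- built, so thunks holding on to the source string cannot survive in
-- the Map's values
strictSingleton :: Int -> v -> IM.IntMap v
strictSingleton k v = IM.singleton k $! v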

Hth,
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] possible memory leak in uvector 0.1.0.3

2009-03-03 Thread Claus Reinke

At first guess it sounds like you're holding onto too much, if not the
whole stream perhaps bits within each chunk. 


It is possible.

I split the string in lines, then map some functions on each line to 
parse the data, and finally calling toU, for converting to an UArr.


Just to make sure (code fragments or, better, reduced examples
would make it easier to see what the discussion is about): are you 
forcing the UArr to be constructed before putting it into the Map?


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Extensible and Modular Generics for theMasses: emgm-0.3

2009-03-03 Thread Claus Reinke

Due to a problem with the template-haskell package and Cabal, you
cannot cabal-install the emgm package with GHC 6.8. However, take
heart! EMGM works very well with GHC 6.8. You merely have to download
the tar.gz package from Hackage yourself, build it, and install it.


I thought I had seen you mention a workaround for that issue, but
if not, perhaps you might want to add your experience to

   http://hackage.haskell.org/trac/hackage/ticket/326
   Cabal should support Cabal-version-dependent Setup.hs

There's a sketched hack-around at the end, using Setup.hs only
to dispatch to separate Setup-version.hs files, to circumvent Cabal API
instabilities. Not nice, but perhaps better than bypassing Cabal
altogether? One would also have to pass parameters, and the
Process API has changed in the meantime.., which I why I still
think Cabal itself should simply be able to pick cabal-versioned 
configuration and setup files, if they exist.


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: Pickling a finite map (Binary + zlib) [was: [Haskell-cafe]Data.Binary poor read performance]

2009-02-24 Thread Claus Reinke

btw, i always thought that there should be a way to overcome any export
lists and go directly to module internals. limiting exports is a way
to protect the programmer from errors, not a security feature, and it should
be left to the programmer to decide when he doesn't need it. compilers
should just be able to check whether some module/library is imported
abstractly or with internals too. this will render useless all those
.Internals modules that we now need to add everywhere


You're not alone!-) This has been called "Open Implementation"
(OI, a precursor of aspect-oriented programming):


   http://www2.parc.com/csl/groups/sda/projects/oi/

They argue for an explicit auxiliary interface instead of full access 
to module internals. Since these same folks worked on meta-object 
protocols as well (at the meta-level, the boundaries can be bypassed 
entirely), that suggestion probably comes from experience. They do 
allow for the auxiliary interface to use meta-programming style 
features, though that depends on the language in question (in 
Haskell, type classes or type functions might be used instead,
but rewrite rules and Template Haskell are also available).

   Open Implementation Design Guidelines 
   http://www2.parc.com/csl/groups/sda/publications/papers/Kiczales-ICSE97/


is a short paper discussing a Set API/Open Implementation example.


I agree in principle, but GHC also uses that knowledge to optimize the
code better - if a function is exported it has to produce the most
polymorphic possible code for its type, if it isn't it can specialize
better... that sort of thing.


That refers to optimization in the provider module. As the OI
people argued, optimization in the client modules also needs to
be taken into account. If the default one-size-fits-all-uses
implementation behind the default API doesn't work well 
enough, there'll be a proliferation of library variants. If there 
is a way to fine-tune the implementation via an auxiliary API,
much of that can be avoided. 

In other words, instead of half a dozen Maps and a dozen 
Array variants, there'd just be one of each, but with auxiliary 
interfaces that would allow client code to choose and tune 
the most suitable implementations behind the main interfaces.


It isn't magic, though: if, say, even the auxiliary API doesn't
allow you to say "thanks, but I know the length already", you're
still stuck. But it does help to think about designing 2 APIs
(the default functionality one and the auxiliary fine-tuning one)
instead of offering only the choice of 1 API or full access to 
internals.
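
(In Haskell terms, the 2-API idea might look roughly like the
following - all names and the stubbed-out representation switch are
invented, purely for illustration:)

module OISketch (Set, empty, insert, member, UsageHint(..), withHint) where

import qualified Data.Set as S

-- default functionality API, representation hidden from clients
newtype Set a = Set (S.Set a)

empty :: Set a
empty = Set S.empty

insert :: Ord a => a -> Set a -> Set a
insert x (Set s) = Set (S.insert x s)

member :: Ord a => a -> Set a -> Bool
member x (Set s) = S.member x s

-- auxiliary fine-tuning API: clients state their usage patterns; a
-- real library would select among representations here, this stub
-- only records the idea of a second, implementation-oriented interface
data UsageHint = MostlyLookups | MostlyInserts | KnownSize Int

withHint :: UsageHint -> Set a -> Set a
withHint _ s = s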


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hoogle and Network.Socket

2009-02-22 Thread Claus Reinke

sitting in a pub with some beer having a platform war). Martijn's
thoughts of +windows, +unix, +os is exactly right, I'm happy to let
users say "oh, please show me these packages", but there are
trade-offs in Hoogle design. If someone has some clear viewpoint on
the answers, I'd love to hear them. The three problems are:

1) What packages should Hoogle search by default? All of hackage? The
base libraries? Only the packages a user has installed? Only packages
that make it in to the Haskell Platform?


Why not make that configurable, similar to trac's custom queries,
defining several default configurations with short names (+windows,
+hackage, +hp, ..), rather than trying to define one default config?

The kinds of configuration option available could be taken from
information available to hackage/cabal (which presumably will
specify/show platforms at some point).

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[4]: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Claus Reinke

Turning this into a ticket with associated test will:


but why you think that this is untypical and needs a ticket? ;)


Because generally ghc is doing a good-enough job. And it is doing
that because long ago, ghc hq's war cry was "if ghc generates code
that is slower than any other haskell implementation, it is a bug, and
you should file a ticket".

The failure of "avoid success at all cost" has meant shifting priorities,
and those priorities tend to imply that alternative implementations
can't keep up (not over the full range of ghc's offerings). So there is
less "internal" competition, more "other stuff" on todo lists, and good
enough just has to be good enough most of the time. 

But when it isn't, it is not just the people at ghc hq who are still listening. And 
among all the other good stuff they keep adding, they also keep 
tweaking ghc's basics, in such a way that what would be hard to 
do now, may be easier to implement in future ghc versions. And for 
that, they need input about what they should focus on. And the 
advertised way of providing that input is the ticket tracker.


For tickets, small examples are great - they often highlight aspects
that are still present in real code, but are hard to see there (let alone
reducing complex code to small test cases that help to fix those
cases). In the particular example of unrolling, iirc, the issue was 
that ghc's internal representation does not easily lend itself to adding
some counter that would keep the very modular optimizer from
applying recursive unfoldings uselessly. Once that fundamental
problem is fixed, I am optimistic that this useful optimization will
be looked at again - if there is a ticket to remind everyone.

And if -as I suspect- lots of ghc users find themselves doing 
loop unrolling/partial recursion unfoldings by hand (worker/wrapper
for recursive functions is just such an example), they will up
the priority on that ticket. You're right that ghc hq can't be
everywhere, but they aren't the only ones who are invited to
look at those tickets. And every time someone starts on ghc
hacking for some unrelated purpose, they need a small project
to start on - well-defined tickets have a chance there.

I'm not at all happy with Haskell optimists turning evangelists, 
but neither is it productive for Haskell pessimists to spread
their frustration. There are useful ways for both sides to 
contribute to the Haskell world; can we focus on those ways?


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hoogle and Network.Socket

2009-02-20 Thread Claus Reinke

> 1) Show all the functions (when the number is low), but place platform
> specific functions under separate headers: "Windows",
> "Linux/BSD/POSIX", "OS X", etc.

If a function isn't available on all OS's then all Hoogle would be
encouraging you to do is break compatibility and stop me from using
your software.


How about showing all functions, but indicating available platforms?

That way, we won't go looking for functions that work on "my platform",
but for functions that work on the most platforms. And if we're looking
at a function that isn't portable, we'll know immediately - unless I'm
explicitly in one-shot/one-platform mode, portability (or lack of) is
rather important information for me. It would be ok to filter that info
in some way (character codes, or green symbol if it is available on
major platforms, red otherwise, with more detailed info at package level).

One of the most appealing things about Haskell, usually more common
with scripting languages, is that it is possible to write portable code in
it. So we don't have to make extra efforts to "code for Windows" or
"code for Unix" - we just have to make the small effort to ensure that
we are not coding for any particular platform. The less platform-specific
code, the less work when a piece of software starts being successful.

It would be very nice to have this info at the package/hackage level,
too, including transitive dependency chasing. That way, package
authors will not accidentally depend on packages that have non-portable
dependencies, but have the chance to prefere portable alternatives
whenever possible (unix vs unix-compat, etc). And package users
will see where "cross-platform" was sacrificed.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Re: speed: ghc vs gcc

2009-02-20 Thread Claus Reinke

Concrete examples always help, thanks.

Turning this into a ticket with associated test will:

- enable others to find and repeat the test when this thread is long gone,
   to see whether any other ghc changes have helped in any way
- enable documentation of what exactly the issue is (why is it slow?)
- enable others to vote for having this issue addressed
- help to keep the various performance issues separate (I seem to
   recall that you and others had found some other infelicities in 
   ghc-generated code, and lots of other useful bits and pieces over
   the years, not all of which have been resolved or made into tickets?)

Without ticket, such examples will be snowed under here in no
time. With ticket, it will take a little longer!-)


afaik, ghc can be compared with 20-years old C compilers. it uses
registers for performing tight loops but has very simple register
allocation procedure. also it doesn't unroll loops


I've occasionally run into situations where it would have been nice
to have loop unrolling, or more generally, partial unfolding of recursive 
functions. But I've got the feeling that this isn't the whole story here,

or is it?

In simple enough situations, one can roll one's own loop unrolling;),
somewhat like shown below (worker/wrapper split to bring the function
parameter representing the loop body into scope, then template haskell 
to unroll its applications syntactically, then optimization by transformation
to get rid of the extra code). It is all rather more complicated than one
would like it to be, what with TH scoping restrictions and all, but perhaps 
a library of self-unrolling loop combinators along these lines might help, as 
a workaround until ghc does its own unrolling.


Claus

{-# LANGUAGE TemplateHaskell #-}
module Apply where
import Language.Haskell.TH.Syntax
apply i bound | i < bound  = [| \f x -> $(apply (i+1) bound) f (f i x) |]
              | otherwise  = [| \f x -> x |]

{-# LANGUAGE CPP #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE BangPatterns #-}
{-# OPTIONS_GHC -DN=8 -ddump-splices #-}
module Main(main) where
import Apply
main = print $ loopW 1 (10^9) body 0

{-# INLINE loopW #-}
loopW :: Int -> Int -> (Int -> Int -> Int) -> Int -> Int
loopW i max body acc = loop i acc
 where
 loop :: Int -> Int -> Int
 loop !i !acc | i+N<=max  = loop (i+N) ($(apply (0::Int) N) (\j acc->body (i+j) acc) acc)
 {-
 loop !i !acc | i+8<=max  = loop (i+8) ( body (i+7)
   $ body (i+6)
   $ body (i+5)
   $ body (i+4)
   $ body (i+3)
   $ body (i+2)
   $ body (i+1)
   $ body i acc)
 -}
 loop !i !acc | i<=max    = loop (i+1) (body i acc)
              | otherwise = acc

body :: Int -> Int -> Int
body !i !acc = i+acc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re[4]: [Haskell] Google Summer of Code

2009-02-12 Thread Claus Reinke

> Check out what GHC is doing these days, and come back with an analysis
> of what still needs to be improved.  We can't wait to hear!
can you point me to any haskell code that is as fast as it's C
equivalent?

You should do your own benchmarking!


Please, folks! This is hardly helpful.

It isn't difficult to find examples of fast Haskell code by Don, nor
is it difficult to find benchmarking and analyses of slow GHC-generated 
code by Bulat. But expecting "your" readers to search for such fragments
instead of providing direct references weakens your cases. In fact, I'm
not aware of any documents that would make a coherent argument
either way: 

- most of Don's examples focus on small bits of code where (a) it is still 
   possible to associate core output with parts of the input (making the dire
   state of tool support less of an issue) and (b) few real-life awkwardnesses 
   interfere with the optimization of pure algorithms (by which I mean code 
   artifacts such as error handling etc confusing GHC's optimization phases).


- most of Bulat's analyses refer to the via-C path, which used to be the
   fastest path, but appears to have an uncertain future (not that I'm aware
   of any claims, let alone documentation showing that the now standard 
   compilation path in GHC is at least as good as via-C used to be).


That summary is of course only based on the fragments I have chosen
to find - if you want a fairer summary, you need to provide concrete and
up-to-date references!-) And if you don't want to be asked to repeat your
argument every time this thread comes up, you need to present it in easily
referenced form outside this mailing list. In particular, I would suggest that 

1. performance issues are recorded as GHC trac tickets, with test cases, 
   (there is an explicit ticket type for this) so that we could check easily
   whether any general GHC improvements have had any effect on the
   known bad cases. Note that performance isn't a high-priority issue
   per se at GHC HQ (due to manpower), unless it is a show-stopper
   or many users report it as important to them. Nevertheless, the issues
   should be recorded, so that we know where we stand.

2. performance wisdom is recorded on the Haskell/GHC wikis, or in
   tutorials/papers. There is a lot of information on small tricks already,
   but very little on what to do when those small tricks no longer suffice
   (eg, after one has found out that profiling lies, that core isn't easy to
   link back to source, when the code in question is not just pure and
   simple algorithm but riddled with error handling imported from libraries).

Personally, I find the situation wrt performance of GHC-generated code
very unsatisfactory, not because it isn't good (more often than not, it is),
but because there is very little help when it isn't. And the first step towards
improving that situation (eg, with tools helping to analyse core, or better
understood profiling tools, or support for by-hand application of optimization
transformation/undo/modify/retarget/replay..) is to acknowledge that there 
is a problem.


Pointing out that some experienced hackers have been through this,
and are now able to make do for themselves, is a non-solution (unless 
you want to secure consulting fees;-). Just as declarative languages allow 
us to ignore operational details until/unless they really are needed, offering
a clear path on how to specify declarative aspects of our code, so there
should be a clear path on how to specify operational details if that becomes
necessary. A path that any mildly experienced Haskeller should be
able to follow through good documentation, examples, and tools.

Claus

PS. Talking about "high-performance computing" seems slightly
   misleading in this context, if the only goal is to have code as fast
   as typical C. From what I've heard from HPC folks, generic
   compilers or generic C code are not tough benchmarks from 
   their point of view, platform vendor compilers and platform-tuned
   Fortran code are. Of course, that info is second-hand, and 
   somewhat dated, but I think it would help to state our goals 
   explicitly, to avoid confusion or unreasonable expectations.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkitsdoesn't make any sense

2009-02-01 Thread Claus Reinke

A common standard would be useful, but OpenVG doesn't look
like "ready soon". On the declarative side, there's also SVG..


Seems I've got to qualify that remark somewhat - apart from some
commercial/closed-source implementations, there is also an open
source ANSI C one (implemented on top of OpenGL, so it isn't
quite the same as direct OpenGL hardware support, but it offers
the same API and might be a good target for a Haskell binding..):

http://ivanleben.blogspot.com/2007/07/shivavg-open-source-ansi-c-openvg.html

Also, an OpenVG backend for Cairo, which seems to have so 
many backends that it might be the safest user-level choice?


http://lists.cairographics.org/archives/cairo/2008-January/012833.html
http://lists.cairographics.org/archives/cairo/2008-January/012840.html

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-01-31 Thread Claus Reinke

I should have mentioned that my tests have been done only on Windows and
OSX.
I guess I would have to try on a system that supports XRender to compare.

Unfortunately, the target audience of our application are mostly windows and
OSX users, so although it would be great that Cairo performs fast on unix
variants, it would be of little value to us, unless of course XRender also
runs on Windows/OSX somehow :)


from the glitz page I refered to:

 The semantics of glitz are designed to precisely match the
 specification of the X Render extension. Glitz does not only implement
 X Render features like component alpha and image transformations, but
 also support for additional features like convolution filters and
 color gradients, which are not currently part of the X Render
 specification. 


 The performance and capabilities of glitz are much dependent on
 graphics hardware. Glitz does not in any way handle software
 fall-backs when graphics hardware is insufficient. However, glitz will
 report if any requested operation cannot be carried out by graphics
 hardware, hence making a higher level software layer responsible for
 appropriate actions. 


 Glitz can be used as a stand-alone layer above OpenGL but is also
 designed to act as a backend for cairo, providing it with OpenGL
 accelerated output. 


That's why it would be good to know whether glitz is still supported
(and compatible with current cairo): it, or something like it, would
provide direct access to hardware-accelerated cairo functionality on
all OpenGL platforms (without the current Haskell-land dependency 
of cairo on Gtk2hs; though software fallbacks for missing hardware 
support would seem essential).


Claus

|There seem to be some older references to an OpenGL backend for Cairo
|
|http://www.cairographics.org/OpenGL/
|http://www.freedesktop.org/wiki/Software/glitz
|http://www.usenix.org/events/usenix04/tech/freenix/nilsson.html


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense

2009-01-31 Thread Claus Reinke

When I said Cairo felt rather slow, I was comparing it again fully hardware
accelerated solutions.
..
IMO the future is fully hardware accelerated rendering on the GPU, like OpenVG.
It will take a while before it is common to see glyphs being rendered on the
GPU, but I'm sure this is all doable.


There seem to be some older references to an OpenGL backend for Cairo

   http://www.cairographics.org/OpenGL/
   http://www.freedesktop.org/wiki/Software/glitz
   http://www.usenix.org/events/usenix04/tech/freenix/nilsson.html

aiming for hardware acceleration. There doesn't seem to be anything
newer than 2006, though, which is puzzling, as the OpenGL backend
is still advertised - is it still supported?

A common standard would be useful, but OpenVG doesn't look
like "ready soon". On the declarative side, there's also SVG..

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laws and partial values

2009-01-25 Thread Claus Reinke

Yes.  If you've got a set of terminating computations, and it has
multiple distinct elements, it generally doesn't *have* a least element.
The P in CPO stands for Partial.


and this concern does not apply to ()  .


Btw, what would a non-lifted () be used for?

If the least element _|_ is known to be the only element (),
there isn't anything to do; indeed, there isn't even anything
to store at runtime, as the locations of those statically fully
known elements are also known statically, from the types.


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


library documentation comments and contributions (was [Haskell-cafe] Comments from OCaml Hacker Brian Hurt)

2009-01-16 Thread Claus Reinke

2) the Haskell docs _don't_ do good enough a job at giving intuition
for what math terms mean

If we fix #2, then #1 is no longer a problem, yes?

For you folks who work on GHC, is it acceptable to open tickets for
poor documentation of modules in base?  I think leaving the
documentation to the tragedy of the commons isn't the best move, but
if even a few of us could remember to open tickets when new
Haskell'ers complain about something being confusing then it could be
on _someone's_ docket.


I can't find the thread at the moment, but this has been discussed
before, and my recollection is that wikis were to be used to accumulate
documentation comments and updates (there also was some discussion
about the best format for comment patches, but getting content was
thought to be more important). So, there would be a subset of the
Haskell wiki for the base library, and package-specific wiki locations
for their documentations.

Haddock already seems to provide the necessary support:

   http://www.haskell.org/haddock/doc/html/invoking.html

   --comments-base=URL , --comments-module=URL , --comments-entity=URL 

   Include links to pages where readers may comment on the documentation. 
   This feature would typically be used in conjunction with a Wiki system.


   Use the --comments-base option to add a user comments link in the header 
   bar of the contents and index pages. Use the --comments-module to add a 
   user comments link in the header bar of each module page. Use the 
   --comments-entity option to add a comments link next to the documentation 
   for every value and type in each module. 

   In each case URL is the base URL where the corresponding comments page 
   can be found. For the per-module and per-entity URLs the same substitutions 
   are made as with the --source-module and --source-entity options above.


   For example, if you want to link the contents page to a wiki page, and every 
   module to subpages, you would say haddock --comments-base=url 
   --comments-module=url/%M


So it seems it is only a question of actually using these options with
suitably prepared per-package documentation wikis, and improving
the documentation or asking for clarifications would be almost as easy
as emailing here!-) And if everyone who stumbles over a documentation
problem puts the solution on the wiki, documentation should improve
quickly (there is still the issue of selecting wiki improvement suggestions
for inclusion in the "real" documentation).

Does anyone know why these options are not in use already?

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Names in Haskell (Was: [Haskell-cafe] Comments from OCamlHacker Brian Hurt)

2009-01-15 Thread Claus Reinke
What I don't understand is why Monoid and Monad are objectionable, while 
Hash, Vector, Boolean, and Integer are (presumably) not objectionable. 
They all appear equally technical to me.


It's simply that other languages' libraries already have those terms in
them. So there is nothing new to learn.


1. Imagine learning English. Easy, everyone here has been there, done that.
2. Imagine learning English as a second language. Slightly different:
   a. you can use an English-native language dictionary.
      won't get you all the way, but is fairly essential for beginners.
   b. you can use an English-English dictionary.
      once you know enough of the language to "bootstrap", this is
      preferred over (a), as it helps to get familiar with the language,
      and to improve your autonomy every time you need to look
      something up (instead of clinging to the crutches of translation).
   c. you can use an English-GOB (General Ontology Base) dictionary.
      languages are just syntax. understand the principles behind them!
      can be really useful, but probably not when you're still learning
      your second language, knowing neither English nor GOB. once
      you've seen at least two examples of everything, and know enough
      to read about generalizations, the situation changes.

Does this help?-)
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANN: HLint 1.0

2008-12-27 Thread Claus Reinke

You were asking about getting the output of ':show modules' into a
variable 'x', so that you can process it further. ':redir x :show modules'
should do just that. There is another example command for implementing
':edit' this way (by now a native ghci command).


I think I'm seeing your meaning. So that brings me up to this:

let hlint _ = return $ unlines [":redir hlintvar1 :show modules", "let
hlintvar2 = map (fst . break (==',') . drop 2 . snd . break (== '('))
$ lines hlintvar1", ":! hlint (concat $ intersperse \" \" hlintvar2"]
:def hlint hlint

This doesn't work. The issue is that :! is weird; for it to work, one
needs to pass each argument as a separate string, and it won't evaluate
a variable.


It isn't just ':!', quoting/variable interpretation is generally rather
uncomfortable in GHCi scripting (so much so that I originally submitted
output redirection as a patch before figuring out that it could be done 
without patching GHCi - that surprise find was the motivation for posting
my findings as an email). Have you tried reading the mini tutorial that 
I keep mentioning and which the "using GHCi" page is pointing to? 
Here's the direct link:


http://www.haskell.org/pipermail/haskell-cafe/2007-September/032260.html

The discussion is rather brief, but that tutorial has several examples
that need to work around issues like this, ranging from simple but
tedious construct-the-command-string to extra levels of ':cmd' in
order to get extra levels of interpretation (when you need to construct
a command string from a variable that will be bound via a constructed
command string (see the definitions of ':find', ':le' or ':b(rowse)' - the
latter is an example of using the info from ':show modules')).
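
(For reference, a trivial ':def' where no such quoting gymnastics are
needed, because the command string is built directly from the macro's
argument:)

    Prelude> :def greet (\arg -> return ("putStrLn \"hello, " ++ arg ++ "\""))
    Prelude> :greet world
    hello, world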


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANN: HLint 1.0

2008-12-23 Thread Claus Reinke

Well, sort of. Ok, we can parse that. Let's assume a variable x holds
the output of :show modules as a String. We call lines on it, then map
words on it, do a !! 2 on it, and we get ["Util.hs,", "Recorder.hs,",
"Game.hs,", "Monadius.hs,", "Demo.hs,"]. Chuck in a map (filter (\=
',')), and we get a good list. We can turn the list into a string
suitable for hlint with a quick unwords.

So our long sought after command becomes ':def hoogle (\_ -> return $
":! " ++ (unwords $ map (filter (\= ',')) $ (map words $ lines x) !!
2))'. But wait, how do we get 'x'? How do we call :show modules inside
a Haskell expression? 



The first url includes a link to a .ghci mini-tutorial (section 4) that,
among other things, implements
   :redir <var> <cmd>   -- execute <cmd>, redirecting stdout to <var>



Perhaps my cold has fogged my head too much, but I'm not sure how
:redir would help. I could do :redir var "hlint .", but that's as
unsatisfactory as :! "hlint ."


You were asking about getting the output of ':show modules' into a
variable 'x', so that you can process it further. ':redir x :show modules'
should do just that. There is another example command for implementing
':edit' this way (by now a native ghci command).

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] ANN: HLint 1.0

2008-12-22 Thread Claus Reinke

Well, sort of. Ok, we can parse that. Let's assume a variable x holds
the output of :show modules as a String. We call lines on it, then map
words on it, do a !! 2 on it, and we get ["Util.hs,", "Recorder.hs,",
"Game.hs,", "Monadius.hs,", "Demo.hs,"]. Chuck in a map (filter (\=
',')), and we get a good list. We can turn the list into a string
suitable for hlint with a quick unwords.

So our long sought after command becomes ':def hoogle (\_ -> return $
":! " ++ (unwords $ map (filter (\= ',')) $ (map words $ lines x) !!
2))'. But wait, how do we get 'x'? How do we call :show modules inside
a Haskell expression? I have carefully looked over
http://haskell.org/haskellwiki/GHC/GHCi#Using_GHCi and
http://www.haskell.org/ghc/docs/latest/html/users_guide/ghci-commands.html
and my conclusion is that you can't. You can't do a let x = :show
modules, there is no function which will take ":show modules", and so
on. :functions can accept Haskell output, but it's a one-way barrier.
It's no good writing Haskell functions which need information from the
:functions.


The first url includes a link to a .ghci mini-tutorial (section 4) that, 
among other things, implements 


   :redir <var> <cmd>   -- execute <cmd>, redirecting stdout to <var>

Happy Holidays!-)
Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Missing comment highlighting in vim syntax script

2008-12-14 Thread Claus Reinke

The Haskell syntax script for vim mentions this mailing list as the
maintainer. Perhaps one of you could fix this bug.
Comments on the same line as import declarations don't get highlighted:


I don't know how this list-as-maintainer idea is going to work,
but the fix seems straightforward: find the line in 
$VIMRUNTIME/syntax/haskell.vim that says


   syn match hsImport  "\<import\>.*"he=s+6 contains=hsImportMod

and change it to

   syn match hsImport  "\<import\>.*"he=s+6 contains=hsImportMod,hsLineComment,hsBlockComment

See :help syn-contains. 


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with haddock 2.3.0 (again)

2008-12-11 Thread Claus Reinke

Still, you might find something useful in the discussion for this ticket:

   Cabal should support Cabal-version-dependent Setup.hs
   http://hackage.haskell.org/trac/hackage/ticket/326


or, more directly:

http://www.haskell.org/pipermail/cabal-devel/2008-August/003576.html

so, if you add the haddock package to your build-depends, that
might give you a MIN_VERSION_haddock(_,_,_) already, without
setup fiddling - Duncan? Then again, haddock depends on ghc and
specific versions of other packages, so you might not want to depend 
on haddock..
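
(If the macro were available - for library code, via haddock in
build-depends as above - the conditional would look roughly like this
sketch:)

{-# LANGUAGE CPP #-}
module HaddockCompat where

#if MIN_VERSION_haddock(2,0,0)
-- code path assuming haddock-2 behaviour
compatNote :: String
compatNote = "haddock >= 2"
#else
-- fallback for older haddocks
compatNote :: String
compatNote = "haddock < 2"
#endif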


Looks like one of those frustrating corners of packaging

- haddock wants to be up to date wrt ghc, so claims not to need
   a macro; but it isn't complete, ghc keeps developing, so the macro
   that shouldn't be needed in theory would be useful in practice

- cabal supports package version macros, but they aren't available
   everywhere, due to the way they are implemented

- tools like haddock aren't tracked by cabal - it would be nice if
   every tool executable also installed a tool package with version/
   path information (similar to ghc-paths for ghc), as that would be
   tracked by cabal

- package version dependent settings in .cabal/Setup.hs would be
   useful; apart from the special case of cabal-version-dependent
   code in Setup.hs, perhaps something like this, to set options
   depending on package availability

    if flag(haddock2)
       build-depends:   haddock > 2
       haddock-options: -D__HADDOCK2__

   only that options fields are limited to a predefined set, not including
   haddock? Shouldn't options fields be available for every use of
   .cabal, like haddocking?

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with haddock 2.3.0 (again)

2008-12-11 Thread Claus Reinke

I would be happy to work around it if I could, but I can't. As far as I can
tell, I can't pass any flags to Haddock via the Cabal file. I would love to
tell Hackage to run Haddock like so, "cabal haddock
--haddock-option=--optghc=-D__HADDOCK__", but I don't know how.


Let's suppose that I do actually want to define __HADDOCK__ for my library.
Can I do this with a user-defined hook using the Cabal library?


You might want to define it conditionally, depending on version.

Cabal supports package-version macros, and haddock installs both
a package (the paths are wrong in the package description, but you
can find the version number there) and an executable for itself, but the 
package version macros are not available in Setup.hs.


Still, you might find something useful in the discussion for this ticket:

   Cabal should support Cabal-version-dependent Setup.hs
   http://hackage.haskell.org/trac/hackage/ticket/326

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with haddock 2.3.0 (again)

2008-12-11 Thread Claus Reinke

Let's see what David thinks. If he thinks is possible to fix these kinds
of things where haddock is not covering the whole GHC AST (at least to
the degree where it can ignore bits it does not understand). If that's
not realistic in the short term then we probably should introduce some
haddock version macro so that people can workaround it.


There was some related discussion here:

http://trac.haskell.org/haddock/ticket/48

Claus
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Dr Dobbs: Time to get good at functionalprogramming

2008-12-08 Thread Claus Reinke

 For this purpose, the only thing better is if we could somehow
 get them to mention Microsoft everywhere they mention Haskell.
 Any actual explaining would just get in the way :)


Doesn't quite work without explaining, because one would have
to be very careful not to mis-represent financial support by some
as endorsement by the whole; but if one were to ask those who
have contributed financially to Haskell development in some form
or other for their permissions, one could make quite an interesting 
little list (not to mention the vastly larger list of people who have
contributed their time, efforts, and ideas).

There have been occasional discussions of language-specific or
FP-in-general industry consortiums. Perhaps the simpler form
of "Haskell sponsorship" with mutual bragging rights for haskell.org
and sponsor could be a seed corn for such organisations. These
days, being associated with FP is no longer something that needs
explaining, let alone defending, so sponsors could get something
out of their contribution, such as cute little logos, tea-cups and 
t-shirts (apart from the prime motive of better Haskell;-). 

Imagine all the students that universities could attract by being a 
"Haskell campus", not just turning out "Haskell engineers", but 
"supporting Haskell development" by contributing staff time. 
Imagine all the competent developers and major customers 
companies could attract by being a "Haskell sponsor", being 
actively involved in "Haskell frontline development&research".

Imagine all those pretty Haskell logos on all those webpages.

oops, wrong universe again;-)
Claus

PS. Of course, if you do go down that route, your next Haskell
   job in industry might be representing your company on a 
   committee to decide the internationalization of an XML-based 
   interchange format for unicode lambda characters, with the

   usual sub-committees for upper-, lower-, and othercase
   variants, each with at least two spin-off standards representing
   minority preferences or implementors' we-just-did-it decisions. 
   So be careful what you wish for.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] GHC 6.10 and OpenGL

2008-12-02 Thread Claus Reinke

I believe these errors are caused by the wrong calling convention
being used in the Haskell bindings. This part in the configure script
tests the build (host) platform:

case "$host" in
*-mingw32) CALLCONV=stdcall ;;
*)  CALLCONV=ccall ;;
esac

Since it doesn't test for Cygwin, you end up with the calling
convention being ccall, which leads to the linker errors because of
associated name mangling (you would also see run-time crashes if you
managed to somehow link your program).


Ah, that makes sense! And, indeed, for the archives, this workaround 
fixes the issue (even when building from cygwin bash):


    cabal install opengl glut --configure-option="--host=i386-unknown-mingw32" --reinstall

Thanks, now FunWorlds can ride again!-)

I suppose we could document this on the OpenGL wiki, but even
better would be a fix to OpenGL's configure (apart from checking 
the build environment (mingw) rather than the host (cygwin here), it 
should ensure that the build uses exactly what configure has tested 
successfully) or .cabal (always setting the --host configure option 
if on windows)?
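
(An untested sketch of the configure-side half of such a fix,
extending the case statement quoted above:)

    case "$host" in
    *-mingw32|*-cygwin*) CALLCONV=stdcall ;;
    *)                   CALLCONV=ccall ;;
    esac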


[cc-ed to hopengl list, since this seems to be an OpenGL-specific 
issue, not a general Cabal one; Cabal++ :-]


Claus

PS of course, a high-level interpretation of the linker errors, by
   cabal or by ghc, would also help in such cases (linking object
   code is a ghc-internal detail, we work with Haskell code!-).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] GHC 6.10 and OpenGL

2008-12-02 Thread Claus Reinke

ghc --make -package OpenGL -package GLUT something.hs
I get nothing but undefined references in the linking phase.
C:\Program 
Files\Haskell\GLUT-2.1.1.2\ghc-6.11.20081202/libHSGLUT-2.1.1.2.a(Window.o):fake:
(.text+0x15): undefined reference to `glutWarpPointer'


Does ghc-pkg describe OpenGL tell you that the package needs some gl C
lib? Eg "ld-options: -lGLU -lGL -lm"


Yes:

    $ ghc-pkg field OpenGL import-dirs,library-dirs,hs-libraries,include-dirs,includes,cc-options,ld-options

   import-dirs: "C:\\Program Files\\Haskell\\OpenGL-2.2.1.1\\ghc-6.11.20081202"
   library-dirs: "C:\\Program Files\\Haskell\\OpenGL-2.2.1.1\\ghc-6.11.20081202"
   hs-libraries: HSOpenGL-2.2.1.1
    include-dirs: "C:\\Program Files\\Haskell\\OpenGL-2.2.1.1\\ghc-6.11.20081202\\include"
   includes: HsOpenGL.h
   cc-options:
   ld-options: -lglu32 -lopengl32

    $ ghc-pkg field GLUT import-dirs,library-dirs,hs-libraries,include-dirs,includes,cc-options,ld-options

   import-dirs: "C:\\Program Files\\Haskell\\GLUT-2.1.1.2\\ghc-6.11.20081202"
   library-dirs: "C:\\Program Files\\Haskell\\GLUT-2.1.1.2\\ghc-6.11.20081202"
   hs-libraries: HSGLUT-2.1.1.2
    include-dirs: "C:\\Program Files\\Haskell\\GLUT-2.1.1.2\\ghc-6.11.20081202\\include"
   includes: HsGLUT.h
   cc-options:
   ld-options: -lglut32 -lglu32 -lopengl32


When you ghc --make -v, is it actually linking to a gl C lib?


It looks that way:

*** Linker: (slightly reformatted)
C:\ghc\ghc-6.11.20081202\gcc

-BC:\ghc\ghc-6.11.20081202\gcc-lib/

-IC:\ghc\ghc-6.11.20081202\include/mingw/ -v -o Lift.exe
-DDONT_WANT_WIN32_DLL_SUPPORT obj\Main.o obj\FunWorlds.o
obj\Scene.o obj\Behavior.o C:\DOCUME~1\cr3\LOCALS~1\Temp\/ghc3144_0/ghc3144_0.o

-LC:\Program Files\Haskell\GLUT-2.1.1.2\ghc-6.11.20081202

-LC:\Program Files\Haskell\containers-0.2.0.0\ghc-6.11.20081202

-LC:\Program Files\Haskell\OpenGL-2.2.1.1\ghc-6.11.20081202

-LC:\ghc\ghc-6.11.20081202\haskell98-1.0.1.0 
-LC:\ghc\ghc-6.11.20081202\random-1.0.0.1
-LC:\ghc\ghc-6.11.20081202\process-1.0.1.1 
-LC:\ghc\ghc-6.11.20081202\directory-1.0.0.2
-LC:\ghc\ghc-6.11.20081202\old-time-1.0.0.1 
-LC:\ghc\ghc-6.11.20081202\old-locale-1.0.0.1
-LC:\ghc\ghc-6.11.20081202\filepath-1.1.0.1 
-LC:\ghc\ghc-6.11.20081202\Win32-2.2.0.0
-LC:\ghc\ghc-6.11.20081202\bytestring-0.9.1.4
-LC:\Program Files\Haskell\array-0.2.0.0\ghc-6.11.20081202 
-LC:\ghc\ghc-6.11.20081202\base-3.0.3.0
-LC:\ghc\ghc-6.11.20081202\syb-0.1.0.0 -LC:\ghc\ghc-6.11.20081202\base-4.0.0.0
-LC:\ghc\ghc-6.11.20081202\integer-0.1.0.0 
-LC:\ghc\ghc-6.11.20081202\ghc-prim-0.1.0.0
-LC:\ghc\ghc-6.11.20081202 -LC:\ghc\ghc-6.11.20081202/gcc-lib

-lHSGLUT-2.1.1.2 -lglut32 -lglu32 -lopengl32 -lHScontainers-0.2.0.0
-lHSOpenGL-2.2.1.1 -lglu32 -lopengl32

-lHShaskell98-1.0.1.0 -lHSrandom-1.0.0.1 -lHSprocess-1.0.1.1 
-lHSdirectory-1.0.0.2
-lHSold-time-1.0.0.1 -lHSold-locale-1.0.0.1 -lHSfilepath-1.1.0.1 
-lHSWin32-2.2.0.0
-luser32 -lgdi32 -lwinmm -lkernel32 -ladvapi32 -lHSbytestring-0.9.1.4 
-lHSarray-0.2.0.0
-lHSbase-3.0.3.0 -lHSsyb-0.1.0.0 -lHSbase-4.0.0.0 -lwsock32 -lmsvcrt -lkernel32
-luser32 -lshell32 -lHSinteger-0.1.0.0 -lHSghc-prim-0.1.0.0 -lHSrts -lm -lgmp 
-lwsock32
-u _ghczmprim_GHCziTypes_Izh_static_info -u 
_ghczmprim_GHCziTypes_Czh_static_info
-u _ghczmprim_GHCziTypes_Fzh_static_info -u 
_ghczmprim_GHCziTypes_Dzh_static_info
-u _base_GHCziPtr_Ptr_static_info -u _base_GHCziWord_Wzh_static_info
-u _base_GHCziInt_I8zh_static_info -u _base_GHCziInt_I16zh_static_info
-u _base_GHCziInt_I32zh_static_info -u _base_GHCziInt_I64zh_static_info
-u _base_GHCziWord_W8zh_static_info -u _base_GHCziWord_W16zh_static_info
-u _base_GHCziWord_W32zh_static_info -u _base_GHCziWord_W64zh_static_info
-u _base_GHCziStable_StablePtr_static_info -u _ghczmprim_GHCziTypes_Izh_con_info
-u _ghczmprim_GHCziTypes_Czh_con_info -u _ghczmprim_GHCziTypes_Fzh_con_info
-u _ghczmprim_GHCziTypes_Dzh_con_info -u _base_GHCziPtr_Ptr_con_info
-u _base_GHCziPtr_FunPtr_con_info -u _base_GHCziStable_StablePtr_con_info
-u _ghczmprim_GHCziBool_False_closure -u _ghczmprim_GHCziBool_True_closure
-u _base_GHCziPack_unpackCString_closure -u 
_base_GHCziIOBase_stackOverflow_closure
-u _base_GHCziIOBase_heapOverflow_closure
-u _base_ControlziExceptionziBase_nonTermination_closure
-u _base_GHCziIOBase_blockedOnDeadMVar_closure
-u _base_GHCziIOBase_blockedIndefinitely_closure
-u _base_ControlziExceptionziBase_nestedAtomically_closure
-u _base_GHCziWeak_runFinalizzerBatch_closure -u 
_base_GHCziTopHandler_runIO_closure
-u _base_GHCziTopHandler_runNonIO_closure
-u _base_GHCziConc_ensureIOManagerIsRunning_closure -u 
_base_GHCziConc_runSparks_closure
-lHSffi

$ ls C:/ghc/ghc-6.11.20081202/gcc-lib/*gl*
C:/ghc/ghc-6.11.20081202/gcc-lib/CRT_noglob.o  
C:/ghc/ghc-6.11.20081202/gcc-lib/libglut.a
C:/ghc/ghc-6.11.20081202/gcc-lib/libglaux.a
C:/ghc/ghc-6.11.20081202/gcc-lib/libglut32.a
C:/ghc/ghc-6.11.20081202/gcc-lib/libglu32.a
C:/ghc/ghc-6.11.20081202/gcc-lib/libopengl32.a


Btw, is the format for the cabal config file documented somewhere?

Re: [Haskell-cafe] Re: [Haskell] GHC 6.10 and OpenGL

2008-12-02 Thread Claus Reinke

I finally got round to trying cabal-install with OpenGL/GLUT,
using a freshly built ghc head, a cygwin bash, and


> http://haskell.org/~duncan/cabal/cabal.exe


   cabal-install version 0.6.0
   using version 1.6.0.1 of the Cabal library


> Yes, building it requires mingw/msys, but with it cabal install opengl
> really does work (I've tried it).


   cabal.exe opengl glut

The first oddity was in the dependency resolution phase, where cabal.exe
insists on installing array-0.2.0.0 and containers-0.2.0.0, which are already
installed. Ignoring that, the build went through without any apparent hitch.
Here is the relevant part of the build log:

   package: GLUT-2.1.1.2
   os: windows
   arch: i386
   compiler: ghc-6.11.20081202
   client: cabal-install-0.6.0
   flags: split-base
   dependencies: OpenGL-2.2.1.1 array-0.2.0.0 base-3.0.3.0
 containers-0.2.0.0
   install-outcome: InstallOk
   docs-outcome: NotTried
   tests-outcome: NotTried

   package: OpenGL-2.2.1.1
   os: windows
   arch: i386
   compiler: ghc-6.11.20081202
   client: cabal-install-0.6.0
   flags:
   dependencies: base-3.0.3.0
   install-outcome: InstallOk
   docs-outcome: NotTried
   tests-outcome: NotTried

   package: array-0.2.0.0
   os: windows
   arch: i386
   compiler: ghc-6.11.20081202
   client: cabal-install-0.6.0
   flags:
   dependencies: base-4.0.0.0 syb-0.1.0.0
   install-outcome: InstallOk
   docs-outcome: NotTried
   tests-outcome: NotTried

   package: containers-0.2.0.0
   os: windows
   arch: i386
   compiler: ghc-6.11.20081202
   client: cabal-install-0.6.0
   flags:
   dependencies: array-0.2.0.0 base-4.0.0.0
   install-outcome: InstallOk
   docs-outcome: NotTried
   tests-outcome: NotTried

But when I actually try to build anything using (yes, I know the explicit
package flags aren't needed with --make)

   ghc --make -package OpenGL -package GLUT something.hs

I get nothing but undefined references in the linking phase.

   C:\Program 
Files\Haskell\GLUT-2.1.1.2\ghc-6.11.20081202/libHSGLUT-2.1.1.2.a(Window.o):fake:
(.text+0x15): undefined reference to `glutWarpPointer'
   C:\Program 
Files\Haskell\GLUT-2.1.1.2\ghc-6.11.20081202/libHSGLUT-2.1.1.2.a(Window.o):fake:(.text+0x3d): 
undefined reference to `glutReshapeWindow'
   C:\Program 
Files\Haskell\GLUT-2.1.1.2\ghc-6.11.20081202/libHSGLUT-2.1.1.2.a(Window.o):fake:(.text+0x65): 
undefined reference to `glutPositionWindow'

   ..


The problem is that this places an additional barrier on distribution:
Haskell OpenGL authors cannot simply distribute their Haskell code,
because many other Haskellers will not be able to get it to work:


For the rest of the issues I think the proper solution is the platform
installer which would include cabal.exe and a pre-built OpenGL.

We could probably do better for packages that need mingw. Most of them
record this information in the build-type: Configure, so perhaps
cabal.exe should check if sh.exe is present before trying to build
packages that need it. If that sounds sensible then lets file a ticket
so we don't forget.


'build-type: Configure' just means that configure is needed, but says
nothing about whether that is the MSys/Cygwin/.. variant of the
toolchain, or which version. That would be okay if the variant/version
didn't matter, but that is rarely the case.
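
Not that the simple version of that check would be hard to write, mind
(a minimal sketch, hypothetical function name and placement; whether
any sh on the PATH is the right thing to test for is exactly the problem):

    import System.Directory (findExecutable)

    -- refuse to build 'build-type: Configure' packages without a shell
    requireSh :: IO ()
    requireSh = do
      msh <- findExecutable "sh"
      case msh of
        Just _  -> return ()
        Nothing -> ioError (userError
          "build-type: Configure needs sh.exe (MSys/Cygwin) on the PATH")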

Why not have a shallow facade cabal-package for the native packages/
tools? Then cabal packages could depend on precise versions of those
facade packages for Windows builds, and users would know exactly
what they need (and whether they have it, if the test for MSys/configure
is in the msys-configure package, or where to get it, if that is documented
in the facade packages). The same trick should work for other tool
dependencies, such as happy/alex/.., and possibly even for non-Haskell
library dependencies of FFI API bindings: depend on the msys-configure
package, run configure to see whether the library is installed, and register
the outcome as a cabal facade package with how-to-get-it information.

Btw, is the format for the cabal config file documented somewhere?
I thought I had set documentation to True.

Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Go Haskell! -> array libraries

2008-11-30 Thread Claus Reinke

Should mutable arrays have list-like APIs? All the usual operations,
just in-place and destructive where appropriate?


I don't know. To be honest, I don't think that the term "mutable  
array" describes a single data structure. For instance, one of the  
central questions which unveils a whole bunch of design possibilities  
is: can mutable arrays be concatenated and how does that work if yes?


wrt the first part: within a pure functional language, mutable arrays
are something I (am forced to) choose when the implementation
is unable to see some essential optimization opportunities in the
way I am using immutable arrays; whenever I am forced to go
down that route, I'd at least like to have an equivalent set of
operations. The move from immutable arrays to mutable arrays
is painful enough as it is. So the suggested design criterion would
be: as uniform an API as possible, just that immutable arrays
cannot make some essential performance guarantees, and some
operations sit a little uncomfortably in a library where those
guarantees are important.
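
To make the uniformity point concrete, a toy sketch against the
standard libs (the bump* names are mine): the same bulk update, once
on immutable arrays, once in place - ideally the second version would
read as directly as the first.

    import Data.Array
    import Data.Array.ST
    import Control.Monad (forM_)

    -- immutable: whole-array operation, no explicit indexing
    bumpPure :: Array Int Int -> Array Int Int
    bumpPure = amap (+1)

    -- mutable: the "same" update, via thaw/read/write plumbing
    bumpInPlace :: Array Int Int -> Array Int Int
    bumpInPlace a = runSTArray $ do
      m <- thaw a
      forM_ (range (bounds a)) $ \i ->
        readArray m i >>= writeArray m i . (+1)
      return m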

wrt the second part: I'm not sure whether you'll be able to choose
a single one of those design possibilities as the only one (perhaps
the concatenation is about simpler code, so virtual concatenation
would be sufficient and copying possible, but perhaps the goal
was locality, so copying would be needed, or perhaps a mixture
of both, and the limited disruption in locality is less expensive than
the cost of copying, ..), so you might want to expose several of
them, with a separate way of selecting a default or I-don't-care
choice (similar to why you have several array libs at present,
just that there should be a common API and documentation of
design alternatives/rationales;-).
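
For the concatenation question, the "virtual" end of that design space
is cheap to sketch (toy code, names mine): index through an offset
instead of copying.

    -- two arrays viewed as one; 'n' caches the length of the first part
    data Cat a = Cat !Int a a

    catIndex :: (a -> Int -> e) -> Cat a -> Int -> e
    catIndex ix (Cat n x y) i
      | i < n     = ix x i
      | otherwise = ix y (i - n)

Copying instead buys back locality at the price of O(n) work up
front - exactly the kind of trade-off a library should document.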

--

Btw, it would really be nice if packages took a hint from academic
publishing: good papers are expected not only (a) to provide new
content, but also (b) to position that content wrt related work
and (c) to make explicit what the new contributions/goals are.

As forks have become more common on hackage, perhaps
.cabal files could be extended with two fields, pointing to related
packages (this is more specific than category, linking to packages
that might be used instead or in combination with the current
package) and giving the rationale/highlights for the package in
text form.

In the meantime, it would be great if all packages came with good 
old README files, which should be linked from the .cabal-based 
package descriptions on hackage, covering all those useful bits of 
information that are not yet covered in .cabal files, and giving an 
overview of what the package is about if the package does not 
have its own webspace (as is often the case). That would improve 
on the current situation, where sometimes all one has to go on is 
the package name and a one-liner description.


Random example: looking at hackage package categories, I'd
look for array libraries under 'data structures', but they are
actually spread over 'data structures', 'data', and 'maths' (perhaps
others?). Once I've found one, say 'vector' or 'uvector', what
do the package descriptions tell me about the packages and
their relation, or their relative advantages?

The descriptions are brief, the "home pages" are actually "home 
repositories", the haddocks seem to have details only, no overview, 
and from the references/authors/uvector README, one might
think these are actually the same library, sequential libs spun 
off from the same data-parallelism project; only that the 
haddock details suggest that these libraries are quite different.

How? Why? What other alternatives are there?

Note that I'm not attacking any specific packages, I would
just encourage all package authors to try and see their packages
from the perspective of a hackage user: what information does
the user look for, what information does the package description
offer?

--

While we're on the topic: hackage has grown so much and so
rapidly, that it might be worthwhile to have a hackage "editor",
not in the sense of accepting/rejecting packages (not wanted
at present), nor in the sense of objective quality measurements
(that is in the process of being automated, according to Duncan),
but in the sense of trying to guarantee a subjectively useful overall 
presentation (category organisation, finding and putting side-by-side 
related packages despite different names/categories, suitability of 
package descriptions, getting package authors to talk, etc.). 

And no, I'm not volunteering!-) But I would imagine that someone 
might find this a useful way of contributing. Such a hackage editor 
would also be able to give the Haskell Community Report editor 
a hand by summarizing hackage activity/trends and highlighting 
interesting projects that the HCAR editor might want to solicit 
reports for. From my own time as HCAR editor, I recall finding
and chasing interesting projects as well as deciding how to 
structure the tools & libraries sections as ...

Re: [Haskell-cafe] Control.Exception Funny

2008-11-29 Thread Claus Reinke

CE.catch :: (CE.Exception e) => IO a -> (e -> IO a) -> IO a



foo d = CE.catch (openFile d ReadMode >> return ())
 (\e -> hPutStr stderr ("Couldn't open "++ d ++": " ++ show e))


btw, if your handler cannot return the same type as your action, is this
the right place to catch the exceptions?


Run.hs:70:8:
Ambiguous type variable `e' in the constraint:
  `CE.Exception e'
arising from a use of `CE.catch' at Run.hs:(70,8)-(71,78)
Probable fix: add a type signature that fixes these type variable(s)


Now I think I never used to get this under 6.8.2 but I don't easily have
a 6.8.2 to try it out on.


That would be the new extensible exceptions - instead of a single
non-extendable exception type (no ambiguities), there's now an
extendable class of exceptions.


Doing what the compiler suggests doesn't work for obvious reasons:


foo :: CE.Exception e => FilePath -> IO ()
foo d = CE.catch (openFile d ReadMode >> return ())
 (\e -> hPutStr stderr ("Couldn't open "++ d ++": " ++ show e))



Run.hs:69:0:
Ambiguous constraint `CE.Exception e'
At least one of the forall'd type variables mentioned by the constraint
must be reachable from the type after the '=>'
In the type signature for `foo':
  foo :: (CE.Exception e) => FilePath -> IO ()


The suggestion was to fix the type 'e'. Neither your signature nor
your exception handler does that. I found the documentation less than
helpful for this recent switch, but if you look at the instances of the
Exception class:

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#1

you'll see 'IOException' listed, so 'show (e::IOException)' might do
what you want.
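
Spelled out, that fix might look like this (a minimal sketch; the
expression annotation on 'e' is all that's needed to resolve the
ambiguity - no extensions, no changes to the signature):

    import qualified Control.Exception as CE
    import System.IO

    foo :: FilePath -> IO ()
    foo d = CE.catch (openFile d ReadMode >> return ())
              (\e -> hPutStr stderr ("Couldn't open " ++ d ++ ": "
                                     ++ show (e :: CE.IOException)))

And regarding the earlier question - if your handler cannot return the
same type as your action, CE.try may be the better fit, since it hands
you the outcome as an Either instead:

    fooTry :: FilePath -> IO (Either CE.IOException Handle)
    fooTry d = CE.try (openFile d ReadMode)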


There seems to be a ticket for it
(http://hackage.haskell.org/trac/ghc/ticket/2819) but this doesn't give
a suggested example that compiles.


I've annotated the ticket. Please check whether the suggested
explanation would be helpful, and report any other places that
have not been updated to the new exception system.

Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Go Haskell! -> array libraries

2008-11-28 Thread Claus Reinke
Yes, it is very difficult. A sensible API for a standard array library  
is something that needs more research. FWIW, I don't know of any other  
language that has what I'd like to see in Haskell. C++ probably comes  
closest but they have it easy - they don't do fusion.


I assume you've looked at SAC? http://www.sac-home.org/

Their main research and development focus was/has been on arrays
(fusion/layout/padding/tiling/slicing/data-parallelism/shape-invariance
(source algorithms parameterized over array dimensionality/shape)/
whole-array ops/list-like ops/lots of surface operations reducible to
a minimal set of core operations that need implementation/cache
behaviour/performance/performance/performance/..). When they
started out, I tried to make the point that I would have liked to have
their wonderful array ideas in our otherwise wonderful language,
but they preferred to follow their own way towards world
domination (*). Does that sound familiar?-)


Claus

(*) ok, they did have a good motive: they came out of a research
   group that had done non-sequential functional programming in
   the 1980s, with all the things we see today: great speedups,
   shame about the sequential baseline; creating parallel threads
   is easy, load balancing slightly harder, but pumping (creating
   thread hierarchies recursively, only to see them fold into a
   small result, for the process to begin again) is a waste, etc.;
    so they decided to start from a fast sequential baseline instead
    of a full functional language, and designed their language around
    arrays instead of trying to add arrays to an existing language.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Go Haskell! -> array libraries

2008-11-28 Thread Claus Reinke

But I don't want Perl, I want a well designed language and well
designed libraries.
I think it's fine to let libraries proliferate, but at some point you
also need to step back and abstract.

 -- Lennart


Especially so if the free marketeers claim there is something 
fundamentally wrong with the standard libraries and language, as in 
the case of arrays. When someone did that nice little survey of the 
last bunch of array libraries (Bulat, I think? now in the wiki book), 
I was hoping there would be a grand unification soon. Instead, it 
seems that those who have worked most with Haskell arrays 
recently have simply abandoned all of the standard array libraries. 

Okay, why not, if there are good reasons. But can't you document 
those reasons, for each of your alternative proposals, so that people 
have some basis on which to choose (other than who has the loudest 
market voice;-)? And would it be difficult for you all to agree on a
standard API, to make switching between the alternatives easy (if
it is indeed impossible to unify their advantages in a single library,
the reasons for which should also be documented somewhere)?
And what is wrong about Simon's suggestion, to use the standard
array lib APIs on top of your implementations?


Not that I see Haskell' coming soon, but I'd certainly not want it
to continue standardising a kind of array that appears to have no 
backing among the Haskell array user/library author community. Nor 
would I like something as central as arrays to remain outside the 
standard, where it won't remain stable enough for Haskell 
programmers to rely on in the long run.


bazaar, yes; mayhem, no.
Claus


On Fri, Nov 28, 2008 at 9:46 PM, Don Stewart <[EMAIL PROTECTED]> wrote:

andrewcoppin:

What *I* propose is that somebody [you see what I did there?] should sit
down, take stock of all the multitudes of array libraries, what features
they have, what obvious features they're missing, and think up a good
API from scratch. Once we figure out what the best way to arrange all
this stuff is, *then* we attack the problem of implementing it for real.

It seems lots of people have written really useful code, but we need to
take a step back and look at the big picture here before writing any
more of it.


No.

My view would be to let the free market of developers decide what is
best. No bottlenecks -- there's too many Haskell libraries already (~1000 now).

And this approach has yielded more code than ever before, more libraries
than ever before, and library authors are competing.

So let the market decide. We're a bazaar, not a cathedral.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Go Haskell!

2008-11-27 Thread Claus Reinke

What do those folks working on parallel Haskell arrays think about the
sequential Haskell array baseline performance?


Try using a fast array library like uvector? (With no serious overhead for
tuples too, fwiw)...


I downloaded uvector a while ago, but haven't got round to trying it
(yet another array API?). Mostly, I'd like to know a little more than just 
"fast array lib":


- in which ways is it supposed to be faster? why?
- for which usage patterns is it optimised? how?
- if it is faster in general, why hasn't it replaced the default arrays?

In general, I think Haskell has too many array libraries, with too
many APIs. And that doesn't even take into account the overuse of
unsafe APIs, or the non-integrated type-level safety tricks - if
array accesses were properly optimized, there should be a lot
less need for the extreme all-or-nothing checks or home-grown
alternative special-purpose APIs:

- type-level code for watermarking indices belonging to certain
   index sets is one step to eliminate index checks, but hasn't been
   integrated into any of the standard libs
- one could also associate index subsets with operations that do
   not leave the index superset belonging to an array (eg, if
   min <= i <= max holds, then so does min <= i+1 <= max+1)
- GHC does seem to common up index checks only if they are
   identical, but once min <= i <= max has been checked, few of
   the additional checks for derived indices such as i+1 are really
   necessary (as long as we know about '+')
- whole-array ops allow to avoid index checks without type-level
   tricks, leaving the indexing implicit; but the corresponding ops
   in Data.Array.MArray claim to construct new arrays, contrary
   to the intended inplace updating for which one uses MArrays?
- etc. etc.
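
Two of those points are easy to make concrete (toy sketches, all
names mine; the second borrows the "branded index" trick and
assumes RankNTypes). First, whole-array ops versus indexed loops:

    import Data.Array

    -- per-element indexing: every (!) is a candidate for a bounds check
    sumIx :: Array Int Double -> Double
    sumIx a = sum [ a ! i | i <- range (bounds a) ]

    -- whole-array op: the indexing stays implicit, nothing to check
    sumElems :: Array Int Double -> Double
    sumElems = sum . elems

and, for watermarked indices:

    {-# LANGUAGE RankNTypes #-}
    import Data.Array
    import Data.Array.Base (unsafeAt)

    newtype Pos s   = Pos Int            -- zero-based, checked on creation
    newtype Arr s e = Arr (Array Int e)

    -- the phantom 's' ties positions to the array they were checked
    -- against, so the unchecked access below cannot go wrong
    withPositions :: Array Int e -> (forall s. Arr s e -> [Pos s] -> r) -> r
    withPositions a k =
      k (Arr a) [ Pos p | p <- [0 .. rangeSize (bounds a) - 1] ]

    at :: Arr s e -> Pos s -> e
    at (Arr a) (Pos p) = a `unsafeAt` p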

At least, uvector seems to take multi-element ops more seriously.
But with so many people working on sequential and parallel Haskell
array libraries, I was hoping for someone to take a big ax and clear
out all that sick and half-dead wood, to give a small number of healthy
array libs room to grow. Would be a lot easier for us poor naive
Haskell array users who otherwise tend to get lost in that forest!-)

Just my 2c,-)
Claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

