Re: [Haskell-cafe] Array, Vector, Bytestring

2013-06-05 Thread Peter Simons
Hi Tom,

thank you for the explanation.

 > I believe you are suggesting that there is redundancy in the
 > implementation details of these libraries, not in the APIs they
 > expose.

I meant to say that there is redundancy in *both*. The libraries
mentioned in this thread re-implement the same type internally and
expose APIs to the user that are largely identical.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Array, Vector, Bytestring

2013-06-04 Thread Peter Simons
Hi Tom,

 > On Tue, Jun 04, 2013 at 04:01:37PM +0200, Peter Simons wrote:
 >>  > How is this a problem?
 >>  >
 >>  > If you're representing text, use 'text'.
 >>  > If you're representing a string of bytes, use 'bytestring'.
 >>  > If you want an "array" of values, think c++ and use 'vector'.
 >>
 >> the problem is that all those packages implement the exact same data
 >> type from scratch, instead of re-using an implementation of a
 >> general-purpose array internally. That is hardly desirable, nor is it
 >> necessary.
 >
 > Just to clarify for those on the sidelines, the issue is duplication of
 > implementation details, rather than duplication of functionality?

I am not sure what the terms "duplication of implementation details" and
"duplication of functionality" mean in this context. Could you please
explain how these two concepts differ in your opinion?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Array, Vector, Bytestring

2013-06-04 Thread Peter Simons
Hi Clark,

 > How is this a problem?
 >
 > If you're representing text, use 'text'.
 > If you're representing a string of bytes, use 'bytestring'.
 > If you want an "array" of values, think c++ and use 'vector'.

the problem is that all those packages implement the exact same data
type from scratch, instead of re-using an implementation of a
general-purpose array internally. That is hardly desirable, nor is it
necessary.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Packages in distro mentioned on hackage?

2013-04-30 Thread Peter Simons
Hi Magnus,

> How does a distro get to be added like that?

check out .

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] What happened to http://hackage.haskell.org/platform/2010.2.0.0/cabal/haskell-platform-2010.2.0.0.tar.gz?

2013-04-03 Thread Peter Simons
Is it just me or have some of the old Haskell Platform releases
disappeared from haskell.org? 

The 2010.x links from http://www.haskell.org/platform/prior.html also
point to non-existent pages.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Streaming bytes and performance

2013-03-19 Thread Peter Simons
Hi Don,

 > "Using this input file stored in /dev/shm"
 >
 > So not measuring the IO performance at all. :)

of course the program measures I/O performance. It just doesn't measure
the speed of the disk.

Anyway, a highly optimized benchmark such as the one you posted is
eventually going to beat one that's not as highly optimized. I think
no-one disputes that fact.

I was merely trying to point out that a program which encodes its
evaluation order properly is going to be reasonably fast without any
further optimizations.

Take care,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Streaming bytes and performance

2013-03-19 Thread Peter Simons
Hi Don,

 > Compare your program (made lazy) on lazy bytestrings using file IO: [...]

if I make those changes, the program runs even faster than before:

  module Main ( main ) where

  import Prelude hiding ( foldl, readFile )
  import Data.ByteString.Lazy.Char8

  countSpace :: Int -> Char -> Int
  countSpace i c | c == ' ' || c == '\n' = i + 1
                 | otherwise = i

  main :: IO ()
  main = readFile "test.txt" >>= print . foldl countSpace 0

This gives

 | $ ghc --make -O2 -funbox-strict-fields test1 && time ./test1
 | 37627064
 |
 | real    0m0.375s
 | user    0m0.346s
 | sys     0m0.028s

versus:

 | $ ghc --make -O2 -funbox-strict-fields test2 && time ./test2
 | 37627064
 |
 | real    0m0.324s
 | user    0m0.299s
 | sys     0m0.024s

Whether readFile or getContents is used doesn't seem to make a difference.

Take care,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Streaming bytes and performance

2013-03-19 Thread Peter Simons
Don Stewart  writes:

 > Here's the final program: [...]

Here is a version of the program that is just as fast:

  import Prelude hiding ( getContents, foldl )
  import Data.ByteString.Char8

  countSpace :: Int -> Char -> Int
  countSpace i c | c == ' ' || c == '\n' = i + 1
                 | otherwise = i

  main :: IO ()
  main = getContents >>= print . foldl countSpace 0

Generally speaking, I/O performance is not about fancy low-level system
features; it's about having a proper evaluation order:

 | $ ghc --make -O2 -funbox-strict-fields test1 && time ./test1
 | 37627064
 |
 | real 0m0.381s
 | user 0m0.356s
 | sys  0m0.023s

Versus:

 | $ ghc --make -O2 -funbox-strict-fields test2 && time ./test2

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Announce: Leksah 0.13.1 (a bit experimental)

2013-01-09 Thread Peter Simons
Hi Hamish,

 > Features in process-leksah have been merged into process. For newer
 > versions of GHC leksah-server just depends on process.

I trust this applies to the unreleased beta version that you just
announced, right? (The latest release versions still seem to depend on
process-leksah.) In that case, I'll try building Leksah again once the
new version is available from Hackage.

Thank you for the quick response!

Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Announce: Leksah 0.13.1 (a bit experimental)

2013-01-07 Thread Peter Simons
Hi Hamish,

would it be possible to get an update for process-leksah that works with
recent versions of the 'filepath' package? I cannot build leksah-server
with GHC 7.4.2 because of this issue.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-16 Thread Peter Simons
Hi Tobias,

 >> When such a situation has arisen in the past, it's my experience
 >> that the author of B typically releases an update to fix the issue
 >> with the latest version of C:
 >>
 >>   B 2.5.4.0 build-depends: C >= 3.8
 >>
 >> So that particular conflict hardly ever occurs in practice.
 >
 > And what if the maintainer of B takes the chance to make some major
 > updates and directly releases 2.6? Then all packages depending on
 > 2.5.* will probably break.

yes, that is true. In such a case, one would have to contact the
maintainers of A, B, and C to discuss how to remedy the issue.
Fortunately, pathological cases such as this one seem to happen rarely
in practice.

 > All this boils down to a system where only a combination of latest
 > versions will be stable. So why restrict dependencies anyway?

Now, I think that is an exaggeration. Do you know a single example of a
package on Hackage that actually suffers from the problem you're
describing?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-16 Thread Peter Simons
Hi Tobias,

 > A 1.1.4.0 build-depends: B ==2.5.* C ==3.7.* (overspecified)
 > B 2.5.3.0 build-depends: C ==3.* (underspecified)
 > C 3.7.1.0
 >
 > Everything works nice until C-3.8.0.0 appears with incompatible changes
 > that break B, but not A.
 >
 > Now both A and B have to update their dependencies and we have now:
 >
 > A 1.1.5.0 build-depends: B ==2.5.* C >=3.7 && <3.9
 > B 2.5.4.0 build-depends: C >=3 && <3.8
 > C 3.8.0.0
 >
 > And now the following combination is still valid:
 > A 1.1.5.0
 > B 2.5.3.0 (old version)
 > C 3.8.0.0
 > Bang!

thank you for contributing this insightful example.

When such a situation has arisen in the past, it's my experience that the
author of B typically releases an update to fix the issue with the latest
version of C:

  B 2.5.4.0 build-depends: C >= 3.8

So that particular conflict hardly ever occurs in practice.

Note that package A would build just fine after that update of B -- if the
author of A hadn't overspecified its dependencies. As it is, however, a
new version of A has to be released that changes no code, but only the Cabal
file.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-11 Thread Peter Simons
Hi Clark.

 > I think we just use dependencies [to specify] different things.

If dependency version constraints are specified as a white-list --
i.e. we include only those few versions that have actually been
verified and exclude everything else -- then we take the risk of
excluding *too much*. There will be versions of the dependencies that
would work just fine with our package, but the Cabal file prevents
them from being used in the build.

The opposite approach is to specify constraints as a black-list. This
means that we don't constrain our build inputs at all, unless we know
for a fact that some specific versions cannot be used to build our
package. In that case, we'll exclude exactly those versions, but
nothing else. In this approach, we risk excluding *too little*. There
will probably be versions of our dependencies that cannot be used to
build our package, but the Cabal file doesn't exclude them from being
used.

Now, the black-list approach has a significant advantage. In current
versions of "cabal-install", it is possible for users to extend an
incomplete black-list by adding appropriate "--constraint" flags on
the command-line of the build. It is impossible, however, to extend an
incomplete white-list that way.

In other words: build failures can be easily avoided if some package
specifies constraints that are too loose. Build failures caused by
version constraints that are too strict, however, can be fixed only by
editing the Cabal file.

For this reason, dependency constraints in Cabal should rather be
underspecified than overspecified.
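
To make the difference concrete, here is a hypothetical sketch of the
two styles in a Cabal file (package names and bounds are made up for
illustration, not taken from any real package):

  -- white-list style: only versions that were actually tested are allowed
  build-depends: base >= 4.5 && < 4.6, bytestring == 0.9.2.*

  -- black-list style: no bounds at all, unless a version is known to be broken
  build-depends: base, bytestring

An incomplete black-list can then be tightened on the command line,
e.g. with "cabal install --constraint='bytestring < 0.10'".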

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-09 Thread Peter Simons
Hi Clark,

 > It's not restrictive.

how can you say that by adding a version restriction you don't restrict
anything?


 > I just don't like to claim that my package works with major versions
 > of packages that I haven't tested.

Why does it not bother you to claim that your package can *not* be built
with all those versions that you excluded without testing whether those
restrictions actually exist or not?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to determine correct dependency versions for a library?

2012-11-09 Thread Peter Simons
Hi Janek,

 > How to determine proper version numbers?

if you know for a fact that your package works only with specific
versions of its dependencies, then constrain the build to exactly those
versions that you know to work.

If *don't* know of any such limitations, then *don't* specify any
constraints.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC maintenance on Arch

2012-10-30 Thread Peter Simons
Hi Vagif,

 > I fail to see how a fringe bleeding edge linux distro undermines a
 > haskell platform.

Arch Linux does not comply with the Haskell Platform. That fact communicates
to users of the distribution: "We, the maintainers, don't believe that HP is
relevant." Clearly, this undermines the Haskell Platform, doesn't it?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC maintenance on Arch

2012-10-29 Thread Peter Simons
Hi Timothy,

the Haskell community is not the right audience to be addressing these
complaints to. Instead, you should be talking to the ArchLinux developers,
who are responsible for packaging Haskell-related software in the [core]
and [extra] repositories. I am no expert in these matters, but my guess is
that the mailing list

  https://mailman.archlinux.org/mailman/listinfo/arch-dev-public

is more appropriate than haskell-cafe for this thread.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] extensible-exceptions no longer a part of GHC 7.6.1?

2012-09-10 Thread Peter Simons
Hi,

'extensible-exceptions' used to be a part of GHC, but it appears that
the package has been dropped from 7.6.1. Yet, the release notes on
haskell.org don't say anything about this subject (other than TODO).

Was that change intentional?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Which ghc binary does ghc-mod use?

2012-07-24 Thread Peter Simons
Hi Brandon,

 > I think you'd have to install a separate ghc-mod binary for each one,
 > then, as it looks to me like ghc-mod is using ghc-as-a-library.  That
 > is, it actually has the compiler linked into itself.

I see, thank you for the clarification.

One more thing: I would like to configure search paths for extra
libraries that ghc-mod won't find without help. Does anyone know a way
to configure the set of flags that's being passed to GHC/ghc-mod?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Which ghc binary does ghc-mod use?

2012-07-23 Thread Peter Simons
Hi,

I am a happy user of Emacs with ghc-mod for Haskell programming. There is just
one issue I've run into: I have multiple versions of GHC installed on my
machine. Now, ghc-mod seems to use the GHC binary that was used to compile
ghc-mod itself, but that is not the version I want it to use for syntax
checking, etc. In fact, I want to be able to switch ghc-mod between different
GHC binaries depending on which project I'm working on, but I have no idea how
to do that.

Is there maybe some Elisp guru reading this list who can help me out? Can I
somehow configure which GHC binary ghc-mod uses?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ghc-7.4 on CentOS-5.8 ?

2012-06-27 Thread Peter Simons
Hi Johannes,

 > ghc-7.0 is working but when I use it to compile 7.4,
 > it breaks with some linker error (relocation R_X86_64_PC32 ...)
 > it also suggests "recompile with -fPIC" but I don't see how.

I seem to remember that this is a problem with the old version of
GCC that's used to build the compiler. It can be avoided, though, by
disabling optimizations.

Try adding the following lines to a file called "mk/build.mk" before
running the build:

GhcLibWays       = v
SRC_HC_OPTS      = -H64m -O0 -fasm   # -O -H64m
GhcStage1HcOpts  = -O -fasm
GhcStage2HcOpts  = -O0 -fasm         # -O2 -fasm
GhcLibHcOpts     = -O -fasm          # -O2 -XGenerics
GhcHcOpts        = -Rghc-timing
# GhcLibWays    += p
# GhcLibWays    += dyn
NoFibWays        =
STRIP_CMD        = :

I attached the RPM spec file that I used to build GHC 7.0.4 on
CentOS. It's quite likely that you can use it to automate the 7.4.x
build after editing some version numbers and file paths in it.

Good luck! :-)

Peter


Name:   ghc
Version:7.0.4
Release:1

Summary:Glorious Haskell Compiler
License:BSD
Group:  Compiler
URL:http://haskell.org/ghc

Prefix: /opt/ghc/7.0.4
BuildArch:  x86_64
ExclusiveArch:  x86_64
ExclusiveOS:Linux

Source: %{name}-%{version}.tar.gz
BuildRoot:  %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
BuildRequires:  ghc == 6.12.3
BuildRequires:  make
BuildRequires:  perl
BuildRequires:  python
BuildRequires:  gmp-devel
BuildRequires:  ncurses-devel
BuildRequires:  zlib-devel
BuildRequires:  gcc
Requires:   gmp-devel
Requires:   ncurses-devel
Requires:   zlib-devel
Requires:   gcc

%description
Glorious Haskell Compiler

%clean
%{__rm} -rf %{buildroot}

%prep
%setup

%build
cat >mk/build.mk <


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-20 Thread Peter Simons
Hi Chris,

I'm also wondering about this issue:

>> - How do you handle packages that depend on system libraries? "hsdns",
>>   for example, requires the adns library to build. Does Hub know about
>>   this?

Does Hub know about system-level libraries that Haskell packages need to
build, like Gtk, ADNS, Avahi, etc.? 

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-18 Thread Peter Simons
Hi Chris,

 > There is a worked out example at the bottom of the overview up on the
 > web site: http://justhub.org/overview

thank you for the pointer, I think I found it:

^=7.4.1
List-0.4.2
fgl-5.4.2.4
hexpat-0.20.1
mtl-2.1.1
regex-base-0.93.2
regex-compat-0.95.1
regex-posix-0.95.2
text-0.11.2.1
transformers-0.3.0.0
utf8-string-0.3.7

Very nice, this looks quite straightforward. I wonder about two things:

 - Is it possible to pass configure-time flags to those libraries? For
   example, I would like to build "haskeline" with "-fterminfo". Can Hub
   do this?

 - How do you handle packages that depend on system libraries? "hsdns",
   for example, requires the adns library to build. Does Hub know about
   this?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-18 Thread Peter Simons
Hi Chris,

 > hub save project >project.har

I am curious to see what this file looks like. Could you please post a
short example of one?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-17 Thread Peter Simons
Hi Chris,

 >> How much time, approximately, did you spend working with Nix?
 >> 1 hour? 10 hours? 10 days? 10 months?
 >
 > You know that it is not 10 months.

actually, no. I don't know that, which is why I asked. I find it hard to
get an answer from you, though. It seems strange that you keep such
trivial information to yourself like some super personal secret. The
point of this discussion is to compare the respective properties of Nix
and Hub. In that context, it seems natural that I might be curious how
much actual working experience you have with Nix.


 > JustHub [and Nix] have some similarities -- mostly around the idea of
 > allowing multiple tool chains to co-exist; the way they go about it
 > is very different.

I'm not sure what differences you are referring to. Could you please be
a little bit more specific? How exactly do Nix and Hub differ in the way
they install multiple tool-chains?


 > I also know that I have been adding things that a generic package
 > manager is most unlikely to be covering [...].

What you mean is: you really don't know, but you are speculating.


 > To take just one example, I provide a mechanism that allows
 > developers to archive the configuration of their Haskell development
 > environment and check it into a source management system. The
 > developer can check it out on a another system and if the build
 > process invokes the recovery mechanism it will automatically rebuild
 > the environment on the first run [...].

Yes, in Nix we solve that problem as follows. Configurations are lazily
evaluated functions. The function that builds Hub, for example, looks
like this:

 | { cabal, fgl, filepath, hexpat, regexCompat, utf8String }:
 |
 | cabal.mkDerivation (self: {
 |   pname = "hub";
 |   version = "1.1.0";
 |   sha256 = "0vwn1v32l1pm38qqms9ydjl650ryic37xbl35md7k6v8vim2q8k3";
 |   isLibrary = false;
 |   isExecutable = true;
 |   buildDepends = [ fgl filepath hexpat regexCompat utf8String ];
 |   meta = {
 | homepage = "https://justhub.org";
 | description = "For multiplexing GHC installations and providing development sandboxes";
 | license = self.stdenv.lib.licenses.bsd3;
 | platforms = self.ghc.meta.platforms;
 |   };
 | })

When Nix runs that build, it's executed in a temporary environment that
contains exactly those packages that have been declared as build inputs,
but nothing else. Since all build-time dependencies of this package are
arguments of the function, it's possible to instantiate that build with
any version of GHC, Cabal, fgl, filepath, etc. If I pass GHC 6.12.3, Hub
will be built with GHC 6.12.3. If I pass GHC 7.4.2, Hub will be built
with GHC 7.4.2 instead.

Now, in my home directory there is a file ~/.nixpkgs/config.nix that
works like the 'main' function in a Haskell program insofar as it
ties all those individual functions together into a user configuration:

 | let
 |   haskellEnv = pkgs: pkgs.ghcWithPackages (self: with pkgs; [
 | # Haskell Platform
 | haskellPlatform
 | # other packages
 | cmdlib dimensional funcmp hackageDb hledger hledgerLib hlint hoogle
 | HStringTemplate monadPar pandoc smallcheck tar uulib permutation
 | criterion graphviz async
 |   ]);
 | in
 | {
 |   packageOverrides = pkgs:
 |   {
 | ghc704Env = haskellEnv pkgs.haskellPackages_ghc704;
 | ghc741Env = haskellEnv pkgs.haskellPackages_ghc741;
 | ghc742Env = haskellEnv pkgs.haskellPackages_ghc742;
 |   };
 | }

I can copy that file to every other machine, regardless of whether it's
a Linux host, a Mac, or a BSD Unix, and run

  nix-env -iA ghc704Env

to have Nix build my GHC 7.0.4 development environment with exactly
those extra libraries that I configured.

How would I do something like that in Hub?


 > Maybe Nix provides such a mechanism -- I don't know.

It does. :-)

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-15 Thread Peter Simons
Hi Chris,

 > I detailed some of my trials with Nix -- I wasn't making it up!

of course, I didn't mean to imply that you were. My question was phrased
poorly, I am sorry.

What I meant to ask is: how much time, approximately, did you spend
working with Nix? 1 hour? 10 hours? 10 days? 10 months?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-15 Thread Peter Simons
Hi Chris,

 > I cannot see how it can address any of the user-level Haskell package
 > database management and sandboxing mechanisms that I mentioned in the
 > announcement and subsequent emails.

have you ever actually used Nix?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] JustHub 'Sherkin' Release

2012-06-15 Thread Peter Simons
Hi Chris,

 > Where is this functionality provided by Nix?

simply run these commands

 # Haskell Platform 2009.2.0.2
 nix-env -p ~/ghc-6.10.4 -iA haskellPackages_ghc6104.haskellPlatform

 # Haskell Platform 2010.2.0.0
 nix-env -p ~/ghc-6.12.3 -iA haskellPackages_ghc6123.haskellPlatform

 # Haskell Platform 2012.2.0.0
 nix-env -p ~/ghc-7.4.1 -iA haskellPackages_ghc741.haskellPlatform

and you'll have profiles that contain the appropriate binaries and
libraries defined by the corresponding platform. Nix can do this without
any superuser privileges on Linux, Darwin, and BSD Unix, although I have
to say that BSD support is limited because there seem to be very few
people using Nix on BSD. (I reckon the BSD people are happy with their
BSD ports and aren't interested in a third-party package manager.)

Furthermore, Nix can install many different versions of *any* package
simultaneously, not just Haskell:

  nix-env -p ~/python-2.6.7 -iA python26
  nix-env -p ~/python-2.7.3 -iA python27
  nix-env -p ~/python-3.2.3 -iA python3

Anyone who's interested in Nix can find lots of information on the web
site . There's also the IRC channel #nixos on
irc.freenode.org where some Nix developers hang out. Last but not least,
there is the developer mailing list .

I'll be happy to answer any further questions that may arise.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installing REPA

2012-04-07 Thread Peter Simons
Hi Ben,

 > Please try again now.

thank you very much for the quick update! Everything installs fine now.
I've also packaged the latest versions for NixOS. 

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Subscriber-only lists as Maintainer contacts of Cabal packges

2012-04-07 Thread Peter Simons
Hi Joachim,

 > Please make sure the list is not set to subscriber only; it is an
 > unreasonable burden to subscribe for people who just want to send you
 > one question, and possibly have to contact dozens of different
 > package authors, e.g. as a distribution packager.

+1

I have had that problem, too. Maintainers give contact details, but then
I have to jump through hoops before I can actually contact them. I see
why people want to protect themselves from spam, but this approach seems
counter-productive to me.


 > (Who really thinks that using the subscriber-only setting of mailman as
 > an anti-spam-measure is an abuse of the feature, and that mailman should
 > offer a “non-subscribers get a bounce that allows them to approve the
 > message themselves“ feature which would give the same spam protection
 > but much less hassle for the users.)

The way to accomplish that is to configure the list as "moderated", and
to set all list subscribers as "unmoderated". This makes postings from
subscribers go right through, and everyone else's message are forwarded
to the list moderator for approval. It's not quite the same as a
challenge-response scheme that empowers casual posters to confirm their
honest intentions (i.e. the correctness of their mail envelope address),
but it's still a lot better than just dropping every mail from anyone
who isn't subscribed.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installing REPA

2012-04-07 Thread Peter Simons
Hi Ben,

 > I've just pushed Repa 3 onto Hackage, which has a much better API
 > than the older versions, and solves several code fusion problems.

when using the latest version of REPA with GHC 7.4.1, I have trouble
building the repa-examples package:

 | Building repa-examples-3.0.0.1...
 | Preprocessing executable 'repa-volume' for repa-examples-3.0.0.1...
 | [1 of 1] Compiling Main ( examples/Volume/Main.hs, dist/build/repa-volume/repa-volume-tmp/Main.o )
 | Linking dist/build/repa-volume/repa-volume ...
 | Preprocessing executable 'repa-sobel' for repa-examples-3.0.0.1...
 | [1 of 2] Compiling Solver   ( examples/Sobel/src-repa/Solver.hs, dist/build/repa-sobel/repa-sobel-tmp/Solver.o )
 | Loading package ghc-prim ... linking ... done.
 | Loading package integer-gmp ... linking ... done.
 | Loading package base ... linking ... done.
 | Loading package array-0.4.0.0 ... linking ... done.
 | Loading package bytestring-0.9.2.1 ... linking ... done.
 | Loading package deepseq-1.3.0.0 ... linking ... done.
 | Loading package containers-0.4.2.1 ... linking ... done.
 | Loading package binary-0.5.1.0 ... linking ... done.
 | Loading package bmp-1.2.1.1 ... linking ... done.
 | Loading package old-locale-1.0.0.4 ... linking ... done.
 | Loading package old-time-1.1.0.0 ... linking ... done.
 | Loading package extensible-exceptions-0.1.1.4 ... linking ... done.
 | Loading package time-1.4 ... linking ... done.
 | Loading package random-1.0.1.1 ... linking ... done.
 | Loading package pretty-1.1.1.0 ... linking ... done.
 | Loading package template-haskell ... linking ... done.
 | Loading package QuickCheck-2.4.2 ... linking ... done.
 | Loading package primitive-0.4.1 ... linking ... done.
 | Loading package vector-0.9.1 ... linking ... done.
 | Loading package repa-3.0.0.1 ... linking ... done.
 | Loading package repa-io-3.0.0.1 ... linking ... done.
 | Loading package repa-algorithms-3.0.0.1 ... linking ... done.
 | [2 of 2] Compiling Main ( examples/Sobel/src-repa/Main.hs, dist/build/repa-sobel/repa-sobel-tmp/Main.o )
 | Linking dist/build/repa-sobel/repa-sobel ...
 | Preprocessing executable 'repa-mmult' for repa-examples-3.0.0.1...
 | 
 | examples/MMult/src-repa/Main.hs:3:8:
 | Could not find module `Solver'
 | Use -v to see a list of the files searched for.

When I attempt to use repa 3.1.x, the build won't even get past the
configure stage, because Cabal refuses these dependencies. Is that a
known problem, or am I doing something wrong?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Peter Simons
Hi Daniel,

 > I've been trying to write networking code in Haskell too. I've also
 > come to the conclusion that channels are the way to go.

isn't a tuple of input/output channels essentially the same as a stream
processor arrow? I found the example discussed in the "arrow paper" [1]
very enlightening in that regard. There also is a Haskell module that
extends the SP type to support monadic IO at [2].
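
For reference, the stream-processor type from that paper looks roughly
like this (a sketch from memory, not the exact definition used by the
streamproc package):

  data SP a b = Put b (SP a b)      -- emit an output element, then continue
              | Get (a -> SP a b)   -- wait for the next input element

A Put corresponds to writing to the output channel, and a Get to reading
from the input channel, which is why the two formulations feel so
similar to me.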

Take care,
Peter


[1] 
http://www.ittc.ku.edu/Projects/SLDG/filing_cabinet/Hughes_Generalizing_Monads_to_Arrows.pdf
[2] http://hackage.haskell.org/package/streamproc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to make asynchronous I/O composable and safe?

2012-01-14 Thread Peter Simons
Hi guys,

 >> I'm not happy with asynchronous I/O in Haskell.  It's hard to reason
 >> about, and doesn't compose well.
 >
 > Async I/O *is* tricky if you're expecting threads to do their own
 > writes/reads directly to/from sockets. I find that using a
 > message-passing approach for communication makes this much easier.

yes, that is true. I've always felt that spreading IO code all over the
software is a choice that makes the programmer's life unnecessarily hard.
The (IMHO superior) alternative is to have one central IO loop that
generates buffers of input, passes them to a callback function, and
receives buffers of output in response.

I have attached a short module that implements the following function:

  type ByteCount       = Word16
  type Capacity        = Word16
  data Buffer          = Buf !Capacity !(Ptr Word8) !ByteCount
  type BlockHandler st = Buffer -> st -> IO (Buffer, st)

  runLoop :: ReadHandle -> Capacity -> BlockHandler st -> st -> IO st

That setup is ideal for implementing streaming services, where there is
only one connection on which some kind of dialog between client and
server takes place, e.g. an HTTP server.
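
As a minimal sketch -- assuming the types from the attached module shown
below -- a handler that merely counts the octets it is given could look
like this:

  countBytes :: BlockHandler Int
  countBytes buf@(Buf _ _ len) total = do
    buf' <- flush len buf   -- drop everything we have processed
    return (buf', total + fromIntegral len)

  -- hypothetical driver invocation: runLoop stdin 4096 countBytes 0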

Programs like Bittorrent, on the other hand, are much harder to design,
because there's a great number of seemingly individual I/O contexts
(i.e. the machine is talking to hundreds, or even thousands of other
machines), but all those communications need to be coordinated in one
way or another.

A solution for that problem invariably ends up looking like a massive
finite state machine, which is somewhat unpleasant.

Take care,
Peter



{-# LANGUAGE DeriveDataTypeable #-}
{- |
   Module  :  BlockIO
   License :  BSD3

   Maintainer  :  sim...@cryp.to
   Stability   :  provisional
   Portability :  DeriveDataTypeable

   'runLoop' drives a 'BlockHandler' with data read from the
   input stream until 'hIsEOF' ensues. Everything else has
   to be done by the callback; runLoop just does the I\/O.
   But it does it /fast/.
-}

module BlockIO where

import Prelude hiding ( catch, rem )
import Control.Exception
import Control.Monad.State
import Data.List
import Data.Typeable
import System.IO
import System.IO.Error hiding ( catch )
import Foreign  hiding ( new )
import System.Timeout

-- * Static Buffer I\/O

type ReadHandle  = Handle
type WriteHandle = Handle

type ByteCount = Word16
type Capacity  = Word16
data Buffer    = Buf !Capacity !(Ptr Word8) !ByteCount
                 deriving (Eq, Show, Typeable)

-- |Run the given computation with an initialized, empty
-- 'Buffer'. The buffer is gone when the computation
-- returns.

withBuffer :: Capacity -> (Buffer -> IO a) -> IO a
withBuffer 0 = fail "BlockIO.withBuffer with size 0 doesn't make sense"
withBuffer n = bracket cons dest
  where
  cons = mallocArray (fromIntegral n) >>= \p -> return (Buf n p 0)
  dest (Buf _ p _) = free p

-- |Drop the first @n <= size@ octets from the buffer.

flush :: ByteCount -> Buffer -> IO Buffer
flush 0 buf   = return buf
flush n (Buf cap ptr len) = assert (n <= len) $ do
  let ptr' = ptr `plusPtr` fromIntegral n
      len' = fromIntegral len - fromIntegral n
  when (len' > 0) (copyArray ptr ptr' len')
  return (Buf cap ptr (fromIntegral len'))

type Timeout = Int

-- |If there is space, read and append more octets; then
-- return the modified buffer. In case of 'hIsEOF',
-- 'Nothing' is returned. If the buffer is full already,
-- 'throwDyn' a 'BufferOverflow' exception. When the timeout
-- exceeds, 'ReadTimeout' is thrown.

slurp :: Timeout -> ReadHandle -> Buffer -> IO (Maybe Buffer)
slurp to h b@(Buf cap ptr len) = do
  when (cap <= len) (throw (BufferOverflow h b))
  timeout to (handleEOF wrap) >>=
    maybe (throw (ReadTimeout to h b)) return
  where
  wrap = do let ptr' = ptr `plusPtr` fromIntegral len
                n    = cap - len
            rc <- hGetBufNonBlocking h ptr' (fromIntegral n)
            if rc > 0
               then return (Buf cap ptr (len + fromIntegral rc))
               else hWaitForInput h (-1) >> wrap

-- * BlockHandler and I\/O Driver

-- |A callback function suitable for use with 'runLoop'
-- takes a buffer and a state, then returns a modified
-- buffer and a modified state. Usually the callback will
-- use 'slurp' to remove data it has processed already.

type BlockHandler st = Buffer -> st -> IO (Buffer, st)

type ExceptionHandler st e = e -> st -> IO st

-- |Our main I\/O driver.

runLoopNB
  :: (st -> Timeout)                  -- ^ user state provides timeout
  -> (SomeException -> st -> IO st)   -- ^ user provides I\/O error handler
  -> ReadHandle                       -- ^ the input source
  -> Capacity                         -- ^ I\/O buffer size
  -> BlockHandler st                  -- ^ callback
  -> st                               -- ^ initial callback state
  -> IO st                            -- ^ return final callback state
runLoopNB mkTO errH hIn cap f initST = withBuffer cap (`ioloop` initST)
  where
  ioloop buf st = buf `seq` st `seq`
    handle (`errH` st)

Re: [Haskell-cafe] ANN: wxHaskell 0.13.2

2012-01-07 Thread Peter Simons
Hi guys,

 > I am pleased to announce that wxHaskell 0.13.2 has just been uploaded
 > to Hackage.

when I try to build the latest version on Linux/x86_64 running NixOS, I
get the following error at configure time:

Setup: Missing dependency on a foreign library:
* Missing C library: wx_gtk2u_media-2.8

I searched my hard disk for that library, and apparently my installed
copy of wxGTK-2.8.12 doesn't have it. There are plenty of libwx_gtk2u_*
libraries, but none of them is called "media".

Does anyone know how wxGtk must be built in order to make sure that
library exists? Is there some special configure flag, maybe?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Announce: The Haskell Platform 2011.4

2011-12-18 Thread Peter Simons
Hi guys,

 > We're pleased to announce the release of the Haskell Platform: a
 > single, standard Haskell distribution for everyone.

Haskell Platform 2011.4 is fully supported on NixOS .

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCEMENT] xmobar 0.14

2011-12-11 Thread Peter Simons
Hi Antoine,

 > What errors are you getting compiling with GHC 6.10.4? If it's a small
 > thing I certainly don't mind patching things.

I am sorry, my previous statement was inaccurate. Parsec 3.1.2 compiles
fine, but the 'text' library -- on which Parsec depends -- does not. We
can probably avoid that issue by downgrading text to version 0.11.0.6
for GHC 6.10.4, which builds fine. It's not a pretty solution, but it
seems to work fine.

So, the good news is that we now have Parsec 3 available for GHC 6.10.4
in NixOS after all. :-)

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCEMENT] xmobar 0.14

2011-12-11 Thread Peter Simons
Hi Jose,

 > Peter, would using parsec 3.x be an acceptable solution to you?

well, we can link xmobar with parsec 3.x on NixOS. The situation
is tricky, though, because the latest version of parsec that we
have, 3.1.2, doesn't compile with GHC 6.10.4 anymore, so we'd
have to use some older version to work around that problem. That
kind of setup is somewhat complicated to maintain, which is why I
would prefer to compile xmobar with parsec 2 if at all
possible.

Generally speaking, though, GHC 6.10.4 support is not a high
priority. I just thought it might be worth pointing out that
backwards compatibility has been lost in the 0.14 release,
because earlier versions worked just fine.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCEMENT] xmobar 0.14

2011-12-10 Thread Peter Simons
Hi Jose,

 > I'm happy to announce the release of xmobar 0.14.

previous versions of xmobar used to compile fine with GHC 6.10.4, but
the new version no longer does:

src/Parsers.hs:163:52:
Couldn't match expected type `Char' against inferred type `[Char]'
  Expected type: GenParser Char st Char
  Inferred type: GenParser Char st String
In the second argument of `($)', namely `wrapSkip $ string "Run"'
In a stmt of a 'do' expression:
  notFollowedBy $ wrapSkip $ string "Run"

The complete log is at ,
just in case there happens to be an easy fix for that error.

Thank you very much for your efforts!

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] problem with cabal install MissingH-1.1.1.0

2011-09-22 Thread Peter Simons
Hi Mariano,

 > I'm on Mac OS X Lion, GHC version 7.2.1, and when I try to install
 > MissingH version 1.1.1.0 it fails with [...]

that version of MissingH compiles fine on Linux, so I reckon the
problem you're seeing is in some way specific to Darwin. Your best
bet of getting a fix would be to report that error to the author,
i.e. by submitting a bug report at

  https://github.com/jgoerzen/missingh/issues

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: cabal-ghci 0.1

2011-09-09 Thread Peter Simons
Hi Etienne,

 > Here is a helpful package I wrote to ease the development of projects
 > using cabal.

thank you very much for this helpful tool!

I notice that Haddock has trouble parsing the documentation:

  
http://hackage.haskell.org/packages/archive/cabal-ghci/0.1/logs/failure/ghc-7.2

Is that error hard to fix?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Off-topic: Mathematics

2011-08-30 Thread Peter Simons
Hi Andrew,

 > I know of several places where I can ask maths questions and half a
 > dozen people will take guesses at what the correct solution might be.
 > I haven't yet found anywhere where I can say "when would a
 > chi-squared test be more appropriate than a KS test?" and get an
 > informed, knowledgeable answer. (Answers from people who /know/ what
 > they're talking about rather than just /think/ they know.)

I believe this phenomenon is quite natural and easily explained. When
you're asking a non-trivial question, hardly anyone just "knows" the
correct answer -- especially when it comes to math. In order to answer
your question, people have to dedicate time and effort to study the
problem you're asking about. (Furthermore, formulating a coherent
response is usually a bit of an effort, too.)

Now, a person who has profound knowledge of the subject you're asking
about is not very likely to do this, because he is probably not going to
learn anything in the process. Dedicating time and effort to studying
your particular problem is not an appealing prospect. A person who has
superficial understanding of the subject, however, is more likely to be
fascinated by the problem, and consequently he is more likely to
dedicate time and effort into formulating a response.

In other words, even if Donald Knuth himself is reading the forum you're
posting to, it doesn't mean that he is actually going to respond. On the
other hand, if you're asking the right question, Donald Knuth just might
respond to it, but not necessarily in the forum that you were originally
asking in.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Distributions link on Hackage

2011-08-11 Thread Peter Simons
Hi,

the home page of a package on Hackage links to various distributions to
show which versions are available, e.g. Fedora, Debian, FreeBSD, etc. In
NixOS, we have a fairly up-to-date package set, and I would like to see
that distribution included on Hackage.

Now I wonder how to get that done. Can anyone advise on the procedure to
add support for a distribution to Hackage?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Crypto-api performance

2011-05-05 Thread Peter Simons
Hi Matthew,

 > While I haven't investigated myself, from seeing haskell build processes
 > in the past this is almost certainly not crypto-api's fault and is in
 > fact your linker's fault. If you are not using it already, try switching
 > to gold over ld, it may help.

well, memory consumption sky-rockets while compiling "Crypto.CPoly".
That behavior is probably not related to the linker.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Crypto-api performance

2011-05-04 Thread Peter Simons
Also, it appears that crypto-api needs vast amounts of memory when
compiled with optimization enabled. The latest version 0.6.1 is
effectively unbuildable on my EeePC, which has only 1GB RAM. That
property is fairly undesirable for a library package.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] makeTokenParser + LanguageDef

2011-03-08 Thread Peter Simons
Hi Klaus,

for what it's worth, you might want to consider using this package
instead of Parsec:

  http://hackage.haskell.org/package/BNFC

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is there no "splitSeperator" function in Data.List

2011-02-14 Thread Peter Simons
Hi Evan,

 >>> The reason it's not in Data.List is because there are a bazillion
 >>> different splits one might want (when I was pondering the issue
 >>> before Brent released it, I had collected something like 8
 >>> different proposed splits), so no agreement could ever be reached.
 >>
 >> It is curious though that the Python community managed to agree on a
 >> single implementation and include that in the standard library… So
 >> it is possible :)
 >
 > This is sometimes cited as the advantage of a benevolent
 > dictator-for-life. I remember there was lots of argument when 'join'
 > was added as a string method (vs. should it be a list method). In the
 > end, Guido decided on one and that's what went in.

having a dictator is not a necessary prerequisite for the ability to
make decisions. It's quite possible to decide controversial matters
without a dictator -- say, by letting people vote.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ArchLinux binary repository available for beta testing

2011-01-11 Thread Peter Simons
Hi guys,

those of you who use the ArchLinux distribution might be interested to know
that a team of volunteers has put together a binary package repository that
complements the set of Haskell packages that's already being distributed by
ArchLinux. Subscribers of that repository can use Pacman to install all of
Haskell Platform 2010.2.0.0 as well as a few other popular packages such as
bnfc, hledger, pandoc, sifflet, and yesod on both i686 and x86_64. If you
want to use the repository, then append the following two lines at the end
of your /etc/pacman.conf file:

  [haskell]
  Server = http://andromeda.kiwilight.com/$repo/$arch

Please be aware of the fact that this is the very first public announcement
of this repository, so you should consider it as being in a kind of beta
state. Basically, if your Linux machine is responsible for controlling some
large nuclear power plant or something, you probably shouldn't be using
this. Everyone else is encouraged to try it out. If you encounter problems,
have questions, or would like to make suggestions, then please raise an
issue at . Of course, you're
also welcome to provide feedback by posting to the haskell-cafe mailing list
or to arch-hask...@haskell.org.

Many people have contributed to this effort in one way or another. Don
Stewart originally wrote the cabal2arch tool that is being used to generate
the HABS tree on which this repository is based. Rémy Oudompheng has
extended that tool and the underlying ArchLinux library significantly, and
he has also written most of the build system that's being used to compile
the binary packages. Magnus Therning has compiled all the x86_64 binaries.
The i686 binaries were compiled by Yours Truly. Kaiting Chen is kindly
hosting the repository on his server. Furthermore, there are many other
people who have submitted bug reports, suggestions, and fixes by way of AUR.

Have fun,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: storable-endian

2010-12-24 Thread Peter Simons
Hi guys,

 >> You could use ADNS.Endian.endian from package hsdns in your Setup.hs
 >> to define endianness at compile time.
 >
 > Cool, it's already there! However I would not recommend to let a
 > low-level library depend on a higher-level one. I think it would be
 > cleaner to move the ADNS.Endian module to storable-endian package or
 > even to a separate package.

yes, I agree. hsdns should re-use storable-endian, not the other way round.
The API offered by storable-endian is rather similar to the functions hsdns
currently uses internally, so it should be fairly straightforward to adapt.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC 7.0.1 developer challenges

2010-12-15 Thread Peter Simons
Hi John,

 > I think the previous responder was asserting the 32M limit, not you.

I believe the previous poster suggested that you use ulimit to provide a
hard upper bound for run-time memory use. That 32M figure seemed to be made
up out of thin air just as an example to illustrate the syntax of the ulimit
command. I don't have the impression that it was meant to be significant.


 > [My program allows] users to set a step count bound, after which the
 > program aborts. But guess what users do. They keep increasing the step
 > count bound to see if just a few more steps will allow termination on
 > their problem. Of course, some end up setting the bound so high, that
 > thrashing occurs.

I see. I must have misunderstood the situation. From your original posting,
I got the impression that the program would depend on an externally enforced
memory limit just to terminate at all!


 > So for implementations of undecidable algorithms, you really need an
 > intelligent memory bound on the GHC runtime.

Well, some sort of externally enforced memory limit is useful, yes, but you
don't strictly need that functionality in GHC. You can just as well use the
operating system to enforce that limit, i.e. by means of 'ulimit'.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC 7.0.1 developer challenges

2010-12-14 Thread Peter Simons
Hi John,

 > On Mon, Dec 13, 2010 at 10:45 AM, Peter Simons  wrote:
 >
 >> Relying exclusively on GHC's ability to limit run-time memory
 >> consumption feels like an odd choice for this task. It's nice that
 >> this feature exists in GHC, but it's inherently non-portable and
 >> outside of the scope of the language. There really ought to be a
 >> better way to catch an infinite loop than this.
 >
 > It all comes down to picking the correct memory limit. How do you
 > propose to do it? How did you come up with the number 32M? That
 > number would have been a disaster for me.

I beg your pardon? I didn't say anything about "32M". I said that
designing software to rely on a GHC-enforced memory limit as a means of
"dealing" with infinite loops feels really not like a particularly good
solution.

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC 7.0.1 developer challenges

2010-12-13 Thread Peter Simons
Hi Mathieu,

 > Why don't you use ulimit for this job?
 >
 > $ ulimit -m 32M; ./cpsa

yes, I was thinking the same thing. Relying exclusively on GHC's ability to
limit run-time memory consumption feels like an odd choice for this task.
It's nice that this feature exists in GHC, but it's inherently non-portable
and outside of the scope of the language. There really ought to be a better
way to catch an infinite loop than this.

Just my 2 cents
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: hledger 0.13

2010-12-09 Thread Peter Simons
Hi Simon,

 > [Are you] avoiding use of cabal-install and hackage entirely?

yes, I'm trying to provide a package for hledger 0.13 that can be
installed using ArchLinux's native package manager. The current version
is available here: .


 > How did hledger-0.13 get into the Arch packaging system [...].

It isn't, but I'm trying to get it in. :-)

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: hledger 0.13

2010-12-07 Thread Peter Simons
Hi Simon,

thank you very much for your efforts. I wonder whether there is any
particular reason why hledger won't build with process-1.0.1.3?

Take care,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Updating Haskell Packages through Archlinux becoming A Pain

2010-11-16 Thread Peter Simons
Hi Mathew,

 > [My GHC installation breaks] when pacman updates a package using an
 > AUR package, which cabal refuses to install because it can break
 > other packages (yet the package still gets installed according to
 > pacman).

this bug was fixed about two weeks ago; it should no longer occur
with the current PKGBUILD files. An easy way to get your system back in
shape is to un-install all Haskell packages by running

  pacman -R --cascade ghc

..., then clean up the package database with

  rm -rf /usr/lib/ghc-6.12.3

..., and finally re-install those packages that you'd like to have. The
procedure is a little awkward, I'm afraid, but you won't ever have to do
it again, just this one time. ;-)

Take care,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Who manages ?

2010-11-04 Thread Peter Simons
Hi guys,

a while ago, I created an account on Trac. Now, it seems that I've forgotten
both the password and the e-mail address that I used at the time. I cannot
log in, and I cannot make Trac send me the password either. Clearly, I need
the help of a human being with administrator privileges to figure that out.

Can someone give me a pointer about who I'd want to contact regarding that
issue?

Take care,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: I/O performance drop in ghc 6.12.1

2010-01-14 Thread Peter Simons
Hi Svein,

 > Hold on, he's using hGetBuf/hPutBuf.

exactly, that's what I was thinking. When a program requests that 'n'
bytes ought to be read into memory at the location designated by the
given 'Ptr Word8', how could GHC possibly do any encoding or decoding?
That API doesn't allow for multi-byte characters. I would assume that
hGetBuf/hPutBuf are equivalent to POSIX read() and write()?
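
For reference, these are the signatures in question (from System.IO):

  hGetBuf :: Handle -> Ptr a -> Int -> IO Int   -- yields the number of bytes read
  hPutBuf :: Handle -> Ptr a -> Int -> IO ()

Both operate on raw memory through a Ptr, so there is no room for any
per-character encoding or decoding.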

 > I wonder if the difference goes away if the handle is explicitly set
 > to binary?

I added an

   mapM_ (\h -> hSetBinaryMode h True) [ stdin, stdout ]

to 'main', and it does seem to improve performance a little, but it's
still quite a bit slower than /bin/cat:

 | $ time /bin/cat < test.data > /dev/null
 |
 | real    0m2.119s
 | user    0m0.003s
 | sys     0m1.967s
 |
 | $ time ./cat-hgetbuf < test.data > /dev/null
 |
 | real    0m3.449s
 | user    0m1.137s
 | sys     0m2.240s

Take care,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] I/O performance drop in ghc 6.12.1

2010-01-14 Thread Peter Simons
Hi,

I just updated to GHC 6.12.1, and I noticed a significant drop in I/O
performance that I can't explain. The following code is a simple
re-implementation of cat(1), i.e. it just echos all data from standard
input to standard output:

> module Main ( main ) where
>
> import System.IO
> import Foreign ( allocaBytes )
>
> bufsize :: Int
> bufsize = 4 * 1024
>
> catBuf :: Handle -> Handle -> IO ()
> catBuf hIn hOut = allocaBytes bufsize input
>   where
>   input ptr    = hGetBuf hIn ptr bufsize >>= output ptr
>   output  _  0 = return ()
>   output ptr n = hPutBuf hOut ptr n >> input ptr
>
> main :: IO ()
> main = do
>   mapM_ (\h -> hSetBuffering h NoBuffering) [ stdin, stdout ]
>   catBuf stdin stdout

That program used to have exactly the same performance as /bin/cat, but
now it no longer does:

 | $ dd if=/dev/urandom of=test.data bs=1M count=512
 |
 | $ time /bin/cat < test.data > /dev/null
 |
 | real    0m1.939s
 | user    0m0.003s
 | sys     0m1.923s
 |
 | $ time ./cat-hgetbuf < test.data > /dev/null
 |
 | real    0m4.327s
 | user    0m1.967s
 | sys     0m2.317s

I've tested different variants of the program that were built with -O,
-O2, and -O2 -funbox-strict-fields, respectively, but it doesn't seem to
make a difference.

Is there something I'm missing? Any suggestion would be kindly
appreciated.

Take care,
Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Functional MetaPost in 5 Steps

2008-10-28 Thread Peter Simons
Hi Robin,

> [FuncMP problems with pdflatex]

I have no experience whatsoever with pdflatex, I'm sorry. FuncMP works
just fine for me in normal LaTeX, though. That's not exactly what you
need, but from the sound of it, it might be a step forward anyway.

First of all, try writing the MetaPost files with the following
function, toMPFile, rather than the standard 'generate':

  toMPFile:: (IsPicture a) => a -> FilePath -> IO ()
  toMPFile pic f  = writeFile f (show $ toMetaPost pic)

  toMetaPost  :: (IsPicture a) => a -> Doc
  toMetaPost a= emit $ metaPost 0 (toPicture a) params
  where
  params  =  Parameters
 { mpBin  = undefined
 , funcmpBin  = undefined
 , funcmpRTS  = undefined
 , defaultDX  = 3
 , defaultDY  = 3
 , textDX = 2
 , textDY = 2
 , prolog = myprolog
 , epilog = "\\end"
 , newmp  = False
 }
  myprolog= "verbatimtex\n"
++ "\\documentclass[11pt]{report}\n"
++ "\\begin{document}\n"
++ "etex\n\n"
++ "input boxes\n"
++ "input FuncMP\n"

The resulting .mp file has to be run through mpost with the $MPINPUTS
variable pointing to the directory that contains FuncMP.mp.  This
will give you an EPS file, which in turn can be included in any LaTeX
document with, say \epsfig{}.

This whole process ought to work fine with texlive or tetex.
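
In other words, once toMPFile is in scope, the intended workflow is
roughly the following (a sketch only; 'pic' stands for whatever
IsPicture value your document produces, and the file name is a
placeholder):

  -- write the MetaPost source for one figure
  writeFigure :: (IsPicture a) => a -> IO ()
  writeFigure pic = toMPFile pic "figure.mp"

  -- then, outside of Haskell: run "figure.mp" through mpost with
  -- $MPINPUTS pointing at the directory that contains FuncMP.mp,
  -- and include the resulting EPS output via \epsfig{}.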

I hope this helps,
Peter
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] Re: Streams: the extensible I/O library

2006-02-06 Thread Peter Simons
Hey Bulat,

I tried removing the "import System.Win32", but unfortunately it
only got me so far:

 | Examples$ ghc -i..  -O2 -funbox-strict-fields --make wc.hs -o wc
 | Chasing modules from: wc.hs
 | [ 1 of 16] Compiling System.FD( ../System/FD.hs, ../System/FD.o )
 |
 | /tmp/ghc9376_0.hc:6:16:  io.h: No such file or directory
 | /tmp/ghc9376_0.hc: In function `SystemziFD_zdwccall_entry':
 |
 | /tmp/ghc9376_0.hc:1670:0:
 |  warning: implicit declaration of function `_eof'
 | /tmp/ghc9376_0.hc: In function `SystemziFD_zdwccall1_entry':
 |
 | /tmp/ghc9376_0.hc:1951:0:
 |  warning: implicit declaration of function `filelength'
 | /tmp/ghc9376_0.hc: In function `SystemziFD_zdwccall2_entry':
 |
 | /tmp/ghc9376_0.hc:2055:0:
 |  warning: implicit declaration of function `tell'
 | [abort]

I also downloaded the new release archive, just to be sure, but
it doesn't contain a file "io.h" either. Is that a system header?
The problem seems to be _eof.


 > btw, my "wc" has about the same speed as yours :)

I expected nothing less. All your code I've seen so far has been
exceptionally clever. I'm quite curious to try it out.

Peter



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: binary IO

2005-12-30 Thread Peter Simons
Hi Bulat,

 >> general-purpose binary I/O library for Haskell.
 >
 > where i can find it?

the module is available here:

  http://cryp.to/blockio/fast-io.html
  http://cryp.to/blockio/fast-io.lhs

The article is incomplete and a bit messy, but the code
works fine. Feedback and ideas for improvement are very
welcome.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Speed

2005-12-29 Thread Peter Simons
Albert Lai writes:

 > For almost a decade, most (I dare claim even all) Pascal
 > and C compilers were "three-pass" or "two-pass". It means
 > perhaps the compiler reads the input two or three times
 > [...], or perhaps the compiler reads the input once,
 > produces an intermediate form and saves it to disk, then
 > reads the intermediate form from disk, produces a second
 > intermediate form and saves it to disk, then reads the
 > second intermediate form from disk, then it can produce
 > machine code.
 >
 > It must have been the obvious method, since even though
 > it was obviously crappy [...].

I beg to differ. Highly modular software is not necessarily
crappy if you're writing something as complex as a C or
Pascal compiler -- especially in times where RAM existed
only in miniscule amounts. A highly modularized algorithm
for counting lines, words, and characters in an input file,
however, is something altogether different. I doubt anyone
but the most inexperienced novices would have tried to do
that in three passes in a strict language.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: binary IO

2005-12-29 Thread Peter Simons
Bulat Ziganshin writes:

 > your BlockIO library is great, but its usage is limited
 > to very specific situations - when we can save/pass state
 > between processing of individual bytes

In my experience, any other approach to I/O is slow. If you
don't have an explicit state between processing of
individual bytes, you have an implicit state. Making that
state (the I/O buffer) explicit gives you control over how
it is used and how it is evaluated. With an implicit state
(lazy evaluation), you have no control.

Fast I/O is a matter of application design. BlockIO is fast
because its API forces you to design your applications as
stateful, interruptible computations -- a finite state
machine. If you don't want to design your I/O application as
a finite state machine, then it will be slow regardless of
the I/O library you use. It sucks, but that is my
experience.
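
To make that concrete, here is a minimal sketch of the kind of design
I mean: an explicit state type plus a handler that folds it over an
I/O buffer. The names are illustrative only and are not BlockIO's
actual API:

  import Foreign ( Ptr, Word8, peek, plusPtr )

  -- the explicit state: bytes and newlines seen so far
  data ScanState = ScanState !Int !Int

  -- one step of the state machine: consume a buffer, return the new state
  feed :: Ptr Word8 -> Int -> ScanState -> IO ScanState
  feed _   0 st = return st
  feed ptr n (ScanState bytes nls) = do
    b <- peek ptr
    let nls' = if b == 10 then nls + 1 else nls
    feed (ptr `plusPtr` 1) (n - 1) (ScanState (bytes + 1) nls')

A driver loop then simply reads block after block into a buffer (with
hGetBuf, say) and threads the ScanState through the calls to 'feed'.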

This phenomenon isn't specific to Haskell, by the way. C++'s
std::iostream is another fine example for an implicit state
API that is beautiful, elegant, and quite useless for
high-performance I/O.


 > what about (de)serialization tasks? my best attempts are
 > still 10x slower than the C version. can you roll up a little
 > prototype for such a library or just sketch the ideas so i
 > can try to implement them?

The "Fast I/O" article I posted a few days ago is my
unfinished attempt at writing an efficient, general-purpose
binary I/O library for Haskell. I don't know how soon I'll
be able to complete that, nor do I know whether it would be
useful to many Haskell programmers if I do complete it. The
original BlockIO code has been stable (and quite fast) for
over a year or so, but I wouldn't know of anyone actually
using it. Apparently, designing explicit state data types is
nothing the Haskell community is fond of. :-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: binary IO

2005-12-27 Thread Peter Simons
Joel Reymont writes:

 > I will have to leave this for a while. I apologize but
 > I'm more than a bit frustrated at the moment and it's not
 > fair of me to take it out on everybody else.

Never mind. Haskell has a very high potential for frustrating
newcomers. I went through the exact same experience when I wrote
my first network code, and I still marvel at the patience the
nice folks on these mailing lists had with all my complaints.

From what I can tell, you have mastered a lot of sophisticated
language theory in a very short time. Part of the reason why
no-one can give you simple answers to your questions is that we
don't know these answers either. Just by asking those questions
you have already extended the boundaries of what the Haskell
community at large knows and understands. Give things a little
time to sink in, and then try it again. Even if you ultimately
decide to write your application in another language, you'll find
that knowing and understanding Haskell will change the way you
design software -- regardless of the language you use. Please
don't give up now that you have come this far.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: binary IO

2005-12-27 Thread Peter Simons
Joel Reymont writes:

 > I would challenge everyone with a fast IO library to plug
 > it into the timeleak code, run it under a profiler and
 > post the results (report + any alarms).

My guess is that you would learn more if _you_ would plug
the different IO libraries into your test code. I'm certain
the respective library authors will be quite happy to answer
questions and to investigate unexpected results. The
enthusiasm you put into this subject is very much
appreciated. Your collected results will be invaluable for
the Haskell community. Thank you for not giving up on the
language at the first sign of trouble. It's great that
you're so curious about Haskell and put so much effort into
finding a good, efficient solution for your needs.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Speed

2005-12-25 Thread Peter Simons
Tomasz Zielonka writes:

 >> wc :: String -> (Int, Int, Int)
 >> wc file = ( length (lines file)
 >>   , length (words file)
 >>   , length file
 >>   )
 >
 > I have a crazy idea: what if we computed all three length
 > applications concurrently, with the RTS preempting the
 > thread when it generates too much unreclaimable nodes?

That's a pretty good idea, actually. What is the state of
the 'par' primitive in GHC 6.x? Now that I think of it, it
seems that adding a proper "compute in parallel" annotation
could make a big difference in this case.
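
For what it's worth, a minimal sketch of what such an annotation might
look like, using 'par' from Control.Parallel (I haven't measured
whether this actually buys anything, so take it with a grain of salt):

  import Control.Parallel ( par )

  wcPar :: String -> (Int, Int, Int)
  wcPar file = l `par` w `par` c `seq` (l, w, c)
    where
      l = length (lines file)
      w = length (words file)
      c = length file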

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Speed

2005-12-25 Thread Peter Simons
Paul Moore writes:

 > It would be interesting to see standalone code for wcIOB
 > (where you're allowed to assume that any helpers you
 > need, like your block IO library, are available from the
 > standard library). This would help in comparing the
 > "obviousness" of the two approaches.

A simple version of the program -- which doesn't need any
3rd party modules to compile -- is attached below. My guess
is that this approach to I/O is quite obvious, too, if you
have experience with system programming in C.

IMHO, the main point of the example in the article is that

  wc :: String -> (Int, Int, Int)
  wc file = ( length (lines file)
, length (words file)
, length file
)

is a crappy word-counting algorithm. I'm not sure whether
conclusions about functional programming in general or even
programming in Haskell can be derived from this code. Most
people seem to have trouble with lazy evaluation, first of
all.

Peter



-- Compile with: ghc -O2 -funbox-strict-fields -o wc wc.hs

module Main ( main ) where

import System.IO
import Foreign

type Count = Int
data CountingState = ST !Bool !Count !Count !Count
 deriving (Show)

initCST :: CountingState
initCST = ST True 0 0 0

wc :: Char -> CountingState -> CountingState
wc '\n' (ST _ l w c) = ST True (l+1)  w   (c+1)
wc ' '  (ST _ l w c) = ST True   lw   (c+1)
wc '\t' (ST _ l w c) = ST True   lw   (c+1)
wc  _   (ST True  l w c) = ST False  l  (w+1) (c+1)
wc  _   (ST False l w c) = ST False  lw   (c+1)


bufsize :: Int  -- our I/O buffer size
bufsize = 4096

type IOHandler st = Ptr Word8 -> Int -> st -> IO st

countBuf :: IOHandler CountingState
countBuf  _  0 st@(ST _ _ _ _) = return st
countBuf ptr n st@(ST _ _ _ _) = do
  c <- peek ptr
  let st' = wc (toEnum (fromEnum c)) st
  countBuf (ptr `plusPtr` 1) (n - 1) st'

loop :: Handle -> Int -> IOHandler st -> st -> IO st
loop h n f st' = allocaArray n (\ptr' -> loop' ptr' st')
  where
  loop' ptr st = st `seq` do
rc <- hGetBuf h ptr n
if rc == 0
   then return st
   else f ptr rc st >>= loop' ptr

main :: IO ()
main = do
  ST _ l w c <- loop stdin bufsize countBuf initCST
  putStrLn . shows l . (' ':) . shows w . (' ':) $ show c

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Speed

2005-12-23 Thread Peter Simons
Daniel Carrera writes:

 > when I have a simple algorithm and performance is an
 > issue [...] I'd use C.

You don't have to. You can write very fast programs in
Haskell.

I never really finished the article I wanted to write about
this subject, but the fragment I have might be interesting
or even useful nonetheless:

  http://cryp.to/blockio/fast-io.html
  http://cryp.to/blockio/fast-io.lhs

The text uses one of the Language Shootout's tasks as an
example.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Speed

2005-12-23 Thread Peter Simons
Daniel Carrera writes:

 > If the results could be trusted, they would be useful.
 > You could balance the expected loss in performance
 > against other factors (e.g. speed of development).

How do you measure the time it takes to come up with a
QuickSort algorithm that, implemented in Haskell, crushes
the MergeSort algorithm all other languages use? ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell Speed

2005-12-23 Thread Peter Simons
Daniel Carrera writes:

 > http://shootout.alioth.debian.org/
 >
 > It looks like Haskell doesn't do very well. It seems to be
 > near the bottom of the pile in most tests. Is this due to
 > the inherent design of Haskell or is it merely the fact that
 > GHC is young and hasn't had as much time to optimize as
 > other compilers?

It's because nobody took the time to write faster entries
for the tests where Haskell is at the bottom of the pile.
The "Computer Language Shootout Benchmark" is a fun idea,
but it's quite pointless to draw any conclusions about
programming languages from those results. If it were
included in the contest, native assembler code would win
every time.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Tutorial uploaded

2005-12-21 Thread Peter Simons
 > Some example for writing a text the IO oriented way:
 >   do putStrLn "bla"
 >  replicateM 5 (putStrLn "blub")
 >  putStrLn "end"
 >
 > whereas the lazy way is
 >   putStr (unlines (["bla"] ++ replicate 5 "blub" ++ ["end"]))

Um, maybe it's just me, but I think the first program is far
superior to the second one. The last thing you want your I/O
code to be is lazy. You want the exact opposite: you want it
to be as strict as possible. Not only does the second
version waste a lot of CPU time and memory for pointlessly
constructing a lazily evaluated list nobody ever needs, it
will also blow up in your face the moment you use that
approach to write any non-trivial number of bytes.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Tutorial uploaded

2005-12-20 Thread Peter Simons
Daniel Carrera writes:

 > I'm scared of monads :) I really don't know what a monad
 > is.

Neither do I, but that doesn't mean that I can't use them just
fine. ;-)


 >> putStrLn :: String -> World -> World
 >
 > That seems less scary.

Things become a lot clearer when you think about how to
print _two_ lines with that kind of function. You'd write:

  f :: World -> World
  f world = putStrLn "second line" (putStrLn "first line" world)

The 'world' parameter forces the two functions into the
order you want, because printing "second line" needs the
result of printing "first line" before it can be evaluated.

However, writing complex applications with that kind of API
would drive you nuts, so instead you can say

  f :: IO ()
  f = do putStrLn "first line"
 putStrLn "second line"

and it means the exact same thing.

Curiously enough, if you check out the reference
documentation at:

  
http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad-ST.html#t%3ARealWorld

..., you'll find that a "World" type actually exists.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Tutorial uploaded

2005-12-20 Thread Peter Simons
 > == So how do I write "Hello, world"? ==
 >
 > Well, the first thing you need to understand that in a
 > functional language like Haskell, this is a harder
 > question than it seems. Most of the code you will write
 > in Haskell is "purely functional", which means that it
 > returns the same thing every time it is run, and has no
 > side effects. Code with side effects is referred to as
 > "imperative", and is carefully isolated from functional
 > code in Haskell.

I believe this description is a bit misleading. Code written
in the IO monad is purely functional just the same; Haskell
knows no other kind of code than purely functional code. In my
humble opinion, it's unfortunate that many tutorials and
introductory texts leave the impression that monadic code
is something utterly different from "normal" Haskell
code. I feel it intimidates the reader by making a monad
appear like black magic, even though it's little more than
syntactic sugar to describe implicit function arguments.

If we'd have an opaque "World" type instead of the IO monad,
'putStrLn' would be:

  putStrLn :: String -> World -> World

How is this function any different from any other function?
So why should

  putStrLn :: String -> IO ()

be different from any other function?

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: module names

2005-12-17 Thread Peter Simons
Scherrer, Chad writes:

 > When I'm using ghci, I have lots of modules that I
 > sometimes want to load "as Main", and sometimes I only
 > want them loaded as a dependency from another module.
 > Currently, I have to go into each file to change the
 > "module Foo where" line to do this.

Maybe the "-main-is" option can help to make your live
easier? You'll find more information here:

  
http://haskell.org/ghc/docs/latest/html/users_guide/flag-reference.html#id3131936
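
For example (module and function names are made up, so double-check
the exact syntax against the flag reference above):

  -- Foo.hs: an ordinary module with its own entry point
  module Foo where

  realMain :: IO ()
  realMain = putStrLn "running Foo.realMain"

  -- build it as a program without renaming the module to Main:
  --   ghc --make -main-is Foo.realMain Foo.hs -o foo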

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: syscall, sigpause and EINTR on Mac OSX

2005-12-11 Thread Peter Simons
Joel Reymont writes:

 > How do I get a threaded+debug runtime?

You have to build GHC from source code for that. When you
do, make sure your ${srcdir}/ghc/mk/build.mk file contains:

  GhcRTSWays += thr_debug

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Newbie] Why or why not haskell ?

2005-12-10 Thread Peter Simons
Christophe Plasschaert writes:

 > With erlang or haskell, can we play with or implement
 > lower network function (routing daemon interacting with a
 > kernel) [...]

I can't speak for Erlang, but in Haskell you can. Through
the Foreign Function Interface, you can access arbitrary
3rd-party libraries or system calls, including pointer
arithmetic and the wonders of memory management.
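
As a trivial illustration of the mechanism, here is a hedged sketch
that binds the POSIX getpid(2) call directly (nothing network-specific,
but sockets, ioctls, and so on work the same way; declaring the result
as CInt is a simplification):

  {-# OPTIONS -fglasgow-exts #-}

  module Main ( main ) where

  import Foreign.C.Types ( CInt )

  -- bind the system call straight from the C library
  foreign import ccall unsafe "unistd.h getpid"
      c_getpid :: IO CInt

  main :: IO ()
  main = c_getpid >>= print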


 > In terms of speed, is haskell good enough ?

You have C/C++'s performance for algorithms that do things
the way you would do them in C. Once you start to rely on
lazy evaluation, performance may be really good or really
bad, depending on your algorithms. Practical experience
suggests that writing efficient algorithms in a non-strict
language is difficult at first. If you stay away from
infinite lists that map to an I/O stream through lazy
evaluation, however, you should be fine. ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Network parsing and parsec

2005-09-15 Thread Peter Simons
John Goerzen writes:

 > With networking, you must be careful not to attempt to
 > read more data than the server hands back, or else you'll
 > block. [...] With a protocol such as IMAP, there is no
 > way to know until a server response is being parsed, how
 > many lines (or bytes) of data to read.

The approach I recommend is to run a scanner (tokenizer)
before the actual parser.

IMAP, like most other RFC protocols, is line-based; so you
can use a very simple scanner to read a CRLF-terminated line
efficiently (using non-blocking I/O, for example), which you
can then feed into the parser just fine because you know
that it has to contain a complete request (response) that
you can handle.
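
A hedged sketch of the split I have in mind, using a plain blocking
Handle for brevity (a real server would do the reading with
non-blocking I/O underneath):

  import System.IO

  -- the scanner: deliver exactly one complete, CRLF-terminated line
  readCRLFLine :: Handle -> IO String
  readCRLFLine h = fmap (filter (/= '\r')) (hGetLine h)

  -- the "parser": here just a stub that tokenizes the complete line
  parseResponse :: String -> [String]
  parseResponse = words

Because the parser only ever sees complete lines, it never has to
decide how many more bytes to read.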

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Creating a Haskell section in Autoconf Macro Archive

2005-08-12 Thread Peter Simons
David Roundy writes:

 > I'm happy of course to have darcs' autoconf macros
 > included, I'm just not too likely to find time to do it
 > myself. :)

I have the same problem, which is why I hoped someone would
help with the effort. ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Creating a Haskell section in Autoconf Macro Archive

2005-08-12 Thread Peter Simons
Fellow Haskell'ers,

I've been maintaining the site 
for a while now, and I can't help but notice that there is no
Haskell section. I know that GHC comes with a lot of useful
Haskell-related Autoconf macros; so does Darcs, and so do a lot
of other projects too. Thus, I was wondering whether the
respective macro authors would be willing to submit their macros
to the archive for distribution, so that other software
developers around the world can find them, and re-use them.

To have a macro added to the archive, all you need to do is to
e-mail it to me in the proper mark-up format, so that the
software which generates the archive can extract the necessary
meta information and documentation.

I'll use one of David Roundy's macros as an example to show what
that looks like. Hope you don't mind, David. ;-) Here it is:

 | dnl @synopsis AX_TRY_COMPILE_GHC(PROGRAM, [ACTION-IF-TRUE], [ACTION-IF-FALSE])
 | dnl
 | dnl AX_TRY_COMPILE_GHC tries to compile and link the given
 | dnl program using $GHC for the compiler, and $GHCFLAGS as
 | dnl flags passed to the compiler.
 | dnl
 | dnl @category Haskell
 | dnl @author David Roundy <[EMAIL PROTECTED]>
 | dnl @version 2005-08-12
 | dnl @license AllPermissive
 |
 | AC_DEFUN([AX_TRY_COMPILE_GHC],[
 | cat << \EOF > conftest.hs
 | -- [#]line __oline__ "configure"
 | [$1]
 | EOF
 | rm -f Main.hi Main.o
 | if AC_TRY_COMMAND($GHC $GHCFLAGS -o conftest conftest.hs) && test -s conftest
 | then
 | dnl Don't remove the temporary files here, so they can be examined.
 |   ifelse([$2], , :, [$2])
 | else
 |   echo "configure: failed program was:" >&AC_FD_CC
 |   cat conftest.hs >&AC_FD_CC
 |   echo "end of failed program." >&AC_FD_CC
 | ifelse([$3], , , [ rm -f Main.hi Main.o
 |   $3
 | ])dnl
 | fi])

One of the more interesting keywords is @license, obviously. The
license recommended by the Free Software Foundation for Autoconf
macros is the all-permissive license, which reads like this:

 | Copying and distribution of this file, with or without
 | modification, are permitted in any medium without royalty
 | provided the copyright notice and this notice are preserved.

Other valid choices are "GPL2", "GPLWithACException", and "BSD".
A more thorough description of the keywords is available on the
archive's homepage.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Creating a Haskell app to do crosstabs

2005-08-11 Thread Peter Simons
Bulat Ziganshin writes:

 > afaik Spirit is modeled after ParseC (parsing combinators)
 > haskell library and Phoenix was needed for this library
 > because parser combinators require lazy functional language to
 > work :)

Just a minor nit: the Phoenix library has nothing to do with
parsing. It's basically a collection of expression templates
which save you a lot of time when it comes to writing glue code.
Binding arguments of arbitrary function objects is something
Phoenix can do, for example. Spirit works well with that library
because both were written by the same author, but they aren't
really related.

You are right, though, that Spirit was influenced by Haskell
quite a bit. As a matter of fact, it was Spirit's author -- Joel
de Guzman -- who made me aware of Haskell when he posted some
example source code on the mailing list back then; I think it was
the usual implementation of Quicksort. I distinctly remember that
I couldn't believe my eyes when I saw that. ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: cabal --user question

2005-07-11 Thread Peter Simons
Isaac Jones writes:

 > ./setup configure --user #if it depends on user-local packages
 > ./setup build
 > ./setup install --user

 > Perhaps install --user should be the default if you
 > configure --user.

Yes, I think that would be more intuitive. It would also be
nice to be able to configure Cabal to do the user-style
install per default. An environment variable, maybe?

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] HaXml (was: Processing a file with HaXml ...)

2005-06-02 Thread Peter Simons
Graham Klyne writes:

 > http://www.ninebynine.org/Software/HaskellUtils/HaXml-1.12/

 > This code is all heavily refactored from the original
 > HaXml for improved XML entity handling, namespace,
 > xml:lang and xml:base support [...].

Is there any chance of reuniting the two HaXml versions into
a single release?

I maintain quite a bit of code that's based on Malcolm's
original HaXml version, and I'm reluctant to switch because
I'm very happy with his library, but I would also like to
have support for the features you've mentioned. So from my
perspective, getting your changes back into the "main
release" would be the best course of action.

We've talked about that before. Has there been any progress?

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: CGI module almost useless

2005-06-01 Thread Peter Simons
John Goerzen writes:

 > Is there a better CGI module out there somewhere [...]?

http://cryp.to/formdata/

The module addresses your points insofar as it doesn't
prohibit you from solving them yourself -- like Network.CGI
does. Patches to improve (read: add) documentation would be
very welcome. ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Unexported functions are evil

2005-05-17 Thread Peter Simons
Iavor Diatchki writes:

 >> Do you have an concrete example which illustrates this
 >> point?

 > [...] consider a file A.hs that defines some data type T
 > and exports a function "f" that is defined in terms of a
 > private function "g". Now if we place "g" in a file
 > called "Private.hs" then A needs to import Private, but
 > also "Private" needs to import "A" for the definition of
 > "T".

Ah, now it see it! Great example.

But couldn't you just put T in "Foo/Base.hs", g in
"Foo/Private.hs", then create "Foo/API.hs" which imports
"Foo/Base.hs" and "Foo/Private.hs", and leave f in "A.hs"
and import "Foo/API.hs"?

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Unexported functions are evil

2005-05-17 Thread Peter Simons
Thanks for your opinions everybody!


Ketil Malde writes:

 > I guess you could sometimes have name clashes as well?

I was afraid of those for the longest time too, but in
practice name clashes curiously enough hardly ever occur --
in my experience. The problem only arises when you actually
_use_ a function that's ambiguous; just importing two
modules with a function called 'foo' in them is no problem.
And then you still have the option of using 'hide' or
'qualified' when importing the modules.


 >> On those occasions, however, why not put the function
 >> into a module, say "Foo.Bar.Private" and import it into
 >> "Foo.Bar" from there?

 > So you don't want to automatically re-export imports, I
 > take it? :-)

No. ;-) Although I would like to have a shortcut for saying
"(re-)export everything".


David Roundy writes:

 >> [...] why not put the function into a module, say
 >> "Foo.Bar.Private" and import it into "Foo.Bar" from
 >> there?

 > Because a separate module is more work.

It sure is, but I don't think that "it's more work" or "it's
less work" is a good principle by which to make software
design decisions. If you follow this idea through, then you
could also argue that splitting a program into separate
functions is more work than writing one big, big 'main'
function for the task. And it sure is. Still, enlightened
programmers do just that, because it often turns out that
doing so is _less work_ in the long run. I believe the same
applies to cleanly grouping functions into separate modules,
and a separation between "public API" and "internal
implementation" doesn't sound so unreasonable, IMHO.


 > Almost every module I write has private functions, and
 > I'd really not want to write twice as many modules.

Why do these functions need to be private?


 > In darcs, if someone needs to "play with fire", he can do
 > it right there in the module itself, and export a new
 > interface function.

Not really, unless you decide to include the patches in
the main distribution. If you don't, someone who wants to
access the internals essentially has to create a fork of
your software, and that's something you really want to avoid
if you want to encourage re-use.


 > In the case of darcs, I'd say that the whole point of
 > using modules (besides making things faster to compile)
 > is to place these barriers, so that one can modify an
 > individual module without worrying about the rest of the
 > code, as long as one keeps the interface fixed.

I'm not sure whether I understand that. When modifying a
function 'foo', why do you have to worry _less_ about 'bar'
if it is in a separate module than if it were in the same
module?


 > There's also the ease of bug-writing issue. I think that
 > exported interfaces should be the sorts of functions
 > where the meaning of the function can be guessed from its
 > name and type.

Shouldn't _any_ function have those properties if at all
possible?


ajb writes:

 >> Is there any reason why you would have a function in one
 >> of your modules and _not_ export it?

 > Because that function is nobody else's business.

I'm sorry, but that's not really a convincing technical
argument; that's essentially "because I want it so".


 > So while I think you've identified a real problem (the
 > modules that you want to use expose insufficient APIs), I
 > think your solution is wrong. The right solution is to
 > complain to the module writer, and ask them to export a
 > functionally complete API.

So my solution is wrong and your solution is right. ;-)
With that out of the way, what are your reasons for this
opinion? (Other than that the "art of programming" says it
ought to be this way.)


 >> The only reason I could think of is that a function is
 >> considered to be "internal" [...]

 > Right. And I agree with David: This is reason enough.

How is an internal function any _more_ internal if you don't
export it? How is it less internal if you _do_ export it?
Why doesn't the approach

  -- | /Attention:/ this function is internal and may change
  --  at random without even so much as a shrug.

  foo = ...

suffice?


 > With my business hat on: Every time you expose something
 > for use, you must at the very least document it.

I'd recommend documenting _all_ functions I write, not just
the exported ones.


 > Taking this to an illogcial extreme, why don't we allow
 > pointer arithmetic in Haskell, but require people to
 > import "Prelude.C" first, so that people who enjoy
 > playing with fire can do it?

You mean "Foreign.Ptr"? Curiously enough, if Haskell
didn't support pointer arithmetic, the language would be
completely useless to me, so I for one don't think that's
taking things to the illogical extreme.


Iavor Diatchki writes:

 > [...] in practice this is likely to often lead to
 > recursive modules [...]

Why is that? My intuition would say that the exact opposite
is true: a more fine-grained set of modules is _less_ likely
to require recursive modul

[Haskell-cafe] Unexported functions are evil

2005-05-15 Thread Peter Simons
Please pardon the tendentious subject, but I felt like
making a clear statement. ;-)

I was wondering: Is there any reason why you would have a
function in one of your modules and _not_ export it?

I ask because I have _never_ had problems with a module
exporting too much, but I have had problems with modules
exporting too little quite frequently.

The reason why I like purely functional languages like
Haskell is that it is virtually impossible to write code
that cannot be reused. So why would you exclude other
modules from reusing your code?

The only reason I could think of is that a function is
considered to be "internal", meaning: You don't want users
of the module to rely on the function still being there (or
still working the same way) in any of the next revisions.

On those occasions, however, why not put the function into a
module, say "Foo.Bar.Private" and import it into "Foo.Bar"
from there? Then those people who enjoy playing with fire
_can_ use it, and everybody else will not.
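
Concretely, something along these lines (the names are made up for
the sake of the example):

  -- Foo/Bar/Private.hs
  module Foo.Bar.Private where

  -- the "internal" helper, exported for those who want to play with fire
  twiddle :: Int -> Int
  twiddle = (+ 1)

  -- Foo/Bar.hs
  module Foo.Bar ( publicApi ) where
  import Foo.Bar.Private

  publicApi :: Int -> Int
  publicApi = twiddle . twiddle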

Is there something I missed?

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: NumberTheory library

2005-05-04 Thread Peter Simons
The list found at

  http://haskell.org/libraries/#numerics

might be a good starting point for finding what you need.
I can recommend the "DoCon" library, which is pretty
sophisticated.

Another good choice might be the crypto library available
at:

  http://www.haskell.org/crypto/

It also includes several number theory modules and is
arguably somewhat simpler to use than DoCon is.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Where do you use Haskell?

2005-05-03 Thread Peter Simons
Daniel Carrera writes:

 > So, I'm tempted to conclude that FP is only applicable to
 > situations where user interaction is a small part of the
 > program. For example, for simulations.

I can't confirm that. I've written several I/O-intensive
applications in Haskell, including full-blown network
servers which arguably do nothing _but_ user interaction,
and I haven't run into any problems I couldn't solve. Plus,
most of the problems I've run into were caused by non-strict
evaluation rather than the purely functional design per se.

The old saw of "right tool for the right job" certainly
applies to Haskell too, but given the fact that there are
_operating systems_ written in Haskell, I'd say that
Haskell's scope is a lot broader than most people would
think.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Random State Monad and Stochastics

2005-05-02 Thread Peter Simons
Dominic Steinitz writes:

 > I don't think they are in the standard libraries but
 > there was some discussion about them a few months ago but
 > I couldn't find a reference.

 > Peter, Can you supply one?

Naturally. ;-) The discussion started here:

  http://www.haskell.org//pipermail/libraries/2005-February/003143.html

There were many different (but more or less equivalent)
versions of these combinators posted in the thread.


 > Did you put a library of this sort of thing together?

Not yet. I still plan to set up a Darcs repository for this
kind of thing ("prelude extensions" it is called on the
Wiki), but I may not get around to it for the next couple of
weeks, unfortunately. I've recently started working full-time
as a software developer (in C++, yuck), and ever
since my own private projects have come to a screeching
halt. It's very annoying. Anyway, I guess I'm not the only
one on this list who has that problem. ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: fptools in darcs now available

2005-04-29 Thread Peter Simons
Simon Marlow writes:

 > if I just want to check out e.g. Haddock, I have to get
 > the entire fptools repo (350M+, wasn't it?).

I guess the "best" way to do that with Darcs would be to

 (1) pull the fp-tools repository,
 (2) delete all files you don't need for Haddock,
 (3) pull that into your Haddock repository.

So by pulling Haddock you would automatically get those
parts of fptools that you need. The intermediate repository
created in (2) can be deleted afterwards.

Now you can pull from "fp-tools" into "Haddock" to update
your build infrastructure.


 > 1. Make it possible to 'darcs get' just part of a tree.

I might be wrong about this, but my impression is that
Darcs does not support "modules" of any kind. You check out
an entire repository, nothing less.


 > 2. Create separate repositories for GHC, Happy, Haddock
 > etc., and duplicate the shared fptools structure in each
 > project. Each time we modify something in the shared part
 > of the tree, we pull the patch into the other trees.

That's the way to do it, IMHO.

 > is it possible to cherry-pick from a tree that doesn't
 > have a common ancestor?

Yes, although the merging process may be non-trivial.


 > If not, can we make the repositories appear to have
 > common ancestry?).

Just pull the "common ancestor" repository into all
sub-repositories, as described above.

Peter



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: about Ghci

2005-04-28 Thread Peter Simons
SCOTT J writes:

 > What do I have to do in order to avoid always typing
 > :set -fglasgow-exts

Add the line

  {-# OPTIONS -fglasgow-exts #-}

at the top of the source code. Then the flag will be set
when you load the module. This works for all kind of
settings:

  
http://haskell.org/ghc/docs/latest/html/users_guide/using-ghc.html#source-file-options

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Closest analog to popen()

2005-04-14 Thread Peter Simons
Dimitry Golubovsky writes:

 >> System.Process.runInteractiveCommand

 > Is this available only in 6.4?

Yes, I think so. The module's source code was posted to the
-libraries mailing list a while ago, but GHC 6.4 is the
first release to ship it.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Closest analog to popen()

2005-04-13 Thread Peter Simons
Dimitry Golubovsky writes:

 > Does there exist any analog of popen in the standard Haskell libraries?

Maybe System.Process.runInteractiveCommand is what you need?

http://haskell.org/ghc/docs/latest/html/libraries/base/System.Process.html
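
A minimal usage sketch, assuming the signature shipped with GHC 6.4,
runInteractiveCommand :: String -> IO (Handle, Handle, Handle, ProcessHandle):

  import System.IO
  import System.Process

  main :: IO ()
  main = do
    (inp, out, _err, pid) <- runInteractiveCommand "ls"
    hClose inp                    -- we have nothing to feed the child
    result <- hGetContents out
    putStr result                 -- force the output before waiting
    _ <- waitForProcess pid
    return ()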

Peter



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Strange HTTP module behavior [PATCH]

2005-02-19 Thread Peter Simons
John Goerzen writes:

 > Which arguably is not what one would expect recv to do,
 > and in any case is undocumented at
 > http://www.haskell.org/ghc/docs/latest/html/libraries/network/Network.Socket.html#v%3Arecv

Someone correct me if I am wrong, but I think that _all_ I/O
functions may throw an EOF exception -- not just recv. I
have found the following wrapper to be the simplest solution
to dealing with that:

  -- |Return 'Nothing' if the given computation throws an
  -- 'isEOFError' exception.

  handleEOF :: IO a -> IO (Maybe a)
  handleEOF f =
catchJust ioErrors
  (fmap Just f)
  (\e -> if isEOFError e then return Nothing else ioError e)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Literate Haskell

2005-02-19 Thread Peter Simons
Dmitri Pissarenko writes:

 > I'm curious what experienced Haskellers think about using
 > literate Haskell in daily work.

My experience is that literate-style documentation tends to go
out of sync or become misleading as quickly as any other kind of
documentation. It doesn't matter that much whether you write

 | This function does something.
 |
 | > the_function ...

or whether you write (for Haddock):

 | -- |This function does something.
 |
 | the_function ...

It's just a difference in syntax.

I've also found that many of the projects I have seen actually
written in literate style did *not* come with particularly
good documentation. (Darcs is probably the exception, because
it is documented very well.)

I use literate Haskell mostly when writing articles that
include code. For that, it is great. But for everyday
programming, I prefer Haddock.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Parsing in Haskell

2005-02-15 Thread Peter Simons
Johan Glimming writes:

 > What is the best way of replacing yacc/bison and (f)lex when
 > migrating the project into Haskell?

My favorite tool for writing parsers is this one:

  http://www.cs.chalmers.se/~markus/BNFC/

You give it a grammar in BNF notation, and it will generate
parsers and appropriate data structures for Haskell, C,
Java, etc. 

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Point-free style

2005-02-10 Thread Peter Simons
Jan-Willem Maessen writes:

 > Is it really clear or obvious what
 >
 >map . (+)
 >
 > means?

Yes, it is perfectly obvious once you write it like this:

  incrEach :: Integer -> [Integer] -> [Integer]
  incrEach = map . (+)

Now compare that to the following function, which does the
same thing but without point-free notation:

  incrEach' :: Integer -> [Integer] -> [Integer]
  incrEach' i is = is >>= \i' -> return (i'+i)

Which one is harder to read? ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: File path programme

2005-02-02 Thread Peter Simons
Glynn Clements writes:

 >> Well, there is a sort-of canonic version for every path;
 >> on most Unix systems the function realpath(3) will find
 >> it. My interpretation is that two paths are equivalent
 >> iff they point to the same target.

 > I think that any definition which includes an "iff" is
 > likely to be overly optimistic.

I see your point. I guess it comes down to how much effort
is put into implementing a realpath() derivate in Haskell.


 > Even so, you will need to make certain assumptions. E.g.
 > older Unices would allow root to replace the "." and ".."
 > entries; you probably want to assume that can't happen.

My take on things is that it is hopeless to even try and
cover all this weird behavior. I'd like to treat paths as
something abstract. What I'm aiming for is that my library
can be used to manipulate file paths as well as URLs,
namespaces, and whatnot else; so I'll necessarily lose some
functionality that an implementation specifically designed
for file paths could provide. If you want to be portable,
you cannot use any esoteric functionality anyway.


 > There are also issues of definition, e.g. is "/dev/tty"
 > considered "equivalent" to the specific "/dev/ttyXX"
 > device for the current process?

No, because the paths differ. ;-)

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: File path programme

2005-01-31 Thread Peter Simons
Sven Panne writes:

 > OK, but even paths which realpath normalizes to different
 > things might be the same (hard links!).

Sure, but paths it normalizes to the same thing almost
certainly _are_ the same. ;-) That's all I am looking for.
In general, I think path normalization is a nice-to-have
feature, not a must-have.


 > IMHO we can provide something like realpath in the IO
 > monad, but shouldn't define any equality via it.

You are right; Eq shouldn't be defined on top of that. And
couldn't even, if normalization needs the IO monad anyway.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: File path programme

2005-01-31 Thread Peter Simons
Sven Panne writes:

 > Hmmm, I'm not really sure what "equivalence" for file
 > paths should mean in the presence of hard/symbolic links,
 > (NFS-)mounted file systems, etc.

Well, there is a sort-of canonic version for every path; on
most Unix systems the function realpath(3) will find it.
My interpretation is that two paths are equivalent iff they
point to the same target.

You (and the others who pointed it out) are correct, though,
that the current 'canon' function doesn't accomplish that. I
guess I'll have to move it into the IO monad to get it
right. And I should probably rename it, too. ;-)


Ben Rudiak-Gould writes:

 > The Read and Show instances aren't inverses of each
 > other. I don't think we should be using Read for path
 > parsing, for this reason.

That's fine with me; I can change that.


 > I don't understand why the path ADT is parameterized by
 > segment representation, but then the Posix and Windows
 > parameter types are both wrappers for String.

No particular reason. I just wanted to make the library work
with a simple internal representation before doing the more
advanced stuff. It is experimental code.


 > It seems artificial to distinguish read :: String ->
 > RelPath Windows from read :: String -> RelPath Posix in
 > this way.

I think it's pretty neat, actually. You have a way to
specify what kind of path you have -- and the type system
distinguishes it, not a run-time error.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: File path programme

2005-01-31 Thread Peter Simons
Robert Dockins writes:

 > 1) File names are abstract entities.  There are a number of
 > ways one might concretely represent a filename. Among these
 > ways are:
 >
 >a) A contiguous sequence of octets in memory
 > (C style string on most modern hardware)
 >b) A sequence of unicode codepoints
 > (Haskell style string)
 >c) Algebraic datatypes supporting path manipulations
 > (yet to be developed)

The solution I have in mind uses algebraic data types which
are parameterized over the actual representation. Thus, you
can use them to represent any type of path (in any kind of
representation). In the spirit of release early, release
often:

  http://cryp.to/pathspec/PathSpec.hs
  darcs get http://cryp.to/pathspec

The module currently knows only _relative_ paths. I am still
experimenting with absolute paths because I have recently
learned that on Windows something like "C:foo.txt" is
actually relative -- not absolute. Very weird.

There also is a function which changes a path specification
into its canonic form, meaning that all redundant segments
are stripped. So although two paths which designate the same
target may not be equal, they can be tested for equivalence.
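
To give a rough idea of the shape of the thing -- this is a simplified
sketch of the parameterization, not the exact definitions you will
find in PathSpec.hs:

  -- a relative path is a list of segments; the segment type is the
  -- parameter, so the same machinery can serve file paths, URLs, etc.
  newtype RelPath seg = RelPath [seg]
    deriving (Eq, Show)

  newtype Posix   = Posix String   deriving (Eq, Show)
  newtype Windows = Windows String deriving (Eq, Show)

In the library, parsing picks the representation through the result
type, as in read "foo/bar/baz.txt" :: RelPath Posix.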

Suggestions for enhancement are welcome, of course.

Peter

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

