Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Daniel Trstenjak

On Wed, Aug 15, 2012 at 03:54:04PM -0700, Michael Sloan wrote:
> Upper bounds are a bit of a catch-22 when it comes to library authors evolving
> their APIs:
> 
> 1) If library clients aren't encouraged to specify which version of the
>exported API they target, then changing APIs can lead to opaque compile
>errors (without any information about which API is intended).  This could
>lead the client to need to search for the appropriate version of the
>library.

With a version number A.B.*, most packages seem to mostly increase B or
lower components of the version number.

If an upper bound is missing, cabal could then use any package in the
range A.*.* .

If an author wants to make breaking changes to his API, he could
indicate this by increasing A.
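In cabal terms, that convention would amount to treating a dependency with no upper bound as if it carried the next major version as a cap. A hedged illustration (the package name foo is hypothetical):

```
-- "build-depends: foo >= 1.2" would implicitly be read as:
build-depends: foo >= 1.2 && < 2
```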

I've nothing against your proposal; I just don't think it will happen
that soon.


Greetings,
Daniel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Can pipes solve this problem? How?

2012-08-16 Thread oleg

> Consider code, that takes input from handle until special substring matched:
>
> > matchInf a res s | a `isPrefixOf` s = reverse res
> > matchInf a res (c:cs)   = matchInf a (c:res) cs
> > hTakeWhileNotFound str hdl = hGetContents hdl >>= return.matchInf str []
>
> It is simple, but the handle is closed after running. That is not good,
> because I want to reuse this function.

This example is part of one of Iteratee demonstrations
http://okmij.org/ftp/Haskell/Iteratee/IterDemo1.hs

Please search for 
-- Early termination:
-- Counting the occurrences of the word ``the'' and the white space
-- up to the occurrence of the terminating string ``the end''

The iteratee solution is a bit more general because it creates an
inner stream with the part of the outer stream until the match is
found. Here is a sample application:

run_bterm2I fname = 
  print =<< run =<< enum_file fname .| take_until_match "the end"
   (countWS_iter `en_pair` countTHE_iter)

It reads the file until "the end" is found, and counts white space and
occurrences of a specific word, in parallel. All this processing
happens in constant space, and we never need to accumulate anything
into a string. If you do need to accumulate into a string, there is
an iteratee, stream2list, that does that.

The enumeratee take_until_match, like take and take_while, stops when the
terminating condition is satisfied or when EOF is detected. In the
former case, the stream may contain more data and remains usable.

A part of IterDemo1 is explained in the paper
http://okmij.org/ftp/Haskell/Iteratee/describe.pdf

I am not sure though if I answered your question since you were
looking for pipes. I wouldn't call Iteratee pipes.




Re: [Haskell-cafe] [Pipes] Can pipes solve this problem? How?

2012-08-16 Thread Michael Snoyman
On Wed, Aug 15, 2012 at 9:54 PM, Daniel Hlynskyi wrote:

> Hello Cafe.
> Consider code, that takes input from handle until special substring
> matched:
>
> > matchInf a res s | a `isPrefixOf` s = reverse res
> > matchInf a res (c:cs)   = matchInf a (c:res) cs
> > hTakeWhileNotFound str hdl = hGetContents hdl >>= return.matchInf str []
>
> It is simple, but the handle is closed after running. That is not good,
> because I want to reuse this function.
> Code can be rewritten without hGetContent, but it is much less
> comprehensible:
>
> hTakeWhileNotFound str hdl = fmap reverse$ findStr str hdl [0] []
>  where
>findStr str hdl indeces acc = do
>  c <- hGetChar hdl
>  let newIndeces = [ i+1 | i <- indeces, i < length str, str!!i == c]
>  if length str `elem` newIndeces
>then return (c : acc)
>else findStr str hdl (0 : newIndeces) (c : acc)
>
> So, the question is - can pipes (any package of them) be the Holy Grail in
> this situation, to both keep simple code and better deal with handles (do
> not close them specifically)? How?
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
This is essentially what we do in wai-extra for multipart body parsing[1].
This code uses `conduit`.

The tricky part is that you have to remember that the substring you're
looking for might be spread across multiple chunks, so you need to take
that into account. A simple approach would be:

* If the search string is a substring of the current chunk, success.
* If the end of the current chunk is a prefix of the search string, grab
  the next chunk, append the two, and repeat. (Note: there are more
  efficient approaches than appending.)
* Otherwise, skip to the next chunk.
* If no more chunks are available, the substring was not found.
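The steps above can be sketched in plain Haskell (a hedged sketch, not the wai-extra code: instead of prefix-testing the chunk end, it carries the last needle-length-minus-one characters across each chunk boundary, which is equivalent and easier to state):

```haskell
import Data.List (isInfixOf)

-- Does needle occur anywhere in the concatenation of the chunks?
searchChunks :: String -> [String] -> Bool
searchChunks needle = go ""
  where
    keep = length needle - 1
    go _ [] = False                       -- no more chunks: not found
    go carry (c:cs)
      | needle `isInfixOf` buf = True     -- found within the current window
      | otherwise              = go (lastN keep buf) cs
      where buf = carry ++ c
    lastN k xs = drop (length xs - k) xs  -- suffix of length at most k

main :: IO ()
main = print (searchChunks "the end" ["... and th", "e en", "d of story"])
```

Here the match "the end" straddles all three chunks, so a per-chunk substring test alone would miss it.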

Michael

[1]
https://github.com/yesodweb/wai/blob/master/wai-extra/Network/Wai/Parse.hs#L270


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Joachim Breitner
Hi,

Am Mittwoch, den 15.08.2012, 12:38 -0700 schrieb Bryan O'Sullivan:
> I propose that the sense of the recommendation around upper bounds in
> the PVP be reversed: upper bounds should be specified only when there
> is a known problem with a new version of a depended-upon package.

as a Debian packager, I kinda like the tight upper bounds, as they allow
me to predict what packages will break when I upgrade package X. It is
only an approximation, but a safe one. If we did not have this, I would
either
  * upgrade X, upload to Debian, start rebuilding other stuff (this
    is automated and happens on Debian’s build servers), notice a
    break in package Y and then have to wait for Y’s upstream author
    to fix it. All the while, Y and all its reverse dependencies
    would not be installable. If this collides with a freeze, we’d
    be in big trouble.
  * upgrade X locally, rebuild everything manually and locally, and
    only if things work out nicely, upload the whole bunch. This would
    work, but would be a huge amount of extra work.
(Note that in Debian we have at most one version of each Haskell
package.)


I think what we’d need is a more relaxed policy with modifying a
package’s meta data on hackage. What if hackage would allow uploading a
new package with the same version number, as long as it is identical up
to an extended version range? Then the first person who stumbles over an
upper bound that turned out to be too tight can just fix it and upload
the fixed package directly, without waiting for the author to react.

If modifying packages with the same version number is not nice (it does
break certain other invariants), we could make it acceptable behavior to
upload someone else’s package without asking, provided
  * one bumps the last (fourth) component of the version,
  * one extends the version range to encompass a set of versions
    that the uploader has just tested to work _without changes_, and
  * no other change to the tarball is made.

This way, the common user of hackage (including us distro packagers)
will likely get a successful build if the build dependencies are adhered
to, and new versions will start working everywhere much more quickly,
even when original authors are temporarily slow to react.

More power to everyone! Be bold! :-)

Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
  m...@joachim-breitner.de  |  nome...@debian.org  |  GPG: 0x4743206C
  xmpp: nome...@joachim-breitner.de | http://www.joachim-breitner.de/





[Haskell-cafe] Parsing pragmas in a Haskell-like language

2012-08-16 Thread Björn Peemöller
Dear cafe,

I'm experimenting with extending the parser for a Haskell-like language
by module pragmas. The parser is written using parser combinators.

Currently, I adapted the lexer to ignore whitespace and comments, but
create lexemes for both the pragma start and end (and the pragma's
content, of course). While these lexemes are necessary for parsing the
pragmas before the module header, they somehow should be ignored
(treated like comments) afterwards.

Could anyone give me a hint how this behaviour (treating pragmas like
comments) is achieved in GHC or in haskell-src-exts?
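One way to picture the desired behaviour (a hedged sketch with entirely hypothetical lexeme types, not GHC's or haskell-src-exts's actual machinery): lex everything, consume the leading pragmas, then filter any pragma lexemes out of the remaining token stream before the main parse.

```haskell
-- Hypothetical lexeme type for illustration only.
data Lexeme = PragmaStart | PragmaEnd | PragmaContent String | Tok String
  deriving (Eq, Show)

-- Collect the pragmas at the front of the stream, then drop any pragma
-- lexemes that occur later (treating them like comments).
splitPragmas :: [Lexeme] -> ([String], [Lexeme])
splitPragmas (PragmaStart : PragmaContent s : PragmaEnd : rest) =
  let (ps, rest') = splitPragmas rest in (s : ps, rest')
splitPragmas ls = ([], filter notPragma ls)
  where notPragma l = case l of
          Tok _ -> True
          _     -> False

main :: IO ()
main = print (splitPragmas
  [ PragmaStart, PragmaContent "LANGUAGE CPP", PragmaEnd
  , Tok "module", PragmaStart, PragmaContent "INLINE f", PragmaEnd
  , Tok "where" ])
```

The first return component feeds the pragma parser before the module header; the second is what the ordinary module parser sees.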

Thanks in advance,
Björn



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Ketil Malde
"Bryan O'Sullivan"  writes:

> I propose that the sense of the recommendation around upper bounds in the
> PVP be reversed: upper bounds should be specified *only when there is a
> known problem with a new version* of a depended-upon package.

Another advantage to this is that it's not always clear what constitutes
an API change.  I had to put an upper bound on binary, since 0.5
introduced laziness changes that broke my program.  (I later got some
help to implement a workaround, but binary-0.4.4 is still substantially
faster).  Understandably, the authors didn't see this as a breaking API
change.

So, +1.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
I am tentatively in agreement that upper bounds are causing more
problems than they are solving.  However, I want to suggest that
perhaps the more fundamental issue is that Cabal asks the wrong person
to answer questions about API stability.  As a package author, when I
release a new version, I know perfectly well what incompatible changes
I have made to it... and those might include, for example:

1. New modules, exports or instances... low risk
2. Changes to less frequently used, advanced, or "internal" APIs...
moderate risk
3. Completely revamped commonly used interfaces... high risk

Currently *all* of these categories have the potential to break
builds, so require the big hammer of changing the first-dot version
number.  I feel like I should be able to convey this level of risk,
though... and it should be able to be used by Cabal.  So, here's a
proposal just to toss out there; no idea if it would be worth the
complexity or not:

A. Cabal files should get a new "Compatibility" field, indicating the
level of compatibility from the previous release: low, medium, high,
or something like that, with definitions for what each one means.

B. Version constraints should get a new syntax:

    bytestring ~ 0.10.*   (allow later versions that indicate low or
                           moderate risk)
    bytestring ~~ 0.10.*  (allow later versions with low risk; we use
                           the dark corners of this one)
    bytestring == 0.10.*  (depend 100% on 0.10, and allow nothing else)

Of course, this adds a good bit of complexity to the constraint
solver... but not really.  It's more like a pre-processing pass to
replace fuzzy constraints with precise ones.
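That pre-processing pass could look roughly like this (a hedged sketch; the Risk/Version types and expandFuzzy are hypothetical, not cabal's actual solver API): given each release's self-declared compatibility level, a fuzzy constraint expands into the concrete set of admissible versions.

```haskell
import Data.List (sortOn)

data Risk = Low | Moderate | High deriving (Eq, Ord, Show)

type Version = [Int]

-- Does v fall inside the pfx.* range?
matches :: Version -> Version -> Bool
matches pfx v = take (length pfx) v == pfx

-- Expand "pkg ~ pfx.*": everything in pfx.*, plus later releases for as
-- long as each release's self-declared risk stays within maxRisk.
expandFuzzy :: Risk -> Version -> [(Version, Risk)] -> [Version]
expandFuzzy maxRisk pfx rels = go False (sortOn fst rels)
  where
    go _ [] = []
    go blocked ((v, r) : rest)
      | v < pfx                = go blocked rest   -- predates the range
      | matches pfx v          = v : go blocked rest
      | blocked || r > maxRisk = go True rest      -- one risky release blocks the rest
      | otherwise              = v : go blocked rest

main :: IO ()
main = print (expandFuzzy Moderate [0,10]
  [ ([0,9], High), ([0,10,0], Low), ([0,10,1], Moderate)
  , ([0,11], Moderate), ([0,12], High), ([0,13], Low) ])
```

With maxRisk = Moderate this admits 0.10.0, 0.10.1 and 0.11, but stops at the high-risk 0.12 and everything after it.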

-- 
Chris



[Haskell-cafe] linking Haskell app with Curl on Windows

2012-08-16 Thread Eugene Dzhurinsky
Hi!

I'm facing a strange issue when linking my application against curl (using
the Haskell curl binding, curl-1.3.7): the application compiles fine, but
fails at the linking stage:

=

C:\haskell\bin\ghc.exe --make -o dist\build\imgpaste\imgpaste.exe 
-hide-all-packages -fbuilding-cabal-package -package-conf 
dist\package.conf.inplace -i -idist\build\imgpaste\imgpaste-tmp -i. 
-idist\build\autogen -Idist\build\autogen -Idist\build\imgpaste\imgpaste-tmp 
-optP-include -optPdist\build\autogen\cabal_macros.h -odir 
dist\build\imgpaste\imgpaste-tmp -hidir dist\build\imgpaste\imgpaste-tmp 
-stubdir dist\build\imgpaste\imgpaste-tmp -package-id 
base-4.5.0.0-597748f6f53a7442bcae283373264bb6 -package-id 
bytestring-0.9.2.1-df82064cddbf74693df4e042927e015e -package-id 
curl-1.3.7-ed08f87bd8c487f1e11a8c3b67bf4e51 -package-id 
directory-1.1.0.2-0270278088d4b2588b52cbec49af4cb7 -package-id 
hxt-9.2.2-e687550fbbb6ff367ee9c95584c3f0a0 -package-id 
hxt-xpath-9.1.2-4a15d34a0b66fa21832bb4bb0f68477f -package-id 
regex-pcre-0.94.4-f2f06ed579a684904354d97b04a74d9e -O -XHaskell98 .\Main.hs 
-llibcrypto -lssh2 -lssl -lz -lidn -LC:\curl\lib
Linking dist\build\imgpaste\imgpaste.exe ...
C:\Program 
Files\Haskell\curl-1.3.7\ghc-7.4.1/libHScurl-1.3.7.a(curlc.o):curlc.c:(.text+0xd2):
 undefined reference to `_imp__curl_easy_getinfo'
C:\Program 
Files\Haskell\curl-1.3.7\ghc-7.4.1/libHScurl-1.3.7.a(curlc.o):curlc.c:(.text+0xee):
 undefined reference to `_imp__curl_easy_getinfo'
C:\Program 
Files\Haskell\curl-1.3.7\ghc-7.4.1/libHScurl-1.3.7.a(curlc.o):curlc.c:(.text+0x10a):
 undefined reference to `_imp__curl_easy_getinfo'

[ lots of error messages skipped ]

C:\curl\lib/libcurl.a(md5.o):(.text+0x3b): undefined reference to `MD5_Update'
C:\curl\lib/libcurl.a(md5.o):(.text+0x4e): undefined reference to `MD5_Final'
C:\curl\lib/libcurl.a(md5.o):(.rdata+0x0): undefined reference to `MD5_Init'
C:\curl\lib/libcurl.a(md5.o):(.rdata+0x4): undefined reference to `MD5_Update'
C:\curl\lib/libcurl.a(md5.o):(.rdata+0x8): undefined reference to `MD5_Final'
C:\curl\lib/libcurl.a(md5.o):(.rdata+0x14): undefined reference to `MD5_Init'
C:\curl\lib/libcurl.a(md5.o):(.rdata+0x18): undefined reference to `MD5_Update'
C:\curl\lib/libcurl.a(md5.o):(.rdata+0x1c): undefined reference to `MD5_Final'
cabal.EXE: Error: some packages failed to install:
imgpaste-0.2 failed during the building phase. The exception was:
ExitFailure 1

=

What may be wrong here? Curl itself was installed without errors. Searching
for the symbol 'MD5_Final' in the directory C:\curl\lib finds it in
'libcrypto.a', 'libcrypto.dll.a' and 'libcurl.a'.

-- 
Eugene N Dzhurinsky




Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Twan van Laarhoven

On 16/08/12 14:07, Chris Smith wrote:

As a package author, when I
release a new version, I know perfectly well what incompatible changes
I have made to it... and those might include, for example:

1. New modules, exports or instances... low risk
2. Changes to less frequently used, advanced, or "internal" APIs...
moderate risk
3. Completely revamped commonly used interfaces... high risk


Would adding a single convenience function be low or high risk? You say it
is low risk, but it still risks breaking a build if a user has defined a
function with the same name. I think the only meaningful distinctions you
can make are:
  1. No change to the public API at all; user code is guaranteed to
"compile and work if it did so before".
 Perhaps new modules could also fall under this category, I'm not sure.
  2. Changes to exports, instances, modules, types, etc., but with the
guarantee that "if it compiles, it will be correct".
  3. Changes to functionality, which require the user to reconsider all
code: "even if it compiles, it might be wrong".


For the very common case 2, the best solution is to just go ahead and try
to compile it.



A. Cabal files should get a new "Compatibility" field, indicating the
level of compatibility from the previous release: low, medium, high,
or something like that, with definitions for what each one means.


You would need to indicate how large the change is compared to a certain 
previous version. "Moderate change compared to 0.10, large change compared to 0.9".



B. Version constraints should get a new syntax:

 bytestring ~ 0.10.* (allow later versions that indicate low or
moderate risk)
 bytestring ~~ 0.10.* (allow later versions with low risk; we use
the dark corners of this one)
 bytestring == 0.10.* (depend 100% on 0.10, and allow nothing else)

Of course, this adds a good bit of complexity to the constraint
solver... but not really.  It's more like a pre-processing pass to
replace fuzzy constraints with precise ones.



Perhaps it would be cleaner if you specified what parts of the API you depend 
on, instead of an arbitrary distinction between 'internal' and 'external' parts. 
From cabal's point of view the best solution would be to have a separate 
package for the internals. Then the only remaining distinction is between 
'breaking' and 'non-breaking' changes. The current policy is to rely on major 
version numbers. But this could instead be made explicit: A cabal package should 
declare what API version of itself it is mostly-compatible with.


To avoid forcing the creation of packages just for versioning, perhaps 
dependencies could be specified on parts of a package?


build-depends: bytestring.internal ~< 0.11

and the bytestring package would specify what parts have changed:

compatibility: bytestring.internal >= 0.11, bytestring.external >= 0.10

But these names introduce another problem: they will not be fine-grained enough 
until it is too late. You only know how the API is partitioned when, in the 
future, a part of it changes while another part does not.



Twan



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Chris Smith
Twan van Laarhoven  wrote:
> Would adding a single convenience function be low or high risk? You say it
> is low risk, but it still risks breaking a build if a user has defined a
> function with the same name.

Yes, it's generally low-risk, but there is *some* risk.  Of course, it
could be high risk if you duplicate a Prelude function or a name that
you know is in use elsewhere in a related or core library... these
decisions would involve knowing something about the library space,
which package maintainers often do.

> I think the only meaningful distinction you can make are:

Except that the whole point is that this is *not* the only distinction
you can make.  It might be the only distinction with an exact
definition that can be checked by automated tools, but that doesn't
change the fact that when I make an incompatible change to a library
I'm maintaining, I generally have a pretty good idea of which kinds of
users are going to be fixing their code as a result.  The very essence
of my suggestion was that we accept the fact that we are working in
probabilities here, and empower package maintainers to share their
informed evaluation.  Right now, there's no way to provide that
information: the PVP is caught up in exactly this kind of legalism
that only cares whether a break is possible or impossible, without
regard to how probable it is.  The complaint that this new mechanism
doesn't have exactly such a black and white set of criteria associated
with it is missing the point.

-- 
Chris



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Ivan Lazar Miljenovic
On 16 August 2012 20:50, Ketil Malde  wrote:
> "Bryan O'Sullivan"  writes:
>
>> I propose that the sense of the recommendation around upper bounds in the
>> PVP be reversed: upper bounds should be specified *only when there is a
>> known problem with a new version* of a depended-upon package.
>
> Another advantage to this is that it's not always clear what constitutes
> an API change.  I had to put an upper bound on binary, since 0.5
> introduced laziness changes that broke my program.  (I later got some
> help to implement a workaround, but binary-0.4.4 is still substantially
> faster).  Understandably, the authors didn't see this as a breaking API
> change.

Except 0.4 -> 0.5 _is_ a major version bump according to the PVP.

>
> So, +1.
>
> -k
> --
> If I haven't seen further, it is by standing in the footprints of giants
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe



-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread timothyhobbs

So that we are using concrete examples, here is an example of a change that
really shouldn't break any package:

https://github.com/timthelion/threadmanager/commit/c23e19cbe78cc6964f23fdb90b7029c5ae54dd35

The exposed functions are the same.  The behavior is changed.  But as the
committer of the change, I cannot imagine that it would break any currently
working code.

There is another issue though.  With this kind of change, there is no reason
for a package which was written for the old version of the library to be
built with the new version.  If I am correct that this change changes
nothing for currently working code, then why should an old package be built
with the newer version?

The advantage in this case is merely that we want to prevent version
duplication.  We don't want to waste disk space by installing every possible
iteration of a library.

I personally think that disk space is so cheap that this last consideration
is not so important.  If there are packages that only build with old
versions of GHC, and old libraries, why can we not just seamlessly install
them?  One problem is if we want to use those old libraries with new code.
Take the example of Python 2 vs Python 3.  Yes, we can seamlessly install
Python 2 libraries, even though we use Python 3 normally, but we cannot MIX
Python 2 libraries with Python 3 libraries.

Maybe we could make Haskell linkable objects smart enough that we COULD mix
old with new?  That sounds complicated.

I think Michael Sloan is onto something, though, with his idea of
compatibility layers.  I think that if we could write simple "dictionary"
packages that would translate old API calls to new ones, we could use old
code without modification.  This would allow us to build old libraries which
normally wouldn't be compatible with something in base, using a
base-old-to-new dictionary package.  Then we could use these old libraries
without modification with new code.

It's important that this be possible from the side of the person USING the
library, and not the library author.  It's impossible to write software if
you spend all of your time waiting for someone else to update their
libraries.

Timothy


-- Original message --
From: Ivan Lazar Miljenovic
Date: 16 Aug 2012
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends
"On 16 August 2012 20:50, Ketil Malde  wrote:
> "Bryan O'Sullivan"  writes:
>
>> I propose that the sense of the recommendation around upper bounds in the
>> PVP be reversed: upper bounds should be specified *only when there is a
>> known problem with a new version* of a depended-upon package.
>
> Another advantage to this is that it's not always clear what constitutes
> an API change. I had to put an upper bound on binary, since 0.5
> introduced laziness changes that broke my program. (I later got some
> help to implement a workaround, but binary-0.4.4 is still substantially
> faster). Understandably, the authors didn't see this as a breaking API
> change.

Except 0.4 -> 0.5 _is_ a major version bump according to the PVP.

>
> So, +1.
>
> -k
> --
> If I haven't seen further, it is by standing in the footprints of giants
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
(http://www.haskell.org/mailman/listinfo/haskell-cafe)



--
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
http://IvanMiljenovic.wordpress.com(http://IvanMiljenovic.wordpress.com)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
(http://www.haskell.org/mailman/listinfo/haskell-cafe)"


Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Felipe Almeida Lessa
On Thu, Aug 16, 2012 at 10:01 AM, Chris Smith  wrote:
> Twan van Laarhoven  wrote:
>> Would adding a single convenience function be low or high risk? You say it
>> is low risk, but it still risks breaking a build if a user has defined a
>> function with the same name.
>
> Yes, it's generally low-risk, but there is *some* risk.  Of course, it
> could be high risk if you duplicate a Prelude function or a name that
> you know is in use elsewhere in a related or core library... these
> decisions would involve knowing something about the library space,
> which package maintainers often do.

If you import qualified, then adding functions will never break anything.
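For example (a small illustration, with a deliberately clash-prone local name):

```haskell
-- A qualified import keeps new upstream exports from ever clashing
-- with local definitions.
import qualified Data.List as L

-- Even if a future Data.List exported a function called `frequency`,
-- this local definition would be unaffected, because the upstream one
-- could only be reached as L.frequency.
frequency :: Ord a => [a] -> [(a, Int)]
frequency xs = map (\g -> (head g, length g)) (L.group (L.sort xs))

main :: IO ()
main = print (frequency "abracadabra")
```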

Cheers,

-- 
Felipe.



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Yitzchak Gale
Bryan O'Sullivan wrote:
> A substantial number of the difficulties I am encountering are related to
> packages specifying upper bounds on their dependencies. This is a recurrent
> problem, and its source lies in the recommendations of the PVP itself

I think the PVP recommendation is good, though admittedly
one that in practice can be taken with a grain of salt.

Publishing supposedly stable and supported packages
with no upper bounds leads to persistent build problems
that are tricky to solve.

A good recent example is the encoding package.
This package depends on HaXML >= 1.19, with
no upper bound. However, the current version of HaXML
is 1.23, and the encoding package cannot build
against it due to API changes. Furthermore, uploading
a corrected version of encoding wouldn't even
solve the problem completely. Anyone who already
has the current version of encoding will have
build problems as soon as they upgrade HaXML.
The cabal dependencies are lying, so there is no
way for cabal to know that encoding is the culprit.
Build problems caused by missing upper bounds
last forever; their importance fades only gradually.

Whereas it is trivially easy to correct an upper
bound that has become obsolete, and once you
fix it, it's fixed.

For actively maintained packages, I think the
problem is that package maintainers don't find
out promptly that an upper bound needs to be
bumped. One way to solve that would be a
simple bot that notifies the package maintainer
as soon as an upper bound becomes out-of-date.

For unresponsive package maintainers or
unmaintained packages, it would be helpful to
have some easy temporary fix mechanism as
suggested by Joachim.

Joachim also pointed out the utility of upper bounds
for platform packaging.

Why throw away much of the robustness of
the package versioning system just because
of a problem we are having with these trivially
easy upper-bound bumps?  Let's just find a
solution for the problem at hand.

Thanks,
Yitz



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread dag.odenh...@gmail.com
On Wed, Aug 15, 2012 at 9:38 PM, Bryan O'Sullivan wrote:

> A benign change will obviously have no visible effect, while a compilation
> failure is actually *better* than a depsolver failure, because it's more
> informative.
>

But with upper bounds you give Cabal a chance to try and install a
supported version, thus avoiding failure altogether.


Re: [Haskell-cafe] [Pipes] Can pipes solve this problem? How?

2012-08-16 Thread Mario Blažević

On 12-08-15 02:54 PM, Daniel Hlynskyi wrote:

Hello Cafe.
Consider code, that takes input from handle until special substring 
matched:


> matchInf a res s | a `isPrefixOf` s = reverse res
> matchInf a res (c:cs)   = matchInf a (c:res) cs
> hTakeWhileNotFound str hdl = hGetContents hdl >>= return.matchInf str []

So, the question is - can pipes (any package of them) be the Holy 
Grail in this situation, to both keep simple code and better deal with 
handles (do not close them specifically)? How?


It's more complex than Pipes, but SCC gives you what you need. If you 
cabal install it, you have the choice of using the shsh executable on 
the command line to accomplish your task:


$ shsh -c 'cat input-file.txt | select prefix (>! substring "search string")'


or using the equivalent library combinators from Haskell code:

> import System.IO (Handle, stdin)
> import Control.Monad.Coroutine (runCoroutine)
> import Control.Concurrent.SCC.Sequential

> pipeline :: String -> Handle -> Producer IO Char ()
> pipeline str hdl = fromHandle hdl >-> select (prefix $ sNot $ substring str)


> hTakeWhileNotFound :: String -> Handle -> IO String
> hTakeWhileNotFound str hdl =
>    fmap snd $ runCoroutine $ pipe (produce $ pipeline str hdl) (consume toList)


> main = hTakeWhileNotFound "up to here" stdin >>= putStrLn





Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Brent Yorgey
On Thu, Aug 16, 2012 at 05:30:07PM +0300, Yitzchak Gale wrote:
> 
> For actively maintained packages, I think the
> problem is that package maintainers don't find
> out promptly that an upper bound needs to be
> bumped. One way to solve that would be a
> simple bot that notifies the package maintainer
> as soon as an upper bound becomes out-of-date.

This already exists:

  http://packdeps.haskellers.com/
 
-Brent



Re: [Haskell-cafe] Fwd: 'let' keyword optional in do notation?

2012-08-16 Thread Martijn Schrage

On 09-08-12 10:35, Tillmann Rendel wrote:

Hi,

Martijn Schrage wrote:

Would expanding each let-less binding to a separate let "feel" more
sound to you?


That was actually my first idea, but then two declarations at the same
level will not be in the same binding group, so

do x = y
   y = 1

would not compile. This would create a difference with all the other
places where bindings may appear.


But it would be in line with <- bindings in the do notation, so maybe 
it wouldn't feel so wrong.
It would absolutely be the easiest solution to implement, since as far 
as I can see, it requires only a small change to the parser. However, I 
still think it will be too confusing to have bindings that look the same 
as everywhere else but have different binding group rules. Especially 
since there is no reason for it from a semantic point of view (unlike 
when you mix in a monadic <- binding, after which it makes sense to have 
a new binding group.)
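For reference, a single let in do notation already forms one binding group, so a binding may refer to a name defined on a later line of the same let:

```haskell
-- The same binding-group behaviour, as a pure value:
viaLet :: Int
viaLet = let x = y
             y = 1
         in x

main :: IO ()
main = do
  let x = y   -- inside do notation too: one 'let', one binding group,
      y = 2   -- so x may refer to the later-defined y
  print (viaLet, x)
```

Splitting the two bindings into separate lets would reject `x = y`, since `y` would no longer be in scope; `<-` bindings behave that way already, which is exactly the difference under discussion.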


Anyhow, I'll submit it as a GHC feature request and see what happens.

Cheers,
Martijn Schrage -- Oblomov Systems (http://www.oblomov.com)

  Tillmann






[Haskell-cafe] [Announce] Compositional Compiler Construction, Oberon0 examples available

2012-08-16 Thread Doaitse Swierstra
Over the years we have been constructing a collection of Embedded Domain
Specific Languages for describing compilers which are assembled from fragments
that can be compiled individually. In this way one can gradually ``grow a
language'' in a large number of small steps. The technique replaces things like
macro extensions or Template Haskell; it has become feasible to just extend the
language at hand by providing extra modules. The nice thing is that existing
code does not have to be adapted, nor does it have to be available or
recompiled.

Recently we have been using (and adapting) the frameworks so that we could 
create an entry in the ldta11 (http://ldta.info/tool.html) tool challenge, 
where one has to show how one's tools can be used to create a compiler for the 
Oberon0 language, which is used as a running example in Wirth's compiler 
construction book.

We have uploaded our implementation to hackage at: 
http://hackage.haskell.org/package/oberon0.

More information can be found at the wiki: 
http://www.cs.uu.nl/wiki/bin/view/Center/CoCoCo

You may take a look at the various Gram modules to see how syntax is being 
defined, and at the various Sem modules to see how we use our first class 
attribute grammars to implement the static semantics associated with the 
various tasks of the challenge.

We hope you like it, and comments are welcome,

Marcos Viera
Doaitse Swierstra


Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-16 Thread Jay Sulzberger



On Wed, 15 Aug 2012, wren ng thornton  wrote:


On 8/13/12 5:42 PM, Jay Sulzberger wrote:

One difficulty which must impede many who study this stuff is
that just getting off the ground seems to require a large number
of definitions of objects of logically different kinds. (By
"logic" I mean real logic, not any particular formalized system.)
We must have "expressions", values, type expressions, rules of
transformation at the various levels, the workings of the
type/kind/context inference system, etc., to get started.
Seemingly Basic and Scheme require less, though I certainly
mention expressions and values and types and
objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction
to Scheme.


Indeed, starting with Haskell's type system is jumping in at the deep end. 
And there isn't a very good tutorial on how to get started learning type 
theory. Everyone I know seems to have done the "stumble around until it 
clicks" routine--- including the folks whose stumbling was guided by formal 
study in programming language theory.


However, a good place to start ---especially viz a vis Scheme/Lisp--- is to 
go back to the beginning: the simply-typed lambda-calculus[1]. STLC has far 
fewer moving parts. You have type expressions, term expressions, term 
reduction, and that's it.


Yes.  The simply-typed lambda-calculus presents as a different
sort of thing from the "untyped" lambda calculus, and the many
complexly typed calculi.

I'd add to the list of components of the base machine of STLC these
things:

1. The model theory, which is close to the model theory of the
   Lower Predicate Calculus.

2. The explication of "execution of a program", which is more
   subtle than anything right at the beginning of the study of
   the Lower Predicate Calculus.  It certainly requires a score
   of definitions to lay it out clearly.

But, to say again, yes the STLC can, like linear algebra 101, be
presented in this way: The machine stands alone in bright
sunlight.  There are no shadows.  Every part can be seen plainly.
The eye sees all and is satisfied.

ad 2: It would be worthwhile to have an introduction to STLC
which compares STLC's explication of "execution of a program"
with other standard explications, such as these:

1. the often not explicitly presented explication that appears in
   textbooks on assembly and introductory texts on computer hardware

2. the usually more explicitly presented explication that appears
   in texts on Basic or Fortran

3. the often explicit explication that appears in texts on Snobol

4. various explications of what a database management system does

5. explications of how various Lisp variants work

6. explications of how Prolog works

7. explications of how general constraint solvers work, including
   "proof finders"



Other lambda calculi add all manner of bells and whistles, but STLC is the 
core of what lambda calculus and type systems are all about. So you should be 
familiar with it as a touchstone. After getting a handle on STLC, then it's 
good to see the Barendregt cube. Don't worry too much about understanding it 
yet, just think of it as a helpful map of a few of the major landmarks in 
type theory. It's an incomplete map to be sure. One major landmark that's 
suspiciously missing lies about halfway between STLC and System F: that's 
Hindley--Milner--Damas, or ML-style, lambda calculus.[2]


8. explication of how Hindley--Milner--Damas works



After seeing the Barendregt cube, then you can start exploring in those 
various directions. Notably, you don't need to think about the kind level 
until you start heading towards LF, MLTT, System Fw, or CC, since those are 
where you get functions/reduction at the type level and/or multiple sorts at 
the type level.


Haskell98 (and the official Haskell2010) take Hindley--Milner--Damas as the 
starting point and then add some nice things like algebraic data types and 
type classes (neither of which are represented on the Barendregt cube). This 
theory is still relatively simple and easy to understand, albeit in a 
somewhat ad-hoc manner.


Unexpectedly (to me), a word is missing from explications of algebraic
data types and "pattern matching": "magma".



Modern "Haskell" lives somewhere beyond the top plane of the cube. We have 
all of polymorphism (aka System F, aka second-order quantification; via 
-XRankNTypes), most of type operators (i.e., extending System F to System Fw; 
via type families etc), some dependent types (aka first-order quantification; 
via GADTs), plus things not represented on the cube (e.g., (co)inductive data 
types, type classes, etc). Trying to grok all of that at once without prior 
understanding of the pieces is daunting to be sure.


Yes.




[1] Via Curry--Howard, the pure STLC corresponds to natural deduction for the 
implicational fragment of intuitionistic propositional logic. Of course, you 
can add products (tuples), coproducts (Either), and absurdity to get natural 
deduction for the full intuitionistic

Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread Joey Adams
On Wed, Aug 15, 2012 at 3:38 PM, Bryan O'Sullivan  wrote:
> I propose that the sense of the recommendation around upper bounds in the
> PVP be reversed: upper bounds should be specified only when there is a known
> problem with a new version of a depended-upon package.

I, too, agree.  Here is my assortment of thoughts on the matter.

Here's some bad news: with cabal 1.14 (released with Haskell Platform
2012.2), cabal init defaults to bounds like these:

  build-depends:   base ==4.5.*, bytestring ==0.9.*, http-types ==0.6.*

Also, one problem with upper bounds is that they often backfire.  If
version 0.2 of your package does not have upper bounds, but 0.2.1 does
(because you found out about a breaking upstream change), users who
try to install your package may get 0.2 instead of the latest, and
still get the problem you were trying to shield against.

A neat feature would be a cabal option to ignore upper bounds.  With
--ignore-upper-bounds, cabal would select the latest version of
everything, and print a list of packages with violated upper bounds.

-Joey



Re: [Haskell-cafe] Parsing pragmas in a Haskell-like language

2012-08-16 Thread David Feuer
Where are pragmas treated like comments?
On Aug 16, 2012 6:14 AM, "Björn Peemöller" 
wrote:

> Dear cafe,
>
> I'm experimenting with extending the parser for a Haskell-like language
> by module pragmas. The parser is written using parser combinators.
>
> Currently, I adapted the lexer to ignore whitespace and comments, but
> create lexemes for both the pragma start and end (and the pragma's
> content, of course). While these lexemes are necessary for parsing the
> pragmas before the module header, they somehow should be ignored
> (treated like comments) afterwards.
>
> Could anyone give me a hint me how this behaviour (treat pragmas like
> comments) is achieved in GHC or in haskell-src-exts?
>
> Thanks in advance,
> Björn
>
>


[Haskell-cafe] Working with remote HTTP file as ZIP archive

2012-08-16 Thread Dennis Yurichev
Hi.

I want %subj%.

Is it possible, using lazy evaluation and existing Haskell libraries, to:

1) open a remote file via HTTP and read arbitrary pieces of it without
downloading the whole file? The HTTP standard supports partial reads
(range requests)...

2) open a remote file via HTTP as a ZIP archive, get the list of files, and
fetch a specific file (whole or in part), again lazily, without downloading
everything?

So is it possible?

Thanks in advance!



Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-16 Thread wren ng thornton

On 8/15/12 12:32 PM, David Feuer wrote:

On Aug 15, 2012 3:21 AM, "wren ng thornton"  wrote:

It's even easier than that.

 (forall a. P(a)) -> Q <=> exists a. (P(a) -> Q)

Where P and Q are metatheoretic/schematic variables. This is just the
usual thing about antecedents being in a "negative" position, and thus
flipping as you move into/out of that position.


Most of this conversation is going over my head. I can certainly see how
exists a. (P(a)->Q) implies that (forall a. P(a))->Q. The opposite
certainly doesn't hold in classical logic. What sort of logic are you folks
working in?


Ryan gave a nice classical proof. Though note that, in a constructive 
setting, you're right to be dubious. The validity of the entailment in 
question is related to the "generalized Markov's principle", which Russian 
constructivists accept but which is not derivable from Heyting's 
axiomatization of intuitionistic logic:


GMP : ~(forall x. P(x)) -> (exists x. ~P(x))

There's been a fair amount of work on showing that GMP is constructively 
valid; though the fact that it does not derive from Heyting's 
axiomatization makes some squeamish. For these reasons it may be 
preferable to stick with the double-negation form upthread for 
converting between quantifiers. I just mentioned the above as a 
simplification of the double-negation form, which may be more familiar 
to those indoctrinated in classical logic.
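The uncontroversial direction, that exists a. (P(a) -> Q) entails (forall a. P(a)) -> Q, can be witnessed by an actual program in GHC (a sketch; ExistsArrow and weaken are illustrative names, and the converse direction has no such constructive witness):

```haskell
{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- A value of this type packages up some hidden type a
-- together with a function p a -> q.
data ExistsArrow p q = forall a. ExistsArrow (p a -> q)

-- Given such a package and a uniform value of p a for every a,
-- we can produce a q by instantiating at the hidden type.
weaken :: ExistsArrow p q -> (forall a. p a) -> q
weaken (ExistsArrow f) univ = f univ

main :: IO ()
main = print (weaken (ExistsArrow (length :: [Int] -> Int)) [])
```

Here the hidden type is Int, the polymorphic argument is the empty list (the only fully polymorphic list), and the program prints 0.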


--
Live well,
~wren



Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-16 Thread wren ng thornton

On 8/15/12 2:55 PM, Albert Y. C. Lai wrote:

On 12-08-15 03:20 AM, wren ng thornton wrote:

(forall a. P(a)) -> Q <=> exists a. (P(a) -> Q)


For example:

A. (forall p. p drinks) -> (everyone drinks)
B. exists p. ((p drinks) -> (everyone drinks))

In a recent poll, 100% of respondents think A true, 90% of them think B
paradoxical, and 40% of them have not heard of the Smullyan drinking
paradox.


:)

Though bear in mind we're discussing second-order quantification here, 
not first-order.


--
Live well,
~wren



Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

2012-08-16 Thread wren ng thornton

On 8/15/12 11:02 PM, MightyByte wrote:

One tool-based way to help with this problem would
be to add a flag to Cabal/cabal-install that would cause it to ignore
upper bounds.


I'd much rather have a distinction between hard upper bounds ("known to 
fail with") vs soft upper bounds ("tested with").


Soft upper bounds are good for future proofing, both short- and 
long-range. So ignoring soft upper bounds is all well and good if things 
still work.


However, there are certainly cases where we have hard upper 
bounds[1][2][3], and ignoring those is not fine. Circumventing hard 
upper bounds should require altering the .cabal file, given that getting 
things to compile will require altering the source code as well. Also, 
hard upper bounds are good for identifying when there are 
semantics-altering changes not expressed in the type signatures of an 
API. Even if relaxing the hard upper bound could allow the code to 
compile, it is not guaranteed to be correct.


The problem with the current policy is that it mandates hard upper 
bounds as a solution to the problem of libraries not specifying soft 
upper bounds. This is indeed a tooling problem, but let's identify the 
problem for what it is: not all upper bounds are created equally, and 
pretending they are only leads to confusion and pain.



[1] Parsec 2 vs 3, for a very long time
[2] mtl 1 vs 2, for a brief interim
[3] John Lato's iteratee <=0.3 vs >=0.4, for legacy code
...

--
Live well,
~wren



[Haskell-cafe] How to simplify the code of Maybe within a monad?

2012-08-16 Thread Magicloud Magiclouds
Hi,
  Since Maybe is a monad, I could write code like 'maybeA >> maybeB >>
maybeC' to check if all these are not Nothing. Or 'liftM foo maybeD'
to avoid ugly 'case of'.
  But what if maybe[ABC] have types like 'IO (Maybe Int)', or foo has type
'Int -> IO Int'?
-- 
竹密岂妨流水过
山高哪阻野云飞

And for G+, please use magiclouds#gmail.com.



Re: [Haskell-cafe] How to simplify the code of Maybe within a monad?

2012-08-16 Thread Ertugrul Söylemez
Magicloud Magiclouds  wrote:

>   Since Maybe is a monad, I could write code like 'maybeA >> maybeB >>
> maybeC' to check if all these are not Nothing. Or 'liftM foo maybeD'
> to avoid ugly 'case of'.

Also check out the somewhat cleaner Functor class with its liftM
equivalent called 'fmap', for which you don't need to import
Control.Monad.  For monads fmap = liftM.
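A trivial sketch of that equivalence at the Maybe type:

```haskell
import Control.Monad (liftM)

main :: IO ()
main = do
  print (fmap  (+1) (Just 41))  -- Just 42
  print (liftM (+1) (Just 41))  -- Just 42
```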


>   But how if here maybe[ABC] are like 'IO (Maybe Int)', or foo is type
> of 'Int -> IO Int'?

Well, this is Haskell, so you can always write your own higher order
functions:

(~>>=) :: (Monad m) => m (Maybe a) -> (a -> m (Maybe b)) -> m (Maybe b)
c ~>>= f = c >>= maybe (return Nothing) f

(~>>) :: (Monad m) => m (Maybe a) -> m (Maybe b) -> m (Maybe b)
c ~>> d = c >>= maybe (return Nothing) (const d)

infixl 1 ~>>=
infixl 1 ~>>

However in the second case of course there is no Maybe, but then notice
that IO itself acts like Maybe through its exceptions.  In fact Maybe is
a transparent exception monad.
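These combinators are essentially the MaybeT monad transformer from the transformers package, which packages the pattern up once and for all. A sketch (lookupA and lookupB are made-up stand-ins for your 'IO (Maybe Int)' actions):

```haskell
import Control.Monad.Trans.Maybe (MaybeT(..))

lookupA :: IO (Maybe Int)
lookupA = return (Just 1)

lookupB :: Int -> IO (Maybe Int)
lookupB x = return (Just (x + 1))

-- Short-circuits to Nothing as soon as any step returns Nothing.
combined :: IO (Maybe Int)
combined = runMaybeT $ do
  a <- MaybeT lookupA
  b <- MaybeT (lookupB a)
  return (a + b)

main :: IO ()
main = combined >>= print   -- Just 3
```

Wrapping each 'IO (Maybe a)' action in MaybeT lets plain do notation do the plumbing that ~>>= and ~>> do by hand.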


Greets,
Ertugrul

-- 
Not to be or to be and (not to be or to be and (not to be or to be and
(not to be or to be and ... that is the list monad.




Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-16 Thread Alexander Solla
On Thu, Aug 16, 2012 at 8:07 PM, wren ng thornton  wrote:

> On 8/15/12 2:55 PM, Albert Y. C. Lai wrote:
>
>> On 12-08-15 03:20 AM, wren ng thornton wrote:
>>
>>> (forall a. P(a)) -> Q <=> exists a. (P(a) -> Q)
>>>
>>
>> For example:
>>
>> A. (forall p. p drinks) -> (everyone drinks)
>> B. exists p. ((p drinks) -> (everyone drinks))
>>
>> In a recent poll, 100% of respondents think A true, 90% of them think B
>> paradoxical, and 40% of them have not heard of the Smullyan drinking
>> paradox.
>>
>
> :)
>
> Though bear in mind we're discussing second-order quantification here, not
> first-order.


Can you expand on what you mean here?  I don't see two kinds of
quantification in the type language (at least, reflexively, in the context
of what we're discussing).  In particular, I don't see how to quantify over
predicates for (or sets of, via the extensions of the predicates) types.

Is Haskell's 'forall' doing double duty?