Re: [Haskell-cafe] Monads

2012-10-01 Thread Albert Y. C. Lai

On 12-10-01 05:34 AM, Jon Fairbairn wrote:

"Albert Y. C. Lai"  writes:


On 12-09-30 06:33 PM, Jake McArthur wrote:

When discussing monads, at least, a side effect is an effect that is
triggered by merely evaluating an expression. A monad is an interface
that decouples effects from evaluation.


I don't understand that definition. Or maybe I do subconsciously.

I have

s :: State Int ()
s = do { x <- get; put (x+1) }

Is there an effect triggered by merely evaluating s?

I have

m :: IO ()
m = if True then putStrLn "x" else putChar 'y'

Is there an effect triggered by merely evaluating m?

What counts as "evaluate"?


Evaluation! Consider m `seq` 42. m is evaluated, but to no
effect.


Sure thing. So s has no side effect, and m has no side effect. Since 
they have no side effect, there is no effect to be decoupled from 
evaluation.
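
A small, self-contained sketch of the distinction being drawn here (my illustration, not part of the original message; it repeats the definitions above so the snippet stands alone and assumes the mtl State monad): evaluating s or m, e.g. with seq, triggers nothing observable, while the effects happen only when the computations are actually run or executed.

  import Control.Monad.State (State, get, put, runState)

  s :: State Int ()
  s = do { x <- get; put (x+1) }

  m :: IO ()
  m = if True then putStrLn "x" else putChar 'y'

  main :: IO ()
  main = do
    s `seq` m `seq` putStrLn "evaluated s and m to WHNF: no effects yet"
    print (runState s 0)  -- the State effect happens only when run: ((),1)
    m                     -- the IO effect happens only when executed: prints "x"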


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-10-01 Thread Jon Fairbairn
"Albert Y. C. Lai"  writes:

> On 12-09-30 06:33 PM, Jake McArthur wrote:
>> When discussing monads, at least, a side effect is an effect that is
>> triggered by merely evaluating an expression. A monad is an interface
>> that decouples effects from evaluation.
>
> I don't understand that definition. Or maybe I do subconsciously.
>
> I have
>
> s :: State Int ()
> s = do { x <- get; put (x+1) }
>
> Is there an effect triggered by merely evaluating s?
>
> I have
>
> m :: IO ()
> m = if True then putStrLn "x" else putChar 'y'
>
> Is there an effect triggered by merely evaluating m?
>
> What counts as "evaluate"?

Evaluation! Consider m `seq` 42. m is evaluated, but to no
effect.

-- 
Jón Fairbairn jon.fairba...@cl.cam.ac.uk


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-30 Thread Albert Y. C. Lai

On 12-09-30 06:33 PM, Jake McArthur wrote:

When discussing monads, at least, a side effect is an effect that is
triggered by merely evaluating an expression. A monad is an interface
that decouples effects from evaluation.


I don't understand that definition. Or maybe I do subconsciously.

I have

s :: State Int ()
s = do { x <- get; put (x+1) }

Is there an effect triggered by merely evaluating s?

I have

m :: IO ()
m = if True then putStrLn "x" else putChar 'y'

Is there an effect triggered by merely evaluating m?

What counts as "evaluate"?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-30 Thread Kristopher Micinski
On Sun, Sep 30, 2012 at 6:33 PM, Jake McArthur  wrote:
>
> On Sep 30, 2012 10:56 AM, "Albert Y. C. Lai"  wrote:
>>
>> On 12-09-29 09:57 PM, Vasili I. Galchin wrote:
>>>
>>>  I would like some examples of monads that are pure, i.e. no
>>> side-effects.
>>
>>
>> What does "side effect" mean, to you? Definition?
>
> When discussing monads, at least, a side effect is an effect that is
> triggered by merely evaluating an expression. A monad is an interface that
> decouples effects from evaluation.
>

Ohh, I like that quote; that's another good one.

kris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-30 Thread Jake McArthur
On Sep 30, 2012 10:56 AM, "Albert Y. C. Lai"  wrote:
>
> On 12-09-29 09:57 PM, Vasili I. Galchin wrote:
>>
>>  I would like some examples of monads that are pure, i.e. no
>> side-effects.
>
>
> What does "side effect" mean, to you? Definition?

When discussing monads, at least, a side effect is an effect that is
triggered by merely evaluating an expression. A monad is an interface that
decouples effects from evaluation.

>
> Because some people say "State has no side effect", and some other people
say "State has side effects". The two groups use different definitions.
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-30 Thread wren ng thornton

On 9/30/12 7:00 AM, Tillmann Rendel wrote:

Vasili I. Galchin wrote:

I would like some examples of monads that are pure, i.e. no side-effects.


One view of programming in monadic style is: You call return and >>= all
the time. (Either you call it directly, or do notation calls it for
you). So if you want to understand whether a monad "has side-effects",
you should look at the implementation of return and >>=. If the
implementation of return and >>= is written in pure Haskell (without
unsafePerformIO or calling C code etc.), the monad is pure.


I'm not sure return and bind will give you the information you seek, 
however. In order to obey the monad laws, return and bind must be (for 
all intents and purposes) pure.


The place to look for "impurities" is the primitive operations of the 
monad; i.e., those which cannot be implemented using return and bind. 
These are the operations which take you out of a free monad and the 
operations which must be given some special semantic interpretation.
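
A sketch of that point (my example, not wren's code): a tiny Teletype monad in which return and bind are pure by construction, and whose "impurity", if any, lives entirely in how the primitive operations getLine' and putLine are interpreted.

  data Teletype a
    = Done a
    | GetLine (String -> Teletype a)
    | PutLine String (Teletype a)

  instance Functor Teletype where
    fmap f (Done a)      = Done (f a)
    fmap f (GetLine k)   = GetLine (fmap f . k)
    fmap f (PutLine s k) = PutLine s (fmap f k)

  instance Applicative Teletype where
    pure = Done
    tf <*> ta = tf >>= \f -> fmap f ta

  instance Monad Teletype where
    Done a      >>= f = f a
    GetLine k   >>= f = GetLine (\s -> k s >>= f)
    PutLine s k >>= f = PutLine s (k >>= f)

  -- The primitive operations: these, not return/bind, are where an
  -- effectful reading can enter.
  getLine' :: Teletype String
  getLine' = GetLine Done

  putLine :: String -> Teletype ()
  putLine s = PutLine s (Done ())

  -- One interpretation is entirely pure: feed canned input, collect output.
  runPure :: [String] -> Teletype a -> (a, [String])
  runPure _      (Done a)      = (a, [])
  runPure (i:is) (GetLine k)   = runPure is (k i)
  runPure []     (GetLine k)   = runPure [] (k "")
  runPure is     (PutLine s k) = let (a, out) = runPure is k in (a, s : out)

  -- Another interpretation performs real I/O.
  runIO :: Teletype a -> IO a
  runIO (Done a)      = return a
  runIO (GetLine k)   = getLine >>= runIO . k
  runIO (PutLine s k) = putStrLn s >> runIO k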



--
Live well,
~wren

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-30 Thread Albert Y. C. Lai

On 12-09-29 09:57 PM, Vasili I. Galchin wrote:

 I would like some examples of monads that are pure, i.e. no side-effects.


What does "side effect" mean, to you? Definition?

Because some people say "State has no side effect", and some other 
people say "State has side effects". The two groups use different 
definitions.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-30 Thread Tillmann Rendel

Vasili I. Galchin wrote:

I would like some examples of monads that are pure, i.e. no side-effects.


One view of programming in monadic style is: You call return and >>= all 
the time. (Either you call it directly, or do notation calls it for 
you). So if you want to understand whether a monad "has side-effects", 
you should look at the implementation of return and >>=. If the 
implementation of return and >>= is written in pure Haskell (without 
unsafePerformIO or calling C code etc.), the monad is pure.
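
To make the criterion concrete, here is a minimal State monad whose return and (>>=) are plain Haskell functions (my sketch, not from the message above); by the rule just stated it counts as a pure monad.

  newtype State s a = State { runState :: s -> (a, s) }

  instance Functor (State s) where
    fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

  instance Applicative (State s) where
    pure a = State $ \s -> (a, s)
    State mf <*> State ma = State $ \s ->
      let (f, s')  = mf s
          (a, s'') = ma s'
      in  (f a, s'')

  instance Monad (State s) where
    return = pure
    State m >>= f = State $ \s -> let (a, s') = m s in runState (f a) s'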


  Tillmann

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-29 Thread KC
From:

http://www.haskell.org/haskellwiki/Monad


"The computation doesn't have to be impure and can be pure itself as
well. Then monads serve to provide the benefits of separation of
concerns, and automatic creation of a computational "pipeline"."
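
A tiny sketch of that pure "pipeline" idea (my example, not from the wiki page): the Maybe monad threads a failure check through two lookups, separating the plumbing from the logic.

  import qualified Data.Map as M

  phoneOf :: String -> M.Map String String -> M.Map String String -> Maybe String
  phoneOf person emailBook phoneBook = do
    email <- M.lookup person emailBook  -- stage 1 of the pipeline
    M.lookup email phoneBook            -- stage 2; any Nothing short-circuits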


On Sat, Sep 29, 2012 at 6:57 PM, Vasili I. Galchin  wrote:
> Hello,
>
> I would like some examples of monads that are pure, i.e. no side-effects.
>
> Thank you,
>
> Vasili
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>



-- 
--
Regards,
KC

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2012-09-29 Thread Kristopher Micinski
You have fallen into the misconception that monads are impure; they are not.

Many monad tutorials begin (erroneously) with the lines "monads allow
you to do impure programming in Haskell."

This is false: monads are pure; it's IO that's impure, not the monadic
programming style. Monads let you *emulate* an impure style in pure
code, but it's nothing more than this: an emulation.

So, in summary, you can take any monad you want, and it will be pure;
however, its underlying implementation may not be (such is the case
with IO, for example). Consider any of the following (a small example
follows the list):
  -- State,
  -- List,
  -- Cont,
  -- ... literally any monad, some may be more "obviously pure" than
others, but hey, people say that about me all the time.
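
A small example of that "emulation" (mine, not Kris's): the list monad reads like a nondeterministic search, yet it is ordinary pure evaluation and always yields the same list.

  pythagorean :: [(Int, Int, Int)]
  pythagorean = do
    c <- [1 .. 20]
    b <- [1 .. c]
    a <- [1 .. b]
    if a*a + b*b == c*c then return (a, b, c) else []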

kris

On Sat, Sep 29, 2012 at 9:57 PM, Vasili I. Galchin  wrote:
> Hello,
>
> I would like some examples of monads that are pure, i.e. no side-effects.
>
> Thank you,
>
> Vasili
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads

2012-09-29 Thread Vasili I. Galchin
Hello,

I would like some examples of monads that are pure, i.e. no side-effects.

Thank you,

Vasili
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-21 Thread Takayuki Muranushi
Dear Oleg,

You're right. The points boil down to
> That assumption (that the deviations are small) is not stated in types and it
> is hard to see how we can enforce it.
and even if it is small, there are corner cases at df/dx = 0 or df/dx =
infinity (as you have mentioned).

Thanks to your advice, I'll look for other ways to set up
probabilistic computations.
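
For concreteness, a small sketch of first-order (delta-method) error propagation under that small-deviation assumption; the names G and pushForward are hypothetical, not from the thread, and the example shows exactly the df/dx = 0 corner case mentioned above.

  data G = G { gMean :: Double, gStdDev :: Double } deriving Show

  -- Push a Gaussian through f using its derivative f' (first-order only).
  pushForward :: (Double -> Double) -> (Double -> Double) -> G -> G
  pushForward f f' (G m s) = G (f m) (abs (f' m) * s)

  -- At f(x) = x*x around mean 0 the derivative vanishes, so the propagated
  -- deviation collapses to 0, even though the true distribution
  -- (chi-square) is not degenerate:
  corner :: G
  corner = pushForward (\x -> x*x) (\x -> 2*x) (G 0 1)  -- G {gMean = 0.0, gStdDev = 0.0}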

2012/7/19  :
>
>> http://en.pk.paraiso-lang.org/Haskell/Monad-Gaussian
>> What do you think? Will this be a good approach or bad?
>
> I don't think it is a Monad (or even restricted monad, see
> below). Suppose G a is a `Gaussian' monad and n :: G Double is a
> random number with the Gaussian (Normal distribution).  Then
> (\x -> x * x) `fmap` n
> is a random number with the chi-square distribution (of
> the degree of freedom 1). Chi-square is _not_ a normal
> distribution. Perhaps a different example is clearer:
>
> (\x -> if x > 0 then 1.0 else 0.0) `fmap` n
>
> has also the type G Double but obviously does not have the normal
> distribution (since that random variable is discrete).
>
> There are other problems
>
>> Let's start with some limitation; we restrict ourselves to Gaussian
>> distributions and assume that the standard deviations are small
>> compared to the scales we deal with.
>
> That assumption is not stated in types and it is hard to see how we
> can enforce it. Nothing prevents us from writing
> liftM2 n n
> in which case the variance will no longer be small compared with the
> mean.
>
> Just a technical remark: The way G a is written, it is a so-called
> restricted monad, which is not a monad (the adjective `restricted' is
> restrictive here).
> http://okmij.org/ftp/Haskell/types.html#restricted-datatypes
>
>



-- 
Takayuki MURANUSHI
The Hakubi Center for Advanced Research, Kyoto University
http://www.hakubi.kyoto-u.ac.jp/02_mem/h22/muranushi.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-19 Thread oleg

> http://en.pk.paraiso-lang.org/Haskell/Monad-Gaussian
> What do you think? Will this be a good approach or bad?

I don't think it is a Monad (or even restricted monad, see
below). Suppose G a is a `Gaussian' monad and n :: G Double is a
random number with the Gaussian (Normal distribution).  Then 
(\x -> x * x) `fmap` n 
is a random number with the chi-square distribution (of
the degree of freedom 1). Chi-square is _not_ a normal
distribution. Perhaps a different example is clearer:

(\x -> if x > 0 then 1.0 else 0.0) `fmap` n

has also the type G Double but obviously does not have the normal
distribution (since that random variable is discrete).

There are other problems

> Let's start with some limitation; we restrict ourselves to Gaussian
> distributions and assume that the standard deviations are small
> compared to the scales we deal with.

That assumption is not stated in types and it is hard to see how we
can enforce it. Nothing prevents us from writing
liftM2 n n
in which case the variance will no longer be small compared with the
mean.

Just a technical remark: The way G a is written, it is a so-called
restricted monad, which is not a monad (the adjective `restricted' is
restrictive here). 
http://okmij.org/ftp/Haskell/types.html#restricted-datatypes



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-18 Thread Takayuki Muranushi
Done with some exercises on Gaussian distribution as a monad!

http://en.pk.paraiso-lang.org/Haskell/Monad-Gaussian
What do you think? Will this be a good approach or bad?

Also this is the first page in my attempt to create runnable, and even
testable wiki pages. To run the tests, please use
hackage.haskell.org/package/doctest .

2012/7/18 Takayuki Muranushi :
> Thank you Oleg, for your detailed instructions!
>
> First, let me clarify my problem here (at the sacrifice of physical accuracy.)
> c.f. Wrong.hs .
>
>> earthMass, sunMass, marsMass :: [Double]
>> earthMass = [1,10,100]
>> sunMass = (*) <$>  [9,10,11] <*> earthMass
>> marsMass = (*) <$> [0.09,0.1,0.11] <*> earthMass
>>
>> sunPerMars = (/) <$> sunMass <*> marsMass
>> sunPerMars_range = (minimum sunPerMars, maximum sunPerMars)
>>
>> sunPerMars_range
> gives> (0.8181818181818182,1.2223)
>
> These extreme answers are inconsistent in the sense that they used
> different Earth mass values for calculating the Sun and Mars masses.
> Factoring out the Earth mass is a perfect and efficient solution in
> this case, but it is not always viable when more complicated functions
> are involved.
>
>  We want to remove such inconsistency.
>
>> -- Exercise: why do we need the seemingly redundant EarthMass
>> -- and deriving Typeable?
>> -- Could we use TemplateHaskell instead?
>
> Aha! You use the types as unique keys that reside in "The" context.
> Smart! To understand this, I have made MassStr.hs, which essentially
> does the same thing with the more familiar String type. Of course,
> using Strings is a naive and collision-prone approach. Printing
> `stateAfter` shows pretty much what has happened.
>
> I'll remember that we can use types as global identifiers.
>
>> -- The following is essentially Control.Monad.Sharing.Memoization
>> -- with one important addition
>> -- Can you spot the important addition?
>>
>> type NonDet a = StateT FirstClassStore [] a
>> data Key = KeyDyn Int | KeySta TypeRep
>>  deriving (Show, Ord, Eq)
>>
>
> Hmm, I don't see what Control.Monad.Sharing.Memoization is; googling
> https://www.google.co.jp/search?q=Control.Monad.Sharing.Memoization
> gives our conversation at the top.
>
> If it's the Memo in chapter 4.2 of your JFP paper, the difference I see is
> that you used Data.Set here instead of a list of pairs for better
> efficiency.
>
>
>> Exercise: how does the approach in the code relate to the approaches
>> to sharing explained in
>> http://okmij.org/ftp/tagless-final/sharing/sharing.html
>>
> Chapter 3 introduces an implicit impure counter, and Chapter 4 uses a
> database that is passed around.
> let_ in Chapter 5 of sharing.pdf realizes the sharing with a sort of
> continuation-passing style. The unsafe counter works across the module
> (c.f. counter.zip) but is generally unpredictable...
>
>
> Now I'm on to the next task: how do we represent continuous probability
> distributions? The existing libraries:
>
> http://hackage.haskell.org/package/probability-0.2.4
> http://hackage.haskell.org/package/ProbabilityMonads-0.1.0
>
> seem to have restricted themselves to discrete distributions, or at
> least to providing Random support for Monte-Carlo simulations. There's
> some hope; I guess Gaussian distributions form a Monad provided that
> 1. the standard deviations you are dealing with are small compared to
> the scale you deal with, and 2. the monadic functions are
> differentiable.
>
> Maybe I can use non-standard analysis and automatic differentiation;
> maybe I can resort to numerical differentiation; maybe I just give up
> and be satisfied with random sampling. I have to try first; then
> finally we can abstract over the different approaches.
>
> Also, I can start writing my Knowledge libraries from the parts where our
> knowledge is accurate enough that the deviations are negligible
> (such as the Earth mass!).
>
>
> P.S. Extra spaces may have annoyed you; I'm sorry for that. My
> keyboard is chattering badly now; I have to replace it soon.
>
>
> Best wishes,

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-14 Thread oleg

The bad news is that indeed you don't seem to be able to do what you
want. The good news: yes, you can. The enclosed code does exactly what
you wanted:

> sunPerMars :: NonDet Double
> sunPerMars = (/) <$> sunMass <*> marsMass
>
> sunPerMars_run = runShare sunPerMars
> sunPerMars_run_len = length sunPerMars_run
> -- 27

where earthMass, sunMass, marsMass are all top-level bindings, which
can be defined in separate and separately-compiled modules.

Let's start with the bad news however. Recall the original problem:

> earthMass, sunMass, marsMass :: [Double]
> earthMass = [5.96e24, 5.97e24, 5.98e24]
> sunMass = (*) <$>  [2.5e5, 3e5, 4e5] <*> earthMass
> marsMass = (*) <$> [0.01, 0.1, 1.0] <*> earthMass

The problem was that the computation 
sunPerMars = (/) <$> sunMass <*> marsMass
produces too many answers, because earthMass in sunMass and earthMass
in marsMass were independent non-deterministic computations. Thus the
code says: we measure the earthMass to compute sunMass, and we measure
earthMass again to compute marsMass. Each earthMass measurement is
independent and gives us, in general, a different value. 

However, we wanted the code to behave differently. We wanted to
measure earthMass only once, and use the same measured value to
compute masses of other bodies. There does not seem to be a way to do
that in Haskell. Haskell is pure, so we can substitute equals for
equals. earthMass is equal to [5.96e24, 5.97e24, 5.98e24]. Thus the
meaning of program should not change if we write

> sunMass = (*) <$>  [2.5e5, 3e5, 4e5] <*> [5.96e24, 5.97e24, 5.98e24]
> marsMass = (*) <$> [0.01, 0.1, 1.0] <*> [5.96e24, 5.97e24, 5.98e24]

which gives exactly the wrong behavior (and 81 answers for sunPerMars,
as easy to see). Thus there is no hope that the original code should
behave any differently.

> I don't know if memo can solve this problem. I have to test. Is the
> `memo` in your JFP paper, section 4.2 Memoization, pseudo-code? (What
> is the type `Thunk`?) It also seems like it's not in the explicit-sharing
> package on Hackage.

BTW, the memo in Hansei is different from the memo in the JFP paper.
In JFP, memo is a restricted version of share:
memo_jfp :: m a -> m (m a)

In Hansei, memo is a generalization of share:
memo_hansei :: (a -> m b) -> m (a -> m b)

You will soon need that generalization (which is not mentioned in the
JFP paper).


Given such a let-down, is there any hope at all? Recall, if Haskell
doesn't do what you want, embed a language that does. The solution becomes
straightforward then. (Please see the enclosed code).

Exercise: how does the approach in the code relate to the approaches
to sharing explained in
http://okmij.org/ftp/tagless-final/sharing/sharing.html

Good luck with the contest!

{-# LANGUAGE FlexibleContexts, DeriveDataTypeable #-}

-- Sharing of top-level bindings
-- Solving Takayuki Muranushi's problem
-- http://www.haskell.org/pipermail/haskell-cafe/2012-July/102287.html

module TopSharing where

import qualified Data.Map as M
import Control.Monad.State
import Data.Dynamic
import Control.Applicative

-- Let's pretend this is one separate module.
-- It exports earthMass, the mass of the Earth, which
-- is a non-deterministic computation producing Double.
-- The non-determinism reflects our uncertainty about the mass.

-- Exercise: why do we need the seemingly redundant EarthMass
-- and deriving Typeable?
-- Could we use TemplateHaskell instead?
data EarthMass deriving Typeable
earthMass :: NonDet Double
earthMass = memoSta (typeOf (undefined::EarthMass)) $
  msum $ map return [5.96e24, 5.97e24, 5.98e24]


-- Let's pretend this is another separate module
-- It imports earthMass and exports sunMass
-- Muranushi: ``Let's also pretend that we can measure the other 
-- bodies' masses only by their ratio to the Earth mass, and 
-- the measurements have large uncertainties.''

data SunMass deriving Typeable
sunMass :: NonDet Double
sunMass = memoSta (typeOf (undefined::SunMass)) mass
 where mass       = (*) <$> proportion <*> earthMass
       proportion = msum $ map return [2.5e5, 3e5, 4e5]

-- Let's pretend this is yet another separate module
-- It imports earthMass and exports marsMass

data MarsMass deriving Typeable
marsMass :: NonDet Double
marsMass = memoSta (typeOf (undefined::MarsMass)) mass
 where mass       = (*) <$> proportion <*> earthMass
       proportion = msum $ map return [0.01, 0.1, 1.0]

-- This is the main module, importing the masses of the three bodies
-- It computes ``how many Mars mass object can we create 
-- by taking the sun apart?''
-- This code is exactly the same as in Takayuki Muranushi's message
-- His question: ``Is there a way to represent this? 
-- For example, can we define earthMass'' , sunMass'' , marsMass'' all 
-- in separate modules, and yet have (length $ sunPerMars'' == 27) ?

sunPerMars :: NonDet Double
sunPerMars = (/) <$> sunMass <*> marsMass

sunPerMars_run = runShare sunPerMars
sunPerMars_run_len = length sunPerMars_run
-- 27
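
-- The listing above uses NonDet, memoSta and runShare without showing them
-- (the message is cut off below). What follows is a minimal sketch of that
-- plumbing, reusing the imports at the top of the module; it follows the
-- NonDet and Key definitions quoted elsewhere in the thread, but the
-- FirstClassStore representation and the omission of KeyDyn handling are
-- my assumptions, not necessarily Oleg's actual code.

type FirstClassStore = M.Map Key Dynamic

data Key = KeyDyn Int | KeySta TypeRep
 deriving (Show, Ord, Eq)

type NonDet a = StateT FirstClassStore [] a

-- Memoize a computation under a statically chosen key: within one branch of
-- the non-deterministic computation, the first use stores the chosen value
-- and later uses reuse it, so every importer shares one "measurement".
memoSta :: Typeable a => TypeRep -> NonDet a -> NonDet a
memoSta key body = do
  store <- get
  case M.lookup (KeySta key) store >>= fromDynamic of
    Just v  -> return v
    Nothing -> do
      v <- body
      modify (M.insert (KeySta key) (toDyn v))
      return v

runShare :: NonDet a -> [a]
runShare m = evalStateT m M.empty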


-- 

Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-13 Thread Takayuki Muranushi
Thank you, Tillmann and Oleg, for your advice! Since the ICFP contest is
starting in a few hours, I will make a quick response with
gratefulness and will read the full paper later.

Let me guess a few things; please tell me if I am right.

The share :: m a -> m (m a) is almost the thing I am looking for. I
have independently (believe me!) invented an equivalent, "bind trick,"
for my DSL:
http://nushisblogger.blogspot.jp/2012/06/builder-monad.html
but am now enlightened about what it really meant. Still, `share` cannot
bring the shared binding to global scope (e.g. it doesn't allow placing
sunMass, earthMass, marsMass in separate modules).

I guess my original question is ill-posed, since all the values in
Haskell are pure, so in the following trivial example

> earthCopyMass = earthMass

there is no way to distinguish the two masses; thus there's no telling if
Earth and EarthCopy are two references to one planet or two distinct
planets.

I don't know if memo can solve this problem. I have to test. I'll try
to implement `memo` from your JFP paper, section 4.2 Memoization; it seems
like it's not in the explicit-sharing package on Hackage.

I vaguely foresee that, as in memoized (f2 0, f2 1, f2 0, f2 1),
we need to pass some `world` around. That will be random
generator seeds if our continuous nondeterminism is a MonadIO and we
perform Monte-Carlo simulations, or a virtual `world` if we make a
Gaussian approximation of probability density functions.

To Ben: Thank you for your comments anyway! But since I'm not going to
use the List monad (the use of List was just for explanation), the
discreteness is not an issue here. That's what I meant when I said
"another story." Sorry for the confusion!

All the best,

Takayuki

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads with "The" contexts?

2012-07-12 Thread oleg

Tillmann Rendel has correctly noted that the source of the problem is
the correlation among the random variables. Specifically, our
measurement of Sun's mass and of Mars mass used the same rather than
independently drawn samples of the Earth mass. Sharing (which
supports what Functional-Logic programming calls ``call-time choice'')
is indeed the solution. Sharing has very clear physical intuition: it
corresponds to the collapse of the wave function.

Incidentally, a better reference than our ICFP09 paper is the
greatly expanded JFP paper
http://okmij.org/ftp/Computation/monads.html#lazy-sharing-nondet

You would also need a generalization of sharing -- memoization -- to
build stochastic functions. The emphasis is on _function_: when
applied to a value, the function may give an arbitrary sample from a
distribution. However, when applied to the same value again, the
function should return the same sample. The general memo combinator is
implemented in Hansei and is used all the time. The following article
talks about stochastic functions (and correlated variables):

http://okmij.org/ftp/kakuritu/index.html#all-different

and the following two articles show just two examples of using memo:

http://okmij.org/ftp/kakuritu/index.html#noisy-or
http://okmij.org/ftp/kakuritu/index.html#colored-balls

The noisy-or example is quite close to your problem. It deals with the
inference in causal graphs (DAG): finding out the distribution of
conclusions from the distribution of causes (perhaps given
measurements of some other conclusions). Since a cause may contribute
to several conclusions, we have to mind sharing. Since the code works
by back-propagation (so we don't have to sample causes that don't
contribute to the conclusions of interest), we have to use memoization
(actually, memoization of recursively defined functions).




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-12 Thread Ben Doyle
On Thu, Jul 12, 2012 at 11:01 AM, Takayuki Muranushi wrote:

> > sunPerMars :: [Double]
> > sunPerMars = (/) <$> sunMass <*> marsMass
>
> Sadly, this gives too many answers, and some of them are wrong because
> they assume different Earth masses in calculating the Sun and Mars masses,
> which leads to inconsistent calculations.
>

I think what you want to do is factor out the Earth's mass, and do your
division first:

> sunPerMars'' = (/) <$> sunMassCoef <*> marsMassCoef

The mass of the earth cancels.
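
For completeness, a sketch of the definitions left implicit here; sunMassCoef and marsMassCoef are just the ratio lists from the original message, and the filling-in is mine, not Ben's.

  sunMassCoef, marsMassCoef :: [Double]
  sunMassCoef  = [2.5e5, 3e5, 4e5]  -- Sun mass as a multiple of the Earth mass
  marsMassCoef = [0.01, 0.1, 1.0]   -- Mars mass as a multiple of the Earth mass

  sunPerMars'' :: [Double]
  sunPerMars'' = (/) <$> sunMassCoef <*> marsMassCoef  -- 9 results; the Earth mass cancels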

That gives a list of length 9, where your approach gave 16 distinct
results. But I think that's just floating point rounding noise. Try the
same monadic calculation with integers and ratios.

The moral? Using numbers in a physics calculation should be your last
resort ;)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads with "The" contexts?

2012-07-12 Thread Tillmann Rendel

Hi,

Takayuki Muranushi wrote:

sunPerMars :: [Double]
sunPerMars = (/) <$> sunMass <*> marsMass


Sadly, this gives too many answers, and some of them are wrong because
they assume different Earth masses in calculating the Sun and Mars masses,
which leads to inconsistent calculations.


This might be related to the problem addressed by Sebastian Fischer, Oleg 
Kiselyov and Chung-chieh Shan in their ICFP 2009 paper on purely 
functional lazy non-deterministic programming.


  http://www.cs.rutgers.edu/~ccshan/rational/lazy-nondet.pdf

An implementation seems to be available on hackage.

  http://hackage.haskell.org/package/explicit-sharing

Tillmann

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads with "The" contexts?

2012-07-12 Thread Takayuki Muranushi
tl;dr: Is there any way to pass the supercombinator-level names (the
names in Haskell source code with zero indent) as context to a Monad
so that it will read from "The" context?

Hello, everyone. I'm thinking of representing our knowledge about our
life, the universe and everything as Haskell values. I'd also like to
address the uncertainties in our knowledge. The uncertainties are
usually continuous (probabilistic distributions) but that's another
story. Please forget about it for a while.

We learned that List is for nondeterministic context.

> earthMass, sunMass, marsMass :: [Double]

Let's pretend that there is large uncertainty in our knowledge of the
Earth mass.

> earthMass = [5.96e24, 5.97e24, 5.98e24]

Let's also pretend that we can measure the other bodies' masses only by
their ratio to the Earth mass, and the measurements have large uncertainties.

> sunMass = (*) <$>  [2.5e5, 3e5, 4e5] <*> earthMass
> marsMass = (*) <$> [0.01, 0.1, 1.0] <*> earthMass

Then, how many Mars mass object can we create by taking the sun apart?

> sunPerMars :: [Double]
> sunPerMars = (/) <$> sunMass <*> marsMass

Sadly, this gives too many answers, and some of them are wrong because
they assume different Earth masses in calculating the Sun and Mars masses,
which leads to inconsistent calculations.
*Main> length $ sunPerMars
81


We had to do it this way:

> sunMass' e = map (e*)  [2.5e5, 3e5, 4e5]
> marsMass' e = map (e*) [0.01, 0.1, 1.0]

> sunPerMars' :: [Double]
> sunPerMars' = do
>   e <- earthMass
>   (/) <$> sunMass' e <*> marsMass' e

to have correct candidates (with duplicates).

*Main> length $ sunPerMars'
27

The use of (e <- earthMass) seems inevitable for representing that the
two Earth masses are taken from the same source of nondeterminism.
However, as the chain of reasoning grows, we can easily imagine that
the function arguments will grow impractically large. To get the Higgs
mass, we will need to feed them all the history of research that led
to the measurement of it.

There is "the" source of nondeterminism for Earth mass we will always use.

Is there a way to represent this? For example, can we define
earthMass'' , sunMass'' , marsMass'' all in separate modules, and yet
have (length $ sunPerMars'' == 27) ?



By the way,
*Main> length $ nub $ sort sunPerMars'
16

is not 9. That's another story, I said!

Thanks in advance.
-- 
Takayuki MURANUSHI
The Hakubi Center for Advanced Research, Kyoto University
http://www.hakubi.kyoto-u.ac.jp/02_mem/h22/muranushi.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-23 Thread David Barbour
Space leaks, time leaks, resource leaks, subtle divergence issues when
filtering lists, etc.

On Mon, Jan 23, 2012 at 11:57 AM, Jake McArthur wrote:

> On Mon, Jan 23, 2012 at 10:45 AM, David Barbour 
> wrote:
> > the repeated failures of attempting to model stream processing with
> infinite
> > lists,
>
> I'm curious about what failures you're talking about.
>
> - Jake
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-23 Thread Jake McArthur
On Mon, Jan 23, 2012 at 10:45 AM, David Barbour  wrote:
> the repeated failures of attempting to model stream processing with infinite
> lists,

I'm curious about what failures you're talking about.

- Jake

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-23 Thread David Barbour
Thanks for the reference. I base my opinion on my own observations - e.g.
the repeated failures of attempting to model stream processing with
infinite lists, the relative success of modeling exceptions explicitly with
monads compared to the use of `fail` or SomeException, etc.

On Mon, Jan 23, 2012 at 6:29 AM, Sebastian Fischer wrote:

> On Sun, Jan 22, 2012 at 5:25 PM, David Barbour 
> wrote:
> > The laws for monads only apply to actual values and combinators of the
> monad algebra
>
> You seem to argue that, even in a lazy language like Haskell,
> equational laws should be considered only for values, as if they where
> stated for a total language. This kind of reasoning is called "fast
> and loose" in the literature and the conditions under which it is
> justified are established by Danielsson and others:
>
>
> http://www.cse.chalmers.se/~nad/publications/danielsson-et-al-popl2006.html
>
> Sebastian
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-23 Thread Sebastian Fischer
On Sun, Jan 22, 2012 at 5:25 PM, David Barbour  wrote:
> The laws for monads only apply to actual values and combinators of the monad 
> algebra

You seem to argue that, even in a lazy language like Haskell,
equational laws should be considered only for values, as if they were
stated for a total language. This kind of reasoning is called "fast
and loose" in the literature and the conditions under which it is
justified are established by Danielsson and others:

http://www.cse.chalmers.se/~nad/publications/danielsson-et-al-popl2006.html

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-22 Thread David Barbour
2012/1/22 MigMit 

>
>
> Sent from iPad
>
> On 22.01.2012, at 20:25, David Barbour wrote:
> > Attempting to shoehorn `undefined` into your reasoning about domain
> algebras and models and monads is simply a mistake.
>
> No. Using the complete semantics — which includes bottoms aka undefined —
> is a pretty useful technique, especially in a non-strict language.


It is a mistake. You mix semantic layers - confusing the host language with
the embedded language. If you need to model non-termination or exceptions
or the like, you should model them explicitly as values in your model. That
is, *each* layer of abstraction that needs `undefined` should explicitly
have its own representation for such concepts, rather than borrowing
implicitly from the host.

Regards,

Dave
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-22 Thread MigMit


Sent from iPad

On 22.01.2012, at 20:25, David Barbour wrote:
> Attempting to shoehorn `undefined` into your reasoning about domain algebras 
> and models and monads is simply a mistake. 

No. Using the complete semantics — which includes bottoms aka undefined — is a 
pretty useful technique, especially in a non-strict language.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-22 Thread David Barbour
> observably different from `undefined`

If we understand `undefined` as meaning a computation that never ends, then
you cannot ever observe whether one `undefined` is or is not equivalent to
another. In strict languages, this is especially obvious.

In any case, I don't accept a concept of `monads` that changes so
drastically based upon the host language. The laws for monads only apply to
actual values and combinators of the monad algebra - and, since `undefined`
is not a value, it need not apply. Similarly, algebraic laws for integer
arithmetic don't need to account for `undefined`. And our geometry
abstractions and theorems don't need to account for `undefined`.

Attempting to shoehorn `undefined` into your reasoning about domain
algebras and models and monads is simply a mistake.

Regards,

Dave

On Sun, Jan 22, 2012 at 6:49 AM, Sebastian Fischer wrote:

> On Sat, Jan 21, 2012 at 8:09 PM, David Barbour 
> wrote:
> > In any case, I think the monad identity concept is messed up. The property:
> >   return x >>= f = f x
> >
> > Logically only has meaning when `=` applies to values in the domain.
> > `undefined` is not a value in the domain.
> >
> > We can define monads - which meet monad laws - even in strict languages.
>
> In strict languages both `return undefined >>= f` and `f undefined`
> are observably equivalent to `undefined` so the law holds.
>
> In a lazy language both sides might be observably different from
> `undefined` but need to be consistently so. The point of equational
> laws is that one can replace one side with the other without observing
> a difference. Your implementation of `StrictT` violates this
> principle.
>
> Sebastian
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-22 Thread Sebastian Fischer
On Sat, Jan 21, 2012 at 8:09 PM, David Barbour  wrote:
> In any case, I think the monad identity concept is messed up. The property:
>   return x >>= f = f x
>
> Logically only has meaning when `=` applies to values in the domain.
> `undefined` is not a value in the domain.
>
> We can define monads - which meet monad laws - even in strict languages.

In strict languages both `return undefined >>= f` and `f undefined`
are observably equivalent to `undefined` so the law holds.

In a lazy language both sides might be observably different from
`undefined` but need to be consistently so. The point of equational
laws is that one can replace one side with the other without observing
a difference. Your implementation of `StrictT` violates this
principle.

Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Barbour
Evaluating the argument/result was my intention. Evaluating the computation
itself might be useful in some cases, though.

Regards,

Dave

On Sat, Jan 21, 2012 at 3:20 PM, Yves Parès  wrote:

> > (StrictT op) >>= f = StrictT (op >>= \ x -> x `seq` runStrictT (f x))
>
> Are you sure? Here you evaluate the result, and not the computation itself.
> Wouldn't it be:
>
> (StrictT op) >>= f  = op `seq` StrictT (op >>= \x -> runStrictT (f x))
>
> ??
>
> 2012/1/21 David Barbour 
>
>> On Sat, Jan 21, 2012 at 10:08 AM, Roman Cheplyaka wrote:
>>
>>> * David Barbour  [2012-01-21 10:01:00-0800]
>>> > As noted, IO is not strict in the value x, only in the operation that
>>> > generates x. However, should you desire strictness in a generic way, it
>>> > would be trivial to model a transformer monad to provide it.
>>>
>>> Again, that wouldn't be a monad transformer, strictly speaking, because
>>> "monads" it produces violate the left identity law.
>>>
>>
>> It meets the left identity law in the same sense as the Eval monad from
>> Control.Strategies.
>>
>> http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/src/Control-Parallel-Strategies.html#Eval
>>
>> That is, so long as values at each step can be evaluated to WHNF, it
>> remains true that `return x >>= f` = f x.
>>
>> I did mess up the def of >>=. I think it should be:
>>   (StrictT op) >>= f = StrictT (op >>= \ x -> x `seq` runStrictT (f x))
>>
>> But I'm not interested enough to actually pull out an interpreter and
>> test...
>>
>> Regards,
>>
>> Dave
>>
>>
>> ___
>> Haskell-Cafe mailing list
>> Haskell-Cafe@haskell.org
>> http://www.haskell.org/mailman/listinfo/haskell-cafe
>>
>>
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Yves Parès
> (StrictT op) >>= f = StrictT (op >>= \ x -> x `seq` runStrictT (f x))

Are you sure? Here you evaluate the result, and not the computation itself.
Wouldn't it be:

(StrictT op) >>= f  = op `seq` StrictT (op >>= \x -> runStrictT (f x))

??

2012/1/21 David Barbour 

> On Sat, Jan 21, 2012 at 10:08 AM, Roman Cheplyaka wrote:
>
>> * David Barbour  [2012-01-21 10:01:00-0800]
>> > As noted, IO is not strict in the value x, only in the operation that
>> > generates x. However, should you desire strictness in a generic way, it
>> > would be trivial to model a transformer monad to provide it.
>>
>> Again, that wouldn't be a monad transformer, strictly speaking, because
>> "monads" it produces violate the left identity law.
>>
>
> It meets the left identity law in the same sense as the Eval monad from
> Control.Strategies.
>
> http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/src/Control-Parallel-Strategies.html#Eval
>
> That is, so long as values at each step can be evaluated to WHNF, it
> remains true that `return x >>= f` = f x.
>
> I did mess up the def of >>=. I think it should be:
>   (StrictT op) >>= f = StrictT (op >>= \ x -> x `seq` runStrictT (f x))
>
> But I'm not interested enough to actually pull out an interpreter and
> test...
>
> Regards,
>
> Dave
>
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Roman Cheplyaka
* David Barbour  [2012-01-21 11:09:43-0800]
> Logically only has meaning when `=` applies to values in the domain.
> `undefined` is not a value in the domain.
> 
> We can define monads - which meet monad laws - even in strict languages.

In strict languages 'undefined' is not a value in the domain indeed, but
it is in non-strict languages, exactly because they are non-strict.

I think that's what Robert Harper meant by saying that Haskell doesn't
have a type of lists, while ML has one [1].

[1]: http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/

-- 
Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Barbour
On Sat, Jan 21, 2012 at 11:08 AM, Roman Cheplyaka  wrote:

> * David Barbour  [2012-01-21 11:02:40-0800]
> > On Sat, Jan 21, 2012 at 10:51 AM, David Menendez 
> wrote:
> >
> > > The Eval monad has the property: return undefined >>= const e = e.
> > >
> >
> > You can't write `const e` in the Eval monad.
>
> Why not?
>
> ghci> runEval $ return undefined >>= const (return ())
> ()
>

It is, at the very least, utterly pointless to have an Eval monad with
`const`.

Regards,

Dave
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Barbour
Oops, I was misreading. You have `e` here as the next monad.

In any case, I think the monad identity concept is messed up. The property:
  return x >>= f = f x

Logically only has meaning when `=` applies to values in the domain.
`undefined` is not a value in the domain.

We can define monads - which meet monad laws - even in strict languages.


On Sat, Jan 21, 2012 at 11:02 AM, David Barbour  wrote:

> On Sat, Jan 21, 2012 at 10:51 AM, David Menendez wrote:
>
>> The Eval monad has the property: return undefined >>= const e = e.
>>
>
> You can't write `const e` in the Eval monad.
>
>
>
>> From what I can tell, your proposed monads do not.
>>
>
> You can't write `const e` as my proposed monad, either.
>
> Regards,
>
> Dave
>
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Roman Cheplyaka
* David Barbour  [2012-01-21 11:02:40-0800]
> On Sat, Jan 21, 2012 at 10:51 AM, David Menendez  wrote:
> 
> > The Eval monad has the property: return undefined >>= const e = e.
> >
> 
> You can't write `const e` in the Eval monad.

Why not?

ghci> runEval $ return undefined >>= const (return ())
()

-- 
Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Barbour
On Sat, Jan 21, 2012 at 10:51 AM, David Menendez  wrote:

> The Eval monad has the property: return undefined >>= const e = e.
>

You can't write `const e` in the Eval monad.



> From what I can tell, your proposed monads do not.
>

You can't write `const e` as my proposed monad, either.

Regards,

Dave
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Menendez
On Sat, Jan 21, 2012 at 1:45 PM, David Barbour  wrote:
> On Sat, Jan 21, 2012 at 10:08 AM, Roman Cheplyaka  wrote:
>>
>> * David Barbour  [2012-01-21 10:01:00-0800]
>> > As noted, IO is not strict in the value x, only in the operation that
>> > generates x. However, should you desire strictness in a generic way, it
>> > would be trivial to model a transformer monad to provide it.
>>
>> Again, that wouldn't be a monad transformer, strictly speaking, because
>> "monads" it produces violate the left identity law.
>
>
> It meets the left identity law in the same sense as the Eval monad from
> Control.Strategies.
>  http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/src/Control-Parallel-Strategies.html#Eval

The Eval monad has the property: return undefined >>= const e = e.
From what I can tell, your proposed monads do not.

-- 
Dave Menendez 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Barbour
On Sat, Jan 21, 2012 at 10:08 AM, Roman Cheplyaka  wrote:

> * David Barbour  [2012-01-21 10:01:00-0800]
> > As noted, IO is not strict in the value x, only in the operation that
> > generates x. However, should you desire strictness in a generic way, it
> > would be trivial to model a transformer monad to provide it.
>
> Again, that wouldn't be a monad transformer, strictly speaking, because
> "monads" it produces violate the left identity law.
>

It meets the left identity law in the same sense as the Eval monad from
Control.Strategies.

http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/src/Control-Parallel-Strategies.html#Eval

That is, so long as values at each step can be evaluated to WHNF, it
remains true that `return x >>= f` = f x.

I did mess up the def of >>=. I think it should be:
  (StrictT op) >>= f = StrictT (op >>= \ x -> x `seq` runStrictT (f x))

But I'm not interested enough to actually pull out an interpreter and
test...

Regards,

Dave
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Steve Horne

On 21/01/2012 18:08, Steve Horne wrote:
Even so, to see that strictness isn't the issue, imagine that (>>=) 
were rewritten using a unary executeActionAndExtractResult function. 
You could easily rewrite your lambda to contain this expression in 
place of x, without actually evaluating that 
executeActionAndExtractResult. You'd still be doing a form of 
composition of IO actions. And when you finally did force the 
evaluation of the complete composed expression, the ordering of side 
effects would still be preserved - provided you only used that 
function as an intermediate step in implementing (>>=) at least.


Doh!!! - that function *does* exist and is spelled "unsafePerformIO". 
But AFAIK it isn't used for compilation/interpretation of (>>=) operators.


If it *were* used, the rewriting would also need an extra "return".

So...

  a >>= \b -> f b

becomes...

  return (f (unsafePerformIO a))


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Roman Cheplyaka
* David Barbour  [2012-01-21 10:01:00-0800]
> As noted, IO is not strict in the value x, only in the operation that
> generates x. However, should you desire strictness in a generic way, it
> would be trivial to model a transformer monad to provide it.

Again, that wouldn't be a monad transformer, strictly speaking, because
"monads" it produces violate the left identity law.

-- 
Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Steve Horne

On 21/01/2012 17:29, Victor S. Miller wrote:

The "do" notation translates

do {x<- a;f}  into

a>>=(\x ->  f)

However when we're working in the IO monad the semantics we want requires that 
the lambda expression be strict in its argument.  So is this a special case for 
IO?  If I wanted this behavior in other monads is there a way to specify that?

IO is a special case, but strictness isn't the issue.

The value x cannot be evaluated in concrete form (I think the technical 
term is "head normal form") until the IO action a has been executed. 
However, evaluating to head normal form isn't really the key issue. The 
key issue is that the effects of the action must occur at the correct time.


This is why the internals of the IO monad are a "black box" (you can't 
use pattern matching to sneak a look inside and cheat the evaluation 
order) and, yes, it's why the IO monad is a bit special.


But you could still in principle use a non-strict evaluation order. It's 
a bit like evaluating (a + b) * c - you don't need to specify strict, 
lazy or whatever to know that you need to evaluate (a + b) before you 
can evaluate the (? * c); that aspect of evaluation ordering is fixed 
anyway.


In this case, it's just that instead of being able to rewrite the (\x -> 
f) to (\someExpression -> f), there is no expression that you can insert 
there - there is e.g. no unary operator to extract out the result of an 
action and make it available as a normal value outside the IO context. 
If there were, it could defeat the whole point of the IO monad.


Even so, to see that strictness isn't the issue, imagine that (>>=) were 
rewritten using a unary executeActionAndExtractResult function. You 
could easily rewrite your lamba to contain this expression in place of 
x, without actually evaluating that executeActionAndExtractResult. You'd 
still be doing a form of composition of IO actions. And when you finally 
did force the evaluation of the complete composed expression, the 
ordering of side effects would still be preserved - provided you only 
used that function as an intermediate step in implementing (>>=) at least.


BTW - there's a fair chance I'm still not understanding this correctly 
myself (still newbie), so wait around to see everyone explain why I'm 
insane before taking this too seriously.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread David Barbour
As noted, IO is not strict in the value x, only in the operation that
generates x. However, should you desire strictness in a generic way, it
would be trivial to model a transformer monad to provide it.

E.g.

data StrictT m a = StrictT (m a)

runStrictT :: StrictT m a -> m a
runStrictT (StrictT op) = op

instance (Monad m) => Monad (StrictT m) where
  return x = StrictT (return x)
  (StrictT op) >>= f = op >>= \ a -> a `seq` StrictT (f a)
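
For reference, a compilable sketch that folds in the corrected (>>=) given later in this thread; the newtype and the Functor/Applicative boilerplate are my additions so it builds with current GHC, not part of the original message.

  import Control.Monad (ap, liftM)

  newtype StrictT m a = StrictT (m a)

  runStrictT :: StrictT m a -> m a
  runStrictT (StrictT op) = op

  instance Monad m => Functor (StrictT m) where
    fmap = liftM

  instance Monad m => Applicative (StrictT m) where
    pure x = StrictT (return x)
    (<*>)  = ap

  instance Monad m => Monad (StrictT m) where
    return = pure
    StrictT op >>= f = StrictT (op >>= \x -> x `seq` runStrictT (f x))

As discussed elsewhere in the thread, this still violates the left identity law in the presence of bottom: over the Identity base monad, return undefined >>= const (return 42) diverges when run, whereas const (return 42) undefined is just return 42.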


On Sat, Jan 21, 2012 at 9:29 AM, Victor S. Miller
wrote:

> The "do" notation translates
>
> do {x <- a;f}  into
>
> a>>=(\x -> f)
>
> However when we're working in the IO monad the semantics we want requires
> that the lambda expression be strict in its argument.  So is this a special
> case for IO?  If I wanted this behavior in other monads is there a way to
> specify that?
>
> Victor
>
> Sent from my iPhone
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Roman Cheplyaka
* Victor S. Miller  [2012-01-21 12:29:32-0500]
> The "do" notation translates
> 
> do {x <- a;f}  into
> 
> a>>=(\x -> f)
> 
> However when we're working in the IO monad the semantics we want
> requires that the lambda expression be strict in its argument.

I'm not aware of any semantics that would require that.

According to a monad law,

  return x >>= f

should be equivalent to (f x). In particular,

  return x >>= const (return ())

is equivalent to (const (return ()) x) or simply (return ()).

So, const is non-strict in its second argument even when used in (>>=).
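
A small check of this point (my example, not from the message): in IO, the bound value is not forced by (>>=); only the action itself is sequenced.

  main :: IO ()
  main = do
    x <- return (undefined :: Int)  -- the action runs, but x is never forced
    putStrLn "still here"
    -- print x                      -- uncommenting this line would make it crash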

-- 
Roman I. Cheplyaka :: http://ro-che.info/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads, do and strictness

2012-01-21 Thread MigMit

On 21 Jan 2012, at 21:29, Victor S. Miller wrote:

> The "do" notation translates
> 
> do {x <- a;f}  into
> 
> a>>=(\x -> f)
> 
> However when we're working in the IO monad the semantics we want requires 
> that the lambda expression be strict in its argument.  So is this a special 
> case for IO?  If I wanted this behavior in other monads is there a way to 
> specify that?

No, why? The (>>=) combinator (for the IO monad) is strict in its first 
argument, that's all. We don't impose any special requirements on the lambda 
expression.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads, do and strictness

2012-01-21 Thread Victor S. Miller
The "do" notation translates

do {x <- a;f}  into

a>>=(\x -> f)

However when we're working in the IO monad the semantics we want requires that 
the lambda expression be strict in its argument.  So is this a special case for 
IO?  If I wanted this behavior in other monads is there a way to specify that?

Victor

Sent from my iPhone
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads in the 'hood

2010-12-09 Thread Greg Meredith
Dear Haskellians,

Keepin' it light. For your amusement this weekend: monads in the 'hood.

Best wishes,

--greg

-- 
L.G. Meredith
Managing Partner
Biosimilarity LLC
1219 NW 83rd St
Seattle, WA 98117

+1 206.650.3740

http://biosimilarity.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads and Functions sequence and sequence_

2010-10-30 Thread Roman Cheplyaka
* Mark Spezzano  [2010-10-30 15:37:30+1030]
> Can somebody please explain exactly how the monad functions "sequence"
> and "sequence_" are meant to work?

The others in this thread have already explained how these functions
work, so I'll just give an example how they are used.

Consider the following task: to read 30 lines from standard input.
For one line you would use an action

  getLine :: IO String

How to execute this 30 times? Haskell has a function

  replicate :: Int -> a -> [a]

which takes a number and produces a list with that number of
identical elements.

So this is close to what we need:

  replicate 30 getLine :: [IO String]

This is a list containing 30 'getLine' actions. But the list of
actions _is not_ an action itself. This is what sequence does -- it
transforms a list of actions into a single action which gathers the
results into one list. As its name suggests, it does sequencing of
actions.

  sequence $ replicate 30 getLine :: IO [String]

Exactly what we need, an action producing a list of lines read.

Now, let's consider this code:

  sequence [ putStrLn $ show i ++ " green bottles standing on the wall"
   | i <- reverse [1..10] ] :: IO [()]

This action prints 10 lines and also returns the gathered results, i.e.
10 '()', one from each putStrLn (recall that putStrLn has type
"String -> IO ()").

Most probably we don't care about those '()', but they still occupy
memory and introduce a space leak as explained here[1]. That's why a
version of sequence is introduced which ignores the results of actions
and simply returns (). It is called "sequence_".

  sequence_ :: (Monad m) => [m a] -> m ()

As a side note, we can rewrite our last example without using list
comprehension in the following way:

  let p i = putStrLn $ show i ++ " green bottles standing on the wall"
  in  sequence_ $ map p $ reverse [1..10]

The combinations of sequence and sequence_ with map are so common that
they have special names:

  mapM  = \f -> sequence  . map f :: (Monad m) => (a -> m b) -> [a] -> m [b]
  mapM_ = \f -> sequence_ . map f :: (Monad m) => (a -> m b) -> [a] -> m ()

[1] 
http://neilmitchell.blogspot.com/2008/12/mapm-mapm-and-monadic-statements.html

-- 
Roman I. Cheplyaka :: http://ro-che.info/
"Don't let school get in the way of your education." - Mark Twain
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads and Functions sequence and sequence_

2010-10-29 Thread Gregory Crosswhite

The expression

sequence [a,b,c,...]

is roughly equivalent to

do
r_a <- a
r_b <- b
r_c <- c
...
return [r_a,r_b,r_c,...]

The expression

sequence_ [a,b,c,...]

is roughly equivalent to

do
a
b
c
...
return ()

Does that help?

Cheers,
Greg

On 10/29/10 10:07 PM, Mark Spezzano wrote:

Hi,

Can somebody please explain exactly how the monad functions "sequence" and 
"sequence_" are meant to work?

I have almost every Haskell textbook, but there's surprisingly little 
information in them about the two functions.

 From what I can gather, "sequence" and "sequence_" behave differently 
depending on the types of the Monads that they are processing. Is this correct? Some concrete 
examples would be really helpful.

Even references to some research papers that explain the rationale behind these 
(and all the other..?) monad functions would be great.

Thanks in advance,


Mark Spezzano
  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads and Functions sequence and sequence_

2010-10-29 Thread Mark Spezzano
Hi,

Can somebody please explain exactly how the monad functions "sequence" and 
"sequence_" are meant to work?

I have almost every Haskell textbook, but there's surprisingly little 
information in them about the two functions.

From what I can gather, "sequence" and "sequence_" behave differently depending 
on the types of the Monads that they are processing. Is this correct? Some 
concrete examples would be really helpful.

Even references to some research papers that explain the rationale behind these 
(and all the other..?) monad functions would be great.

Thanks in advance,


Mark Spezzano
 ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads from Functors

2009-04-14 Thread Sebastian Fischer

Edward Kmett wrote:

I think there is another way to view the ContT/Codensity/Monad  
generated by a functor, in terms of an accumulating parameter that  
may be informative. [...]
Now what I find interesting is that Yoneda is basically carrying  
around an accumulator for 'fmap' applications. [...]
When you look at the definition for Codensity f a, what it  
effectively supplies is a 'better accumulator', one which is large  
enough to encode (>>=).


Indeed an insightful perspective!

Ryan Ingram wrote:


If we have a computation

   a :: Monad m => m a

and we have a pointed functor `f`, then we can get the result of the
computation `a` in our functor `f` because `runContT a :: f a`.


Sure, but you can also do the same much more simply:

mkPointed :: forall f a. Pointed f => (forall m. Monad m => m a) -> f a
mkPointed m = point $ runIdentity m


Ha! Nice damper. So the interesting part is not that one gets a monad  
for free but that one gets a monad for free that can be combined with  
effects defined elsewhere. That seems closely related to your  
MonadPrompt. Thanks for the pointer.



You may as well do the following:


data MPlus a = Zero | Pure a | Plus (MPlus a) (MPlus a)
instance Monad MPlus where ...
instance MonadPlus MPlus where ...


mkAlternative :: forall f a. Alternative f => (forall m. MonadPlus m => m a) -> f a


(all this code is really saying is that being polymorphic over
MonadPlus is kind of dumb, because you aren't really using Monad at
all)


Well, you can use return, bind, mzero and mplus... I'd consider this
really *using* the monad.
You are right that your version is equivalent to the "generic"
MonadPlus instance, which (only) eliminates the intermediate data
structure.



Without any way to lift other effects from the underlying functor into
ContT, I don't really see how the "generic" ContT MonadPlus instance
buys you much :)


I'm not sure which other effects can be lifted without bind and which  
need lift. But anyway, there is a lift function that can be used if  
necessary. Non-determinism is one interesting effect that does not  
need lift.


The generic MonadPlus (ContT t) instance buys you a lot, if you are  
interested in non-determinism and search. Inspired by all your replies  
I have applied ContT to a type for difference lists which resulted in  
the well-known two-continuation model for depth-first search --
refactoring it modularly into two different parts that are useful on
their own. Replacing one of those parts led to a CPS implementation
of breadth-first search that I had not been aware of previously.
Please tell me if you have seen this implementation before. I have  
written about it:


http://www-ps.informatik.uni-kiel.de/~sebf/haskell/stealing-bind.html


Ryan:
 > The CPS transform in ContT also has the nice property that it makes
 > most applications of >>= in the underlying monad be
 > right-associative.

Do you have a specific reason to say *most* (rather than *all*) here?


Yes, because

runContT ( (lift (a >>= f)) >>= g )

still has a left-associative >>=.
[...]
I know this was a pretty in-depth example, so I hope I've made it  
clear :)


Yes, thanks!

Cheers,
Sebastian

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads from Functors

2009-04-09 Thread Edward Kmett
I think there is another way to view the ContT/Codensity/Monad generated by
a functor, in terms of an accumulating parameter that may be informative. I
kind of alluded to it in the post on Kan extensions.

Take for a moment, Yoneda f given in:

http://comonad.com/haskell/category-extras/src/Control/Functor/Yoneda.hs

> newtype Yoneda f a = Yoneda { runYoneda :: forall b. (a -> b) -> f b }

You can readily construct an instance of Functor for Yoneda without any
concern for the underlying instance on f.

> instance Functor (Yoneda f) where
> fmap f m = Yoneda (\k -> runYoneda m (k . f))

> lowerYoneda :: Yoneda f a -> f a
> lowerYoneda m = runYoneda m id

Basically the operation of creating a value of type Yoneda f a requires you
to use the fmap for the functor, so there is a Mendler-style encoding going
on here, no magic dictionaries are required to use the resulting value.
Now what I find interesting is that Yoneda is basically carrying around an
accumulator for 'fmap' applications.

Basically it enforces the fusion rule that fmap f . fmap g = fmap (f . g) by
accumulating mappings and applying them in one go when you ask for the
result.

As an aside, when you extract a value out of (Yoneda f) it uses the
function it has accumulated so far from pushing computations. And it
had to use the secret valid version of fmap that you had to know in
order to build a function with the signature (forall b. (a -> b) -> f b),
so no work is saved; it's just fmap fusion.
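
For instance (a small illustration of my own; `liftYoneda` is the
obvious injection, it is not part of the snippet above):

> liftYoneda :: Functor f => f a -> Yoneda f a
> liftYoneda fa = Yoneda (\k -> fmap k fa)

> fused :: [Int]
> fused = lowerYoneda (fmap (+1) (fmap (*2) (liftYoneda [1,2,3])))
> -- the two fmaps accumulate into a single fmap ((+1) . (*2)) over the
> -- underlying list, giving [3,5,7]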

When you make a Monad out of Yoneda, that Monad can 'push' the map along to
the next bind operation so it can come along for the ride, avoiding the need
for an extra (>>=) to liftM the function you want to map. (It still has to
appeal to the secret fmap you knew to make the Yoneda f a value!) Of course,
this steps outside of the principles I'm espousing in this post by using a
dictionary.

> instance Monad f => Monad (Yoneda f) where
>   return a = Yoneda (\f -> return (f a))
>   m >>= k = Yoneda (\f -> runYoneda m id >>= \a -> runYoneda (k a) f)

When you look at the definition for Codensity f a, what it effectively
supplies is a 'better accumulator', one which is large enough to encode
(>>=).

> newtype Codensity m a = Codensity { runCodensity :: forall b. (a -> m b) -> m b }

> instance Monad (Codensity f) where
>   return x = Codensity (\k -> k x)
>   m >>= k = Codensity (\c -> runCodensity m (\a -> runCodensity (k a) c))

You then have to pass it a function of the type (a -> m b) to get the value
out. Effectively you tell it how to return by injecting something. And since
it can return and bind, you can define liftM thanks to the monad laws,
giving an admissible implementation of Functor:

 > instance Functor (Codensity k) where
>   fmap f m = Codensity (\k -> runCodensity m (k . f))

Again no dictionaries were harmed in the encoding of this function and the
type is no bigger than it needs to be to express these properties.

The Codensity monad above can be seen as just an instance of enforced bind
fusion, enforcing choice of association of (>>=). This consequence
is logical because the result of a CPS transform is invariant under choice
of reduction order.

The reason I mention this is that this scenario is just an instance of
a very general pattern of Mendler-style encoding. I have another example of
Mendler-style encoding, this time for recursion schemes near the bottom of:
http://knol.google.com/k/edward-kmett/catamorphisms/

I find this idiom to be rather effective when I want to enforce the
equivalent of a rewrite rule or law about a type or work around the
requirement that there can be only one instance of a particular typeclass.
Yoneda doesn't care that f implements Functor, only that it satisfies the
properties of a functor, and that fmap _could_ be defined for it. Codensity
doesn't care that f is a monad. Well, it does if you want to do any
computations with f, but you could have several different lifts to different
monad/applicative 'instances' over f, as long as you lift into the codensity
monad with a bind operation and lower back out with a return operation that
agree. The dictionary for Monad m is irrelevant to the construction of
Codensity m.

> liftCodensity :: Monad m => m a ->  Codensity m a
> liftCodensity m = Codensity (m >>=)

> lowerCodensity :: Monad m => Codensity m a -> m a
> lowerCodensity a = runCodensity a return
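
As a quick round-trip check (a toy example of mine, not from the post;
a current GHC would also want an Applicative instance for Codensity,
written in the same style as the Functor instance above):

> example :: Maybe Int
> example = lowerCodensity $
>   liftCodensity (Just 1) >>= \x ->
>   liftCodensity (Just 2) >>= \y ->
>   return (x + y)
> -- example == Just 3; every bind performed while inside Codensity is
> -- re-associated to the right before it reaches Maybe's (>>=).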

-Edward Kmett


On Thu, Apr 9, 2009 at 5:22 AM, Sebastian Fischer <
s...@informatik.uni-kiel.de> wrote:

> Hi all,
>
> thanks for pointing me at the Codensity monad and for mentioning the
> lift operation! I'll try to sum up what I learned from this thread.
>
> In short:
>
> What I find interesting is
>
>  1. you can express the results of monadic computations using
> functors that do not themselves support the `>>=` operation. You
> only need an equivalent of `return`.
>
> and
>
>  2. a variety of *non-*monadic effects (e.g., non-determinism) can be
> lifted to this "monad for free", so you get, e.g., a
>

Re: [Haskell-cafe] Monads from Functors

2009-04-09 Thread Ryan Ingram
On Thu, Apr 9, 2009 at 2:22 AM, Sebastian Fischer
 wrote:
> Let's see whether I got your point here. If we have a computation
>
>    a :: Monad m => m a
>
> and we have a pointed functor `f`, then we can get the result of the
> computation `a` in our functor `f` because `runContT a :: f a`.

Sure, but you can also do the same much more simply:

mkPointed :: forall f a. Pointed f => (forall m. Monad m => m a) -> f a
mkPointed m = point $ runIdentity m

> Clearly, `runContT (lift b)` (or `b` itself) and `runContT b` may
> behave differently (even if they both (can) have the type `f b`)
> because `ContT` 'overwrites' the definition for `>>=` of `f` if `f`
> has one. So it depends on the application which of those behaviours is
> desired.

My point was slightly more general; what I was saying was there is not
a huge use for the "generic" MonadPlus using Alternative, because
without "lift", you have a hard time embedding any other effects from
the applicative into the continuation use.  You may as well do the
following:

> data MPlus a = Zero | Pure a | Plus (MPlus a) (MPlus a)
> instance Monad MPlus where
>    return = Pure
>    Zero     >>= k = Zero
>    Pure a   >>= k = k a
>    Plus a b >>= k = Plus (a >>= k) (b >>= k)
> instance MonadPlus MPlus where
>    mzero = Zero
>    mplus = Plus

> mkAlternative :: forall f a. Alternative f => (forall m. MonadPlus m => m a) -> f a
> mkAlternative m = convertPlus m where
>    convertPlus :: forall b. MPlus b -> f b
>    convertPlus Zero       = empty
>    convertPlus (Pure a)   = pure a
>    convertPlus (Plus a b) = convertPlus a <|> convertPlus b
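
For example (a usage sketch of my own for the function above; a current
GHC would also want the usual Functor and Applicative instances for
MPlus):

> pick :: MonadPlus m => m Int
> pick = return 1 `mplus` (return 2 `mplus` mzero)
>
> asMaybe :: Maybe Int
> asMaybe = mkAlternative pick   -- Just 1
>
> asList :: [Int]
> asList = mkAlternative pick    -- [1,2]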

(all this code is really saying is that being polymorphic over
MonadPlus is kind of dumb, because you aren't really using Monad at
all)

Without any way to lift other effects from the underlying functor into
ContT, I don't really see how the "generic" ContT MonadPlus instance
buys you much :)

> Ryan:
>  > The CPS transform in ContT also has the nice property that it makes
>  > most applications of >>= in the underlying monad be
>  > right-associative.
>
> Do you have a specific reason to say *most* (rather than *all*) here?

Yes, because
>  runContT ( (lift (a >>= f)) >>= g )
still has a left-associative >>=.

Now of course that looks silly, but things aren't as simple as they
seem; in particular I ran into this first when using
Control.Monad.Prompt[1] (which is really just a sophisticated Cont
monad with a nice interface)

> data MPlus m a = Zero | Plus (m a) (m a)
>
> instance MonadPlus (RecPrompt MPlus) where
>    mzero = prompt Zero
>    mplus x y = prompt (Plus x y)
>
> runPlus :: forall r. RecPrompt MPlus r -> [r]
> runPlus = runPromptC ret prm . unRecPrompt where
>    ret :: r -> [r]
>    ret x = [x]
>
>    prm :: forall a. MPlus (RecPrompt MPlus) a -> (a -> [r]) -> [r]
>    prm Zero k = []
>    prm (Plus a b) k = (runPlus a ++ runPlus b) >>= k
>    -- this >>= is in the list monad

Now, consider runPlus ((mplus mzero (a >>= f)) >>= g), which uses the
Plus "effect"; this will reduce to
(runPlus (a >>= f)) >>= k
where k is the continuation that runs g using "ret" and "prm" to
convert to a list; the result still may have a left-associated >>= in
it, depending on the value of "a" and "f".

However, you're limited to at most one left associative bind per
"recursive" lifted operation; the other binds will all be
right-associative.

I know this was a pretty in-depth example, so I hope I've made it clear :)

  -- ryan

[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/MonadPrompt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads from Functors

2009-04-09 Thread Sebastian Fischer

Hi all,

thanks for pointing me at the Codensity monad and for mentioning the
lift operation! I'll try to sum up what I learned from this thread.

In short:

What I find interesting is

  1. you can express the results of monadic computations using
 functors that do not themselves support the `>>=` operation. You
 only need an equivalent of `return`.

and

  2. a variety of *non-*monadic effects (e.g., non-determinism) can be
 lifted to this "monad for free", so you get, e.g., a
 non-determinism *monad* even if all you have is a non-determinism
 *functor*.

Here is the long version:

Because `ContT` makes a monad out of anything with kind `* -> *`, we
also get instances for `Functor` and `Applicative` for free. We could
use the `Monad` instance to get them for free by
`Control.Applicative.WrappedMonad` but defining them from scratch
might be insightful, so let's try.

We could define an instance of `Functor` for `ContT t` as follows.

instance Functor (ContT t)
 where
  fmap = liftM

But let's see what we get if we inline this definition:

fmap f a
  = liftM f a
  = a >>= return . f
  = ContT (\k -> unContT a (\x -> unContT (return (f x)) k))
  = ContT (\k -> unContT a (\x -> unContT (ContT ($f x)) k))
  = ContT (\k -> unContT a (\x -> k (f x)))

That leads to the `Functor` instance described in the first post on
Kan extensions by Edward Kmett.

> instance Functor (ContT t)
>  where
>   fmap f a = ContT (\k -> unContT a (k.f))

We also get an Applicative instance for free:

instance Applicative (ContT t)
 where
  pure  = return
  (<*>) = ap

If we inline `ap` we get

f <*> a
  = f `ap` a
  = f >>= \g -> a >>= \x -> return (g x)
  = ContT (\k1 -> unContT f (\x1 -> unContT (a >>= \x -> return (x1 x)) k1))
  = ...
  = ContT (\k -> unContT f (\g -> unContT a (\x -> k (g x))))

So, a direct Applicative` instance would be:

> instance Applicative (ContT t)
>  where
>   pure x  = ContT ($x)
>   f <*> a = ContT (\k -> unContT f (\g -> unContT a (\x -> k (g x))))

The more interesting bits are conversions between the original and the
lifted types. As mentioned earlier, if `f` is a pointed functor, then
we can convert values of type `ContT f a` to values of type `f a`.

runContT :: Pointed f => ContT f a -> f a
runContT a = unContT a point

Ryan Ingram pointed in the other direction:

 > You are missing one important piece of the puzzle for ContT:
 >
 > ~~~
 > lift :: Monad m => m a -> ContT m a
 > lift m = ContT $ \k -> m >>= k
 > ~~~
 >
 > This >>= is the bind from the underlying monad.  Without this
 > operation, it's not very useful as a transformer!

That is a fine reason for the *class* declaration of `MonadTrans` to
mention `Monad m` as a constraint for `lift`. But as `ContT t` is a
monad for any `t :: * -> *`, a constraint `Monad t` on the *instance*
declaration for `Monad (ContT t)` is moot. This constraint is not
necessary to define `lift`.

> instance MonadTrans ContT
>  where
>   lift m = ContT (m>>=)

Ryan continues:

 > Without lift, it's quite difficult to get effects from the
 > underlying Applicative *into* ContT.  Similarly, your MonadPlus
 > instance would be just as good if replaced with the "free"
 > alternative functor:
 >
 > ~~~
 > data MPlus a = Zero | Pure a | Plus (MPlus a) (MPlus a)
 > ~~~
 >
 > and then transforming MPlus into the desired type after runContT;
 > it's the embedding of effects via lift that makes ContT useful.

Let's see whether I got your point here. If we have a computation

a :: Monad m => m a

and we have a pointed functor `f`, then we can get the result of the
computation `a` in our functor `f` because `runContT a :: f a`.

If we now have a computation with additional monadic effects (I use
non-determinism as a specific effect, but Edward Kmett shows in his
second post on Kan extensions how to lift other effects like Reader,
State, and IO)

b :: MonadPlus m => m b

then we have two possibilities to express the result using `f` if `f` is
also an alternative functor.

  1. If `f` is itself a monad (i.e. an instance of MonadPlus), then
 the expression `runContT (lift b)` has type `f b`. (Well, `b`
 itself has type `f b`..)

  2. If `f` is *not* a monad, we can still get the result of `b` in
 our functor `f` (using the `MonadPlus` instance from my previous
 mail), because `runContT b` also has type `f b`.

Clearly, `runContT (lift b)` (or `b` itself) and `runContT b` may
behave differently (even if they both (can) have the type `f b`)
because `ContT` 'overwrites' the definition for `>>=` of `f` if `f`
has one. So it depends on the application which of those behaviours is
desired.

Ryan:

 > The CPS transform in ContT also has the nice property that it makes
 > most applications of >>= in the underlying monad be
 > right-associative.

Do you have a specific reason to say *most* (rather than *all*) here?
Because if we have a left-associative application of `>>=` in `ContT`,
then we hav

Re: [Haskell-cafe] Monads from Functors

2009-04-08 Thread Ryan Ingram
On Wed, Apr 8, 2009 at 11:22 AM, Sebastian Fischer
 wrote:
>> newtype ContT t a = ContT { unContT :: forall r . (a -> t r) -> t r }
>>
>> instance Monad (ContT t)
>>  where
>>   return x = ContT ($x)
>>   m >>= f  = ContT (\k -> unContT m (\x -> unContT (f x) k))

> Both the `mtl` package and the `transformers` package use the same
> `Monad` instance for their `ContT` type but require `t` to be an
> instance of `Monad`. Why? [^1]

You are missing one important piece of the puzzle for ContT:

> lift :: Monad m => m a -> ContT m a
> lift m = ContT $ \k -> m >>= k

This >>= is the bind from the underlying monad.  Without this
operation, it's not very useful as a transformer!

Without lift, it's quite difficult to get effects from the underlying
Applicative *into* ContT.  Similarly, your MonadPlus instance would
be just as good if replaced with the "free" alternative functor:

data MPlus a = Zero | Pure a | Plus (MPlus a) (MPlus a)

and then transforming MPlus into the desired type after runContT; it's
the embedding of effects via lift that makes ContT useful.

The CPS transform in ContT also has the nice property that it makes
most applications of >>= in the underlying monad be right-associative.
This is important for efficiency for some monads, especially free
monads; the free monad discussed by jcc has an O(n^2) running cost for
n left-associative applications of >>=.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads from Functors

2009-04-08 Thread Edward Kmett
I discovered them and bundled them up a year or so back in category-extras.

http://comonad.com/haskell/category-extras/dist/doc/html/category-extras/Control-Monad-Codensity.html

I also wrote a series of blog posts including the derivation of these and
their dual in the form of right- and left- Kan extensions.

http://comonad.com/reader/2008/kan-extensions/
http://comonad.com/reader/2008/kan-extensions-ii/
http://comonad.com/reader/2008/kan-extension-iii/

I shared the connection to his paper on asymptotic improvement of the
performance of free monads with Janis Voigtlaender as well, after I
discovered the connection between these and that paper shortly thereafter.

-Edward Kmett

On Wed, Apr 8, 2009 at 2:22 PM, Sebastian Fischer <
s...@informatik.uni-kiel.de> wrote:

> > {-# LANGUAGE Rank2Types #-}
>
> Dear Haskellers,
>
> I just realized that we get instances of `Monad` from pointed functors
> and instances of `MonadPlus` from alternative functors.
>
> Is this folklore?
>
> > import Control.Monad
> > import Control.Applicative
>
> In fact, every unary type constructor gives rise to a monad by the
> continuation monad transformer.
>
> > newtype ContT t a = ContT { unContT :: forall r . (a -> t r) -> t r }
> >
> > instance Monad (ContT t)
> >  where
> >   return x = ContT ($x)
> >   m >>= f  = ContT (\k -> unContT m (\x -> unContT (f x) k))
>
> Both the `mtl` package and the `transformers` package use the same
> `Monad` instance for their `ContT` type but require `t` to be an
> instance of `Monad`. Why? [^1]
>
> If `f` is an applicative functor (in fact, a pointed functor is
> enough), then we can translate monadic actions back to the original
> type.
>
> > runContT :: Applicative f => ContT f a -> f a
> > runContT m = unContT m pure
>
> If `f` is an alternative functor, then `ContT f` is a `MonadPlus`.
>
> > instance Alternative f => MonadPlus (ContT f)
> >  where
> >   mzero   = ContT (const empty)
> >   a `mplus` b = ContT (\k -> unContT a k <|> unContT b k)
>
> That is no surprise because `empty` and `<|>` are just renamings for
> `mzero` and `mplus` (or the other way round). The missing piece was
> `>>=` which is provided by `ContT` for free.
>
> Are these instances defined somewhere?
>
> Cheers,
> Sebastian
>
> [^1] I recognized that Janis Voigtlaender defines the type `ContT`
> under the name `C` in Section 3 of his paper on "Asymptotic
> Improvement of Computations over Free Monads" (available at
> http://wwwtcs.inf.tu-dresden.de/~voigt/mpc08.pdf) and gives a monad
> instance without constraints on the first parameter.
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads from Functors

2009-04-08 Thread Sebastian Fischer

> {-# LANGUAGE Rank2Types #-}

Dear Haskellers,

I just realized that we get instances of `Monad` from pointed functors
and instances of `MonadPlus` from alternative functors.

Is this folklore?

> import Control.Monad
> import Control.Applicative

In fact, every unary type constructor gives rise to a monad by the
continuation monad transformer.

> newtype ContT t a = ContT { unContT :: forall r . (a -> t r) -> t r }
>
> instance Monad (ContT t)
>  where
>   return x = ContT ($x)
>   m >>= f  = ContT (\k -> unContT m (\x -> unContT (f x) k))

Both the `mtl` package and the `transformers` package use the same
`Monad` instance for their `ContT` type but require `t` to be an
instance of `Monad`. Why? [^1]

If `f` is an applicative functor (in fact, a pointed functor is
enough), then we can translate monadic actions back to the original
type.

> runContT :: Applicative f => ContT f a -> f a
> runContT m = unContT m pure

If `f` is an alternative functor, then `ContT f` is a `MonadPlus`.

> instance Alternative f => MonadPlus (ContT f)
>  where
>   mzero   = ContT (const empty)
>   a `mplus` b = ContT (\k -> unContT a k <|> unContT b k)

That is no surprise because `empty` and `<|>` are just renamings for
`mzero` and `mplus` (or the other way round). The missing piece was
`>>=` which is provided by `ContT` for free.
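
For instance (a tiny usage sketch of my own, not part of the original
post; a current GHC additionally wants `Functor`, `Applicative` and
`Alternative` instances for `ContT t`, which can be written in the same
style):

> test1 :: Maybe Int
> test1 = runContT (return 2 `mplus` mzero)                  -- Just 2
>
> test2 :: [Int]
> test2 = runContT ((return 1 `mplus` return 2) >>= \x -> return (10*x))
>                                                            -- [10,20]

Here `Maybe` and lists only contribute `pure`, `empty` and `<|>`, while
`return` and `>>=` come from `ContT`.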

Are these instances defined somewhere?

Cheers,
Sebastian

[^1] I recognized that Janis Voigtlaender defines the type `ContT`
under the name `C` in Section 3 of his paper on "Asymptotic
Improvement of Computations over Free Monads" (available at
http://wwwtcs.inf.tu-dresden.de/~voigt/mpc08.pdf) and gives a monad
instance without constraints on the first parameter.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads aren't evil? I think they are.

2009-01-14 Thread Alberto G. Corona
The question of imperative versus pure declarative coding has brought to my
mind some maybe off-topic speculations (so please don't read this if you have
no time to waste): I'm interested in the mysterious relation between
mathematics, algorithms and reality (see this, for example). Functional
programming is declarative: you can model the entire world functionally, with
no concern for the order of calculations. The world is mathematical. The laws
of physics have no concern for sequencing. But CPUs and communications are
basically sequential. Life is sequential, and programs run along the time
coordinate. Whenever you have to run a program, you or your compiler must
sequence it. The sequencing must be done by you, by your compiler, or by both.
Purely declarative functional code can be sequenced on the run by the compiler
and the runtime, but IO is different.
You have to create, explicitly or implicitly, the sequence of IO actions,
because the events in the external world are not controlled by you or the
compiler. So you, the programmer, are responsible for sequencing effects in
coordination with the external world, and every language must give you ways
to express sequencing of actions. That is why, to interact with the external
world, you must think in terms of algorithms, that is, imperatively, no matter
whether you make the imperative sequence (relatively) explicit with monads or
whether you do it through pairs (state, value) or unsafePerformIO or whatever.
You have to think imperatively either way, because you HAVE TO construct a
sequence. I think that the elegant option is to recognize this different,
algorithmic nature of IO by using the IO monad.
In other terms, the appearance of input-output in a problem means that you
model just a part of the whole system; the interaction with the part of the
system that the program does not control appears as input-output. If the
program included a model of the environment that produces the input-output
(including perhaps a model of yourself), then all the code could be free of
side effects and unobservable. Thus, input-output is a measure of the lack of
a priori information. Because this information is given and demanded at
precise points in the time dimension, with a quantitative (real-time) or
ordered (sequential) measure, these impure considerations must be taken into
account in the program. However, the above is nonsensical, because if you knew
everything a priori, then you wouldn't have a problem, so you wouldn't need a
program. Because problem solving means coping with unknown data that appears
AFTER the program has been created, in order to produce output at the correct
time, the program must have timing in it: it has an algorithmic nature, it is
not purely mathematical. This applies indeed to all living beings, which
respond to the environment, and this is what creates the notion of time.


Concerning monadic code with no side effects: in mathematical terms,
sequenced (monadic) code is different from declarative code. A function
describes what in mathematics is called a "manifold", with a number of
dimensions equal to the number of parameters. A sequence, on the other side,
describes a particular trajectory in a manifold, an ordered set of points in a
wider manifold surface. For this reason the latter must be described
algorithmically. The former can be said to include all possible trajectories,
and can be expressed declaratively; the latter is a sequence. You can use the
former to construct the latter, but you must express the sequence, because you
are defining the concrete trajectory in the general manifold that solves your
concrete problem, not other infinite sets of related problems. This
essentially applies to IO as well.

Well, this does not imply that you must use monads for it. For example, one
way to express a sequence is a list where each element is a function of the
previous one. The compiler is forced to sequence it in the way you want, but
this also happens with monad evaluation.

This can be exemplified with the laws of Newton: they are declarative, like
any physical formula, with no concern for sequencing. But when you need to
simulate the behaviour of a ballistic weapon, you must use a sequence of
instructions (that include the Newton laws). (Well, in this case the
trajectory is continuous and integrable and can be expressed in a single
function; in this case the manifold includes a unique trajectory, but this is
not the case in ordinary discrete problems.) So any language needs declarative
as well as imperative elements in order to program mathematical models as well
as algorithms.


Cheers
  Alberto.

2009/1/11 Apfelmus, Heinrich 

> Ertugrul Soeylemez wrote:
> > Let me tell you that usually 90% of my code is
> > monadic and there is really nothing wrong with that.  I use especially
> > State monads and StateT transformers very often, because they are
> > convenient and are just a clean combinator frontend to what y

Re: [Haskell-cafe] Monads aren't evil? I think they are.

2009-01-12 Thread Philippa Cowderoy
On Sun, 2009-01-11 at 10:44 +0100, Apfelmus, Heinrich wrote:
> Ertugrul Soeylemez wrote:
> > Let me tell you that usually 90% of my code is
> > monadic and there is really nothing wrong with that.  I use especially
> > State monads and StateT transformers very often, because they are
> > convenient and are just a clean combinator frontend to what you would do
> > manually without them:  passing state.
> 
> The insistence on avoiding monads by experienced Haskellers, in
> particular on avoiding the IO monad, is motivated by the quest for elegance.
> 

By some experienced Haskellers. Others pile them on where they feel it's
appropriate, though avoiding IO where possible is still a good
principle.

I often find that less is essentially stateful than it looks. However, I
also find that as I decompose tasks - especially if I'm willing to
'de-fuse' things - then state-like dataflows crop up again in places
where they had been eliminated. Especially if I want to eg instrument or
quietly abstract some code. Spotting particular sub-cases like Reader
and Writer is a gain, of course!

> A good example is probably the HGL (Haskell Graphics Library), a small
> vector graphics library which once shipped with Hugs. The core is the type
> 
>   Graphic
> 
> which represents a drawing and whose semantics are roughly
> 
>   Graphic = Pixel -> Color



> After having constructed a graphic, you'll also want to draw it on
> screen, which can be done with the function
> 
>   drawInWindow :: Graphic -> Window -> IO ()
> 

Note that there are two different things going on here. The principle of
building up 'programs' in pure code to execute via IO is good - though
ironically enough, plenty of combinator libraries for such tasks form
monads themselves. Finding the right domain for DSL programs is also
important, but this is not necessarily as neatly functional. If you
start with a deep embedding rather than a shallow one then this isn't
much of a problem even if you find your first attempt was fatally flawed
- the DSL code's just another piece of data. 

-- 
Philippa Cowderoy 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads aren't evil? I think they are.

2009-01-12 Thread Henning Thielemann
Apfelmus, Heinrich schrieb:
> Ertugrul Soeylemez wrote:
>> Let me tell you that usually 90% of my code is
>> monadic and there is really nothing wrong with that.  I use especially
>> State monads and StateT transformers very often, because they are
>> convenient and are just a clean combinator frontend to what you would do
>> manually without them:  passing state.
> 
> The insistence on avoiding monads by experienced Haskellers, in
> particular on avoiding the IO monad, is motivated by the quest for elegance.
> 
> The IO and other monads make it easy to fall back to imperative
> programming patterns to "get the job done". But do you really need to
> pass state around? Or is there a more elegant solution, an abstraction
> that makes everything fall into place automatically? Passing state is a
> valid implementation tool, but it's not a design principle.

I collected some hints on how to avoid at least the IO monad:
  http://www.haskell.org/haskellwiki/Avoiding_IO

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads aren't evil? I think they are.

2009-01-11 Thread Apfelmus, Heinrich
Ertugrul Soeylemez wrote:
> Let me tell you that usually 90% of my code is
> monadic and there is really nothing wrong with that.  I use especially
> State monads and StateT transformers very often, because they are
> convenient and are just a clean combinator frontend to what you would do
> manually without them:  passing state.

The insistence on avoiding monads by experienced Haskellers, in
particular on avoiding the IO monad, is motivated by the quest for elegance.

The IO and other monads make it easy to fall back to imperative
programming patterns to "get the job done". But do you really need to
pass state around? Or is there a more elegant solution, an abstraction
that makes everything fall into place automatically? Passing state is a
valid implementation tool, but it's not a design principle.


A good example is probably the HGL (Haskell Graphics Library), a small
vector graphics library which once shipped with Hugs. The core is the type

  Graphic

which represents a drawing and whose semantics are roughly

  Graphic = Pixel -> Color

There are primitive graphics like

  empty   :: Graphic
  polygon :: [Point] -> Graphic

and you can combine graphics by laying them on top of each other

  over:: Graphic -> Graphic -> Graphic

This is an elegant and pure interface for describing graphics.

After having constructed a graphic, you'll also want to draw it on
screen, which can be done with the function

  drawInWindow :: Graphic -> Window -> IO ()

This function is in the IO monad because it has the side-effect of
changing the current window contents. But just because drawing on a
screen involves IO does not mean that using it for describing graphics
is a good idea. However, using IO for *implementing* the graphics type
is fine

  type Graphic = Window -> IO ()

  empty  = \w -> return ()
  polygon (p:ps) = \w -> moveTo p w >> mapM_ (\p -> lineTo p w) ps
  over g1 g2 = \w -> g1 w >> g2 w
  drawInWindow   = id


Consciously excluding monads and restricting the design space to pure
functions is the basic tool of thought for finding such elegant
abstractions. As Paul Hudak said in his message "A regressive view of
support for imperative programming in Haskell"

   In my opinion one of the key principles in the design of Haskell has
   been the insistence on purity. It is arguably what led the Haskell
   designers to "discover" the monadic solution to IO, and is more
   generally what inspired many researchers to "discover" purely
   functional solutions to many seemingly imperative problems.

   http://article.gmane.org/gmane.comp.lang.haskell.cafe/27214

The philosophy of Haskell is that searching for purely functional
solution is well worth it.


Regards,
H. Apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads aren't evil

2009-01-08 Thread Ertugrul Soeylemez
Hello fellow Haskellers,

When I read questions from Haskell beginners, it somehow seems like they
try to avoid monads and view them as a last resort, if there is no easy
non-monadic way.  I'm really sure that the cause for this is that most
tutorials deal with monads very sparingly and mostly in the context of
input/output.  Also usually monads are associated with the do-notation,
which makes them appear even more special, although there is really
nothing special about them.

I appeal to all experienced Haskell programmers, especially to tutorial
writers, to try to focus more on how monads are nothing special, when
talking to beginners.  Let me tell you that usually 90% of my code is
monadic and there is really nothing wrong with that.  I use especially
State monads and StateT transformers very often, because they are
convenient and are just a clean combinator frontend to what you would do
manually without them:  passing state.


Greets,
Ertugrul.


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-26 Thread Jonathan Cast
On Wed, 2008-11-26 at 19:09 +, Jules Bean wrote:
> Greg Meredith wrote:
> > Haskellians,
> > 
> > Some monads come with take-out options, e.g.
> > 
> > * List
> > * Set
> > 
> > In the sense that if unit : A -> List A is given by unit a = [a], then 
> > taking the head of a list can be used to retrieve values from inside the 
> > monad.
> > 
> > Some monads do not come with take-out options, IO being a notorious example.
> > 
> > Some monads, like Maybe, sit on the fence about take-out. They'll 
> > provide it when it's available.
> 
> To amplify other people's comments:
> 
> List A is just as on the fence as Maybe. "[]" plays the role of "Nothing".
> 
> Some monads require that you put something in, before you take anything 
> out [r -> a, s -> (a,s), known to their friends as reader and state]
> 
> Error is similar to Maybe, but with a more informative Nothing.
> 
> Most monads provide some kind of
> 
> runM :: ## -> m a -> ## a

More precisely,

runM :: f (m a) -> g a

Where f and g are usually functors.

Maybe, of course, has the nice property that g = m.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-26 Thread Jules Bean

Greg Meredith wrote:

Haskellians,

Some monads come with take-out options, e.g.

* List
* Set

In the sense that if unit : A -> List A is given by unit a = [a], then 
taking the head of a list can be used to retrieve values from inside the 
monad.


Some monads do not come with take-out options, IO being a notorious example.

Some monads, like Maybe, sit on the fence about take-out. They'll 
provide it when it's available.


To amplify other people's comments:

List A is just as on the fence as Maybe. "[]" plays the role of "Nothing".

Some monads require that you put something in, before you take anything 
out [r -> a, s -> (a,s), known to their friends as reader and state]


Error is similar to Maybe, but with a more informative Nothing.

Most monads provide some kind of

runM :: ## -> m a -> ## a

where the ## are meta-syntax, indicating that you might need to pass 
something in, and you might get something slightly 'funny' out. 
Something based upon 'a' but not entirely 'a'.


The taxonomy of monads is pretty much expressed in the types of these 
'run' functions, I think.
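
(For concreteness, a few run functions from the standard monad
libraries that fit this shape:)

  runState    :: State s a  -> s -> (a, s)
  runReader   :: Reader r a -> r -> a
  runWriter   :: Writer w a -> (a, w)
  runIdentity :: Identity a -> a
  fromMaybe   :: a -> Maybe a -> a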


Jules
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-25 Thread Jonathan Cast
On Mon, 2008-11-24 at 15:06 -0800, Greg Meredith wrote:
> Jonathan,

> Nice! Thanks. In addition to implementations, do we have more
> mathematical accounts? Let me expose more of my motives.
>   * i am interested in a first-principles notion of data.

Hunh.  I have to say I'm not.  The difference between Bool -> alpha and
(alpha, alpha) is not one I've ever felt a need to elaborate.  And I'm
not sure you *could* elaborate it, within a mathematical context ---
which to me means you only work up to isomorphism anyway.

> Neither lambda nor π-calculus come with a criterion for
> determining which terms represent data and which programs.

As you know, lambda-calculus was originally designed to provide a
foundation for mathematics in which every mathematical object --- sets,
numbers, function, etc. --- would be a function (a lambda-term) under
the hood.  It was designed to abstract away the distinction between
values and functions, not really to express it.

> You can shoe-horn in such notions -- and it is clear that
> practical programming relies on such a separation

What?  `Practical programming' in my experience relies on the readiness
to see functions as first-class values (as near to data as possible).
Implementations want to distinguish them, in various ways --- but then
once you draw that distinction, programmers want to use data structures
like tries, to get your `data structure' implementation for the program
design's `function' types (or some of them...)

As near as I can tell, the distinction between data and code is
fundamentally one of performance, which makes it quite
implementation-dependent.  And, for me, boring.

> -- but along come nice abstractions like generic programming
> and the boundary starts moving again.
> (Note, also that one of the reasons i mention π-calculus is
> because when you start shipping data between processes you'd
> like to know that this term really is data and not some nasty
> little program...)

It's not nice to call my children^Wprograms nasty :)  I think, though,
that the real problem is to distinguish values with finite canonical
forms (which can be communicated to another process in finite time) from
values with infinite canonical forms (which cannot).  The problem then
is defining what a `canonical form' is.  But characterizing the problem
in terms of data vs. code isn't going to help:

  \ b -> if b then 0 else 1

is a perfectly good finite canonical form of type Bool -> Int, while

  repeat 0

is a perfectly good term of type [Int] (a data type!) with no finite
canonical form (not even a finite normal form).

I'm sure this fails to engage your point, but perhaps it might clarify
some points you hadn't considered.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Greg Meredith
Claus,
Thanks for your thoughtful response. Let me note that fully abstract
semantics for PCF -- a total toy, mind you, just lambda + bools + naturals
-- took some 25 years from characterization of the problem to a solution.
That would seem to indicate shoe-horning, in my book ;-). Moreover, when i
look at proposals to revisit Church versus Parigot encodings for data
structures (ala Oliveira's thesis), i think we are still at the very
beginnings of an understanding of how data fits into algebraic accounts of
computation (such as lambda and π-calculi).

Obviously, we've come a long way. The relationship between types and
pattern-matching, for example, is now heavily exploited and generally a good
thing. But, do we really understand what's at work here -- or is this just
another 'shut up and calculate' situation, like we have in certain areas of
physics. Frankly, i think we are really just starting to understand what
types are and how they relate to programs. This really begs fundamental
questions. Can we give a compelling type-theoretic account of the separation
of program from data?

The existence of such an account has all kinds of implications, too. For
example, the current "classification" of notions of quantity (number) is
entirely one of history and accident.

   - Naturals
   - Rationals
   - Constructible
   - Algebraic
   - Transcendental
   - Reals
   - Complex
   - Infinitessimal
   - ...

Can we give a type theoretic reconstruction of these notions (of traditional
data types) that would unify -- or heaven forbid -- redraw the usual
distinctions along lines that make more sense in terms of applications that
provide value to users? Conway's ideas of numbers as games is remarkably
unifying and captures all numbers in a single elegant data type. Can this
(or something like it) be further rationally partitioned to provide better
notions of numeric type? Is there a point where such an activity hits a wall
and what we thought was data (real numbers; sequences of naturals; ...) are
just too close to programs to be well served by data-centric treatments?

Best wishes,

--greg

2008/11/24 Claus Reinke <[EMAIL PROTECTED]>

>  - i am interested in a first-principles notion of data. Neither lambda
>>  nor π-calculus come with a criterion for determining which terms
>> represent
>>  data and which programs. You can shoe-horn in such notions -- and it is
>>  clear that practical programming relies on such a separation -- but along
>>  come nice abstractions like generic programming and the boundary starts
>>  moving again. (Note, also that one of the reasons i mention π-calculus is
>>  because when you start shipping data between processes you'd like to know
>>  that this term really is data and not some nasty little program...)
>>
>
> I wouldn't call the usual representations of data in lambda shoe-horned
> (but perhaps you meant the criterion for distinguishing programs from
> data, not the notion of data?). Exposing data structures as nothing but
> placeholders for the contexts operating on their components, by making
> the structure components parameters to yet-to-be-determined continuations,
> seems to be as reduced to first-principles as one can get.
>
> It is also close to the old saying that "data are just dumb programs"
> (does anyone know who originated that phrase?) - when storage
> was tight, programmers couldn't always afford to fill it with dead
> code, so they encoded data in (the state of executing) their routines.
> When data was separated from real program code, associating data
> with the code needed to interpret it was still common. When high-level
> languages came along, treating programs as data (via reflective meta-
> programming, not higher order functions) remained desirable in some
> communities. Procedural abstraction was investigated as an alternative
> to abstract data types. Shipping an interpreter to run stored code was
> sometimes used to reduce memory footprint.
>
> If your interest is in security of mobile code, I doubt that you want to
> distinguish programs and data - "non-program" data which, when
> interpreted by otherwise safe-looking code, does nasty things, isn't
> much safer than code that does non-safe things directly. Unless you
> rule out code which may, depending on the data it operates on, do
> unwanted things, which is tricky without restricting expressiveness.
>
> More likely, you want to distinguish different kinds/types of
> programs/data, in terms of what using them together can do (in
> terms of pi-calculus, you're interested in typing process communication,
> restricting access to certain resources, or limiting communication
> to certain protocols). In the presence of suitably expressive type
> systems, procedural data abstractions need not be any less "safe"
> than dead bits interpreted by a separate program. Or if reasoning
> by suitable observational equivalences tells you that a piece of code
> can't do anything unsafe, does it matter whether that piece

Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Greg Meredith
Brandon,
i see your point, but how do we sharpen that intuition to a formal
characterization?

Best wishes,

--greg

On Mon, Nov 24, 2008 at 10:45 PM, Brandon S. Allbery KF8NH <
[EMAIL PROTECTED]> wrote:

> On 2008 Nov 24, at 17:06, Greg Meredith wrote:
>
> Now, are there references for a theory of monads and take-out options? For
> example, it seems that all sensible notions of containers have take-out. Can
> we make the leap and define a container as a monad with a notion of
> take-out? Has this been done? Are there reasons for not doing? Can we say
> what conditions are necessary to ensure a notion of take-out?
>
>
> Doesn't ST kinda fall outside the pale?  (Well, it is a container of sorts,
> but a very different from Maybe or [].)
>
> --
> brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
> system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
> electrical and computer engineering, carnegie mellon universityKF8NH
>
>
>


-- 
L.G. Meredith
Managing Partner
Biosimilarity LLC
806 55th St NE
Seattle, WA 98105

+1 206.650.3740

http://biosimilarity.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Brandon S. Allbery KF8NH

On 2008 Nov 24, at 17:06, Greg Meredith wrote:
Now, are there references for a theory of monads and take-out  
options? For example, it seems that all sensible notions of  
containers have take-out. Can we make the leap and define a  
container as a monad with a notion of take-out? Has this been done?  
Are there reasons for not doing? Can we say what conditions are  
necessary to ensure a notion of take-out?


Doesn't ST kinda fall outside the pale?  (Well, it is a container of  
sorts, but a very different from Maybe or [].)


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Claus Reinke

  - i am interested in a first-principles notion of data. Neither lambda
  nor π-calculus come with a criterion for determining which terms represent
  data and which programs. You can shoe-horn in such notions -- and it is
  clear that practical programming relies on such a separation -- but along
  come nice abstractions like generic programming and the boundary starts
  moving again. (Note, also that one of the reasons i mention π-calculus is
  because when you start shipping data between processes you'd like to know
  that this term really is data and not some nasty little program...)


I wouldn't call the usual representations of data in lambda shoe-horned
(but perhaps you meant the criterion for distinguishing programs from
data, not the notion of data?). Exposing data structures as nothing but
placeholders for the contexts operating on their components, by making
the structure components parameters to yet-to-be-determined continuations,
seems to be as reduced to first-principles as one can get.

It is also close to the old saying that "data are just dumb programs"
(does anyone know who originated that phrase?) - when storage
was tight, programmers couldn't always afford to fill it with dead
code, so they encoded data in (the state of executing) their routines.
When data was separated from real program code, associating data
with the code needed to interpret it was still common. When high-level
languages came along, treating programs as data (via reflective meta-
programming, not higher order functions) remained desirable in some
communities. Procedural abstraction was investigated as an alternative
to abstract data types. Shipping an interpreter to run stored code was
sometimes used to reduce memory footprint.

If your interest is in security of mobile code, I doubt that you want to
distinguish programs and data - "non-program" data which, when
interpreted by otherwise safe-looking code, does nasty things, isn't
much safer than code that does non-safe things directly. Unless you
rule out code which may, depending on the data it operates on, do
unwanted things, which is tricky without restricting expressiveness.

More likely, you want to distinguish different kinds/types of
programs/data, in terms of what using them together can do (in
terms of pi-calculus, you're interested in typing process communication,
restricting access to certain resources, or limiting communication
to certain protocols). In the presence of suitably expressive type
systems, procedural data abstractions need not be any less "safe"
than dead bits interpreted by a separate program. Or if reasoning
by suitable observational equivalences tells you that a piece of code
can't do anything unsafe, does it matter whether that piece is
program or data?

That may be too simplistic for your purposes, but I thought I'd put
in a word for the representation of data structures in lambda.

Claus

Ps. there have been lots of variation of pi-calculus, including
   some targetting more direct programming styles, such as
   the blue calculus (pi-calculus in direct style, Boudol et al).
   But I suspect you are aware of all that.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread John Meacham
On Mon, Nov 24, 2008 at 02:06:33PM -0800, Greg Meredith wrote:
> Now, are there references for a theory of monads and take-out options? For
> example, it seems that all sensible notions of containers have take-out. Can
> we make the leap and define a container as a monad with a notion of
> take-out? Has this been done? Are there reasons for not doing? Can we say
> what conditions are necessary to ensure a notion of take-out?

Yes, you are describing 'co-monads'. 

here is an example that a quick web search brought up, and there was a
paper on them and their properties published a while ago
http://www.eyrie.org/~zednenem/2004/hsce/Control.Comonad.html

the duals in that version are

extract - return
duplicate - join
extend  - flip (>>=) (more or less)
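
For reference, a minimal self-contained sketch of such a class, with
one container instance to make the duality concrete (my own
illustration, not tied to any particular package):

  class Functor w => Comonad w where
    extract   :: w a -> a                  -- dual of return
    duplicate :: w a -> w (w a)            -- dual of join
    extend    :: (w a -> b) -> w a -> w b  -- dual of (=<<)
    duplicate = extend id
    extend f  = fmap f . duplicate

  -- a list with at least one element always has something to extract
  data NonEmpty a = a :| [a]

  instance Functor NonEmpty where
    fmap f (x :| xs) = f x :| map f xs

  instance Comonad NonEmpty where
    extract (x :| _)      = x
    duplicate w@(_ :| xs) = w :| suffixes xs
      where suffixes []     = []
            suffixes (y:ys) = (y :| ys) : suffixes ys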

John


-- 
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Greg Meredith
Jonathan,
Nice! Thanks. In addition to implementations, do we have more mathematical
accounts? Let me expose more of my motives.

   - i am interested in a first-principles notion of data. Neither lambda
   nor π-calculus come with a criterion for determining which terms represent
   data and which programs. You can shoe-horn in such notions -- and it is
   clear that practical programming relies on such a separation -- but along
   come nice abstractions like generic programming and the boundary starts
   moving again. (Note, also that one of the reasons i mention π-calculus is
   because when you start shipping data between processes you'd like to know
   that this term really is data and not some nasty little program...) One step
   towards a first-principles characterization of data (as separate from
   program) is a first-principles characterization of containers.
  - Along these lines Barry Jay's pattern-matching calculus is an
  intriguing step towards such a characterization. This also links up with
  Foldable and Traversable.
  - i also looked at variants of Wischik's fusion calculus in which
  Abramsky's proof expressions characterize the notion of shippable data.
  (Part of the intuition here is that shippable data really ought to have a
  terminating access property for access to some interior region.
The linear
  types for proof expressions guarantee such a property for all well-typed
  terms.)
   - There is a real tension between nominal strategies and structural
   strategies for accessing data. This is very stark when one starts adding
   notions of data to the  π-calculus -- which is entirely nominal in
   characterization. Moreover, accessing some piece of data by path is natural,
   obvious and programmatically extensible. Accessing something by name is
   fast. These two ideas come together if one's nominal technology (think
   Gabbay-Pitt's freshness widgetry) comes with a notion of names that have
   structure.*
   - Finally, i think the notion of take-out option has something to do with
   being able to demarcate regions. In this sense i think there is a very deep
   connection with  Oleg's investigations of delimited continuations and --
   forgive the leap -- Linda tuple spaces.

As i've tried to indicate, in much the same way that monad is a very, very
general abstraction, i believe that there are suitably general abstractions
that account for a broad range of phenomena and still usefully separate a
notion of data from a notion of program. The category theoretic account of
monad plays a very central role in exposing the generality of the
abstraction (while Haskell's presentation has played a very central role in
understanding the utility of such a general abstraction). A similarly
axiomatic account of the separation of program from data could have
applicability and utility we haven't even dreamed of yet.

Best wishes,

--greg

* i simply cannot resist recounting an insight that i got from Walter
Fontana, Harvard Systems Biology, when we worked together. In some sense the
dividing line between alchemy and chemistry is the periodic table. Before
the development of the periodic table a good deal of human investigation of
material properties could be seen as falling under the rubric alchemy. After
it, chemistry. If you stare at the periodic table you see that the element
names do not matter. They are merely convenient ways of referring to the
positional information of the table. From a position in the table you can
account for and predict all kinds of properties of elements (notice that all
the noble gases line up on the right!). Positions in the table -- kinds of
element -- can be seen as 'names with structure', the structure of which
determines the properties of instances of said kind. i believe that a
first-principles account of the separation of program and data could have as
big an impact on our understanding of the properties of computation as the
development of the periodic table had on our understanding of material
properties.

On Mon, Nov 24, 2008 at 2:30 PM, Jonathan Cast <[EMAIL PROTECTED]>wrote:

> On Mon, 2008-11-24 at 14:06 -0800, Greg Meredith wrote:
> > Haskellians,
>
> > Some monads come with take-out options, e.g.
> >   * List
> >   * Set
> > In the sense that if unit : A -> List A is given by unit a = [a], then
> > taking the head of a list can be used to retrieve values from inside
> > the monad.
>
> > Some monads do not come with take-out options, IO being a notorious
> > example.
>
> > Some monads, like Maybe, sit on the fence about take-out. They'll
> > provide it when it's available.
>
> It might be pointed out that List and Set are also in this region.  In
> fact, Maybe is better, in this regard, since you know, if fromJust
> succeeds, that it will only have one value to return.  head might find
> one value to return, no values, or even multiple values.
>
> A better example of a monad that always has a left inverse t

Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Jonathan Cast
On Mon, 2008-11-24 at 14:06 -0800, Greg Meredith wrote:
> Haskellians,

> Some monads come with take-out options, e.g.
>   * List
>   * Set
> In the sense that if unit : A -> List A is given by unit a = [a], then
> taking the head of a list can be used to retrieve values from inside
> the monad.

> Some monads do not come with take-out options, IO being a notorious
> example.

> Some monads, like Maybe, sit on the fence about take-out. They'll
> provide it when it's available.

It might be pointed out that List and Set are also in this region.  In
fact, Maybe is better, in this regard, since you know, if fromJust
succeeds, that it will only have one value to return.  head might find
one value to return, no values, or even multiple values.

A better example of a monad that always has a left inverse to return is
((,) w), where w is a monoid.  In this case,

snd . return = id :: a -> a

as desired (we always have the left inverses

join . return = id :: m a -> m a

where join a = a >>= id).
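
(To make that concrete -- a small sketch of my own, not jcc's code; newer
versions of base ship an equivalent Monad instance for ((,) w) directly:)

unit :: Monoid w => a -> (w, a)
unit a = (mempty, a)

bind :: Monoid w => (w, a) -> (a -> (w, b)) -> (w, b)
bind (w, a) k = let (w', b) = k a in (w `mappend` w', b)

-- The take-out: snd . unit = id, since snd (unit a) = snd (mempty, a) = a.
-- And join . return = id in any monad: join (return m) = return m >>= id = m.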

> Now, are there references for a theory of monads and take-out options?
> For example, it seems that all sensible notions of containers have
> take-out.

Sounds reasonable.  Foldable gives you something:

  foldr const undefined

will pull out the first element (the last value foldr visits), and agrees
with head -- even at [], where both are undefined.
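
(Illustratively -- 'peek' is a name made up for this sketch, and it assumes
a base where Prelude's foldr is the Foldable one:)

peek :: Foldable t => t a -> a
peek = foldr const (error "no elements")

-- peek [1,2,3]    == 1      (same as head on a non-empty list)
-- peek (Just 'x') == 'x'
-- peek []         fails, just as head [] does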

> Can we make the leap and define a container as a monad with a notion
> of take-out?

If you want.  I'd rather define a container to be Traversable; it
doesn't exclude anything interesting (that I'm aware of), and is mostly
more powerful.

> Has this been done?

Are you familiar at all with Foldable
(http://haskell.org/ghc/docs/latest/html/libraries/base/Data-Foldable.html#t%3AFoldable)
 and Traversable 
(http://haskell.org/ghc/docs/latest/html/libraries/base/Data-Traversable.html#t%3ATraversable)

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads with take-out options

2008-11-24 Thread Jason Dagit
2008/11/24 Greg Meredith <[EMAIL PROTECTED]>

> Haskellians,
> Some monads come with take-out options, e.g.
>
>- List
>- Set
>
> In the sense that if unit : A -> List A is given by unit a = [a], then
> taking the head of a list can be used to retrieve values from inside the
> monad.
>
> Some monads do not come with take-out options, IO being a notorious
> example.
>

I think the take-out option for IO is usually called 'main'. :)

Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] monads with take-out options

2008-11-24 Thread Greg Meredith
Haskellians,
Some monads come with take-out options, e.g.

   - List
   - Set

In the sense that if unit : A -> List A is given by unit a = [a], then
taking the head of a list can be used to retrieve values from inside the
monad.

Some monads do not come with take-out options, IO being a notorious example.

Some monads, like Maybe, sit on the fence about take-out. They'll provide it
when it's available.

Now, are there references for a theory of monads and take-out options? For
example, it seems that all sensible notions of containers have take-out. Can
we make the leap and define a container as a monad with a notion of
take-out? Has this been done? Are there reasons for not doing? Can we say
what conditions are necessary to ensure a notion of take-out?

Best wishes,

--greg

-- 
L.G. Meredith
Managing Partner
Biosimilarity LLC
806 55th St NE
Seattle, WA 98105

+1 206.650.3740

http://biosimilarity.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [EMAIL PROTECTED]: Re: [Haskell-cafe] Monads for Incremental computing]

2008-11-14 Thread Ryan Ingram
2008/11/13 Conal Elliott <[EMAIL PROTECTED]>:
> As Magnus pointed out in his (very clever) paper, the Applicative interface
> allows for more precise/efficient tracking of dependencies, in that it
> eliminates accidental sequentiality imposed by the Monad interface.  (Magnus
> didn't mention Applicative by name, as his paper preceded
> Idiom/Applicative.)  However, I don't see an Applicative instance in the
> library.

I was wondering about that, actually.  The specialized Applicative
instance mentioned in the paper adds an additional storage cell to the
computation graph.  I would expect that for simple computations, using
the applicative instance could perform worse than "naive" monadic
code.

For example, given three adaptive references "ref1", "ref2", and
"ref3" :: Adaptive Int

do a <- ref1
   b <- ref2
   c <- ref3
   return (a+b+c)

This computation adds three "read" nodes to the current "write"
output.  But this code:

do (\a b c -> a + b + c) <$> read ref1 <*> read ref2 <*> read ref3

using (<*>) = the enhanced version of "ap" from the paper.

I believe this would allocate additional cells to hold the results of
the function partially applied to the intermediate computations.
Overall I think this would not be a win in this case.  But it is also
easy to construct cases where it *is* a win.  I suppose you can allow
the user to choose one or the other, but my general expectation for
Applicative is that (<*>) should never perform *worse* than code that
uses Control.Monad.ap

Is this the right understanding, or am I totally missing something
from the Adaptive paper?

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [EMAIL PROTECTED]: Re: [Haskell-cafe] Monads for Incremental computing]

2008-11-13 Thread Conal Elliott
As Magnus pointed out in his (very clever) paper, the Applicative interface
allows for more precise/efficient tracking of dependencies, in that it
eliminates accidental sequentiality imposed by the Monad interface.  (Magnus
didn't mention Applicative by name, as his paper preceded
Idiom/Applicative.)  However, I don't see an Applicative instance in the
library.

  - Conal

On Thu, Nov 13, 2008 at 7:46 PM, Don Stewart <[EMAIL PROTECTED]> wrote:

> Magnus writes:
>
>Thanks to Peter Jonsson, the source is now on hackage:
>
>http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Adaptive
>
>Cheers,
>Magnus
>
>
> Donnie Jones wrote:
> > Hello sanzhiyan,
> >
> > I believe this is the same paper, the pdf is available here:
> >   http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.3014
> >
> > Cheers.
> > __
> > Donnie
> >
> > On Thu, Nov 13, 2008 at 9:02 PM, Don Stewart <[EMAIL PROTECTED]
> > > wrote:
> >
> > I sit next to the author, CC'd.
> >
> > -- Don
> >
> > sanzhiyan:
> > >I'm looking for the source code of the library for adaptive
> > computations
> > >exposed in Magnus Carlsson's "Monads for Incremental
> > Computing"[1], but
> > >the link in the paper is broken.
> > >So, does anyone have the sources or knows how to contact the
> > author?
> > >
> > >[1] http://portal.acm.org/citation.cfm?id=581482
> >
> > > ___
> > > Haskell-Cafe mailing list
> > > Haskell-Cafe@haskell.org 
> > > http://www.haskell.org/mailman/listinfo/haskell-cafe
> >
> > ___
> > Haskell-Cafe mailing list
> > Haskell-Cafe@haskell.org 
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
> >
> >
>
> - End forwarded message -
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[EMAIL PROTECTED]: Re: [Haskell-cafe] Monads for Incremental computing]

2008-11-13 Thread Don Stewart
Magnus writes:

Thanks to Peter Jonsson, the source is now on hackage:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Adaptive

Cheers,
Magnus


Donnie Jones wrote:
> Hello sanzhiyan,
> 
> I believe this is the same paper, the pdf is available here:
>   http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.3014
> 
> Cheers.
> __
> Donnie
> 
> On Thu, Nov 13, 2008 at 9:02 PM, Don Stewart <[EMAIL PROTECTED]
> > wrote:
> 
> I sit next to the author, CC'd.
> 
> -- Don
> 
> sanzhiyan:
> >I'm looking for the source code of the library for adaptive
> computations
> >exposed in Magnus Carlsson's "Monads for Incremental
> Computing"[1], but
> >the link in the paper is broken.
> >So, does anyone have the sources or knows how to contact the
> author?
> >
> >[1] http://portal.acm.org/citation.cfm?id=581482
> 
> > ___
> > Haskell-Cafe mailing list
> > Haskell-Cafe@haskell.org 
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
> 
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org 
> http://www.haskell.org/mailman/listinfo/haskell-cafe
> 
> 

- End forwarded message -
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads for Incremental computing

2008-11-13 Thread Donnie Jones
Hello sanzhiyan,

I believe this is the same paper, the pdf is available here:
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.3014

Cheers.
__
Donnie

On Thu, Nov 13, 2008 at 9:02 PM, Don Stewart <[EMAIL PROTECTED]> wrote:

> I sit next to the author, CC'd.
>
> -- Don
>
> sanzhiyan:
> >I'm looking for the source code of the library for adaptive
> computations
> >exposed in Magnus Carlsson's "Monads for Incremental Computing"[1],
> but
> >the link in the paper is broken.
> >So, does anyone have the sources or knows how to contact the author?
> >
> >[1] http://portal.acm.org/citation.cfm?id=581482
>
> > ___
> > Haskell-Cafe mailing list
> > Haskell-Cafe@haskell.org
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads for Incremental computing

2008-11-13 Thread Don Stewart
I sit next to the author, CC'd.

-- Don

sanzhiyan:
>I'm looking for the source code of the library for adaptive computations
>exposed in Magnus Carlsson's "Monads for Incremental Computing"[1], but
>the link in the paper is broken.
>So, does anyone have the sources or knows how to contact the author?
> 
>[1] http://portal.acm.org/citation.cfm?id=581482

> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads for Incremental computing

2008-11-13 Thread sanzhiyan
I'm looking for the source code of the library for adaptive computations  
exposed in Magnus Carlsson's "Monads for Incremental Computing"[1], but the  
link in the paper is broken.

So, does anyone have the sources or knows how to contact the author?


[1] http://portal.acm.org/citation.cfm?id=581482
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads that are Comonads and the role of Adjunction

2007-12-14 Thread David Menendez
On Dec 14, 2007 5:14 AM, Jules Bean <[EMAIL PROTECTED]> wrote:

> There are two standard ways to decompose a monad into two adjoint
> functors: the Kleisli decomposition and the Eilenberg-Moore decomposition.
>
> However, neither of these categories is a subcategory of Hask in an
> obvious way, so I don't immediately see how to write "f" and "g" as
> haskell functors.
>
> Maybe someone else can show the way :)


One possibility is to extend Haskell's Functor class. We can define a class
of (some) categories whose objects are Haskell types:

> class Category mor where
>     id  :: mor a a
>     (.) :: mor b c -> mor a b -> mor a c

The instance for (->) should be obvious. We can also define an instance for
Kleisli operations:

> newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
> instance (Monad m) => Category (Kleisli m) -- omitted
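
(For completeness, one plausible way to fill in the instances marked
"omitted" -- my guess, and it assumes the Prelude's id and (.) are hidden
in favour of the class methods above:)

> instance Category (->) where
>     id    = \a -> a
>     g . f = \a -> g (f a)
>
> instance (Monad m) => Category (Kleisli m) where
>     id                    = Kleisli return
>     Kleisli g . Kleisli f = Kleisli (\a -> f a >>= g)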

Next, a class for (some) functors between these categories:

> class (Category morS, Category morT) => Functor f morS morT where
>     fmap :: morS a b -> morT (f a) (f b)

Unlike the usual Haskell Functor class, this requires us to distinguish the
functor itself from the type constructor involved in the functor.

Here's an instance converting Kleisli operations to functions.

> instance Monad m => Functor m (Kleisli m) (->) where
>     -- fmap :: Kleisli m a b -> (m a -> m b)
>     fmap f = (>>= runKleisli f)

Going the other way is trickier, because our Functor interface requires a
type constructor. We'll use Id.

> newtype Id a = Id { unId :: a }
>
> instance Monad m => Functor Id (->) (Kleisli m) where
>     -- fmap :: (a -> b) -> Kleisli m (Id a) (Id b)
>     fmap f = Kleisli (return . Id . f . unId)

Finally, adjunctions between functors:

> class (Functor f morS morT, Functor g morT morS)
>     => Adjunction f g morS morT | f g morS -> morT, f g morT -> morS
>     where
>     leftAdj   :: morT (f a) b -> morS a (g b)
>     rightAdj  :: morS a (g b) -> morT (f a) b

The functional dependency isn't really justified. It's there to eliminate
ambiguity in the later code.

The two functors above are adjoint:

> instance (Monad m) => Adjunction Id m (->) (Kleisli m) where
>     -- leftAdj :: Kleisli m (Id a) b -> (a -> m b)
>     leftAdj f = runKleisli f . Id
>
>     -- rightAdj :: (a -> m b) -> Kleisli m (Id a) b
>     rightAdj f = Kleisli (f . unId)

So, given two adjoint functors, we have a monad and a comonad. Note,
however, that these aren't necessarily the same as the Haskell classes Monad
and Comonad.

Here are the monad operations:

> unit :: (Adjunction f g morS morT) => morS a (g(f a))
> unit = leftAdj id
>
> extend :: (Adjunction f g morS morT) => morS a (g(f b)) -> morS (g(f a)) (g(f b))
> extend f = fmap (rightAdj f)

The monad's type constructor is the composition of g and f. Extend
corresponds to (>>=) with the arguments reversed.

In our running example, unit and extend have these types:

unit   :: Monad m => a -> m (Id a)
extend :: Monad m => (a -> m (Id b)) -> m (Id a) -> m (Id b)

This corresponds to our original monad, only with the extra Id.

Here are the comonad operations:

> counit :: (Adjunction f g morS morT) => morT (f(g a)) a
> counit = rightAdj id
>
> coextend :: (Adjunction f g morS morT) => morT (f(g a)) b -> morT (f(g a)) (f(g b))
> coextend f = fmap (leftAdj f)

In our running example, counit and coextend have these types:

counit   :: Monad m => Kleisli m (Id (m a)) a
coextend :: Monad m => Kleisli m (Id (m a)) b -> Kleisli m (Id (m a)) (Id (m b))

Thus, m is effectively a comonad in its Kleisli category.

We can tell a similar story with Comonads and CoKleisli operations,
eventually reaching an adjunction like this:

> instance (Comonad w) => Adjunction w Id (CoKleisli w) (->) -- omitted

-- 
Dave Menendez <[EMAIL PROTECTED]>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads that are Comonads and the role of Adjunction

2007-12-14 Thread Jules Bean

Dan Weston wrote:

apfelmus wrote:

Luke Palmer wrote:

Isn't a type which is both a Monad and a Comonad just Identity?

(I'm actually not sure, I'm just conjecting)


Good idea, but it's not the case.

  data L a = One a | Cons a (L a)   -- non-empty list


Maybe I can entice you to elaborate slightly. From
http://www.eyrie.org/~zednenem/2004/hsce/Control.Functor.html and 
Control.Comonad.html there is


--
newtype O f g a   -- Functor composition:  f `O` g

instance (Functor f, Functor g) => Functor (O f g) where ...
instance Adjunction f g => Monad   (O g f) where ...
instance Adjunction f g => Comonad (O f g) where ...

-- I assume Haskell can infer Functor (O g f) from Monad (O g f), which
-- is why that is missing here?


No. But it can infer Functor (O g f) from instance (Functor f, Functor 
g) => Functor (O f g), (using 'g' for 'f' and 'f' for 'g').




class (Functor f, Functor g) => Adjunction f g | f -> g, g -> f where
  leftAdjunct  :: (f a -> b) -> a -> g b
  rightAdjunct :: (a -> g b) -> f a -> b
--

Functors are associative but not generally commutative. Apparently a 
Monad is also a Comonad if there exist left (f) and right (g) adjuncts 
that commute. [and only if also??? Is there a contrary example of a 
Monad/Comonad for which no such f and g exist?]


In the case of
 >   data L a = One a | Cons a (L a)   -- non-empty list

what are the appropriate definitions of leftAdjunct and rightAdjunct? 
Are they Monad.return and Comonad.extract respectively? That seems to 
unify a and b unnecessarily. Do they wrap bind and cobind? Are they of 
any practical utility?




I think you're asking the wrong question!

The first question needs to be :

What is "f" and what is "g" ? What are the two Functors in this case?

We know that we want g `O` f to be L, because we know that the unit is 
return, i.e. One, and


unit :: a -> O g f a
otherwise known as eta :: a -> O g f a

We also know there is a co-unit epsilon :: O f g a -> a, but we don't 
know much about that until we work out how to decompose L into two Functors.


There are two standard ways to decompose a monad into two adjoint 
functors: the Kleisli decomposition and the Eilenberg-Moore decomposition.


However, neither of these categories is a subcategory of Hask in an 
obvious way, so I don't immediately see how to write "f" and "g" as 
haskell functors.


Maybe someone else can show the way :)

Jules

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads that are Comonads and the role of Adjunction

2007-12-13 Thread Dan Weston

apfelmus wrote:

Luke Palmer wrote:

Isn't a type which is both a Monad and a Comonad just Identity?

(I'm actually not sure, I'm just conjecting)


Good idea, but it's not the case.

  data L a = One a | Cons a (L a)   -- non-empty list


Maybe I can entice you to elaborate slightly. From
http://www.eyrie.org/~zednenem/2004/hsce/Control.Functor.html and 
Control.Comonad.html there is


--
newtype O f g a   -- Functor composition:  f `O` g

instance (Functor f, Functor g) => Functor (O f g) where ...
instance Adjunction f g => Monad   (O g f) where ...
instance Adjunction f g => Comonad (O f g) where ...

-- I assume Haskell can infer Functor (O g f) from Monad (O g f), which
-- is why that is missing here?

class (Functor f, Functor g) => Adjunction f g | f -> g, g -> f where
  leftAdjunct  :: (f a -> b) -> a -> g b
  rightAdjunct :: (a -> g b) -> f a -> b
--

Functors are associative but not generally commutative. Apparently a 
Monad is also a Comonad if there exist left (f) and right (g) adjuncts 
that commute. [and only if also??? Is there a contrary example of a 
Monad/Comonad for which no such f and g exist?]


In the case of
>   data L a = One a | Cons a (L a)   -- non-empty list

what are the appropriate definitions of leftAdjunct and rightAdjunct? 
Are they Monad.return and Comonad.extract respectively? That seems to 
unify a and b unnecessarily. Do they wrap bind and cobind? Are they of 
any practical utility?
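
(For what it's worth, here is a sketch of my own -- not apfelmus's code --
just showing that L supports both interfaces directly; whether they arise
from a commuting pair of adjuncts is exactly the open question above. The
Applicative instance is only there so the Monad instance compiles on a
modern GHC.)

data L a = One a | Cons a (L a)   -- non-empty list

appendL :: L a -> L a -> L a
appendL (One a)     ys = Cons a ys
appendL (Cons a as) ys = Cons a (appendL as ys)

instance Functor L where
    fmap f (One a)     = One (f a)
    fmap f (Cons a as) = Cons (f a) (fmap f as)

instance Applicative L where
    pure = One
    One f     <*> xs = fmap f xs
    Cons f fs <*> xs = fmap f xs `appendL` (fs <*> xs)

instance Monad L where
    return = pure
    One a     >>= f = f a
    Cons a as >>= f = f a `appendL` (as >>= f)

-- Comonad side: extract takes the head, duplicate builds the non-empty tails.
extractL :: L a -> a
extractL (One a)    = a
extractL (Cons a _) = a

duplicateL :: L a -> L (L a)
duplicateL l@(One _)     = One l
duplicateL l@(Cons _ as) = Cons l (duplicateL as)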


My category theory study stopped somewhere between Functor and 
Adjunction, but is there any deep magic you can describe here in a 
paragraph or two? I feel like I will never get Monad and Comonad until I 
understand Adjunction.


Dan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads

2007-12-03 Thread Radosław Grzanka
Hi,

2007/12/3, PR Stanley <[EMAIL PROTECTED]>:
> Hi
> Does the list consider
> http://en.wikibooks.org/w/index.php?title=Haskell/Understanding_monads&oldid=933545
> a reliable tutorial on monads and, if not, could you recommend an
> online alternative, please?

I really enjoyed "All about Monads" by Jeff Newbern
http://www.haskell.org/all_about_monads/html/index.html

Cheers,
  Radek.

-- 
Codeside: http://codeside.org/
Przedszkole Miejskie nr 86 w Lodzi: http://www.pm86.pl/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads

2007-12-03 Thread PR Stanley

Hi
Does the list consider
http://en.wikibooks.org/w/index.php?title=Haskell/Understanding_monads&oldid=933545
a reliable tutorial on monads and, if not, could you recommend an 
online alternative, please?

Thanks,
Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads and groups -- instead of loops

2007-08-01 Thread Cale Gibbard
On 01/08/07, Greg Meredith <[EMAIL PROTECTED]> wrote:
> Haskellians,
>
> Though the actual metaphor in the monads-via-loops doesn't seem to fly with
> this audience, i like the spirit of the communication and the implicit
> challenge: find a pithy slogan that -- for a particular audience, like
> imperative programmers -- serves to uncover the essence of the notion. i
> can't really address that audience as my first real exposure to programming
> was scheme and i moved into concurrency and reflection after that and only
> ever used imperative languages as means to an end. That said, i think i
> found another metaphor that summarizes the notion for me. In the same way
> that the group axioms organize notions of symmetry, including addition,
> multiplication, reflections, translations, rotations, ... the monad(ic
> axioms) organize(s) notions of snapshot (return) and update (bind),
> including state, i/o, control,  In short
>
> group : symmetry :: monad : update
>
> Best wishes,
>
> --greg

Hello,

I just wrote
http://www.haskell.org/haskellwiki/Monads_as_computation
after starting to reply to this thread and then getting sidetracked
into writing a monad tutorial based on the approach I've been taking
in the ad-hoc tutorials I've been giving on #haskell lately. :)

It might be worth sifting through in order to determine an anthem for monads.

Something along the lines of:
"monads are just a specific kind of {embedded domain specific
language, combinator library}"
would work, provided that the person you're talking to knows what one
of those is. :) I've found it very effective to explain those in
general and then explain what a monad is in terms of them.

I'm not certain that the article is completely finished. I'm a bit
tired at the moment, and will probably return to polish the treatment
some more when I'm more awake, but I think it's finished enough to
usefully get the main ideas across.

 - Cale
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads and groups -- instead of loops

2007-08-01 Thread Greg Meredith
Andrew,

;-) Agreed! As i said in my previous post, i can't address the imperative
programmer. i really don't think that way and have a hard time understanding
people who do! (-;

Best wishes,

--greg

On 8/1/07, Andrew Wagner <[EMAIL PROTECTED]> wrote:
>
> That's great, unless the imperative programmer happens to be one of
> the 90% of programmers that isn't particularly familiar with group
> theory...
>
> On 8/1/07, Greg Meredith <[EMAIL PROTECTED]> wrote:
> > Haskellians,
> >
> > Though the actual metaphor in the monads-via-loops doesn't seem to fly
> with
> > this audience, i like the spirit of the communication and the implicit
> > challenge: find a pithy slogan that -- for a particular audience, like
> > imperative programmers -- serves to uncover the essence of the notion. i
> > can't really address that audience as my first real exposure to
> programming
> > was scheme and i moved into concurrency and reflection after that and
> only
> > ever used imperative languages as means to an end. That said, i think i
> > found another metaphor that summarizes the notion for me. In the same
> way
> > that the group axioms organize notions of symmetry, including addition,
> > multiplication, reflections, translations, rotations, ... the monad(ic
> > axioms) organize(s) notions of snapshot (return) and update (bind),
> > including state, i/o, control,  In short
> >
> > group : symmetry :: monad : update
> >
> > Best wishes,
> >
> > --greg
> >
> > --
> > L.G. Meredith
> > Managing Partner
> > Biosimilarity LLC
> > 505 N 72nd St
> > Seattle, WA 98103
> >
> > +1 206.650.3740
> >
> > http://biosimilarity.blogspot.com
> > ___
> > Haskell-Cafe mailing list
> > Haskell-Cafe@haskell.org
> > http://www.haskell.org/mailman/listinfo/haskell-cafe
> >
> >
>



-- 
L.G. Meredith
Managing Partner
Biosimilarity LLC
505 N 72nd St
Seattle, WA 98103

+1 206.650.3740

http://biosimilarity.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads and groups -- instead of loops

2007-08-01 Thread Andrew Wagner
That's great, unless the imperative programmer happens to be one of
the 90% of programmers that isn't particularly familiar with group
theory...

On 8/1/07, Greg Meredith <[EMAIL PROTECTED]> wrote:
> Haskellians,
>
> Though the actual metaphor in the monads-via-loops doesn't seem to fly with
> this audience, i like the spirit of the communication and the implicit
> challenge: find a pithy slogan that -- for a particular audience, like
> imperative programmers -- serves to uncover the essence of the notion. i
> can't really address that audience as my first real exposure to programming
> was scheme and i moved into concurrency and reflection after that and only
> ever used imperative languages as means to an end. That said, i think i
> found another metaphor that summarizes the notion for me. In the same way
> that the group axioms organize notions of symmetry, including addition,
> multiplication, reflections, translations, rotations, ... the monad(ic
> axioms) organize(s) notions of snapshot (return) and update (bind),
> including state, i/o, control,  In short
>
> group : symmetry :: monad : update
>
> Best wishes,
>
> --greg
>
> --
> L.G. Meredith
> Managing Partner
> Biosimilarity LLC
> 505 N 72nd St
> Seattle, WA 98103
>
> +1 206.650.3740
>
> http://biosimilarity.blogspot.com
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] monads and groups -- instead of loops

2007-08-01 Thread Greg Meredith
Haskellians,

Though the actual metaphor in the monads-via-loops doesn't seem to fly with
this audience, i like the spirit of the communication and the implicit
challenge: find a pithy slogan that -- for a particular audience, like
imperative programmers -- serves to uncover the essence of the notion. i
can't really address that audience as my first real exposure to programming
was scheme and i moved into concurrency and reflection after that and only
ever used imperative languages as means to an end. That said, i think i
found another metaphor that summarizes the notion for me. In the same way
that the group axioms organize notions of symmetry, including addition,
multiplication, reflections, translations, rotations, ... the monad(ic
axioms) organize(s) notions of snapshot (return) and update (bind),
including state, i/o, control,  In short

group : symmetry :: monad : update

Best wishes,

--greg

-- 
L.G. Meredith
Managing Partner
Biosimilarity LLC
505 N 72nd St
Seattle, WA 98103

+1 206.650.3740

http://biosimilarity.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads and constraint satisfaction problems (CSP)

2007-05-31 Thread Jeremy Shaw
At Thu, 31 May 2007 11:36:55 -0700,
Jeremy Shaw wrote:

> This paper describes a non-monadic, compositional method for solving CSPs:
> 
> http://www.cse.ogi.edu/PacSoft/publications/2001/modular_lazy_search_jfp.pdf

btw, there are multiple versions of this paper. This version includes
a section on dynamic variable ordering, as well as some improvements
to the other sections.

j.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads and constraint satisfaction problems (CSP)

2007-05-31 Thread Jeremy Shaw
At Thu, 31 May 2007 10:42:57 -0700,
Greg Meredith wrote:

> BTW, i think this could have a lot of bang-for-buck because the literature i
> read exhibited two basic features:
> 
>- the "standard" treatments (even by CS-types) are decidedly not
>compositional
>- the people in the field who face industrial strength csp problems
>report that they have to take compositional approaches because the problems
>are just too large otherwise (both from a human engineering problem as well
>as a computational complexity problem)

This paper describes a non-monadic, compositional method for solving CSPs:

http://www.cse.ogi.edu/PacSoft/publications/2001/modular_lazy_search_jfp.pdf

There is also the LogicT monad transformer:

http://okmij.org/ftp/Computation/monads.html

j.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads and constraint satisfaction problems (CSP)

2007-05-31 Thread Brandon Michael Moore
On Thu, May 31, 2007 at 10:42:57AM -0700, Greg Meredith wrote:
> All,
> 
> All this talk about Mathematica and a reference to monadic treatments of
> backtracking reminded me that a year ago i was involved in work on a
> Mathematica-like widget. At the time i noticed that a good deal of the
> structure underlying LP, SAT and other solvers was terribly reminiscent of
> comprehension-style monadic structure. i think i asked Erik Meijer if he
> knew of any work done on this and posted to LtU, but nobody seemed to have
> understood what i was mumbling about. So, let me try here: does anybody know
> of references for a monadic treatment of constraint satisfaction?

It's not particularly monadic, but you might check out 
"Modular Lazy Search for Constraint Satisfaction Problems"
http://cse.ogi.edu/PacSoft/publications/.../modular_lazy_search.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monads and constraint satisfaction problems (CSP)

2007-05-31 Thread Greg Meredith

All,

All this talk about Mathematica and a reference to monadic treatments of
backtracking reminded me that a year ago i was involved in work on a
Mathematica-like widget. At the time i noticed that a good deal of the
structure underlying LP, SAT and other solvers was terribly reminiscent of
comprehension-style monadic structure. i think i asked Erik Meijer if he
knew of any work done on this and posted to LtU, but nobody seemed to have
understood what i was mumbling about. So, let me try here: does anybody know
of references for a monadic treatment of constraint satisfaction?

BTW, i think this could have a lot of bang-for-buck because the literature i
read exhibited two basic features:

  - the "standard" treatments (even by CS-types) are decidedly not
  compositional
  - the people in the field who face industrial strength csp problems
  report that they have to take compositional approaches because the problems
  are just too large otherwise (both from a human engineering problem as well
  as a computational complexity problem)


Best wishes,

--greg

--
L.G. Meredith
Managing Partner
Biosimilarity LLC
505 N 72nd St
Seattle, WA 98103

+1 206.650.3740

http://biosimilarity.blogspot.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads once again: a newbie perspective

2006-08-31 Thread Andrea Rossato
On Thu, Aug 31, 2006 at 02:39:59PM +0100, Simon Peyton-Jones wrote:
> Andrea
> 
> Don't forget to link to it from here!
>   http://haskell.org/haskellwiki/Books_and_tutorials#Using_monads

Simon,
I'll do. But now the text is far from being complete: there's only the
code... (the most difficult part, for me at least!).

Thanks for your kind attention.
Andrea
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads once again: a newbie perspective

2006-08-26 Thread Lennart Augustsson

I like sigfpe's introduction to monads:

http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html


-- Lennart

On Aug 26, 2006, at 14:04 , Andrea Rossato wrote:

On Fri, Aug 25, 2006 at 01:13:58PM -0400, Cale Gibbard wrote:

Hey cool, a new monad tutorial! :)

Just out of interest, have you seen my Monads as Containers article?
http://www.haskell.org/haskellwiki/Monads_as_Containers

Let me know what you think of it. I find that often newcomers to
monads will find the container perspective easier to grasp before
moving on to treating monads as an abstraction of computation, but
that side of things needs coverage too. :)


Sure I've read it!
I must confess I find it difficult, though. I mean, the
exemplification part is very interesting but, for me, it was too
difficult to connect it to the code I was looking at.

This is why I decided on this approach: let's start building a monad and
see what it is by actually looking it at work.

The evaluator is a very simple piece of code, you can clearly see what
it does.
And then you start building up your knowledge by expanding it.
Take into account that, for me, *writing* that tutorial is *the* way
to get to grasp all the concepts behind the type system (and monads).

As you can see in a thread below, I'm studying hard in order to find
out the proper continuation of the tutorial, and for my learning.

So far I'll be able to describe the code of a stateful evaluator that
produces output. Then I'd like to add exception handling.
Let's see if I'll be able to get that far...

By the way, I'll soon add links to the other important tutorials on
monads: yours, IO Inside, All about monads.
At the end I'd like to be able to link A Gentle Introduction, the
Haskell 98 Report and, why not, "Write yourself a Scheme in 48
hours".
That will be it.

Thanks for your kind attention.
Andrea
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads once again: a newbie perspective

2006-08-26 Thread Andrea Rossato
On Fri, Aug 25, 2006 at 01:13:58PM -0400, Cale Gibbard wrote:
> Hey cool, a new monad tutorial! :)
> 
> Just out of interest, have you seen my Monads as Containers article?
> http://www.haskell.org/haskellwiki/Monads_as_Containers
> 
> Let me know what you think of it. I find that often newcomers to
> monads will find the container perspective easier to grasp before
> moving on to treating monads as an abstraction of computation, but
> that side of things needs coverage too. :)

Sure I've read it!
I must confess I find it difficult, though. I mean, the
exemplification part is very interesting but, for me, it was too
difficult to connect it to the code I was looking at.

This is why I decided on this approach: let's start building a monad and
see what it is by actually looking it at work.

The evaluator is a very simple piece of code, you can clearly see what
it does.
And then you start building up your knowledge by expanding it.
Take into account that, for me, *writing* that tutorial is *the* way
to get to grasp all the concepts behind the type system (and monads).

As you can see in a thread below, I'm studying hard in order to find
out the proper continuation of the tutorial, and for my learning.

So far I'll be able to describe the code of a stateful evaluator that
produces output. Then I'd like to add exception handling.
Let's see if I'll be able to get that far...

By the way, I'll soon add links to the other important tutorials on
monads: yours, IO Inside, All about monads.
At the end I'd like to be able to link A Gentle Introduction, the
Haskell 98 Report and, why not, "Write yourself a Scheme in 48
hours".
That will be it.

Thanks for your kind attention.
Andrea
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads once again: a newbie perspective

2006-08-25 Thread Andrea Rossato
On Thu, Aug 24, 2006 at 08:02:38PM +0100, Neil Mitchell wrote:
> Just shove it on the wiki regardless. If it's useless then no one will
> read it. If it's a bit unreadable, then people will fix it. If it's
> useful the world will benefit. Any outcome is a good outcome!

Ok: I've put it on the wiki:
http://www.haskell.org/haskellwiki/The_Monadic_Way

Added some stuff: I've basicly introduced the do-notation.

Andrea
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads once again: a newbie perspective

2006-08-25 Thread Andy Elvey
On Thu, 2006-08-24 at 20:53 +0200, Andrea Rossato wrote:
> Hello!
> 
> I'm new to Haskell and am trying to find my way through types and monads.
> I tried Yet Another Haskell Tutorial, very useful for types, but
> almost unreadable for monads (that's my perspective!).
> Then I discovered that wonderful paper by Wadler (Monads for
> functional programming).
> So I started "translating it" for someone who can be scared of
> something with an abstract and footnotes coming from a professor.
> 
> I started writing it in order to clarify to myself this difficult
> topic. I think I'm now grasping the concept of monads. 
> I thought that someone else could find my writings useful.
> 
> It could become a page on the wiki. But before posting there I would
> like to have your opinion. Perhaps this is just something unreadable.
> 
> Let me know.
> Andrea

As another Haskell newbie - I like it! Well done!  
I particularly like your simple examples, and the very clear description
of each step in the tutorial. Fwiw, this would be the best "absolute
beginner's guide to monads" that I've seen. Keep up the good work!  
- Andy 


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] monads once again: a newbie perspective

2006-08-24 Thread Neil Mitchell

Hi,


It could become a page on the wiki. But before posting there I would
like to have your opinion. Perhaps this is just something unreadable.


Just shove it on the wiki regardless. If it's useless then no one will
read it. If it's a bit unreadable, then people will fix it. If it's
useful the world will benefit. Any outcome is a good outcome!

Once it's on the wiki I'll give it a read, since it looks promising,
but it's a bit hard to read in teletype font as displayed by my
browser.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] monads once again: a newbie perspective

2006-08-24 Thread Andrea Rossato
Hello!

I'm new to Haskell and am trying to find my way through types and monads.
I tried Yet Another Haskell Tutorial, very useful for types, but
almost unreadable for monads (that's my perspective!).
Then I discovered that wonderful paper by Wadler (Monads for
functional programming).
So I started "translating it" for someone who can be scared of
something with an abstract and footnotes coming from a professor.

I started writing it in order to clarify to myself this difficult
topic. I think I'm now grasping the concept of monads. 
I thought that someone else could find my writings useful.

It could become a page on the wiki. But before posting there I would
like to have your opinion. Perhaps this is just something unreadable.

Let me know.
Andrea
An evaluation of Philip Wadler's "Monads for functional programming"
(avail. from http://homepages.inf.ed.ac.uk/wadler/topics/monads.html)


Let's start with something simple: suppose we want to implement a new
programming language. We just finished with Abelson and Sussman's
Structure and Interpretation of Computer Programs
[http://swiss.csail.mit.edu/classes/6.001/abelson-sussman-lectures/]
and we want to test what we have learned.

Our programming language will be very simple: it will just compute the
sum operation.
So we have just one primitive operation (Add) that takes two constants
and calculates their sum.
For instance, something like:
(Add (Con 5) (Con 6))
should yield:
11

We will implement our language with the help of a data type
constructor such as:

> module MyMonads where
> data Term = Con Int
>           | Add Term Term
>           deriving (Show)

After that we build our interpreter:

> eval :: Term -> Int
> eval (Con a) = a
> eval (Add a b) = eval a + eval b

That's it. Just an example:

*MyMonads> eval (Add (Con 5) (Con 6))
11

Very, very simple. The evaluator checks whether its argument is a Con: if
it is, it just returns the wrapped number.
If it is an Add instead, it evaluates both subterms and
sums the two results.

Now, that's fine, but we'd like to add some features, like providing
some output, to show how the computation was carried out.
Well, but Haskell is a pure functional language, with no side effects,
we were told.
Now we seem to be wanting to create a side effect of the computation,
its output, and be able to stare at it...
If we had some global variable to store the output that would be
simple...
But we can create the output and carry it along the computation,
concatenating it with the old one, and present it at the end of the
evaluation together with the evaluation of the expression!
Simple and neat!

> type MOut a = (a, Output)
> type Output = String
>
> formatLine :: Term -> Int -> Output
> formatLine t a = "eval (" ++ show t ++ ") <= " ++ show a ++ " - "
>
> evalO :: Term -> MOut Int
> evalO (Con a) = (a, formatLine (Con a) a)
> evalO (Add t u) = ((a + b), (x ++ y ++ formatLine (Add t u) (a + b)))
>     where (a, x) = evalO t
>           (b, y) = evalO u


Now we have what we want. But we had to change our evaluator quite a
bit. First we added a function that takes a Term (the expression
to be evaluated) and an Int (the result of the evaluation) and gives back
an output of type Output (which is a synonym for String).

The evaluator changed quite a lot! Now it has a different type: it
takes a Term data type and produces a new type, which we called MOut; it
is actually a pair of a variable type a (an Int in our evaluator) and
a type Output, a string.
So our evaluator, now, will take a Term (the type of the expressions
in our new programming language) and will produce a pair, composed of
the result of the evaluation (an Int) and the Output, a string.

So far so good. But what's happening inside the evaluator?
The first part will just return a pair with the number evaluated and
the output formatted by formatLine. 
The second part does something more complicated: it returns a pair
composed of
1. the result of the evaluation of the first Term summed with the result
of the evaluation of the second Term
2. the output: the concatenation of the output produced by the
evaluation of the first Term, the output produced by the evaluation of
the second Term (each of these evaluations returns a pair with the number
and the output), and the formatted output of the whole evaluation.

Let's try it:
*MyMonads> evalO (Add (Con 5) (Con 6))
(11,"eval (Con 5) <= 5 - eval (Con 6) <= 6 - eval (Add (Con 5) (Con 6)) <= 11 - 
")
*MyMonads>

It works! Let's put the output this way:
eval (Con 5) <= 5 - 
eval (Con 6) <= 6 - 
eval (Add (Con 5) (Con 6)) <= 11 -

Great! We are able to produce a side effect of our evaluation and
present it at the end of the computation, after all.

Let's have a closer look at this expression:
evalO (Add t u) = ((a + b),(x ++ y ++ formatLine (Add t u) (a + b)))
 where (a, x) = evalO t
   (b, y) = evalO u

Why all that? The problem is that w

[Haskell-cafe] Monads (was "how to write an interactive program ? gui library to use ?")

2006-02-25 Thread Jared Updike
(Note, moved to haskell-cafe.)

> Essentially, the answer is "yes", the state needs to be passed around
> (neglecting hackery to simulate global variables that is better
> avoided).  However, this can be made convenient by using a monad.

BTW, Minh, If you don't know what monads are, then read this. Monads
are an indispensable part of programming in Haskell:
   http://www.nomaware.com/monads/html/
   http://haskell.org/hawiki/Monad
Monads are good. Monads are your friend.

  Jared.
--
http://www.updike.org/~jared/
reverse ")-:"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Monads in Scala, XSLT, Unix shell pipes was Re: Monads in ...

2005-11-28 Thread Gregory Woodhouse


On Nov 27, 2005, at 2:36 PM, Bill Wood wrote:

(I'm going to do a lazy permute on your stream of consciousness; hope it
terminates :-).

I think the Rubicon here is the step from one to many -- one
function/procedure to many, one thread to many, one processor to
many, ... .  Our favorite pure functions are like the Hoare triples and
Dijkstra weakest preconditions of the formal methods folks in that the
latter abstract from the body of a procedure to the input-output
relation it computes; both the function and the abstracted procedure are
atomic and outside of time.


Right. And the key to working with Hoare triples is that they obey a  
natural composition law. I can go from P and Q to P;Q as long as the  
pre- and post-conditions "match up". It's less clear that such a  
simple logic is even possible for concurrent programs, particularly  
in a deterministic setting.
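
(Concretely, the composition law alluded to is the standard sequencing
rule of Hoare logic -- stated here for reference, not part of the original
mail:

    {P} S1 {Q}     {Q} S2 {R}
    -------------------------
         {P} S1; S2 {R}

the postcondition of S1 must coincide with the precondition of S2 for the
composite triple to follow.)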



After all, aren't "referential
transparency" and "copy rule" all about replacing a function body with
its results?


Is that really a "copy rule" or is it a requirement that the program  
obey some type of compositional semantics? Put a little differently,  
I think your terminology here is a bit leading. By "copy rule", I  
think you have in mind something like beta reduction -- but with  
respect to whom? If there is some kind of agent doing the copying  
that we think of as a real "thing", isn't that a process?



Well, as soon as there are two or more
functions/procedures in the same environment, the prospect of
interaction and interference arises, and our nice, clean,
*comprehensible* atemporal semantics get replaced by temporal logic,
path expressions, trace specifications, what have you.


Right, because our execution threads become little lambda universes  
interacting with their respective environments (i.e., communicating)



Some notion of
process is inevitable, since now each computation must be treated as an
activity over time in order to relate events that occur during the
execution of one computation with the events of another.


You may be right, but I suppose I'm stubborn and haven't quite given  
up yet. Think about temporal and dynamic logic as being, in some  
sense, alternative program logics. They are both useful, of course,  
but differ in where the "action" is. For temporal logic, the primary  
dimension is time. Will this condition always hold? Will it hold at  
some time in the future? But in dynamic logic, the "action" is  
program composition. Even so, if you write [alpha]P, then you assert  
that P holds after every execution (in time?) of alpha, but you do  
not otherwise reason about program execution. In terms of Kripke  
("possible worlds") semantics, what is your accessibility relationship?




Functional programming gives us the possibility of using algebra to
simplify the task of reasoning about single programs.  Of course,
non-functional procedures can also be reasoned about algebraically,
since a procedure P(args) that hammers on state can be adequately
described by a pure function f_P :: Args -> State -> State.  The problem
is, of course, that the state can be large.


Right, but Kripke semantics gives us a language in which to talk
about how state can change. Better, can subsystems be combined in
such a way that state in the larger system can naturally be
understood in terms of state in the subsystems?




But the functional paradigm offers some hope for containing the
complexity in the world of many as it does in the world of one. I think
combining formalisms like Hoare's CSP or Milner's CCS with computations
gives us the possibility of doing algebra on the temporal event
sequences corresponding to their interactions; the hope is that this is
simpler than doing proofs in dynamic or temporal logic.  Using
functional programs simplifies the algebraic task by reducing the size
of the set of events over which the algebra operates -- you consider
only the explicitly shared parameters and results, not the implicitly
shared memory that can couple non-functional procedures.


But isn't this true because interaction between the "pieces" is more  
narrowly constrained? An algebraic analog might be a semidirect  
product of groups. Unlike the special case of direct products, there  
is some interference here, but it is restricted to inner  
automorphisms (conjugation).




It is conceivable that you can get your compositionality here as well.
Suppose we package computations with input-output parameter
specifications and CSP-like specifications of the pattern of event
sequences produced when the computation executes.  It may be possible to
reason about the interactions of the event sequences of groups of
packages, determine the event sequences over non-hidden events provided
by the composite system, etc.

As far as Bulat's comment goes, I'm mostly in agreement.  My dataflow
view was really driven by the intuition that a functional program can be
described by a network

  1   2   >