Re: [Haskell-cafe] Lazy language on JVM/CLR

2010-02-08 Thread Marcin Kosiba
On Mon, Feb 8, 2010 at 5:16 PM, John Meacham j...@repetae.net wrote:
 On Tue, Feb 09, 2010 at 10:42:26AM +1000, Tony Morris wrote:
 I expect others have forethought and perhaps even experimented with such
 a language. Are there any dangers to be wary of that undo the entire
 endeavour?

 There have been a couple papers published on it, the main sticking point
 seems to be tail call elimination. When targeting real hardware you
 always had the option of dropping to assembly to do a direct jump, but
 there isn't an equivalent in the JVM. If you look up tail call + jvm you
 will probably get a few hits. I believe there are even a couple haskell
 specific papers on the issue.

I think .NET 4.0 has tail-call support built in, because I remember reading
that F# relies on such a mechanism. AFAIK you just mark the call in the
bytecode as a tail call (the CIL `tail.` prefix) and the runtime performs it
for you.
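
A common workaround on VMs that don't guarantee tail calls is trampolining:
the compiled code returns a description of the next call instead of making
it, and a small driver loop keeps applying it in constant stack space. Just a
sketch of the idea in Haskell (not how GHC or any JVM backend actually does
it):

-- A computation either finishes with a value or asks the driver to
-- take one more step.
data Trampoline a = Done a | More (() -> Trampoline a)

-- The driver loop; on the JVM this would be a plain while loop, so the
-- "tail calls" never grow the stack.
runTrampoline :: Trampoline a -> a
runTrampoline (Done x) = x
runTrampoline (More k) = runTrampoline (k ())

-- Mutual recursion expressed as trampolined steps.
isEven, isOdd :: Int -> Trampoline Bool
isEven 0 = Done True
isEven n = More (\_ -> isOdd (n - 1))
isOdd  0 = Done False
isOdd  n = More (\_ -> isEven (n - 1))

-- runTrampoline (isEven 1000000)  ==>  True, without growing the stack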
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: simulation in the haskell way

2009-08-18 Thread Marcin Kosiba
On Tuesday 18 August 2009, Maurício CA wrote:
  ...) But for simulation kind-of problems, in which I think OO
  really fits the best, what's the haskell way to structure such problems?
  I once thought maybe I can use the State monad to simulate objects. But
  it's really hard for me to implement, because I think State monad is used
  to simulate a global shared state, is it right?
 
  Then what's the best way to simulate private states or just instead how
  to solve simulation problems such as a physical engine using the haskell
  way?

 A physical engine can be simulated using pure code, with a
 function from timestep to state. (Of course, that doesn't hold
 when you want to deal with user interaction.) I think there is no
 general answer to your question. My suggestion is to describe an
 example you would like to work with.

Hi,
I did a medium-sized mobility-model simulator this way. The simulation is
represented as an infinite list of states, where the next state is created by
applying a state transformation (next) function to the previous state.

This has the advantage that you can calculate values based on more than one
state. The downside is that if you need to look back 100 steps, you need to
remember 100 states (though if enough of each state is unchanged and shared,
that's not really much of a problem).

As far as the details go -- different parts of the simulator are executed in
different monads. God-mode code, like the next function, has the type
nextStep :: World -> World
but the mobility model implementation (which tells a node how to move) is a
stateful computation run by nextStep:
mobilityModel :: StateT NodeState (Reader World) ()
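
For illustration, here is a rough sketch of how those pieces can fit together
(World, NodeState and the model body below are made-up placeholders, not the
actual simulator; it assumes the mtl library):

import Control.Monad.State  (StateT, execStateT, modify)
import Control.Monad.Reader (Reader, runReader, ask)

-- Placeholder types standing in for the real simulator state.
data World     = World     { nodes :: [NodeState] }
data NodeState = NodeState { position :: Double }

-- One mobility step for a single node: stateful over NodeState, with
-- read-only access to the surrounding World.
mobilityModel :: StateT NodeState (Reader World) ()
mobilityModel = do
  _world <- ask                                   -- inspect the world
  modify (\n -> n { position = position n + 1 })  -- move the node

-- God-mode step: advance every node and rebuild the world.
nextStep :: World -> World
nextStep w = w { nodes = map step (nodes w) }
  where step n = runReader (execStateT mobilityModel n) w

-- The simulation as an infinite list of states.
simulation :: World -> [World]
simulation = iterate nextStep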

But as Maurício said -- please describe what you want to achieve. At least
what kind of simulation you'd like to write.
-- 
Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-16 Thread Marcin Kosiba
On Sunday 16 August 2009, Artem V. Andreev wrote:
 John A. De Goes j...@n-brain.net writes:
  On Aug 15, 2009, at 6:36 AM, Jason Dusek wrote:
  2009/08/14 John A. De Goes j...@n-brain.net:
  Hmmm, my point (perhaps I wasn't clear), is that different
  effects have different commutability properties. In the case
  of a file system, you can commute two sequential reads from
  two different files.
 
   I think this is a bad example -- it's not something that's
   safe in general and discredits your idea. How would the
   compiler even know that two files are not actually the same
   file?
 
  I don't think the file system is the best example. However, I do think 
  it's a reasonable one.
 
  Let's say the type of the function getFilesInDir is annotated in such  a
  way as to tell the effect system that every file in the returned  array
  is unique. Further, let's say the type of the function  makeNewTempFile
  is annotated in such a way as to tell the effect  system that the
  function will succeed in creating a new temp file with  a name unique
  from any other existing file.

 Sorry, but this example is ridiculous. While file *names* in this case
 might be reasonably assumed to be unique, the *files* themselves may not.
 Any modern filesystem does support file aliasing, and usually several forms
 thereof. And what does makeNewTempFile function do? Does it create a new
 file like POSIX mktemp() and return its name, or does it rather behave as
 POSIX mkstemp()? The first case is a well known security hole, and the
 second case does not, as it seems to me, fit well into the rest of your
 reasoning.

Hi,
IMHO, provided with a flexible effect system, the decision on how to do
read/write operations on files is a matter for libraries.
But I'd like to ask another question: is the effect system you're discussing
now really capable of providing significant performance improvements for file
I/O? Even if we assume some consistency model and transform one correct
program into another -- how do you estimate efficiency without knowledge of
the physical media's characteristics? I can see how this could be used to
optimize different kinds of media access (reordering socket/file operations,
or even running some of those in parallel), but I don't see how we can
benefit from reordering writes to the same media.
Another thing is that not every kind of r/w operation requires the same
consistency model -- for example, when I'm writing a backup for later use I
only care about my writes being applied in the same order. I imagine that
such an effect system could help write software for CC-NUMA architectures or
shared-memory distributed systems.
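
For the run-in-parallel case, this hedged sketch (written by hand with the
async package, not produced by any effect system) shows the kind of rewrite
such a system might be able to justify once it has proved the two reads
independent:

import Control.Concurrent.Async (concurrently)
import qualified Data.ByteString as B

-- Sequential version: two reads of supposedly unrelated files.
readBothSeq :: FilePath -> FilePath -> IO (B.ByteString, B.ByteString)
readBothSeq fa fb = do
  a <- B.readFile fa
  b <- B.readFile fb
  return (a, b)

-- The commuted, parallel version a compiler could substitute once it
-- knows the two reads cannot interfere with each other.
readBothPar :: FilePath -> FilePath -> IO (B.ByteString, B.ByteString)
readBothPar fa fb = concurrently (B.readFile fa) (B.readFile fb)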
-- 
Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Static linking

2009-07-24 Thread Marcin Kosiba
On Friday 24 July 2009, Michael Oswald wrote:
 Hello,


 I wrote a small installer program which configures and installs some
 software packages. In order to be able to let it run on a different
 machine, where I possibly don't have the needed shared libraries, I tried
 to link it statically, following this advice:

 http://www.haskell.org/haskellwiki/Practical_web_programming_in_Haskell#Deploying_statically_linked_applications

 However I get some linking errors:

  ghc --make Installer.hs -static -optl-static

 /usr/lib/ghc-6.10.3/unix-2.3.2.0/libHSunix-2.3.2.0.a(User__143.o)(.text+0xf1):
 In function `s7Tu_info':
 : warning: Using 'getgrnam_r' in statically linked applications requires at
 runtime the shared libraries from the glibc version used for linking

 /usr/lib/gcc-lib/i586-suse-linux/3.3.3/../../../libedit.a(readline.o)(.text+0x7b2):
 In function `username_completion_function':
 /home/oswald/build/libedit-20090610-3.0/src/readline.c:1467:0:
  warning: Using 'getpwent' in statically linked applications requires
 at runtime the shared libraries from the glibc version used for linking

 src/dlmalloc.c:2486:0:  undefined reference to `pthread_mutex_lock'
 /usr/lib/ghc-6.10.3/libffi.a(closures.o)(.text+0x132):src/dlmalloc.c:2493:
 undefined reference to `pthread_mutex_unlock'
 /usr/lib/ghc-6.10.3/libffi.a(closures.o)(.text+0x184):src/dlmalloc.c:2490:
 undefined reference to `pthread_mutex_init'
 /usr/lib/ghc-6.10.3/libffi.a(closures.o)(.text+0x227): In function
 `ffi_closure_free':


 Because of the undefined references to pthread_mutex_lock I then
 tried various versions of

 ghc --make Installer.hs -static -optl-static -lpthread -lrt

 and reordered the -l switches, but no combination helped. I always get
 the same error.

 Does somebody know how to correctly specify the pthread library for this?

Hi,
Try:
ghc --make Installer.hs -static -optl-static -optl-pthread

If that doesn't work, you can always run ghc -v, find the command it uses for
linking, modify it, and run it manually.

Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness enhances composability: an example

2009-07-10 Thread Marcin Kosiba
On Friday 10 July 2009, Gleb Alexeyev wrote:
 Marcin Kosiba wrote:
  Hi,
  To illustrate what I meant I'm attaching two examples. In example_1.py
  I've written code the way I think would be elegant (but it doesn't work).
  In example_2.py I've written code so that it works, but it isn't elegant.
  I know I'm abusing Python iterators here. Also, I'm not sure the way to
  compose iterators shown in example_2.py is the only option. Actually I'd
  love to see a better solution, because it would remove a lot of bloat
  from my code ;)

 You may want to look at Lua coroutines, which are more powerful than
 Python iterators. Your example_1.py is very similar to the example in
 the Coroutines Tutorial [1].

 [1] http://lua-users.org/wiki/CoroutinesTutorial

That in turn looks similar to Stackless Python [1], which I've been looking over recently ;)

[1] http://www.stackless.com/wiki/Channels
Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Laziness enhances composability: an example

2009-07-09 Thread Marcin Kosiba
On Thursday 09 July 2009, Cristiano Paris wrote:
 Thanks. In fact, I was stuck trying to find an example which couldn't
 be written using Python's iterators. The only difference coming to mind
 was that Haskell's lists are a more natural way to express a program
 relying on laziness. That was the reason why I added the clause
 "at least not elegantly" in my first post.

Hi,
I recently tried writing some code to simulate a certain protocol in Python.
I thought I'd go the smart way and rely on Python's yield construct to do a
CPS transformation of my code.
While this worked to a certain extent, composability was a problem, because
any sub-procedure which used yield (and thus was an iterator) had to be
called in a rather inelegant way.

Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Laziness enhances composability: an example

2009-07-09 Thread Marcin Kosiba
On Thursday 09 July 2009, you wrote:
 2009/7/9 Marcin Kosiba marcin.kos...@gmail.com:
  I thought I'll go the smart way and rely on the Python yield construct
  to do a CPS transformation of my code.
         While this worked to a certain extent, composability was a
  problem, because any sub-procedure which used yield (and thus was an
  iterator) required being called in a rather inelegant way.

 i confess to not understanding the inelegance.
 can you provide an example where an iterator inside CPS requires
 special support?

Hi,
To illustrate what I meant I'm attaching two examples. In example_1.py I've
written code the way I think would be elegant (but it doesn't work). In
example_2.py I've written code so that it works, but it isn't elegant.
I know I'm abusing Python iterators here. Also, I'm not sure the way to
compose iterators shown in example_2.py is the only option. Actually I'd love
to see a better solution, because it would remove a lot of bloat from my
code ;)

Thanks!
Marcin Kosiba


example_1.py
Description: application/python


example_2.py
Description: application/python


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] following up on space leak

2009-07-04 Thread Marcin Kosiba
On Saturday 04 July 2009, Uwe Hollerbach wrote:
 Good evening, all, following up on my question regarding space leaks,
 I seem to have stumbled across something very promising. I said I was
 using this tiny function lastOrNil to get the last value in a list,
 or the empty (scheme) list if the haskell list was empty. The uses of
 it were all of the form

 lastOrNil (mapM something some list)

 so I wrote a different function mapML to do this directly:
  mapML fn lst = mapMLA (List []) fn lst
    where mapMLA r _ [] = return r
          mapMLA ro fn (x:xs) =
            do rn <- fn x
               mapMLA rn fn xs

 This isn't an accumulator, it's a replacer (or, if you like, the
 accumulation is drop the old one on the floor), it starts out with
 the scheme empty list that I want as the default, and it never even
 builds the list which it'll just dump an instant later. Shazam! Memory
 usage dropped by roughly an order of magnitude in my little Collatz
 benchmark, and incidentally runtime improved by 25% or so as well. The
 horror! :-)

Hi,
IMHO expressing mapML using StateT would be a bit cleaner ;)

mapML :: (Monad m) => (a -> m List) -> [a] -> m List
mapML fn lst = execStateT mapMLAs (List [])
  where
    mapMLAs  = sequence_ $ map mapMLA lst
    mapMLA x = (lift $ fn x) >>= put

-- 
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: FFI and heap memory usage limit

2009-06-28 Thread Marcin Kosiba
On Friday 26 June 2009, Simon Marlow wrote:
 Maybe bzlib allocates using malloc()?  That would not be tracked by
 GHC's memory management, but could cause OOM.

Probably, because it's a binding to a C library. I'm really busy right now,
but I'll try to create a small program to reproduce this error.

 Another problem is that if you ask for a large amount of memory in one
 go, the request is usually honoured immediately, and then we GC shortly
 afterward.  If this is the problem for you, please submit a ticket and
 I'll see whether it can be changed.  You could work around it by calling
 System.Mem.performGC just before allocating the memory.

I've already worked around the problem.
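
For anyone hitting the same limit, the workaround Simon describes could look
roughly like the following sketch (it assumes bzlib's Codec.Compression.BZip
interface; it is not the code from my simulator):

import qualified Codec.Compression.BZip as BZip
import qualified Data.ByteString.Lazy as BL
import System.Mem (performGC)

-- Compress the results, but collect the simulation's garbage first so
-- the allocation done during compression still fits under the -M limit.
compressResults :: BL.ByteString -> IO BL.ByteString
compressResults raw = do
  performGC                        -- major collection before the big allocation
  let out = BZip.compress raw
  BL.length out `seq` return out   -- force the compression to run here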
-- 
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] FFI and heap memory usage limit

2009-06-22 Thread Marcin Kosiba
Hello,
Recently I've come across a certain GC/FFI-related problem. I've googled a
bit, but didn't find anything specific.
I'm running certain simulations which tend to allocate a lot of garbage in
memory. Since this caused the OOM killer to kill my simulation at 98%
completion, I used the -M switch, and all was well.
But because my simulation results are fairly big, I needed to compress them
with bz2 before sending them over the network. So I used bzlib.
Now this took an odd turn, because the simulation started crashing with
out-of-memory errors _after_ completing (during bz2 compression). I'm fairly
certain this is a GC/FFI bug, because increasing the max heap didn't help.
Moving the bz2 compression to a separate process provided a reasonable
solution.
What I think is happening is that after the simulation completes, almost all
of the available memory (within the -M limit) is filled with garbage. Then I
run bzlib, which tries to allocate more memory (from behind the FFI?) to
compress the results, which in turn causes an out-of-memory error instead of
triggering a GC collection.
I'm writing to ask if this is a known/fixed issue. I'm using ghc 6.10.3 and
bzlib 0.5.0.0. If this is something new then I'll try to come up with a small
program which demonstrates the problem.
--
Marcin Kosiba
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Obscure weirdness

2009-06-20 Thread Marcin Kosiba
On Saturday 20 June 2009, Andrew Coppin wrote:
 OK, so here's an interesting problem...

 I've been coding away all day, but now my program is doing something
 slightly weird. For a specific input, it summarily terminates. The
 registered exception handler does not fire. There is no output to stdout
 or stderr indicating what the problem is. It just *stops* half way
 through the printout.

 Weirder: If I run it in GHCi, then GHCi itself terminates. (I didn't
 think you could *do* that!)

 It's not as if my program is anything unusual. There are no unsafe
 functions. No FFI. Nothing. Just regular high-level Haskell.

 Is this a known bug in GHC 6.10.1? Will upgrading fix it? (Obviously,
 it's quite a lot of work to change GHC.) Suffice it to say that my
 program is quite big and complicated; it worked fine when it was still
 small and simple. ;-)

Hi,
With the information you've provided it's hard to even guess. At least take a
look at your app's RAM usage -- it may just be that it's allocating too much
memory and the OOM killer is killing it (if you're running Linux, that is).
You may also want to try the GHCi debugger [1] to find out where the program
crashes. The last thing I'd do is blame it on ghc/ghci, but as always -- such
a possibility exists.

Thanks!
Marcin Kosiba

[1] http://www.haskell.org/ghc/docs/latest/html/users_guide/ghci-debugger.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Software Transactional Memory and LWN

2009-06-12 Thread Marcin Kosiba
On Thursday 11 June 2009, Ketil Malde wrote:
 Hi,

 Browsing LWN, I ran across this comment:

 http://lwn.net/Articles/336039/

 The author makes a bunch of unsubstantiated claims about STM, namely
 that all implementations use locking under the hood, and that STM can
 live- and deadlock.  I've seen a lot of similar sentiments in other
 places as well (comp.arch springs to mind).

 Now, I'm no expert on STM, but I was pretty sure these are incorrect,
 and I certainly had the impression that Haskell's STM guarantees some
 progress - which it couldn't in a deadlock situation.

 Am I wrong?

Hi,
While I'm no STM expert either, I'd like to point out an often overlooked
detail: locks and STM are two different abstractions. Locks provide semantics
for running critical sections (informally, parts of code which only one
thread can execute at a time), while STM provides semantics for atomic
actions (informally, actions whose intermediate state can't be observed by
other threads).

Now, while an STM implementation can use locks under the hood, that doesn't
change the fact that the programmer isn't exposed to them. Saying STM is bad
because it uses locks under the hood is like saying that a garbage-collected
environment is bad because it uses malloc and free under the hood.

As far as the "STM is better because it doesn't use locks" theory is
concerned, the idea is that STM doesn't associate a lock with every atomic
section. This means that (in an optimistic implementation) any number of
threads (and potentially CPU cores) can execute the atomic action in
parallel, and if there are few rollbacks, this can lead to better performance
than using a single lock [3]. And you can't forget about composability --
code written using locks is less modular than code written using STM.
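
To make the composability point concrete, here is a minimal sketch using
GHC's Control.Concurrent.STM (the account type and amounts are made up): two
transfers combined into one atomic action, which is exactly the kind of
composition that is hard to get right with per-account locks.

import Control.Concurrent.STM

type Account = TVar Int

-- Move an amount between accounts; retries until funds are sufficient.
transfer :: Int -> Account -> Account -> STM ()
transfer amount from to = do
  fromBal <- readTVar from
  check (fromBal >= amount)
  writeTVar from (fromBal - amount)
  toBal <- readTVar to
  writeTVar to (toBal + amount)

-- Two transfers composed into a single atomic action: either both
-- happen or neither does, and no intermediate state is observable.
moveViaEscrow :: Int -> Account -> Account -> Account -> IO ()
moveViaEscrow amount a escrow b = atomically $ do
  transfer amount a escrow
  transfer amount escrow b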

While either optimistic [1] or pessimistic [2] STM can livelock, this can be
addressed by some sort of exponential backoff algorithm (which does not
guarantee progress, it just makes a livelock less likely).

As far as deadlock is concerned -- if we allow retry, then, as has already
been mentioned, there is a possibility of deadlock. But with STM you can do
deadlock detection by means of cycle detection in wait-for graphs (in much
the same way as it is done in a DBMS).

While STM currently usually performs worse than locks, it is an active area
of research (even Intel has released an experimental STM-supporting C++
compiler [4]). It is also true that as the number of cores in CPUs grows, STM
will become more attractive.

Thanks!
Marcin Kosiba

[1] http://en.wikipedia.org/wiki/Software_transactional_memory#Implementation_issues
[2] when a long-lived transaction is repeatedly rolled back by small transactions
[3] you can even switch between STM and locking: "General and Efficient Locking without Blocking", Yannis Smaragdakis, Anthony Kay, Reimer Behrends, Michal Young
[4] http://software.intel.com/en-us/articles/intel-c-stm-compiler-prototype-edition-20/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] WHNF versus HNF (was: Optimizing unamb by determining the state of a thunk?)

2009-04-22 Thread Marcin Kosiba
On Wednesday 22 April 2009, Peter Verswyvelen wrote:
 I'm having difficulty understanding the difference between WHNF and HNF.

 Is this explanation
 http://encyclopedia2.thefreedictionary.com/Weak+Head+Normal+Form
 the correct one? Or are WHNF and HNF equivalent in Haskell land?

 The GHC documentation of seq says:
 Evaluates its first argument to head normal form, and then returns its
 second argument as the result.

 Let's try in GHCi

 *Main> let f = trace "\\x" $ \x -> ((trace "\\y" $ \y -> trace "y" y +
 trace "x" x) $ trace "2" 2)
 *Main> f `seq` ()
 \x
 ()
 *Main>

 That did not evaluate anything inside the body of the first lambda, so
 according to the article, seq reduces to weak head normal form, not hnf...
Hi,
 Try:

Prelude Debug.Trace> (f 0) `seq` ()
\x
\y
y
2
x
()
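
In short (my own summary, not from the linked article): seq only evaluates to
WHNF, i.e. up to the outermost constructor or lambda, while HNF additionally
requires that the body under the lambdas has no redex at its head. Roughly:

-- WHNF but not HNF: the outermost form is a lambda, but its body still
-- has a redex at the head, which seq will never reduce.
whnfOnly :: Int -> Int
whnfOnly = \x -> (\y -> y + 1) 2

-- HNF (and hence WHNF): the head of the body is just a variable applied
-- to arguments, so there is nothing left to reduce at the head.
hnfToo :: Int -> Int
hnfToo = \x -> x + 1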

--
Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] about import alias

2009-04-04 Thread Marcin Kosiba
On Saturday 04 April 2009, Manlio Perillo wrote:
 Hi.

 Haskell 98 allows import alias:
  import qualified VeryLongModuleName as C

 however it does not allow aliasing for imported names
  import NormalModule (veryLongFunctionName as C)


 what is the rationale?
 IMHO this can be very useful in some cases.

While I do agree with the argument that it could be useful in some cases,
misuse of this feature would cause a lot of confusion. Consider:

import System.IO.Unsafe (unsafePerformIO as safe)

I think the alias feature is OK when the namespace is long, but
longfunctionnameswhichareapaintoreadandespeciallywrite are a bug and should
be fixed by the owner of that code, not worked around by the user.

If long function names are that much of a problem to you, consider a wrapper
library.
-- 
Cheers!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: cmonad 0.1.1

2009-03-29 Thread Marcin Kosiba
On Sunday 29 March 2009, Lennart Augustsson wrote:
 I've uploaded my CMonad package to Hackage.  It allows you to write
 Haskell code in a C style.

Now I've heard that Haskell makes a fine (if not the finest) imperative language, but isn't this taking that thought a bit too far? ;)

 Unfortunately, GHC lacks certain optimizations to make efficient code when
 using CMonad, so instead of C speed you get low speed.

I mean, as a proof-of-concept this is cool, but if your primary concern is performance then I think you're barking up the wrong tree with this.

Cheers!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using a monad to decompose a function into functions

2009-03-14 Thread Marcin Kosiba
On Friday 13 March 2009, Cristiano Paris wrote:
 2009/3/13 Marcin Kosiba marcin.kos...@gmail.com:
  Hi,
         I've already checked those out. I tried using your yield
  implementation and while it works, I couldn't get it to work with the
  state monad.
         So while:
  data RecPair a b = Nil | RP (b, a -> RecPair a b)
  yield x = Cont $ \k -> RP (x, k)
 
         got me half-way to my goal, I couldn't figure out how to make
  something like:
 
  yield' = do
   state <- get
   state' <- yield state
   put state'

 Basically, the yield is built upon the Cont monad, which has a
 transformer counterpart, ContT. You could try to re-implement the
 yield under ContT instead of just Cont; then you can stack ContT on top
 of State (or StateT if you need more monads) and have state (i.e.
 get/put) as well as the yield.

Great! That helped a lot. I'm attaching a ContT yield implementation and
another one which also handles a return statement with a different type. Hope
someone finds them useful.

Thanks!
Marcin Kosiba
{-# OPTIONS -XNoMonomorphismRestriction #-}
module Main where

import Control.Monad
import Control.Monad.Cont
import Control.Monad.State
import Control.Monad.Identity

data (Monad m) => RecPair m a b = Nil | RP (b, a -> m (RecPair m a b))

yield :: (Monad m) => r -> ContT (RecPair m a r) m a
yield x = ContT $ \k -> return $ RP (x, k)

f'cps = return 2

test = do
  x <- f'cps
  yield x
  yield (x + 1)
  return ()

testSt :: (MonadState s m, Num s) => ContT (RecPair m a s) m ()
testSt = do
  y <- f'cps
  v <- get
  put (y + 1)
  yield v
  v' <- get
  yield v'
  return ()

getRP :: RecPair Identity a a -> [a]
getRP Nil = []
getRP (RP (b, f)) = b : (getRP $ runIdentity $ f b)

runYield :: ContT (RecPair m a1 b) Identity a -> RecPair m a1 b
runYield f = runIdentity $ runContT f (\_ -> return Nil)

--result is [2,3]
runTest = getRP $ runYield test

getRPSt :: (RecPair (State t) a a, t) -> [a]
getRPSt (Nil, _) = []
getRPSt (RP (b, f), s) = b : (getRPSt $ runState (f b) s)

runYieldSt :: (Num s) => s -> (RecPair (State s) a s, s)
runYieldSt iState = runState (runContT testSt (\_ -> return Nil)) iState

--result is [iState, 3]
runTestSt iState = getRPSt $ runYieldSt iState


{-# OPTIONS -XNoMonomorphismRestriction #-}
module Main where

import Control.Monad
import Control.Monad.Cont
import Control.Monad.State
import Control.Monad.Identity

data (Monad m) => RecPair m a b r = Nil r | RP (b, a -> m (RecPair m a b r))

yield :: (Monad m) => r -> ContT (RecPair m a r v) m a
yield x = ContT $ \k -> return $ RP (x, k)

f'cps = return 2

test = do
  x <- f'cps
  yield x
  yield (x + 1)
  return (-1)

testSt = do
  y <- f'cps
  v <- get
  put (y + v)
  yield v
  testSt

getRP :: RecPair Identity a a a -> [a]
getRP (Nil x) = [x]
getRP (RP (b, f)) = b : (getRP $ runIdentity $ f b)

runYield :: ContT (RecPair m a1 b a) Identity a -> RecPair m a1 b a
runYield f = runIdentity $ runContT f (\x -> return $ Nil x)

--result is [2,3, -1]
runTest = getRP $ runYield test

getRPSt :: (RecPair (State t) v v v, t) -> [v]
getRPSt (Nil x, _) = [x]
getRPSt (RP (b, f), s) = b : (getRPSt $ runState (f b) s)

runYieldSt :: ContT (RecPair m a1 b a) (State s) a
  -> s
  -> (RecPair m a1 b a, s)
runYieldSt f iState = runState (runContT f (\x -> return $ Nil x)) iState

--result is [iState, iState + 2..]
runTestSt iState = getRPSt $ runYieldSt testSt iState


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof of induction case of simple foldl identity.

2009-03-14 Thread Marcin Kosiba
On Saturday 14 March 2009, Daniel Fischer wrote:
 Am Samstag, 14. März 2009 21:45 schrieb R J:
  Can someone provide the induction-case proof of the following identity:
 
 foldl (-) ((-) x y) ys = (foldl (-) x ys) - y
 
  If foldl is defined as usual:
 
 foldl  :: (b -> a -> b) -> b -> [a] -> b
 foldl f e []   =  e

 (R)  foldl f e (x : xs) =  foldl f (f e x) xs

  The first two cases, _|_ and [], are trivial.
 
  Here's my best attempt at the (y : ys) case (the left and right sides
  reduce to expressions that are obviously equal, but I don't know how to
  show it):
 
 Case (y : ys).
 
Left-side reduction:
 
   foldl (-) ((-) x y) (y : ys)
=  {second equation of foldl}
   foldl (-) ((-) ((-) x y) y) ys
=  {notation}
   foldl (-) ((-) (x - y) y) ys
=  {notation}
   foldl (-) ((x - y) - y) ys
=  {arithmetic}
   foldl (-) (x - 2 * y) ys
 
Right-side reduction:
 
   (foldl (-) x (y : ys)) - y
=  {second equation of foldl}
   (foldl (-) ((-) x y) ys) - y
 =  {induction hypothesis: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y}
    ((foldl (-) x ys) - y) - y
=  {arithmetic}
   (foldl (-) x ys) - 2 * y
 
  Thanks as always.

 Consider a one-element list.
 foldl (-) (x-y) [z] = (x-y)-z
 (foldl (-) x [z]) - y = (x-z)-y

 So a necessary condition for the identity to generally hold is that the Num
 instance obeys the law

 (L)   forall u v w. (u - v) - w = (u - v) - w

Typo? :)

(L)   forall u v w. (u - v) - w = (u - w) - v

 Then take as inductive hypothesis that zs is a list such that
 forall a b. foldl (-) (a-b) zs = (foldl (-) a zs) - b

 Let z be an arbitrary value of appropriate type, x and y too.
 Then

 foldl (-) (x - y) (z:zs)
= foldl (-) ((x-y)-z) zs   (R)
= (foldl (-) (x-y) zs) - z (IH, a = x-y, b = z)
= ((foldl (-) x zs) - y) - z   (IH, a = x, b = y)
 = ((foldl (-) x zs) - z) - y   (L, u = foldl (-) x zs, v = y, w = z)
= (foldl (-) (x-z) zs) - y (IH, a = x, b = z)
= (foldl (-) x (z:zs)) - y (R)

 The trick is to formulate the inductive hypothesis with enough generality,
 so you have strong foundations to build on.


-- 
Regards,
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using a monad to decompose a function into functions

2009-03-13 Thread Marcin Kosiba
On Friday 13 March 2009, you wrote:
 2009/3/13 Marcin Kosiba marcin.kos...@gmail.com:
  On Thursday 12 March 2009, you wrote:
  2009/3/12 Marcin Kosiba marcin.kos...@gmail.com:
   Hi,
          I'm doing a bit of research into mobility models and I'm
   currently exploring implementation language choices for the simulator
   (*snip*)
   The simulation algorithm requires expressing
   the node's mobility so that it is stateless. The mobility model
   algorithm's type should be something like:
   mobility_model :: WorldState -> NodeState -> OtherInput -> (Action,
   NodeState)
  
   where Action can alter WorldState and the second NodeState is an
   altered input NodeState. I perform a form of speculative execution on
   mobility_model so sometimes I need to backtrack to a previous world
   and node state. This is all fairly simple stuff, and was just an
   introduction. What I do now is store an enum in NodeState and
   implement mobility_model as one big case statement. Yes, this is very
   imperative of me, I know. What I'd like to do is to express
   mobility_model, so that the code would look like:
  
   mobility_model world node input = do
      do_calculus
      emit_action
      if something
        then emit_action
        else emit_action
      do_calculus
      emit_action
      mobility_model world node input
 
  Hi,
 
  It seems you can use
  http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-
 Mon ad-State-Lazy.html Just have a look at the exemple :
 
   tick :: State Int Int
   tick = do n <- get
             put (n+1)
             return n
 
  your code would become something like
  mobility_model :: OtherInput -> State (WorldState,NodeState) ()
  mobility_model input = do
    world <- gets fst
    node <- gets snd
    
    let (world',node') = ...
    put (world',node')
 
  OK, that solves part of the problem. What this doesn't solve is that
  somewhere between these lines (which correspond to emit_action in my
  example)
 
    let (world',node') = ...
    put (world',node')
 
  I need to return a value -- an Action and a NodeState -- to the simulation
  algorithm. And then, after the simulation algorithm calculates a new
  WorldState, it will want to resume the mobility_model from where it left
  off, but with a new WorldState.
 
  I hope I'm clear about what I wish to achieve: each emit_action should
  return a value (Action, NodeState) and maybe a function
  mobility_model_cont, which I could then call with the new WorldState to
  continue from where emit_action returned.

 I'm not entirely sure ... but I think it doesn't matter that much :)
 Here is why.

 This was just an exemple :
  mobility_model :: OtherInput -> State (WorldState,NodeState) ()

 You could also have
  mobility_model :: OtherInput -> NodeState -> State WorldState
 (NodeState,Action)
 or whatever.

 In fact, the State monad makes it easy to thread (in this context, it
 means 'pass around') an argument to many functions, providing a nice
 syntax reminiscent of imperative languages. But it leaves you completely
 free as to what is passed around. It depends on what you want to be
 explicitly passed by argument, and what you want to pass in the state
 of the monad (that is, what you want to appear, inside the monad only,
 as some global variable).

 So in your code, if you often need to pass a WorldState to a function
 which should return a modified WorldState, it makes sense to put
 WorldState inside the state monad. But, maybe, if there are just a few
 functions which act on NodeState, it doesn't have to be part of the state
 carried by the state monad.

 I'm not entirely sure what the problem is for you: is it the use of
 the State monad, or something else?
 If it can help you to formulate your question you can post some code
 (or paste it to http://hpaste.org/)...

Threading the state is not the problem. Maybe this will help:
what I have now:

fsm world state = case state of
  first ->
    do_stuff_one
    (move_up, succ state)
  second ->
    do_stuff_two
    (move_left, succ state)
  third ->
    do_stuff_three
    (move_right, first)

what I'd want to have is to say:
fsm world state = do
  do_stuff_one
  yield move_up
  do_stuff_two
  yield move_left
  do_stuff_three
  yield move_right
  fsm world state

and have it translated to:

fsm world state =
  do_stuff_one
  (move_up, \world' state' ->
    do_stuff_two
      (move_left, \world'' state'' ->
         do_stuff_three
           (move_right, fsm world'' state'')))

Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using a monad to decompose a function into functions

2009-03-13 Thread Marcin Kosiba
On Friday 13 March 2009, Cristiano Paris wrote:
 2009/3/13 Marcin Kosiba marcin.kos...@gmail.com:
  ...
  Threading the state is not the problem. Maybe this will help:
  what I have now:
 
  fsm world state = case state of
    first ->
      do_stuff_one
      (move_up, succ state)
    second ->
      do_stuff_two
      (move_left, succ state)
    third ->
      do_stuff_three
      (move_right, first)
 
  what I'd want to have is to say:
  fsm world state = do
   do_stuff_one
   yield move_up
   do_stuff_two
   yield move_left
   do_stuff_three
   yield move_right
   fsm world state
 
  and have it translated to:
 
  fsm world state =
    do_stuff_one
    (move_up, \world' state' ->
      do_stuff_two
        (move_left, \world'' state'' ->
           do_stuff_three
             (move_right, fsm world'' state'')))

 Hi,

 I've not fully understood your exact problem, but I think you might
 want to have a look at Continuations and Delimited Continuations.

 Both can help you solve the problem of implementing a yield
 statement. You can have a look at one of my (rather) old blog posts
 about how to implement yield/send statements a la Python:

 http://monadicheadaches.blogspot.com/2008/01/python-25s-iterators-in-haskell-sort-of.html

 Notice that Blogspot messed up the code blocks, so the indentation looks
 bad and some characters are even missing.

Hi,
I've already checked those out. I tried using your yield implementation and
while it works, I couldn't get it to work with the state monad.
So while:
data RecPair a b = Nil | RP (b, a -> RecPair a b)
yield x = Cont $ \k -> RP (x, k)

got me half-way to my goal, I couldn't figure out how to make something like:

yield' = do
  state <- get
  state' <- yield state
  put state'

Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Using a monad to decompose a function into functions

2009-03-12 Thread Marcin Kosiba
Hi,
I'm doing a bit of research into mobility models and I'm currently exploring
implementation language choices for the simulator (yes, sadly it needs to be
a custom one).
I've been using Haskell here and there for some small tasks, and thought I
should consider it as an implementation language for the simulator.
While I already have a working implementation in Haskell, there is one thing
that I would like to express in a more elegant way, but just can't figure
out. The simulation algorithm requires expressing the node's mobility so that
it is stateless. The mobility model algorithm's type should be something
like:
mobility_model :: WorldState -> NodeState -> OtherInput -> (Action, NodeState)

where Action can alter WorldState and the second NodeState is an altered
input NodeState. I perform a form of speculative execution on mobility_model,
so sometimes I need to backtrack to a previous world and node state.
This is all fairly simple stuff, and was just an introduction. What I do now
is store an enum in NodeState and implement mobility_model as one big case
statement. Yes, this is very imperative of me, I know. What I'd like to do is
to express mobility_model so that the code would look like:

mobility_model world node input = do
do_calculus
emit_action
if something
  then emit_action
  else emit_action
do_calculus
emit_action
mobility_model world node input

but I'd like to be able to alter world and node state before continuing from emit_action.

I've tried to get this working by using the idea from
http://www.haskell.org/pipermail/haskell/2005-April/015684.html but couldn't
get the state-altering behavior I was looking for. I've also taken a look at
http://monadicheadaches.blogspot.com/2008/01/python-25s-iterators-in-haskell-sort-of.html,
the unified concurrency model and Control.Coroutine, but couldn't get the
behavior I was going for.

Thanks!
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using a monad to decompose a function into functions

2009-03-12 Thread Marcin Kosiba
On Thursday 12 March 2009, you wrote:
 2009/3/12 Marcin Kosiba marcin.kos...@gmail.com:
  Hi,
         I'm doing a bit of research into mobility models and I'm currently
  exploring implementation language choices for the simulator 
  (*snip*) 
  The simulation algorithm requires expressing 
  the node's mobility so that it is stateless. The mobility model
  algorithm's type should be something like:
   mobility_model :: WorldState -> NodeState -> OtherInput -> (Action,
   NodeState)
 
  where Action can alter WorldState and the second NodeState is an altered
  input NodeState. I perform a form of speculative execution on
  mobility_model so sometimes I need to backtrack to a previous world and
  node state. This is all fairly simple stuff, and was just an
  introduction. What I do now is store an enum in NodeState and implement
  mobility_model as one big case statement. Yes, this is very imperative of
  me, I know. What I'd like to do is to express mobility_model, so that the
  code would look like:
 
  mobility_model world node input = do
     do_calculus
     emit_action
     if something
       then emit_action
       else emit_action
     do_calculus
     emit_action
     mobility_model world node input

 Hi,

 It seems you can use
 http://hackage.haskell.org/packages/archive/mtl/latest/doc/html/Control-Mon
ad-State-Lazy.html Just have a look at the exemple :

  tick :: State Int Int
   tick = do n <- get
put (n+1)
return n

 your code would become something like
 mobility_model :: OtherInput -> State (WorldState,NodeState) ()
 mobility_model input = do
   world <- gets fst
   node <- gets snd
   
   let (world',node') = ...
   put (world',node')

OK, that solves part of the problem. What this doesn't solve is that
somewhere between these lines (which correspond to emit_action in my example)
   let (world',node') = ...
   put (world',node')
I need to return a value -- an Action and a NodeState -- to the simulation
algorithm. And then, after the simulation algorithm calculates a new
WorldState, it will want to resume the mobility_model from where it left off,
but with a new WorldState.

I hope I'm clear about what I wish to achieve: each emit_action should return
a value (Action, NodeState) and maybe a function mobility_model_cont, which I
could then call with the new WorldState to continue from where emit_action
returned.

Thanks,
Marcin Kosiba


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe