[Haskell-cafe] Stream instance for Parsec + conduits

2013-05-09 Thread Phil Scott
Hi all.

I would like to have a Parsec Stream instance for Data.Text streams in yesod's 
ConduitM. So far, I have this:

hpaste.org/87599

The idea is that because Yesod's conduits will be chunking values in Data.Text, 
I should have a wrapper StreamSource to cache chunked values.
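As the paste may not survive, here is a minimal sketch of the chunk-caching idea - written over a plain list of Data.Text chunks rather than the ConduitM wrapper from the paste, so the instance below is illustrative only:

{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}
import Text.Parsec (Stream (..))
import qualified Data.Text as T

-- Treat a list of Text chunks as one character stream: consume the cached
-- head chunk first, dropping exhausted chunks as we go.
instance Monad m => Stream [T.Text] m Char where
  uncons []       = return Nothing
  uncons (t : ts) = case T.uncons t of
    Nothing      -> uncons ts
    Just (c, t') -> return (Just (c, t' : ts))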

For some reason, the test parser fails, saying:

[Left "Blah" (line 1, column 1):
unexpected "g"
expecting "h" or "g"]

Any ideas?

Cheers!

Phil




[Haskell-cafe] Announce: LA Haskell User Group February Meeting

2013-02-07 Thread Phil Freeman
Dear Haskellers,

There will be a meeting of the LA Haskell User Group on Wednesday February
13th at 7pm. The details, including a list of discussion topics, as they
become available, can be found here:
http://www.meetup.com/Los-Angeles-Haskell-User-Group/events/102199892/

Thanks,

Phil Freeman.


[Haskell-cafe] First Los Angeles Haskell User Group Meetup

2013-01-10 Thread Phil Freeman
Hello Haskell Cafe,

I'd like to announce that I've created a meetup group for any Haskell users
in the LA area. The first meeting will be a meet-and-greet session,
held next Tuesday, 15th January 2013, at Wurstkuche in the Downtown LA Arts
District. After that, the goal is to organise presentations and discussions
on Haskell-related topics roughly once a month.

Interested parties can register for the group and RSVP here:
http://www.meetup.com/Los-Angeles-Haskell-User-Group/

Thanks,

Phil Freeman.


Re: [Haskell-cafe] HTTP package freezes on Windows 7

2010-03-16 Thread Phil

On 16/03/2010 01:05, Phil wrote:
Scrap my original query - the problem isn't as black and white as I thought.


The below works fine - I've changed the response type from "json" to "xml"... 
strange, but for some reason downloading json doesn't work, though 
it's fine on Linux.


I'm guessing this is more likely to be a Windows issue rather than a 
Haskell issue - any ideas?


A bit more testing - this is not a Windows issue per se.  It seems to be 
a limitation of the HTTP library running on Windows.


I've run what I consider to be identical commands (in terms of 
functional use, both perform an HTTP GET on the location given) in 
Haskell and Python from each of their consoles on the same computer.  
The Python one correctly returns exactly what the Haskell one does on 
Linux.  The Haskell on Windows just hangs.


However as mentioned earlier SOME http requests do work from Haskell so 
I don't think it's a problem with my build of HTTP or Network libs.  The 
simplest example is to replace 'json' with 'xml' in the below query.  
The best guess I can make is that XML is deemed renderable, but for some 
reason JSON is considered to be binary/file data?
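A minimal way to check the json/xml difference side by side, assuming the same HTTP-4000-style API used elsewhere in this thread (testBoth is a hypothetical helper, not code from the original message):

import Network.HTTP (getRequest, getResponseBody, simpleHTTP)

-- On the affected Windows machine the xml request should return while the
-- json request hangs; on Linux both return.
testBoth :: IO ()
testBoth = do
    let url fmt = "http://maps.google.com/maps/api/geocode/"
                  ++ fmt ++ "?address=London&sensor=false"
    xml  <- simpleHTTP (getRequest (url "xml"))  >>= getResponseBody
    putStrLn (take 80 xml)
    json <- simpleHTTP (getRequest (url "json")) >>= getResponseBody
    putStrLn (take 80 json)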


Can anyone confirm this behaviour?  I only have one Windows PC, so I 
can't test on another machine.  If it is a widespread problem, I reckon it 
warrants a bug ticket on the library.


Logs from console below,

Phil

*Haskell / GHCi:*
Prelude Network.HTTP> do x <- (simpleHTTP $ getRequest "http://maps.google.com/maps/api/geocode/json?address=London&sensor=false") >>= getResponseBody; print x

Loading package bytestring-0.9.1.5 ... linking ... done.
Loading package Win32-2.2.0.1 ... linking ... done.
Loading package array-0.3.0.0 ... linking ... done.
Loading package syb-0.1.0.2 ... linking ... done.
Loading package base-3.0.3.2 ... linking ... done.
Loading package mtl-1.1.0.2 ... linking ... done.
Loading package parsec-2.1.0.1 ... linking ... done.
Loading package network-2.2.1.7 ... linking ... done.
Loading package old-locale-1.0.0.2 ... linking ... done.
Loading package old-time-1.0.0.3 ... linking ... done.
Loading package HTTP-4000.0.9 ... linking ... done.

--- Just sits here chewing up processor time and memory.

*Python:*
>>> print urllib.urlopen("http://maps.google.com/maps/api/geocode/json?address=London&sensor=false").read()

{
  "status": "OK",
  "results": [ {
    "types": [ "locality", "political" ],
    "formatted_address": "Westminster, London, UK",
    "address_components": [ {
      "long_name": "London",
      "short_name": "London",
      "types": [ "locality", "political" ]
    }, {
      "long_name": "Westminster",
      "short_name": "Westminster",
      "types": [ "administrative_area_level_3", "political" ]
    }, {
      "long_name": "Greater London",
      "short_name": "Gt Lon",
      "types": [ "administrative_area_level_2", "political" ]
    }, {
      "long_name": "England",
      "short_name": "England",
      "types": [ "administrative_area_level_1", "political" ]
    }, {
      "long_name": "United Kingdom",
      "short_name": "GB",
      "types": [ "country", "political" ]
    } ],
    "geometry": {
      "location": {
        "lat": 51.5001524,
        "lng": -0.1262362
      },
      "location_type": "APPROXIMATE",
      "viewport": {
        "southwest": {
          "lat": 51.4862583,
          "lng": -0.1582510
        },
        "northeast": {
          "lat": 51.5140423,
          "lng": -0.0942214
        }
      },
      "bounds": {
        "southwest": {
          "lat": 51.4837180,
          "lng": -0.1878940
        },
        "northeast": {
          "lat": 51.5164655,
          "lng": -0.1099780
        }
      }
    }
  } ]
}












[Haskell-cafe] HTTP package freezes on Windows 7

2010-03-15 Thread Phil

Hi,

I'm using GHC 6.12.1 on Windows 7.  I've built the latest Network 
package using Haskell's MinGW and installed the HTTP package on top of this.


The code below builds fine, but on execution it just sits there grabbing 
ever-increasing amounts of memory.


It's a simplified call that I've got working fine in Linux.

Is this a known issue?  Anyone else had success using HTTP from Windows?


Thanks,

Phil.


import qualified Network.HTTP as HTTP

main :: IO ()
main
  = do
      x <- HTTP.simpleHTTP (HTTP.getRequest
             "http://maps.google.com/maps/api/geocode/json?address=London&sensor=false")
      print x


re: [Haskell-cafe] HTTP package freezes on Windows 7

2010-03-15 Thread Phil

Scrap my original query - the problem isn't as black and white as I thought.

The below works fine - I've changed the response type from "json" to "xml"... 
strange, but for some reason downloading json doesn't work, though 
it's fine on Linux.


I'm guessing this is more likely to be a Windows issue rather than a 
Haskell issue - any ideas?



import qualified Network.HTTP as HTTP

main :: IO ()
main
   = do
      x <- getLocation
      print x



getLocation = (HTTP.simpleHTTP $ HTTP.getRequest url) >>= HTTP.getResponseBody
    where
      url = "http://maps.google.com/maps/api/geocode/xml?address=London&sensor=false"





Re: [Haskell-cafe] Re: Anyone up for Google SoC 2010?

2010-03-11 Thread phil
Would be great to see GHC on Maemo.  I recently bought an N900 and
googled around to see if it is possible to write Haskell for the
platform.

The short answer is 'not easily'.

There are some old notes on getting previous versions compiling, but
nothing up to date.

I gave up pretty quickly :-(


On Wed, 2010-03-10 at 17:43 -0500, Yakov Zaytsev wrote:
 I've got N900 recently and saw that according to this page
 
 http://hackage.haskell.org/trac/ghc/wiki/Platforms
 
 it's not possible to run GHC and GHCi easily on ARM. This sucks.
 
 I want to propose a project to bring GHC back to life on arm-linux. It is
 supposed that the outcome will be a package for Maemo 5.
 
 Actually, I want to apply for this project as a student. I hope to make
 NCG for ARM working in some sense.
 
 Dear list, what do you think about it?
 
 -- Yakov
 
 




[Haskell-cafe] A few ideas about FRP and arbitrary access in time

2010-03-02 Thread Phil Jones
...Are hereby presented at:
http://www.ee.bgu.ac.il/~noamle/_downloads/gaccum.pdf

Comments are more than welcome.
(P.S Thanks to a whole bunch of people at #haskell for educating me about
this, but most notably Conal Elliott)


Re: [Haskell-cafe] Re: frag game - compilation fixes

2009-10-29 Thread Phil Jones
Ok, thanks - done. I also fixed the gun problem thanks to Henk-Jan van Tuyl.

On Thu, Oct 29, 2009 at 11:52 AM, Malcolm Wallace 
malcolm.wall...@cs.york.ac.uk wrote:

 So here's the resulting package tree. If anyone knows how to turn it into a
 darcs working copy and create a patch from it, please do!


 It's easy (and I recommend you do it yourself).

  * darcs get http://...blah/blah/foo
  * cp -R /my/hacked/copy/of/foo/* foo
  * cd foo
  * darcs record
  * darcs send --help

 That is, just copy your version of the source tree on top of a darcs
 repository of the original source tree, then record the changes.

 Regards,
Malcolm




[Haskell-cafe] Re: frag game - compilation fixes

2009-10-28 Thread Phil Jones
I've hacked through (senselessly) the various compilation errors (I think
they were all related to GLfloat vs. Float, etc.)

Frag now compiles and works, but I think I may have introduced some bugs
(the weapon doesn't appear on the screen?)

Unfortunately, I did the whole job on an unpacked cabal package from
hackage, and also I've never used darcs. So here's the resulting package
tree (excluding the big files). If anyone knows how to turn it into a darcs
working copy and create a patch from it, please do!

(The attachment is available at
http://www.haskell.org/pipermail/haskell-cafe/attachments/20091028/884ff469/frag-1.1.2b.tar-0001.gz
)


Re: [Haskell-cafe] FFI link failing due to no main?

2009-08-27 Thread phil

I'm not really sure, but does the linking step really need to be given
the --make flag? I would try linking without that in the first
instance.


I think so - it's not just a link step, it does the following:

Compile:
CInterface.c -> CInterface.o

Link:
CInterface.o HaskellFuncs_stub.o HaskellFuncs.o -> libCInterface.so


I'll take a look at the full -v output and see if that reveals anything.

Thanks,

Phil.


On 27 Aug 2009, at 04:38, Bernie Pope wrote:


Hi Phil,




Re: [Haskell-cafe] FFI link failing due to no main?

2009-08-26 Thread phil

Thanks for the reply!

I think this might be a Mac OS X issue.  I've stripped my rather  
longwinded example down to the simplest case (one Haskell source file  
to create a single shared library containing a single exported  
function) and this compiles (and ultimately runs) fine on Linux.  So 
I'm either doing something wrong which shouldn't really work on Linux 
(and I'm getting lucky!)... or something screwy is happening on the Mac 
version:


This exports a single function which is then #included in CInterface.c  
to create a new pure-C wrapper to the function.
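For reference, a minimal sketch of the kind of setup being described - the module name matches the message, but the exported function (hsSquare) is only illustrative:

{-# LANGUAGE ForeignFunctionInterface #-}
-- HaskellFuncs.hs (sketch): one exported function.  Compiling it with
-- 'ghc -O2 -c' produces HaskellFuncs_stub.h/.o, which CInterface.c can
-- #include and call (after hs_init) from its pure-C wrapper.
module HaskellFuncs where

hsSquare :: Double -> Double
hsSquare x = x * x

foreign export ccall hsSquare :: Double -> Double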


ghc -O2 -c HaskellFuncs.hs
ghc -O2 -no-hs-main --make -optl '-shared' CInterface.c HaskellFuncs_stub.o HaskellFuncs.o -o libCInterface.so


On Mac OS X I get the following error - but it works fine on Ubuntu.   
I'm using 6.10.4 on both machines:


Linking libCInterface.so ...
Undefined symbols:
  _ZCMain_main_closure, referenced from:
  _ZCMain_main_closure$non_lazy_ptr in libHSrts.a(Main.o)
  ___stginit_ZCMain, referenced from:
  ___stginit_ZCMain$non_lazy_ptr in libHSrts.a(Main.o)
ld: symbol(s) not found

Could anyone comment if I'm doing anything wrong, or is this a case of  
unsupported functionality on (PPC/Leopard) Mac OS X?  Has anyone  
succeeded in getting a similar example to work on Mac OS X?


I notice that on Linux it is still very temperamental; if I play around 
with the arguments even slightly I get the same error there.



Cheers,

Phil.

On 26 Aug 2009, at 06:51, Yusaku Hashimoto wrote:


Missing -c option?

And -v option to see what's going on.

On Wed, Aug 26, 2009 at 10:37 AM, p...@beadling.co.uk wrote:

Hi,

After creating my stub objects etc using GHC, I'm trying to create  
a library
with a C interface to some Haskell functions.  I'm explicitly  
passing in

-no-hs-main yet the linker still fails due to missing main?

I'm sure I've had this working before with a slightly simpler  
example, but

can't work out what is wrong here.

If I give it a main (to humor it - it's not a solution), then it  
links and
produces an executable - so it looks to me like I'm not telling the  
linker

what I want correctly?

Any ideas?

Cheers,

Phil.


ghc -O2 --make -no-hs-main -package mtl -package array -optl '-shared'
FFI/Octave/MyInterface.c FFI/Octave/OptionInterface_stub.o
FFI/Octave/OptionInterface.o ./FrameworkInterface.o ./Maths/Prime.o
./MonteCarlo/DataStructures.o ./MonteCarlo/European.o
./MonteCarlo/Framework.o ./MonteCarlo/Interface.o ./MonteCarlo/Lookback.o
./Normal/Acklam.o ./Normal/BoxMuller.o ./Normal/Framework.o
./Normal/Interface.o ./Random/Framework.o ./Random/Halton.o
./Random/Interface.o ./Random/Ranq1.o -o FFI/Octave/libMyInterface.so

Linking FFI/Octave/libMyInterface.so ...
Undefined symbols:
 ___stginit_ZCMain, referenced from:
 ___stginit_ZCMain$non_lazy_ptr in libHSrts.a(Main.o)
 _ZCMain_main_closure, referenced from:
 _ZCMain_main_closure$non_lazy_ptr in libHSrts.a(Main.o)
ld: symbol(s) not found
collect2: ld returned 1 exit status






[Haskell-cafe] FFI link failing due to no main?

2009-08-25 Thread phil

Hi,

After creating my stub objects etc using GHC, I'm trying to create a  
library with a C interface to some Haskell functions.  I'm explicitly  
passing in -no-hs-main yet the linker still fails due to missing main?


I'm sure I've had this working before with a slightly simpler example,  
but can't work out what is wrong here.


If I give it a main (to humor it - it's not a solution), then it links  
and produces an executable - so it looks to me like I'm not telling  
the linker what I want correctly?


Any ideas?

Cheers,

Phil.


ghc -O2 --make -no-hs-main -package mtl -package array -optl '-shared'
FFI/Octave/MyInterface.c FFI/Octave/OptionInterface_stub.o
FFI/Octave/OptionInterface.o ./FrameworkInterface.o ./Maths/Prime.o
./MonteCarlo/DataStructures.o ./MonteCarlo/European.o ./MonteCarlo/Framework.o
./MonteCarlo/Interface.o ./MonteCarlo/Lookback.o ./Normal/Acklam.o
./Normal/BoxMuller.o ./Normal/Framework.o ./Normal/Interface.o
./Random/Framework.o ./Random/Halton.o ./Random/Interface.o ./Random/Ranq1.o
-o FFI/Octave/libMyInterface.so

Linking FFI/Octave/libMyInterface.so ...
Undefined symbols:
  ___stginit_ZCMain, referenced from:
  ___stginit_ZCMain$non_lazy_ptr in libHSrts.a(Main.o)
  _ZCMain_main_closure, referenced from:
  _ZCMain_main_closure$non_lazy_ptr in libHSrts.a(Main.o)
ld: symbol(s) not found
collect2: ld returned 1 exit status



Re: [Haskell-cafe] Linking failing due to Control.Monad.State.Strict?

2009-08-07 Thread phil

That's the puppy!  Thanks so much for your help!

Phil.

On 7 Aug 2009, at 10:14, Malcolm Wallace wrote:

If I look with '-v' though, it seems to include Haskell libs in the 
underlying link - see below?  Plus it only complains about this 
library, and I use many other standard libs too.  Looks like something 
stranger is going on?


Looks like you need to add -package mtl to the ghc commandline.  If  
you don't use --make, then you need to be explicit about which  
packages to link against.


Regards,
   Malcolm
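For concreteness, Malcolm's suggestion applied to the link command from the original message further down would look something like:

ghc -o OptionCalculator -O2 -Wall -package mtl ./FrameworkInterface.o ./Maths/Prime.o ./Misc/Debug.o ./MonteCarlo/DataStructures.o ./MonteCarlo/European.o ./MonteCarlo/Framework.o ./MonteCarlo/Interface.o ./MonteCarlo/Lookback.o ./Normal/Acklam.o ./Normal/BoxMuller.o ./Normal/Framework.o ./Normal/Interface.o ./OptionCalculator.o ./Random/Framework.o ./Random/Halton.o ./Random/Interface.o ./Random/Ranq1.o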





[Haskell-cafe] Linking failing due to Control.Monad.State.Strict?

2009-08-06 Thread phil

Hi,

I'm getting a linker error when generating makefile dependencies on 
the below library for a program I'm using.  It links fine under 'ghc --make'.  
I'm using (more or less) the makefile from the Haskell docs.  
I am using the Strict State Monad in the object files it is 
complaining about; the compile is fine, it's just the linker which is 
blowing up.  Any ideas what is causing this?


I'm using GHC 6.10.4 on PPC Mac OS X 10.5.

I've included the makefile below the error.

Cheers!

Phil.


ghc -o OptionCalculator -O2 -Wall ./FrameworkInterface.o ./Maths/Prime.o
./Misc/Debug.o ./MonteCarlo/DataStructures.o ./MonteCarlo/European.o
./MonteCarlo/Framework.o ./MonteCarlo/Interface.o ./MonteCarlo/Lookback.o
./Normal/Acklam.o ./Normal/BoxMuller.o ./Normal/Framework.o
./Normal/Interface.o ./OptionCalculator.o ./Random/Framework.o
./Random/Halton.o ./Random/Interface.o ./Random/Ranq1.o

Undefined symbols:
  _mtlzm1zi1zi0zi2_ControlziMonadziStateziStrict_a29_info, referenced from:
      _r1hl_info in Acklam.o
      _rGL_info in BoxMuller.o
  _mtlzm1zi1zi0zi2_ControlziMonadziStateziStrict_a29_closure, referenced from:
      _r1hl_srt in Acklam.o
      _rGL_srt in BoxMuller.o
  ___stginit_mtlzm1zi1zi0zi2_ControlziMonadziStateziStrict_, referenced from:
  ___stginit_FrameworkInterface_ in FrameworkInterface.o
  ___stginit_FrameworkInterface_ in FrameworkInterface.o
  ___stginit_MonteCarloziEuropean_ in European.o
  ___stginit_MonteCarloziEuropean_ in European.o
  ___stginit_MonteCarloziFramework_ in Framework.o
  ___stginit_MonteCarloziFramework_ in Framework.o
  ___stginit_MonteCarloziLookback_ in Lookback.o
  ___stginit_MonteCarloziLookback_ in Lookback.o
  ___stginit_NormalziAcklam_ in Acklam.o
  ___stginit_NormalziAcklam_ in Acklam.o
  ___stginit_NormalziBoxMuller_ in BoxMuller.o
  ___stginit_NormalziBoxMuller_ in BoxMuller.o
  ___stginit_NormalziFramework_ in Framework.o
  ___stginit_NormalziFramework_ in Framework.o
  ___stginit_RandomziFramework_ in Framework.o
  ___stginit_RandomziFramework_ in Framework.o
  ___stginit_RandomziHalton_ in Halton.o
  ___stginit_RandomziHalton_ in Halton.o
  ___stginit_RandomziRanq1_ in Ranq1.o
  ___stginit_RandomziRanq1_ in Ranq1.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [OptionCalculator] Error 1




Makefile:


HC      = ghc
HC_OPTS = -O2 -Wall $(EXTRA_HC_OPTS)

SRCS := $(shell find . -name '*.hs' -print)
OBJS  = $(SRCS:.hs=.o)
PROG  = OptionCalculator

.SUFFIXES : .o .hs .hi .lhs .hc .s

${PROG} : $(OBJS)
	rm -f $@
	$(HC) -o $@ $(HC_OPTS) $(OBJS)

# Standard suffix rules
.o.hi:
	@:

.lhs.o:
	$(HC) -c $< $(HC_OPTS)

.hs.o:
	$(HC) -c $< $(HC_OPTS)

.o-boot.hi-boot:
	@:

.lhs-boot.o-boot:
	$(HC) -c $< $(HC_OPTS)

.hs-boot.o-boot:
	$(HC) -c $< $(HC_OPTS)

clean :
	find . -name '*.hi' -exec rm -f {} \;
	find . -name '*.o' -exec rm -f {} \;
	rm -f ${PROG}

depend :
	ghc -M $(HC_OPTS) $(SRCS)




Re: [Haskell-cafe] Linking failing due to Control.Monad.State.Strict?

2009-08-06 Thread phil

Thanks for the reply.


Otherwise, you'll need to go digging in
your library for the .o files to link to by hand.


If I look with '-v' though, it seems to include Haskell libs in the 
underlying link - see below?  Plus it only complains about this 
library, and I use many other standard libs too.  Looks like something 
stranger is going on?


Also I've tried using --include-pkg-deps (perhaps incorrectly) - it  
doesn't help.



Phil.


rm -f OptionCalculator
ghc -o OptionCalculator -O2 -Wall -v ./FrameworkInterface.o ./Maths/Prime.o
./Misc/Debug.o ./MonteCarlo/DataStructures.o ./MonteCarlo/European.o
./MonteCarlo/Framework.o ./MonteCarlo/Interface.o ./MonteCarlo/Lookback.o
./Normal/Acklam.o ./Normal/BoxMuller.o ./Normal/Framework.o
./Normal/Interface.o ./OptionCalculator.o ./Random/Framework.o
./Random/Halton.o ./Random/Interface.o ./Random/Ranq1.o
Glasgow Haskell Compiler, Version 6.10.4, for Haskell 98, stage 2 booted by GHC version 6.10.3

Using package config file: /usr/local/lib/ghc-6.10.4/./package.conf
hiding package base-3.0.3.1 to avoid conflict with later version base-4.1.0.0

wired-in package ghc-prim mapped to ghc-prim-0.1.0.0
wired-in package integer mapped to integer-0.1.0.1
wired-in package base mapped to base-4.1.0.0
wired-in package rts mapped to rts-1.0
wired-in package haskell98 mapped to haskell98-1.0.1.0
wired-in package syb mapped to syb-0.1.0.1
wired-in package template-haskell mapped to template-haskell-2.3.0.1
wired-in package dph-seq mapped to dph-seq-0.3
wired-in package dph-par mapped to dph-par-0.3
Hsc static flags: -static
*** Linker:
gcc -v -o OptionCalculator FrameworkInterface.o Maths/Prime.o Misc/Debug.o
MonteCarlo/DataStructures.o MonteCarlo/European.o MonteCarlo/Framework.o
MonteCarlo/Interface.o MonteCarlo/Lookback.o Normal/Acklam.o Normal/BoxMuller.o
Normal/Framework.o Normal/Interface.o OptionCalculator.o Random/Framework.o
Random/Halton.o Random/Interface.o Random/Ranq1.o
-L/usr/local/lib/ghc-6.10.4/haskell98-1.0.1.0
-L/usr/local/lib/ghc-6.10.4/random-1.0.0.1
-L/usr/local/lib/ghc-6.10.4/process-1.0.1.1
-L/usr/local/lib/ghc-6.10.4/directory-1.0.0.3
-L/usr/local/lib/ghc-6.10.4/unix-2.3.2.0
-L/usr/local/lib/ghc-6.10.4/old-time-1.0.0.2
-L/usr/local/lib/ghc-6.10.4/old-locale-1.0.0.1
-L/usr/local/lib/ghc-6.10.4/filepath-1.1.0.2
-L/usr/local/lib/ghc-6.10.4/array-0.2.0.0
-L/usr/local/lib/ghc-6.10.4/syb-0.1.0.1
-L/usr/local/lib/ghc-6.10.4/base-4.1.0.0
-L/usr/local/lib/ghc-6.10.4/integer-0.1.0.1
-L/usr/local/lib/ghc-6.10.4/ghc-prim-0.1.0.0
-L/usr/local/lib/ghc-6.10.4
-lHShaskell98-1.0.1.0 -lHSrandom-1.0.0.1 -lHSprocess-1.0.1.1
-lHSdirectory-1.0.0.3 -lHSunix-2.3.2.0 -ldl -lHSold-time-1.0.0.2
-lHSold-locale-1.0.0.1 -lHSfilepath-1.1.0.2 -lHSarray-0.2.0.0 -lHSsyb-0.1.0.1
-lHSbase-4.1.0.0 -lHSinteger-0.1.0.1 -lHSghc-prim-0.1.0.0 -lHSrts -lm -lffi
-lgmp -ldl
-u _ghczmprim_GHCziTypes_Izh_static_info -u _ghczmprim_GHCziTypes_Czh_static_info
-u _ghczmprim_GHCziTypes_Fzh_static_info -u _ghczmprim_GHCziTypes_Dzh_static_info
-u _base_GHCziPtr_Ptr_static_info -u _base_GHCziWord_Wzh_static_info
-u _base_GHCziInt_I8zh_static_info -u _base_GHCziInt_I16zh_static_info
-u _base_GHCziInt_I32zh_static_info -u _base_GHCziInt_I64zh_static_info
-u _base_GHCziWord_W8zh_static_info -u _base_GHCziWord_W16zh_static_info
-u _base_GHCziWord_W32zh_static_info -u _base_GHCziWord_W64zh_static_info
-u _base_GHCziStable_StablePtr_static_info -u _ghczmprim_GHCziTypes_Izh_con_info
-u _ghczmprim_GHCziTypes_Czh_con_info -u _ghczmprim_GHCziTypes_Fzh_con_info
-u _ghczmprim_GHCziTypes_Dzh_con_info -u _base_GHCziPtr_Ptr_con_info
-u _base_GHCziPtr_FunPtr_con_info -u _base_GHCziStable_StablePtr_con_info
-u _ghczmprim_GHCziBool_False_closure -u _ghczmprim_GHCziBool_True_closure
-u _base_GHCziPack_unpackCString_closure -u _base_GHCziIOBase_stackOverflow_closure
-u _base_GHCziIOBase_heapOverflow_closure
-u _base_ControlziExceptionziBase_nonTermination_closure
-u _base_GHCziIOBase_blockedOnDeadMVar_closure
-u _base_GHCziIOBase_blockedIndefinitely_closure
-u _base_ControlziExceptionziBase_nestedAtomically_closure
-u _base_GHCziWeak_runFinalizzerBatch_closure
-u _base_GHCziTopHandler_runIO_closure -u _base_GHCziTopHandler_runNonIO_closure
-u _base_GHCziConc_runHandlers_closure
-u _base_GHCziConc_ensureIOManagerIsRunning_closure
-Wl,-search_paths_first -read_only_relocs warning

Using built-in specs.
Target: powerpc-apple-darwin9
Configured with: /var/tmp/gcc/gcc-5490~1/src/configure --disable-checking
-enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++
--program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0
--with-slibdir=/usr/lib --build=i686-apple-darwin9 --program-prefix=
--host=powerpc-apple-darwin9 --target=powerpc-apple-darwin9

Thread model: posix
gcc version 4.0.1 (Apple Inc. build 5490)
 /usr/libexec/gcc/powerpc-apple-darwin9

Re: [Haskell-cafe] Retrieving inner state from outside the transformer

2009-08-01 Thread phil

Thanks very much for both replies.

I think I get this now.

Simply, my choice of evaluation functions (evalStateT, execStateT and  
execState) ensured that the states are not returned.  It was obvious.


I can get this working, but I have one more question to make sure 
I actually understand this.


Below is a very simple and pointless example I wrote to grasp the  
concept.  This returns ((1,23),21) which is clear to me.


import Control.Monad.State

myOuter :: StateT Int (State Int) Int
myOuter = StateT $ \s -> do p <- get
                            return (s, p+s+1)

main :: IO()
main = do let innerMonad = runStateT myOuter 1
              y = runState innerMonad 21
          print y

Thus we are saying that a=(1,23) and s=21 for the state monad, and 
that a=1 and s=23 for the state transformer.  That is, the return value 
of the state monad is the (a,s) tuple of the transformer, and its own 
state is of course 21.


This got me thinking - the type of the state monad's return value is 
dictated by the evaluation function used on the state transformer - it 
could be a, s, or (a,s) depending on which function is used.  Thus if I 
edit the code to:


do let innerMonad = evalStateT myOuter 1

I get back (1,21) - which is the problem I had - we've lost the  
transformer's state.


Looking at the Haskell docs I get:

evalStateT :: Monad m => StateT s m a -> s -> m a
runStateT :: s -> m (a, s)

So the transformer evaluation functions are returning a State monad 
initialized with either a or (a,s).


Now I know from messing around with this that the initialization is  
the return value, from the constructor:


newtype State s a = State {
    runState :: s -> (a, s)
}

Am I right in assuming that I can read this as:

m (a,s_outer) returned from runStateT is equivalent to calling the  
constructor as (State s_inner) (a,s_outer)


This makes sense because in the definition of myOuter we don't specify  
the return value type of the inner monad:


myOuter :: StateT Int (State Int) Int


The problem is whilst I can see that we've defined the inner monad's  
return value to equal the *type* of the transformer's evaluation  
function, I'm losing the plot trying to see how the *values* returned 
by the transformer are ending up there.  We haven't specified what the  
state monad actually does?


If I look at a very simple example:

simple :: State Int Int
simple = State $ \s -> (s, s+1)

This is blindly obvious: if I call 'runState simple 8', I will get 
back (8,9), because I've specified that the return value is just the 
state.


In the more original example, I can see that the 'return (s,p+s+1)'  
must produce a state monad where a=(1,23), and the state of this monad  
is just hardcoded in the code = 21.


I guess what I'm trying to say is - where is the plumbing that ensures  
that this returned value in the state/transformer stack is just the  
(a,s) of the transformer?



I have a terrible feeling this is a blindly obvious question -  
apologies if it is!



Thanks again!


Phil.



On 31 Jul 2009, at 04:39, Ryan Ingram wrote:


StateT is really simple, so you should be able to figure it out:

 runStateT :: StateT s m a -> s -> m (a,s)
 runState :: State s a -> s -> (a,s)
 
 So if you have
 m :: StateT s1 (StateT s2 (State s3)) a
 
 runStateT m :: s1 -> StateT s2 (State s3) (a,s1)
 
 \s1 s2 s3 -> runState (runStateT (runStateT m s1) s2) s3
     :: s1 -> s2 -> s3 -> (((a,s1), s2), s3)
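A self-contained toy version of the three-layer run Ryan describes, with illustrative names (not the code from the original post): using runState/runStateT instead of the eval/exec variants returns every layer's final state alongside the result.

import Control.Monad.State

-- Outer accumulator, middle counter, inner seed - a shape similar to the
-- stack discussed in this thread.
type Stack = StateT Double (StateT Int (State Int))

step :: Stack ()
step = do
    seed <- lift (lift get)            -- read the innermost state
    lift (modify (+1))                 -- bump the middle state
    modify (+ fromIntegral seed)       -- accumulate in the outer state
    lift (lift (put (seed * 7 + 1)))   -- advance the innermost state

-- ((((), outerFinal), middleFinal), innerFinal)
runAll :: Int -> ((((), Double), Int), Int)
runAll n = runState (runStateT (runStateT (replicateM_ n step) 0) 0) 1

main :: IO ()
main = print (runAll 5)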





[Haskell-cafe] Retrieving inner state from outside the transformer

2009-07-30 Thread Phil
Hi,

I've hit a brick wall trying to work out, what should be (and probably is!)
a simple problem.

I have a StateT stack (1 State monad, 2 StateT transformers) which works
fine and returns the result of the outer monad.  I thought I understood this
fine, but perhaps not.  My understanding is that the result returned by the
inner-most monad is always 'transformed' by the outer monads and thus the
result you get is that computed in the outer transformer.

The problem I have is now I'd like not only to get the final state of the
outer most transformer, but I'd also like to know the final states of the
the inner StateT and the inner State at the end of the computation (so that
at a later point in time I can reinitialize a similar stack and continue
with the same set of states I finished with).

So I figured I could have a separate (parent) State Monad (not part of this
stack) that would store the final state of the sequence below.  I figured it
couldn't be part of this stack, as one computation on the stack does not
lead to one result in the parent State Monad; it is only the end states of
the sequence I care about.

Anyway, currently I just have the stack evaluated as below.  Is there anyway
from outside of the computation that I can interrogate the states of the
inner layers?  The only way I can see to do this is inside the outer monad
itself.  As I'm not using the result I could use 'lift get' and 'lift (lift
get)' to make the outer transformer return the two inner states as its
result.  I could ignore this result for the first (iterations-1) and bind a
final iteration which uses replicateM instead of replicateM_.

This strikes me as pretty horrible tho!

So, in the example below if I want to modify the 'result' function so it
returns not only the outer state, but also the two inner states as a tuple
(Double,Double,Double), is there an easier way of doing this?

result :: (RngClass a, NormalClass b) => a -> b -> MonteCarloUserData -> Double
result initRngState initNormState userData = evalState a initRngState
  where a = evalStateT b initNormState
        b = execStateT (do replicateM_ (iterations userData) (mc userData)) 0


Any advice greatly appreciated!

Thanks,

Phil.


Re: [Haskell-cafe] Ambiguous type variable - help!

2009-07-20 Thread phil


On 19 Jul 2009, at 21:18, Yitzchak Gale wrote:


Hi Phil,






I've concocted a very simple example to illustrate this (below) - but
it doesn't compile because ghc complains that my type is ambiguous  
arising

from my use of 'fromSeq'.


Notice that you have given two completely separate sets
of instructions of what to do depending on whether Int
or Double is selected. You have not given any indication
of how to choose between them, even at runtime. Of course,
the compiler doesn't care that your string constants Int and
Double happen also to be the names of types if unquoted.


I see now.  I'm passing fromSeq a SeqType, but it has no way
of knowing if I want to process it as an Int or a Double.
The only thing which is polymorphic is nextSeq as it must handle
the underlying state of Int and Double.

Your result function handles the general case and the typeclass
instances deal with the specialization depending on the state's type.

The printResult function takes in a SeqType and then parses (for want of 
a better word) out the underlying type of Int or Double.  It then calls 
result against the Int or Double, which in turn will invoke the correct 
version of nextSeq.
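A sketch of that dispatch, reusing the SeqType and SequenceClass definitions from the original post further down (so it is a fragment, not a standalone program):

-- Pattern matching on SeqType picks a concrete state type, so each
-- evalState call is monomorphic and nothing is ambiguous.
printResult :: SeqType -> IO ()
printResult (SeqInt i)    = print (evalState (replicateM 10 nextSeq) i)
printResult (SeqDouble d) = print (evalState (replicateM 10 nextSeq) d)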


Thank you very much for explaining this!


Phil.



import Control.Monad.State -- Why Strict? Haskell is lazy by default.



Ahh, no reason for the Strict - in the large program I'm writing it 
is required because otherwise I end up with almighty thunks.  But here 
it serves no purpose.




[Haskell-cafe] Ambiguous type variable - help!

2009-07-19 Thread phil

Hi,

I'm trying to work out how to handle a choice at runtime which  
determines what instance of a State monad should be used.  The choice  
will dictate the internal state of the monad so different  
implementations are needed for each.  I've concocted a very simple  
example to illustrate this (below) - but it doesn't compile because  
ghc complains that my type is ambiguous arising from my use of  
'fromSeq'.  I can kind-of see what the compiler is complaining about,  
I'm guessing because it is the internals of my type which dictate  
which state Monad to use and it can't know that?


Thinking about it I tried making SeqType an instance of Sequence  
class, but had no luck here.


I understand that haskell is static at compile time, so I'm looking  
for something like a template solution in C++ (rather than a virtual  
function style implementation).  I see there are libraries out there 
which do this, but I was wondering in my simple example if there was a  
way of doing this without writing a load of boilerplate code in main  
(this would get ugly very quickly if you had loads of choices).  If  
this is impossible does anyone have an example / advice of  
implementing simple template style code in Haskell?


Any help or suggestions would be really appreciated.

Many Thanks,

Phil.

This just implements a state Monad which counts up from 1 to 10, using 
either an Int or a Double depending on user choice.  It's pointless of  
course, but illustrates my point.


{-# LANGUAGE TypeSynonymInstances #-}

import Control.Monad.State.Strict

data SeqType = SeqDouble Double | SeqInt Int

class SequenceClass a where
  nextSeq :: State a Int
  fromSeq :: SeqType -> a

instance SequenceClass Int where
  nextSeq = State $ \s -> (s, s+1)
  fromSeq (SeqInt i) = i
  fromSeq _ = 0

instance SequenceClass Double where
  nextSeq = State $ \s -> (truncate s, s+1.0)
  fromSeq (SeqDouble d) = d
  fromSeq _ = 0.0


chooser :: String -> SeqType
chooser inStr | inStr == "Double" = SeqDouble 1.0
              | inStr == "Int"    = SeqInt 1
              | otherwise         = SeqInt 1

main :: IO()
main = do userInput <- getLine
          let result = evalState (do replicateM 10 nextSeq) $ fromSeq $ chooser userInput
          print result


Re: [Haskell-cafe] Catering for similar operations with and without state

2009-07-06 Thread phil
Well, the simplest solution I can think of is below.  The  
OtherNormalStateT doesn't actually have any state at all, but still  
gets state from the StateT 'below' it and returns a result.


This is still a bit ugly, but it compiles - and although I haven't  
tested it properly yet, simply implementing the 'other' helper  
function to do the work should be fine.


It's a question of how smart the compiler is.  Obviously this is 
inefficient in theory, but will the compiler notice that we are passing 
around a 'unit' state and that the s -> (a,s) function doesn't care 
about the input?  I'd expect the overhead from this to be 
fairly small and it does allow me to continue using the same paradigm  
for stateless versions of my normal generator.


I have seen people do similar things when they wish to carry around  
state but have no result, and thus the result is set to ().  I can't 
see why this is any less efficient than that?




type BoxMullerStateT = StateT (Maybe Double)
type BoxMullerRandomStateStack = BoxMullerStateT MyRngState

instance NormalClass BoxMullerRandomStateStack where
  generateNormal = StateT $ \s -> case s of
    Just d  -> return (d, Nothing)
    Nothing -> do qrnBaseList <- nextRand
                  let (norm1,norm2) = boxMuller (head qrnBaseList) (head $ tail qrnBaseList)
                  return (norm1, Just norm2)


-- New stateless StateT below!

type OtherNormalStateT = StateT ()
type OtherRandomStateStack = OtherNormalStateT MyRngState



instance NormalClass OtherRandomStateStack where
  generateNormal = StateT $ \_ -> do rn:rns <- nextRand
                                     return ( other rn, () )
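An alternative, closer to Jason's quoted suggestion further down: keep the same stack shape but simply 'return' the pure computation ('other' and nextRand are the helpers from this thread, so this is a fragment rather than standalone code):

instance NormalClass OtherRandomStateStack where
  generateNormal = do rns <- lift nextRand        -- still draw from the RNG layer below
                      return (other (head rns))   -- pure conversion, no local state touched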


On 17 Jun 2009, at 07:38, Jason Dagit wrote:


Hi Phil,

On Mon, Jun 15, 2009 at 5:23 PM, Phil p...@beadling.co.uk wrote:
Hi,

I'm trying to think around a problem which is causing me some  
difficulty in Haskell.


I'm representing a stateful computation using a State Transform -  
which works fine.  Problem is in order to add flexibility to my  
program I want to performs the



snip

g my own Monad from scratch but this crossed my mind as another  
possibillity - i.e. a Monad that either has a state of maybe double,  
or has no state at all?


I have a feeling I'd just 'return' the pure computations into the  
state monad.  My example code above seems weird and heavy weight to  
me.


I'd love to see what you figure you.

Jason





[Haskell-cafe] Catering for similar operations with and without state

2009-06-15 Thread Phil
Hi,

I'm trying to think around a problem which is causing me some difficulty in
Haskell.

I'm representing a stateful computation using a State Transform - which
works fine.  Problem is in order to add flexibility to my program I want to
performs the same operation using different techniques - most of which
require no state.

My program at the moment is a stack of state monads/transforms.  I have a
random number generator as a state monad (seed=state), feeding a Box Muller
(normal) generator implemented as a state transform (state is 'maybe'
normal, so it only spits out 1 normal at a time), which in turn feeds
another state machine.

This all works fine, but I want to add flexibility so that I can chop and
change the Box Muller algorithm with any number of other normal generators.
Problem is most of them do not need to carry around state at all.
This leaves me with a messy solution of implementing lots of state monads
that don't actually have a state, if I want to maintain the current
paradigm.

This strikes me as really messy - so I'm hoping someone can point me in the
direction of a more sensible approach?

Currently I have my Box Muller implemented as below - this works:

class NormalClass myType where
  generateNormal :: myType Double

type BoxMullerStateT = StateT (Maybe Double)
type BoxMullerRandomStateStack = BoxMullerStateT MyRngState

instance NormalClass BoxMullerRandomStateStack where
  generateNormal = StateT $ \s -> case s of
    Just d  -> return (d, Nothing)
    Nothing -> do qrnBaseList <- nextRand
                  let (norm1,norm2) = boxMuller (head qrnBaseList) (head $ tail qrnBaseList)
                  return (norm1, Just norm2)


But say I have another instance of my NormalClass that doesn't need to be
stateful, that is generateNormal() is a pure function.  How can I represent
this without breaking my whole stack?

I've pretty much backed myself into a corner here as my main() code expects
to evalStateT on my NormalClass:

main = do let sumOfPayOffs = evalState normalState (1,[3,5]) -- (ranq1Init 981110)
    where
      mcState = execStateT (do replicateM_ iterations mc) 0
      normalState = evalStateT mcState Nothing

If it wasn't for this I was thinking about implementing the IdentityT
transformer to provide a more elegant pass-through.
I've never tried designing my own Monad from scratch but this crossed my
mind as another possibility - i.e. a Monad that either has a state of maybe
double, or has no state at all?
I may be talking rubbish here of course :-) I'm pretty daunted by where to
even start - but I want to improve this as the quick and dirty solution of
implementing a load of state monads with no state just to cater for the
above method strikes me as very ugly,  and as I can easily see ways of doing
this in C or C++, I figure there must be a better approach in Haskell - I'm
just not thinking in the right way!

Any advice or hints would be great,

Cheers,

Phil.


Re: [Haskell-cafe] Stacking State on State.....

2009-03-03 Thread Phil
I've had a look at your example - it's raised yet more questions in my mind!


On 02/03/2009 23:36, Daniel Fischer daniel.is.fisc...@web.de wrote:


 A stupid example:
 --
 module UhOh where
 
 import Control.Monad
 import Control.Monad.State.Lazy
 --import Control.Monad.State.Strict
 
 
 uhOh :: State s ()
 uhOh = State $ \_ -> undefined
 
 uhOhT :: Monad m => StateT s m ()
 uhOhT = StateT $ \_ -> return undefined
 
 uhOhT2 :: Monad m => StateT s m ()
 uhOhT2 = StateT $ \_ -> undefined
 
 oy :: State s ()
 oy = State $ \_ -> ((), undefined)
 
 oyT :: Monad m => StateT s m ()
 oyT = StateT $ \_ -> return ((), undefined)
 
 hum :: State Int Int
 hum = do
     k <- get
     w <- uhOh
     put (k+2)
     return w
     return (k+1)
 
 humT :: Monad m => StateT Int m Int
 humT = do
     k <- get
     w <- uhOhT
     put (k+2)
     return w
     return (k+1)
 
 
 humT2 :: Monad m => StateT Int m Int
 humT2 = do
     k <- get
     w <- uhOhT2
     put (k+2)
     return w
     return (k+1)
 
 
 whoa n = runState (replicateM_ n hum >> hum) 1
 
 whoaT n = runStateT (replicateM_ n humT >> humT) 1
 
 whoaT2 n = runStateT (replicateM_ n humT2 >> humT2) 1
 
 yum :: State Int Int
 yum = do
     k <- get
     w <- oy
     put (k+2)
     return w
     return (k+1)
 
 yumT :: Monad m => StateT Int m Int
 yumT = do
     k <- get
     w <- oyT
     put (k+2)
     return w
     return (k+1)
 
 hoha n = runState (replicateM_ n yum >> yum) 1
 
 hohaT n = runStateT (replicateM_ n yumT >> yumT) 1
 
 oops m = runState m 1
 --
 
 What happens with
 
 whoa 10
 hoha 10
 oops (whoaT 10)
 oops (whoaT2 10)
 oops (hohaT 10)
 
 respectively when the Lazy or Strict library is imported?
 Answer first, then test whether you were right.

OK, I had a think about this - I'm not 100% clear but:

UhOh - OK for lazy, Bad for Strict.  undefined 'could' be of the form
(a,s) so the lazy accepts it, but the strict version tries to produce (a,s)
out of undefined and fails.

Oy - Both are OK here.  The pair form is retained and neither will go as far
as to analyse the contents of either element of the pair, as neither is
used.

UhOhT - OK for lazy, Bad for Strict.  Same as UhOh, but as we have a
transformer we return inside a Monad.

UhOhT2 - Bad for both - transformers should return a Monad.

OyT - Same as Oy, but returned inside a monad.


The thing which confuses me is why we care about these functions at all - hum,
yum, etc.  Although these inspect the State Monads above they stick the
values into 'w' which is never used (I think), because the first return
statement just produces M w which is not returned because of the return
(k+1) afterwards??

Because lazy and strict are only separated by the laziness on the bind
between contiguous hum and yum states, I would have thought that laziness on
w would have been the same on both.

Hmmm. But I suppose each call to hum and yum is incrementing state in its
corresponding UhOh and Oy function, thus causing these to be strictly
evaluated one level deeper... in which case I do understand.

We have:

hum >> hum >> hum >> ...

And at each stage we are also doing UhOh >> UhOh >> UhOh inside the hums?

Is this right, I'm not so sure?  I'm in danger of going a bit cross-eyed
here!


 
 This means that each new (value,state) is just passed around as a thunk and
 not even evaluated to the point where a pair is constructed - it's just a
 blob, and could be anything as far as haskell is concerned.
 
 Not quite anything, it must have the correct type, but whether it's
 _|_, (_|_,_|_), (a,_|_), (_|_,s) or (a,s) (where a and s denote non-_|_
 elements of the respective types), the (>>=) doesn't care. Whether any
 evaluation occurs is up to (>>=)'s arguments.
 

By correct type you mean that it must *feasibly* be a pair... But the lazy
pattern matching doesn't verify that it *is* a pair.  Thus if we returned
something that could never be a pair, it will fail to compile, but if it is
of the form X or (X,X) it won't check any further than that, but if it was
say [X] that wouldn't work even for lazy - haskell doesn't trust us that
much!?

 It follows that each new state cannot be evaluated even if we make newStockSum
 strict (by adding a bang), because the state tuple newStockSum is wrapped
 in is completely unevaluated - so even if newStockSum is evaluated INSIDE
 this blob, haskell will still keep the whole chain.
 
 Well, even with the bang, newStockSum will only be evaluated if somebody looks
 at what mc delivers. In the Strict case, (>>=) does, so newStockSum is
 evaluated at each step.

When you say 'looks' at it, do you mean it is the final print statement on the
result that ultimately causes the newStockSum to be evaluated in the lazy
version?  Thus we are saying we evaluate it only because we know it is
needed.  
However in the strict case, the fact that newStockSum is used to evaluate
the NEXT newStockSum in the subsequent state 

Re: [Haskell-cafe] Stacking State on State.....

2009-03-02 Thread Phil
Thanks again - one quick question about lazy pattern matching below!


On 01/03/2009 23:56, Daniel Fischer daniel.is.fisc...@web.de wrote:


 
 No, it's not that strict. If it were, we wouldn't need the bang on newStockSum
 (but lots of applications needing some laziness would break).
 
 The Monad instance in Control.Monad.State.Strict is
 
 instance (Monad m) => Monad (StateT s m) where
     return a = StateT $ \s -> return (a, s)
     m >>= k  = StateT $ \s -> do
         (a, s') <- runStateT m s
         runStateT (k a) s'
     fail str = StateT $ \_ -> fail str
 
 (In the lazy instance, the second line of the >>= implementation is
 ~(a, s') <- runStateT m s)
 
 The state will only be evaluated if runStateT m resp. runStateT (k a)
 require it. However, it is truly separated from the return value a, which is
 not the case in the lazy implementation.
 The state is an expression of past states in both implementations, the
 expression is just much more complicated for the lazy.
 

I think I get this - so what the lazy monad is doing is delaying the
evaluation of the *pattern* (a,s') until it is absolutely required.
This means that each new (value,state) is just passed around as a thunk and
not even evaluated to the point where a pair is constructed - it's just a
blob, and could be anything as far as haskell is concerned.
It follows that each new state cannot be evaluated even if we make newStockSum
strict (by adding a bang), because the state tuple newStockSum is wrapped
in is completely unevaluated - so even if newStockSum is evaluated INSIDE
this blob, haskell will still keep the whole chain.
Only when we actually print the result is each state required and then each
pair is constructed and incremented as described by my transformer.  This
means that every tuple is held as a blob in memory right until the end of
the full simulation.
Now with the strict version, each time a new state tuple is created it is checked
that the result of running the state is at least of the form (thunk,thunk).
We won't actually see much improvement just doing this because even though
we're constructing pairs on-the-fly we are still treating each state in a
lazy fashion.  Thus right at the end we still have huge memory bloat, and
although we will not do all our pair construction in one go we will still
evaluate each state only after ALL states have been created - the performance
improvement is therefore marginal, and I'd expect memory usage to be more or
less the same, as (thunk,thunk) and a thunk must take up about the same memory.

So, we stick a bang on the state.  This forces each state to be evaluated at
simulation time.  This allows the garbage collector to throw away previous
states, as the present state is no longer a composite of previous states AND
each state has been constructed inside its pair - giving it normal form.

Assuming that is correct, I think I've cracked it.

One last question: if we bang a variable, i.e. !x = blah blah, can we assume
that x will then ALWAYS be in normal form, or does it only evaluate to a
given depth, giving us a stricter WHNF variable but not necessarily one that
is fully evaluated?
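For what it's worth, a tiny illustration of the distinction being asked about: a bang pattern (like seq) forces only to weak head normal form - the outermost constructor - not all the way to normal form.

{-# LANGUAGE BangPatterns #-}
main :: IO ()
main = do
    let !p = (undefined :: Int, 2 + 2 :: Int)  -- fine: only the (,) constructor is forced
    print (snd p)                              -- prints 4; the first component is never demanded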






Re: [Haskell-cafe] Stacking State on State.....

2009-03-01 Thread Phil
Hi,

Thanks for the replies - I haven't had a chance to try out everything
suggested yet - but your explanations of transformers nailed it for me.

However, in terms of performance when stacking, I've come across something
I'm struggling to explain - I was wondering if anyone could offer up an
explanation.
I've rewritten my code twice - one with 3 stacked monads, and one with 2
stacked monads and a load of maps.  Heuristically I would have thought the 3
stacked monads would have performed as well, or even better than the 2
stacked solution, but the 2 stacked solution is MUCH faster and MUCH less
memory is used.  They are both using 90% of the same code and both chain
together the same number of computations using replicateM.  From profiling I
can see that the pure function 'reflect' takes up most of the umph in both
cases - which I'd expect.  But in the triple stacked version the garbage
collector is using up 90% of the time.

I've tried using BangPatterns to reduce memory usage in the Triple Stack
version - doing this I can halve the time it takes, but it is still running
at over twice the time of the two stack version.  The BangPatterns were also
put in Common Code in the reflect function - so I'd expect both solutions to
need them?

Even though both pieces of code are a bit untidy, the triple stacked monad
'feels' nicer to me - everything is encapsulated away and one evaluation in
main yields the result.  From purely a design perspective I prefer it - but
obviously not if it runs like a dog!

Any ideas why the triple stack runs so slow?

Thanks again!

Phil

* Triple Stack Specific Impl:

type MonteCarloStateT = StateT Double

mc :: MonteCarloStateT BoxMullerQuasiState ()
mc = StateT $ \s -> do nextNormal <- generateNormal
                       let stochastic = 0.2*1*nextNormal
                       let drift = 0.05 - (0.5*(0.2*0.2))*1
                       let newStockSum = payOff 100 ( 100 * exp ( drift + stochastic ) ) + s
                       return ((),newStockSum)

iterations = 100
main :: IO()
main = do let sumOfPayOffs = evalState ( evalStateT ( execStateT (do replicateM_ iterations mc) $ 0 ) $ (Nothing,nextHalton) ) $ (1,[3,5])
          let averagePO = sumOfPayOffs / fromIntegral iterations
          let discountPO = averagePO * exp (-0.05)
          print discountPO


* Double Stack and Map Specific Impl:


iterations = 100
main :: IO()
main = do let normals = evalState ( evalStateT (do replicateM iterations generateNormal) $ (Nothing,nextHalton) ) $ (1,[3,5])
          let stochastic = map (0.2*1*) normals
          let sde = map ((( 0.05 - (0.5*(0.2*0.2)) )*1)+) stochastic
          let expiryMult = map exp sde
          let expiry = map (100*) expiryMult
          let payoff = map (payOff 100) expiry
          let averagePO = (foldr (+) 0 payoff) / fromIntegral iterations
          let discountPO = averagePO * exp (-0.05)
          print discountPO


* Common Code Used By Both Methods:


import Control.Monad.State
import Debug.Trace

-- State Monad for QRNGs - stores current iteration and list of
-- bases to compute
type QuasiRandomState = State (Int,[Int])

nextHalton :: QuasiRandomState [Double]
nextHalton = do (n,bases) <- get
                let !nextN = n+1
                put (nextN,bases)
                return $ map (reflect (n,1,0)) bases

type ReflectionThreadState = (Int,Double,Double)

reflect :: ReflectionThreadState -> Int -> Double
reflect (k,f,h) base
  | k <= 0 = h
  | otherwise = reflect (newK,newF,newH) base
  where
    newK = k `div` base
    newF = f / fromIntegral base
    newH = h + fromIntegral (k `mod` base) * newF

-- So we are defining a state transform which has state of 'maybe double' and an
-- operating function for the inner monad of type QuasiRandomMonad returning a [Double]
-- We then say that it wraps an QuasiRandomMonad (State Monad) - it must of course
-- if we pass it a function that operates on these Monads we must wrap the same
-- type of Monad.  And finally it returns a double

type BoxMullerStateT = StateT (Maybe Double, QuasiRandomState [Double])
type BoxMullerQuasiState = BoxMullerStateT QuasiRandomState

generateNormal :: BoxMullerQuasiState Double
generateNormal = StateT $ \s -> case s of
    (Just d, qrnFunc)  -> return (d,(Nothing,qrnFunc))
    (Nothing, qrnFunc) -> do qrnBaseList <- qrnFunc
                             let (norm1,norm2) = boxMuller (head qrnBaseList) (head $ tail qrnBaseList)
                             return (norm1,(Just norm2,qrnFunc))

boxMuller :: Double -> Double -> (Double,Double)
-- boxMuller rn1 rn2 | trace ( "rn1 " ++ show rn1 ++ " rn2 " ++ show rn2 ) False = undefined
boxMuller rn1 rn2 = (normal1,normal2)
  where
    r        = sqrt ( (-2)*log rn1)
    twoPiRn2 = 2 * pi * rn2
    normal1  = r * cos ( twoPiRn2 )
    normal2  = r * sin ( twoPiRn2 )



payOff :: Double -> Double -> Double
payOff strike stock | (stock - strike) > 0

Re: [Haskell-cafe] Stacking State on State.....

2009-03-01 Thread Phil
On 01/03/2009 20:16, Andrew Wagner wagner.and...@gmail.com wrote:
I know very little about profiling, but your comment about spending a lot of
time garbage collecting rang a bell with me - the example on RWH talks about
that exact issue. Thus, I would recommend walking through the profiling
techniques described on
http://book.realworldhaskell.org/read/profiling-and-optimization.html .

Yes, I've been going through this very example to try to ascertain where the
problem lies:

Profiling gives an almost identical program flow for both:

Three stacks:
http://pastebin.com/m18e530e2

Two Stacks:
http://pastebin.com/m2ef7c081

The output from "-sstderr" shows the garbage collector in overdrive for
Three Stacks:

Three Stacks:
http://pastebin.com/m5f1c93d8

Two Stacks:
http://pastebin.com/m2f5a625

Also note the huge memory usage for Three Stacks!  If I 'heap profile' this
I can see that within the first few seconds the 'Three Stacks' approach
grabs ~85MB; it then peaks after 100 seconds.  It then starts to reclaim the
memory until the end of the program... slowly but surely.  The ~85MB is due
to a Constant Applicative Form - as I understand it these are values with no
arguments that have a one-off cost.  I assume this means they are things
that are not assigned to a variable.

On the other hand the graph for the Two Stack approach is a mess - that is,
it jumps all over the place, which I can only interpret as things being
allocated and de-allocated very quickly.

Three Stacks heap:
http://www.beadling.co.uk/mc2_3stacks.pdf

Two Stacks heap:
http://www.beadling.co.uk/mc2_2stacks.pdf


Thinking about this some more, perhaps the issue here is that all the memory
required is held through the whole computation for the Three Stack approach,
because we continually thread computations until we have an answer.  Because
the Two Stack approach produces a list that we then map, perhaps the garbage
collector can start to reduce memory usage as it does the final computation.
This is counterintuitive to what I had hoped - but if we use replicateM,
does haskell throw away preceding states after we have a new state, or is it
holding the 3 states of every single computation right up until it has
chained the last computation?
I'd still expect to see a spike or a peak in the Two Stack approach, but we
see nothing of the sort.

If anyone can offer up a better explanation, I'd be interested to hear it!

Thanks again,

Phil,


Re: [Haskell-cafe] Stacking State on State.....

2009-03-01 Thread Phil
Thanks very much for your patient explanations - this has really helped
again!

A few final questions in-line.

On 01/03/2009 21:46, Daniel Fischer daniel.is.fisc...@web.de wrote:


 
 One thing that helps much is to use
 
 import Control.Monad.State.Strict
 
 Using the default lazy State monad, you build enormous thunks in the states,
 which harms the triple stack even more than the double stack.
 With the strict State monad (and a strict left fold instead of foldr in the
 double stack), I get

Ahhh, I see.  Just to make sure I understand this: the Strict version will
evaluate each state as an atomic number, while the standard lazy version will
create each state as an expression of past states... Consequently these
will grow and grow as the state is incremented?
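A minimal illustration of the build-up under discussion, assuming nothing from the thread: forcing each new state (here with $!) keeps it an evaluated number instead of a growing chain of (+1) thunks.

import Control.Monad.State.Strict

bump :: State Int ()
bump = do s <- get
          put $! s + 1   -- force the new state so no thunk chain accumulates

main :: IO ()
main = print (execState (replicateM_ 1000000 bump) 0)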


 
 type MonteCarloStateT = StateT Double
 
 mc :: MonteCarloStateT BoxMullerQuasiState ()
 mc = StateT $ \s -> do nextNormal <- generateNormal
                        let stochastic = 0.2*1*nextNormal
                        let drift = 0.05 - (0.5*(0.2*0.2))*1
                        let newStockSum = payOff 100 ( 100 * exp ( drift + stochastic ) ) + s
                        return ((),newStockSum)
 
 Don't use a new let on each line, have it all in one let-block.
 And, please, force the evaluation of newStockSum:
 

I had looked at making this strict (along with the values in the reflect
function too); it was making a little bit of difference, but not much.  I
reckon this is because the improvement was being masked by the lazy state
monad.  Now that this is corrected, I can see it makes a big difference.

One question here though - if we have made our State strict, will this not
result in newStockSum being atomically evaluated when we set the new state?

Also on the use of multiple 'let' statements - this has obviously completely
passed me by so far!  I'm assuming that under one let we only actually
create the newStockSum, but with 3 let statements each is created as a
separate entity?
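
(For reference, one reading of Daniel's suggestion - a single let-block plus
a seq to force newStockSum before it is stored; the types and constants are
the ones from the snippet above:)

mc :: MonteCarloStateT BoxMullerQuasiState ()
mc = StateT $ \s -> do nextNormal <- generateNormal
                       let stochastic  = 0.2*1*nextNormal
                           drift       = 0.05 - (0.5*(0.2*0.2))*1
                           newStockSum = payOff 100
                                           (100 * exp (drift + stochastic)) + s
                       newStockSum `seq` return ((), newStockSum)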

 
 
 w00t!
 
 

You're not joking - this is a textbook example of performance enhancement!
It's clearly something I have to keep more in mind.


 
 * Double Stack and Map Specific Impl:
 
 
 iterations = 100
 main :: IO()
 main = do let normals = evalState ( evalStateT (do replicateM iterations
 generateNormal) $ (Nothing,nextHalton) ) $ (1,[3,5])
   let stochastic = map (0.2*1*) normals
   let sde = map ((( 0.05 - (0.5*(0.2*0.2)) )*1)+) stochastic
   let expiryMult = map exp sde
   let expiry = map (100*) expiryMult
   let payoff = map (payOff 100) expiry
   let averagePO = (foldr (+) 0 payoff) / fromIntegral iterations
   let discountPO = averagePO * exp (-0.05)
   print discountPO
 
 
 Same here, but important for performance is to replace the foldr with foldl'.
 

Again I understand that foldl' is the strict version of foldl, and as we are
summing elements we can use either foldl or foldr.  I'm assuming this is
another thunk optimisation.  Does foldl not actually calculate the sum, but
instead build an expression of the form a+b+c+d+e+..., where foldl'
would actually evaluate the expression down to an atomic number?
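
(A rough sketch of the difference - not from the thread: foldl (+) 0 builds
the nested thunk (((0+a)+b)+c)+... and only collapses it when the total is
finally demanded, which is where the stack goes; foldl' forces the
accumulator at every step.)

import Data.List (foldl')

-- Sums a long list in constant stack space by keeping the running total
-- evaluated as it goes; plain foldl would defer every (+) until the end.
sumStrict :: [Double] -> Double
sumStrict = foldl' (+) 0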

 
 Cheers,
 Daniel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Stacking State on State.....

2009-02-28 Thread Phil
then (normal,0)
else (n1,n2)
where
  n1 = 2
  n2 = 3
  put (not isStored, storeNorm)
  return retNorm

Now this is incomplete and may even be wrong!  I'll explain my thinking:

(Bool,Double) is equivalent to myState and storedNormal in the C example.
The last Double is the return value of the BoxMuller Monad.
The (State Int) is supposed to represent the VanDerCorput monad - but the
compiler (GHC 6.10) will only let me specify one parameter with it - so I've
put the state in and left the return type to the gods!! As I said, this
isn't quite right - any ideas how to specify the type?
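
(A sketch of the kind of transformer stack in question - the names and the
dummy arithmetic are illustrative only, not working code from this post;
lift is what reaches the inner VanDerCorput state:)

import Control.Monad.State

-- Outer state: (is a second normal cached, the cached normal).
-- Inner state: the VanDerCorput counter.
type BoxMullerT = StateT (Bool, Double) (State Int)

nextNormal :: BoxMullerT Double
nextNormal = do
    (stored, normal) <- get
    if stored
        then do put (False, 0)
                return normal
        else do n <- lift get              -- read the inner state
                lift (put (n + 1))         -- and increment it
                let n1 = fromIntegral n    -- stand-ins for the real maths
                    n2 = n1 + 1
                put (True, n2)
                return n1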

The next few lines get and test the BoxMuller state; this seems to work OK
to me.  The problem is when I try to look at the STATE OF THE INTERNAL monad:
n1 and n2 should evaluate and increment the state of the VanDerCorput monad -
but I can't get anything to compile here.  2 and 3 are just dummy values to
make the thing compile so I could debug.

My last gripe is how to actually call this from a pure function - do I need
to use both evalStateT and evalState?  I can't see how to initialize both
the inner and outer state.
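
(And a sketch of how such a stack could be run from pure code, building on
the nextNormal sketch above - evalStateT peels off the outer Box-Muller
state, evalState the inner VanDerCorput seed; the initial values are
arbitrary:)

import Control.Monad (replicateM)
import Control.Monad.State

runNormals :: Int -> [Double]
runNormals n =
    evalState (evalStateT (replicateM n nextNormal) (False, 0)) 1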

OK - I think that's more than enough typing, apologies for the
war-and-peace sized post.

Any help muchly muchly appreciated,

Many Thanks,

Phil.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Array use breaks when I make it unboxed?

2009-02-21 Thread Phil
Hi,

The code below compiles fine as it is, but if I change the import statement
to:

import Data.Array.Unboxed

I get the following error:

philip-beadlings-imac-g5:MonteCarlo phil$ ghc -O2 --make test.hs
[2 of 5] Compiling InverseNormal( InverseNormal.hs, InverseNormal.o )

InverseNormal.hs:28:38:
No instance for (IArray a1 Double)
  arising from a use of `!' at InverseNormal.hs:28:38-40
Possible fix: add an instance declaration for (IArray a1 Double)
In the first argument of `(*)', namely `c ! 1'
In the first argument of `(+)', namely `c ! 1 * q'
In the first argument of `(*)', namely `(c ! 1 * q + c ! 2)'

and so on


My understanding is that I should just be able to use them like-for-like?
Anyone seen this before?

Thanks,

Phil.


module InverseNormal
where

import Array

a = listArray (1,6) [-3.969683028665376e+01, 2.209460984245205e+02,
 -2.759285104469687e+02, 1.383577518672690e+02,
 -3.066479806614716e+01, 2.506628277459239e+00]

b = listArray (1,5) [-5.447609879822406e+01, 1.615858368580409e+02,
 -1.556989798598866e+02, 6.680131188771972e+01,
 -1.328068155288572e+01]

c = listArray (1,6) [-7.784894002430293e-03, -3.223964580411365e-01,
 -2.400758277161838e+00, -2.549732539343734e+00,
 4.374664141464968e+00,  2.938163982698783e+00]

d = listArray (1,4) [7.784695709041462e-03,  3.224671290700398e-01,
 2.445134137142996e+00,  3.754408661907416e+00]

invnorm :: Double -> Double
invnorm p | p < 0.02425 = let q = sqrt ( -2*log(p) )
                          in (((((c!1*q+c!2)*q+c!3)*q+c!4)*q+c!5)*q+c!6) /
                             ((((d!1*q+d!2)*q+d!3)*q+d!4)*q+1)

          | p > (1-0.02425) = let q = sqrt ( -2*log(1-p) )
                              in -(((((c!1*q+c!2)*q+c!3)*q+c!4)*q+c!5)*q+c!6)
                                 / ((((d!1*q+d!2)*q+d!3)*q+d!4)*q+1)

          | otherwise = let q = p-0.5
                            r = q*q
                        in (((((a!1*r+a!2)*r+a!3)*r+a!4)*r+a!5)*r+a!6)*q
                           / (((((b!1*r+b!2)*r+b!3)*r+b!4)*r+b!5)*r+1)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Array use breaks when I make it unboxed?

2009-02-21 Thread Phil
Thanks for the tip - I got it to work using:

a :: UArray Int Double

 And so on.
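
(Spelled out, the fix pins both the index and element types so the IArray
instance is no longer ambiguous - a cut-down sketch, with the coefficient
lists truncated purely for illustration:)

import Data.Array.Unboxed

a, b :: UArray Int Double
a = listArray (1,2) [-3.969683028665376e+01, 2.209460984245205e+02]
b = listArray (1,2) [-5.447609879822406e+01, 1.615858368580409e+02]

ratio :: Double -> Double
ratio q = (a!1*q + a!2) / (b!1*q + b!2)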


Cheers,

Phil.


On 22/02/2009 01:05, Felipe Lessa felipe.le...@gmail.com wrote:

 2009/2/21 Phil pbeadl...@mail2web.com:
 
 InverseNormal.hs:28:38:
 No instance for (IArray a1 Double)
   arising from a use of `!' at InverseNormal.hs:28:38-40
 Possible fix: add an instance declaration for (IArray a1 Double)
 In the first argument of `(*)', namely `c ! 1'
 In the first argument of `(+)', namely `c ! 1 * q'
 In the first argument of `(*)', namely `(c ! 1 * q + c ! 2)'
 
 'a1' here is the type of the key. Try something like
 
 
 a = listArray (1 :: Int,6) [-3.969683028665376e+01, 2.209460984245205e+02,
  -2.759285104469687e+02, 1.383577518672690e+02,
  -3.066479806614716e+01, 2.506628277459239e+00]
 
 or maybe
 
 b :: UArray (Int, Int) Double
 b = listArray (1,5) [-5.447609879822406e+01, 1.615858368580409e+02,
  -1.556989798598866e+02, 6.680131188771972e+01,
  -1.328068155288572e+01]
 
 I think this is the problem.
 
 HTH,

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] darcs-client on ghc 6.10.1

2009-01-30 Thread Phil
Hi,

Was wondering if anyone knew if darcs-server is still maintained?  The
author's e-mail address bounces.

The Haskell client is broken in ghc 6.10.  I have a straightforward fix for
it:

phil$ darcs whatsnew
hunk ./client/build 2
-ghc -Wall -O2 -o darcs-client -package network Http.hs DarcsClient.hs
+ghc -Wall -O2 -o darcs-client.exe -package base-3.0.3.0 -package network
Http.hs DarcsClient.hs

Phil.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Employment

2009-01-19 Thread Phil
Barclays Capital use it for Equity Derivative modeling and pricing - it's a
small team at the moment, but the whole project is in Haskell.

I don't work on it myself so I couldn't give you any details (plus I would
get fired for blabbing!), I work in an adjacent group.  Haskell certainly
lends itself to complex financial maths simulation tho, so I think they've
made a good choice.


On 19/01/2009 19:34, Andrew Coppin andrewcop...@btinternet.com wrote:

 Is it possible to earn money using Haskell? Does anybody here actually
 do this?
 
 Inquiring minds want to know... ;-)
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Efficient Factoring Out of Constants

2009-01-17 Thread Phil
Hi,

I've been thinking about factoring constants out of iterations and have
spotted another place in my code where I can make use of this.

See the two examples below - the first example iterates over the mcSimulate
function - this has changed a little bit but essentially still recurses
around, passing in two constants and two variables that are changed each
time it is called - it has the following form:

mcSimulate (startStock, endTime, seedForSeed, runningSum) = ( startStock,
endTime, newSeedForSeed, newRunningSum )

I figured I'm passing around the constants startStock and endTime, so I
looked to factor these out, producing the second example below.

My concern is that although the iterate function now only handles two
variables, it's still passing around one tuple, which under the bonnet is
likely to be represented in machine code as a pointer.  Humor me here a
little - I know I'm thinking of this in terms of C++, but I'm guessing the
final byte code will adhere to this:

Thus each time mcSimulate is called  a machine code subroutine will be
passed a memory address to the input data.  Now, the crux of this is, will
it make a COPY of the old input data, BUT with the newSeedForSeed and
newRunningSum, to pass to the next iteration?  If this is the case each
iteration does two useless copies of startStock and endTime?  Here the
second example should produce better code because nothing constant is copied
from 1 iteration to the next.  However if the compiler is smarter and simply
REPLACES the modified data the second example buys us nothing.

However, again, depending very much on the compiler, the second example may
actually be less efficient.  Let's say the compiler is super smart and
doesn't copy around the startStock and endTime on each iteration in the
first example.  Then we've gained nothing.  However, the second example will
call 3 functions each iteration:

mcWrapper -> mcSimulate -> getStateInfo

In the second example we probably have something like 6 'JMP' statements in
machine code - 3 to jump into each function, and 3 to jump back out.  In
the first we have 2 - one to jump us into mcSimulate and one to return.  So
each iteration executes 4 more JMPs in the second example.  All other
things being equal this will produce slightly less efficient code.

Now I know I'm speculating like crazy, and perhaps I'm drunk with efficiency
here, but it would seem to me that whatever way it works there will be a
certain critical mass of constant data that you can carry around that, once
breached (i.e. when the copy operations exceed the CPU time taken for the 4
extra JMPs), means you will be better off factoring the constant data out.
That is assuming any of my analysis is correct :-)

If anyone has any insight into how this might look once compiled down to
machine code, or has an opinion on which example below makes for better
Haskell, I'd be grateful for any comments, advice or discussion.

Cheers,

Phil.

Note:  I recognize the use of getSum and getStateInfo could be polished
using data structures instead, and the use of !! will not produce strict
evaluation. 
-

getSum :: (Double, Double, Word64, Double) -> Double
getSum (startStock, endTime, seedForSeed, runningSum) = runningSum

getAveragePayoff :: Double -> Double -> Int -> Word64 -> Double
getAveragePayoff startStock endTime iterations seedForSeed = average
  where
average = (getSum $ (iterate mcSimulate (startStock, endTime,
seedForSeed, 0)) !! iterations ) / fromIntegral iterations

---

getStateInfo :: (Double, Double, Word64, Double) -> (Word64,Double)
getStateInfo (startStock, endTime, seedForSeed, runningSum) = (seedForSeed,
runningSum)

getAveragePayoff :: Double -> Double -> Int -> Word64 -> Double
getAveragePayoff startStock endTime iterations seedForSeed = average
  where
average =  (snd $ (iterate mcWrapper (seedForSeed,0)) !! iterations ) /
fromIntegral iterations
  where
mcWrapper = \(seed,runningSum) -> getStateInfo $ mcSimulate (
startStock, endTime, seed, runningSum )


On 16/01/2009 01:41, Phil pbeadl...@mail2web.com wrote:

 
 On 16/01/2009 01:28, Luke Palmer lrpal...@gmail.com wrote:
 
 Compile-time constants could be handled by simple top-level bindings.  This
 technique is specifically for the case you are after:
 
 mcSimulate :: Double -> Double -> Word64 -> [Double]
 mcSimulate startStock endTime seedForSeed = go seedForSeed
   where
 go = fst expiryStock : go newSeedForSeed
   where
   expiryStock = iterate evolveUnderlying (startStock, ranq1Init
 seedForSeed) 
 !! truncate (endTime/timeStep)
   newSeedForSeed = seedForSeed + 246524
  
 See what's going on there?
 
 I don't know about that nested where.  In Real Life I would probably use a
 let instead for expiryStock and newSeedForSeed.
 
 Luke
 
 Ahh, I get it now, that's pretty neat - 'go' is only updating the
 seedForSeed and the expiryStock, the inner 'where' clause keeps everything
 else constant each time it is called.

Re: [Haskell-cafe] Efficient Factoring Out of Constants

2009-01-17 Thread Phil
On 17/01/2009 16:55, Luke Palmer lrpal...@gmail.com wrote:
Wow.  I strongly suggest you forget about efficiency completely and become a
proficient high-level haskeller, and then dive back in.  Laziness changes
many runtime properties, and renders your old ways of thinking about
efficiency almost useless.

If you are interested, though, you can use the ghc-core tool on hackage to
look at the core (lowish-level intermediate language) and even the generated
assembly for minimal cases.  It's dense, but interesting if you have the
time to study it.

Others will know more about this specific speculation than I.

Luke

[Phil]
Heh heh - I totally accept what you're saying, I am obsessing over details
here.  I've run some empirical tests to get some crude insight (just using
Linux's time program).  I expected the differences to be small for the
amount of data I was passing around, but I was surprised.  I modified the
code ever so slightly to use data structures to pass around the data, and
thus got rid of the getSum and getStateInfo functions.  In the first
unfactored example everything was passed around in one structure.  In the
second example I had two structures: one for constant data and one for state
data.  This doesn't really change the problem; it just means that in the
unfactored example mcSimulate takes one parameter (holding both constant and
variable state data) and in the factored example it takes two parameters.
The rest
of the code remained more-or-less identical to my original post.

Running both programs at the same time on an otherwise unloaded CPU gave a
very consistent result over numerous trials: for 1 million calls to
mcSimulate, the unfactored example took approx 1m44s and the factored
example took 1m38s - which is a fairly significant difference.

So whilst I can't offer any exact explanation, it is clear that factoring
out even a few parameters taking up little memory produces a significant
performance increase.  This would suggest that my 'JMP' analysis is not
right and that the compiler is able to optimize the factored version better.

If anyone else fancies offering up their 2 cents on what the compiler is
doing, I'd still be interested, but the empirical evidence alone is enough
for me to be swayed to factoring all static parameters in an iteration out
of the iteration and into a wrapper.

Phil.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Efficient Factoring Out of Constants

2009-01-17 Thread Phil
On 17/01/2009 20:45, Eugene Kirpichov ekirpic...@gmail.com wrote:

 A very short time ago Simon Marlow (if I recall correctly) commented
 on this topic: he told that this transformation usually improves
 efficiency pretty much, but sometimes it leads to some problems and it
 shouldn't be done by the compiler automatically. Search the recent
 messages for 'map' and one of them will have an answer :)
 

Thanks for the tip - I'll have a look for this.

I've also tried a third test using the two possibilities below; both have
separate structures for variable and constant data, but the second example
explicitly factors out passing this around as in previous examples.  Here
there is no discernible difference between the two methods' timed results;
if anything the first one, which is not explicitly factored out, had a
slight edge in tests.

Therefore it would seem that provided you separate the constant and
variable data out into separate parameters, the compiler will do the
rest for you.  This suggests that the compiler is indeed smart; the
staticData is not copied from iteration to iteration provided the whole
structure is constant, but when you mix and match within a structure itself
the compiler isn't smart enough to factor out the constant members.  I
suppose a conclusive test would be to add a very large amount of constant
data to the structure to see if the results were still similar.

Note: strictIdx is just a strict version of !!:

getAveragePayoff :: Double -> Double -> Int -> Word64 -> Double
getAveragePayoff startStock endTime iterations seedForSeed = average
  where
staticData = MCStatic startStock endTime
average = ( runningSum $ snd $ strictIdx
iterations ( iterate mcSimulate (staticData, (MCState
seedForSeed 0)) ) )
  / fromIntegral iterations

___

getAveragePayoff :: Double -> Double -> Int -> Word64 -> Double
getAveragePayoff startStock endTime iterations seedForSeed = average
  where
staticData = MCStatic startStock endTime
average = ( runningSum  $ strictIdx
iterations ( iterate mcWrapper (MCState seedForSeed 0) ) )
  / fromIntegral iterations
  where
mcWrapper = \stateData -> snd $ mcSimulate (staticData, stateData)





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multiple State Monads

2009-01-15 Thread Phil
Inline


On 14/01/2009 01:08, Luke Palmer lrpal...@gmail.com wrote:

 On Tue, Jan 13, 2009 at 5:45 PM, Phil pbeadl...@mail2web.com wrote:
 mcSimulate :: Double -> Double -> Word64 -> [Double]
 mcSimulate startStock endTime seedForSeed = fst expiryStock : mcSimulate
 startStock endTime newSeedForSeed
 
 It is abundantly clear that the startStock and endTime are just being passed
 around from call to call unchanged - that is, their value is constant
 throughout the simulation.  For the purposes here when I'm only passing 2
 'constants' around it doesn't strike me as too odd, but my list of
 'constants' is likely to grow as I bolt more functionality onto this.  For
 readability, I understand that I can create new types to encapsulate complex
 data types into a single type, but I can't help thinking that passing say 9
 or 10 'constants' around and around like this 'feels wrong'.  If I sit back
 and think about it, it doesn't strike me as implausible that the compiler
 will recognize what I'm doing and optimize this out for me, and what I'm
 doing is thinking about the whole thing like a C++ programmer (which I
 traditionally am) would.
 
 You can factor out constants in a couple ways.  If you are just passing
 constants between a recursive call to the same function, you can factor out
 the recursive bit into a separate function:
 
 something param1 param2 = go
 where
 go = ... param1 ... param2 ... etc ... go ...
 etc = ...
 
 Where go takes only the parameters that change, and the rest is handled by its
 enclosing scope.  You might buy a little performance this way too, depending
 on the compiler's cleverness (I'm not sure how it optimizes these things).
 
 
 [PHIL]
 Firstly - thanks for your advice.
 
 When I say constants, I should be clear - these are parameters passed in by
 the user, but they remain constant throughout the recursive call.  I think the
 example above is only relevant if they are constants at compile time?  If not,
 I'm not sure I follow the example.  If we have something like
 
 mcSimulate :: Double -> Double -> Word64 -> [Double]
 mcSimulate startStock endTime seedForSeed = fst expiryStock : mcSimulate
 startStock endTime newSeedForSeed
   where
 expiryStock = iterate evolveUnderlying (startStock, ranq1Init seedForSeed)
 !! truncate (endTime/timeStep)
 newSeedForSeed = seedForSeed + 246524
 
 Here startStock and endTime are not altered from iteration to iteration, but
 they are not known at compile time.  I see that I can reduce this to something
 like
 
 test seedForSeed = fst expiryStock : test newSeedForSeed
   where
 expiryStock = iterate evolveUnderlying (_startStock, ranq1Init
 seedForSeed) !! truncate (_endTime/timeStep)
 newSeedForSeed = seedForSeed + 246524
 
 But I don't understand how I 'feed' the _startStock and _endTime in?
 
 Could you explain this in detail, or confirm my suspicions that it only works
 for compile-time constants?
 
 
 Thanks again,
 
 Phil.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multiple State Monads

2009-01-15 Thread Phil

On 16/01/2009 01:28, Luke Palmer lrpal...@gmail.com wrote:

 Compile-time constants could be handled by simple top-level bindings.  This
 technique is specifically for the case you are after:
 
 mcSimulate :: Double -> Double -> Word64 -> [Double]
 mcSimulate startStock endTime seedForSeed = go seedForSeed
   where
 go = fst expiryStock : go newSeedForSeed
   where
   expiryStock = iterate evolveUnderlying (startStock, ranq1Init
 seedForSeed) 
 !! truncate (endTime/timeStep)
   newSeedForSeed = seedForSeed + 246524
  
 See what's going on there?
 
 I don't know about that nested where.  In Real Life I would probably use a
 let instead for expiryStock and newSeedForSeed.
 
 Luke
 
 Ahh, I get it now, that's pretty neat - 'go' is only updating the seedForSeed
 and the expiryStock, the inner 'where' clause keeps everything else constant
 each time it is called.

Thanks again!

Phil.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multiple State Monads

2009-01-13 Thread Phil
Many thanks for the replies.

Using 'modify' cleans the syntax up nicely.

With regard to using 'iterate' as shown by David here:

 mcSimulate :: Double -> Double -> Word64 -> [Double]
 mcSimulate startStock endTime seedForSeed = fst expiryStock : mcSimulate
 startStock endTime newSeedForSeed
  where
expiryStock = iterate evolveUnderlying (startStock, ranq1Init seedForSeed)
 !! truncate (endTime/timeStep)
newSeedForSeed = seedForSeed + 246524

My only concern with using this method is - will 'iterate' not create a full
list of type [Double] and then take the final position once the list has
been fully realized?  For my application this would be undesirable as the
list may be millions of items long, and you only ever care about the last
iteration (it's a crude Monte Carlo simulator, to give it some context).  If
Haskell is smart enough to look ahead and see that we only need the last
element as it is creating the list, and therefore garbage collect earlier
items, then this would work fine - but I'm guessing that is a step too far
for the compiler?

I had originally implemented this similarly to the above (although I didn't
know about the 'iterate' keyword - which makes things tidier - a useful
tip!).  I moved to using the state monad and replicateM_ for the first
truncate(endTime/timeStep)-1 elements so that everything but the last result
is thrown away, and a final bind to getEvolution would return the result.

Now that the code has been modified so that no result is passed back, using
modify and execState, this can be simplified to replicateM_
truncate(endTime/timeStep) with no final bind needed.  I've tried this and
it works fine.

The key reason for using the Monad was to tell Haskell to discard all but
the current state.  If I'm wrong about this, please let me know, as I don't
want to be guilty of overcomplicating my algorithm, and more importantly it
means I'm not yet totally grasping the power of Haskell!

Thanks again,

Phil.




On 13/01/2009 03:13, David Menendez d...@zednenem.com wrote:

 On Mon, Jan 12, 2009 at 8:34 PM, Phil pbeadl...@mail2web.com wrote:
 Thanks Minh - I've updated my code as you suggested.  This looks better than
 my first attempt!
 
 Is it possible to clean this up any more?  I find:
 
 ( (), (Double, Word64) )
 
 a bit odd syntactically, although I understand this is just to fit the type
 to the State c'tor so that we don't have to write our own Monad longhand.
 
 If you have a function which transforms the state, you can lift it
 into the state monad using modify.
 
 evolveUnderlying :: (Double, Word64) -> (Double, Word64)
 evolveUnderlying (stock, state) = ( newStock, newState )
  where
newState = ranq1Increment state
newStock = stock * exp ( ( ir - (0.5*(vol*vol)) )*timeStep + (
 vol*sqrt(timeStep)*normalFromRngState(state) ) )
 
 getEvolution :: State (Double, Word64) ()
 getEvolution = modify evolveUnderlying
 
 Now, I don't know the full context of what you're doing, but the
 example you posted isn't really gaining anything from the state monad.
 Specifically,
 
   execState (replicateM_ n (modify f))
 = execState (modify f >> modify f >> ... >> modify f)
 = execState (modify (f . f . ... . f))
 = f . f . ... . f
 
 So you could just write something along these lines,
 
 mcSimulate :: Double -> Double -> Word64 -> [Double]
 mcSimulate startStock endTime seedForSeed = fst expiryStock : mcSimulate
 startStock endTime newSeedForSeed
  where
expiryStock = iterate evolveUnderlying (startStock, ranq1Init seedForSeed)
 !! truncate (endTime/timeStep)
newSeedForSeed = seedForSeed + 246524
 
 
 Coming back to your original question, it is possible to work with
 nested state monad transformers. The trick is to use lift to make
 sure you are working with the appropriate state.
 
 get :: StateT s1 (State s2) s1
 put :: s1 -> StateT s1 (State s2) ()
 
 lift get :: StateT s1 (State s2) s2
 lift put :: s2 -> StateT s1 (State s2) ()
 
 A more general piece of advice is to try breaking things into smaller
 pieces. For example:
 
 getRanq1 :: MonadState Word64 m => m Word64
 getRanq1 = do
     seed <- get
     put (ranq1Increment seed)
     return seed
 
 getEvolution :: StateT Double (State Word64) ()
 getEvolution = do
     seed <- lift getRanq1
     modify $ \stock -> stock * exp ( ( ir - (0.5*(vol*vol)) )*timeStep
         + ( vol*sqrt(timeStep)*normalFromRngState(seed) ) )
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multiple State Monads

2009-01-13 Thread Phil
Ahh, I see - so using the State monad is arguably overcomplicating this.
This is very helpful.

The use of 'keyword' was just an unfortunate use of terminology - my bad.

Very useful explanation about the laziness resulting in stack overflows too
- when I crank up the numbers I have been seeing this.  I had been
temporarily ignoring the issue and just increasing the stack size at
runtime, but I suspected something was awry.

One last question on this function:

In the definition:

mcSimulate :: Double -> Double -> Word64 -> [Double]
mcSimulate startStock endTime seedForSeed = fst expiryStock : mcSimulate
startStock endTime newSeedForSeed

It is abundantly clear that the startStock and endTime are just being passed
around from call to call unchanged - that is, their value is constant
throughout the simulation.  For the purposes here, when I'm only passing
2 'constants' around it doesn't strike me as too odd, but my list of
'constants' is likely to grow as I bolt more functionality onto this.  For
readability, I understand that I can create new types to encapsulate complex
data types into a single type, but I can't help thinking that passing say 9
or 10 'constants' around and around like this 'feels wrong'.  If I sit back
and think about it, it doesn't strike me as implausible that the compiler
will recognize what I'm doing and optimize this out for me, and that what
I'm doing is thinking about the whole thing like a C++ programmer (which I
traditionally am) would.

However, before I allayed my own concerns I wanted to check that in the
Haskell world passing around lots of parameters isn't a bad thing - that is,
that I'm not missing a trick here to make my code more readable or, more
importantly, more performant.

Thanks again,

Phil.

On 13/01/2009 23:24, Luke Palmer lrpal...@gmail.com wrote:

 On Tue, Jan 13, 2009 at 3:29 PM, Phil pbeadl...@mail2web.com wrote:
 My only concern with using this method is - Will 'iterate' not create a full
 list of type [Double] and then take the final position once the list has
 been fully realized?  For my application this would be undesirable as the
 list may be millions of items long, and you only ever care about the last
 iteration (It's a crude Monte Carlo simulator to give it some context).  If
 Haskell is smart enough to look ahead and see as we only need the last
 element as it is creating the list, therefore garbage collecting earlier
 items then this would work fine - by I'm guessing that is a step to far for
 the compiler?
 
 No, doing this type of thing is very typical Haskell, and the garbage
 collector will incrementally throw away early elements of the list.
 
 
 I had originally implemented this similar to the above (although I didn't
 know about the 'iterate' keyword
 
 FWIW, iterate is just a function, not a keyword.  Could just be terminology
 mismatch.
  
 So, while the garbage collector will do the right thing, for a list millions
 of elements long, I suspect you will get stack overflows and/or bad memory
 performance because the computation is too lazy.  One solution is to use a
 stricter version of !!, which evaluates elements of the list as it whizzes by
 them.  Because the function you're iterating is strict to begin with, you do
 not lose performance by doing this:
 
 strictIdx :: Int -> [a] -> a
 strictIdx _ [] = error "empty list"
 strictIdx 0 (x:xs) = x
 strictIdx n (x:xs) = x `seq` strictIdx (n-1) xs
 
 (Note that I flipped the arguments, to an order that is nicer for currying)
 
 The reason is that iterate f x0 constructs a list like this:
 
 [ x0, f x0, f (f x0), f (f (f x0)), ... ]
 
 But shares the intermediate elements, so if we were to evaluate the first f x0
 to, say, 42, then the thunks are overwritten and become:
 
 [ x0, 42, f 42, f (f 42), ... ]
 
 So iterate f x0 !! 100 is f (f (f (f ( ... a million times ... f x0,
 which will be a stack overflow because of each of the calls.  What strictIdx
 does is to evaluate each element as it traverses it, so that each call is only
 one function deep, then we move on to the next one.
 
 This is the laziness abstraction leaking.  Intuition about it develops with
 time and experience.  It would be great if this leak could be patched by some
 brilliant theorist somewhere.
 
 Luke
 

[Haskell-cafe] Multiple State Monads

2009-01-12 Thread Phil
Hi,

I've been reading the "Monads aren't evil" posts with interest - I'm a
second-week Haskell newbie and I'm doing my best to use them where (I hope)
it is appropriate.  Typically I'm writing my code out without using Monads
(normally using list recursion), and then when I get it working, I delve
into the Monad world.  This has been going well so far with a bit of help
from you guys, but I've hit a snag.

In the code below I'm using a state Monad (getEvolution), but unlike simpler
cases I'm passing around two items of state, and one of these states is also
ultimately a result - although I don't care about the result until I reach
an end state.  My implementation is a bit ugly to say the least and clearly
I'm forcing round pegs into square holes here - reading a bit online I get
the impression that I can solve the two-state issue using Monad
Transformers, by wrapping a StateT around a regular State object (or even
two StateT Monads around an Identity Monad??).  I think I understand the
theory here, but any attempt to implement it leads to a horrible mess that
typically doesn't compile.  As for the other problem, of having a state that
is also a result, I'm not sure what to do about this.

Was wondering if anyone could give me a push in the right direction - how
can I rework my state monad so that it looks less wild?

Many thanks,

Phil.

mcSimulate :: Double -> Double -> Word64 -> [Double]
mcSimulate startStock endTime seedForSeed = expiryStock : mcSimulate
startStock endTime newSeedForSeed
  where
expiryStock =  evalState ( do replicateM_ (truncate(endTime/timeStep)-1)
getEvolution; getEvolution )
   $ (startStock,ranq1Init seedForSeed)
newSeedForSeed = seedForSeed + 246524

discount :: Double -> Double -> Double -> Double
discount stock r t = stock * exp (-r)*t

payOff :: Double -> Double -> Double
payOff strike stock | (stock - strike) > 0 = stock - strike
                    | otherwise = 0

-- Monad Implementation

-- Yuk! 
evolveUnderlying :: (Double, Word64) -> ( Double, (Double, Word64) )
evolveUnderlying (stock, state) = ( newStock, ( newStock, newState ) )
  where
newState = ranq1Increment state
newStock = stock * exp ( ( ir - (0.5*(vol*vol)) )*timeStep + (
vol*sqrt(timeStep)*normalFromRngState(state) ) )

getEvolution :: State (Double, Word64) Double
getEvolution = State evolveUnderlying



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multiple State Monads

2009-01-12 Thread Phil
Thanks Minh - I've updated my code as you suggested.  This looks better than
my first attempt!

Is it possible to clean this up any more?  I find:

( (), (Double, Word64) )

a bit odd syntactically, although I understand this is just to fit the type
to the State c'tor so that we don't have to write our own Monad longhand.  I
guess given that (), as I understand, is just like 'void' in C, it should
not affect program performance, and the fact that I'm using replicateM_
means that the result is being ignored for all but my last iteration.

As an exercise I assume I could have approached the problem using the StateT
transformer, although for the purposes below carrying two states in a tuple
is probably clearer and more performant?

Thanks again,

Phil.

mcSimulate :: Double -> Double -> Word64 -> [Double]
mcSimulate startStock endTime seedForSeed = fst expiryStock : mcSimulate
startStock endTime newSeedForSeed
  where
expiryStock =  execState ( do replicateM_ (truncate(endTime/timeStep)-1)
getEvolution; getEvolution )
   $ (startStock,ranq1Init seedForSeed)
newSeedForSeed = seedForSeed + 246524


-- Monad Implementation

evolveUnderlying :: (Double, Word64) -> ( (), (Double, Word64) )
evolveUnderlying (stock, state) = ( (), ( newStock, newState ) )
  where
newState = ranq1Increment state
newStock = stock * exp ( ( ir - (0.5*(vol*vol)) )*timeStep + (
vol*sqrt(timeStep)*normalFromRngState(state) ) )

getEvolution :: State (Double, Word64) ()
getEvolution = State evolveUnderlying


On 12/01/2009 20:49, minh thu not...@gmail.com wrote:

 Hi,
 
 the evolveUnderlying can simply manipulate the state, so you can
 
 do evolveUnderlying -- state (not your state, but the tuple) changes here
     r <- gets fst -- query the state for the first element of the tuple
return r -- simply return what you want
 
 Note that if you want to combine your state and the stock, you simply end
 with a new kind of state : the tuple (thus, no need to compose two State)
 
 Note also, since evolveUnderlying only manipulates the internal state of the
 State monad, it returns ().
 
 Depending on how you want to structure your code, you can also use execState
 instead of evalState : it returns the state on which you can use fst.
 
 hope it helps,
 Thu

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shared library creating on Mac OS X

2009-01-10 Thread Phil

I've made a bit of progress here after reading up on Darwin's GCC a bit
more:

ghc --make -no-hs-main -fPIC -optl '-dynamiclib' -optl '-undefined' -optl
'suppress' -optl '-flat_namespace'  -o Inv.dylib InverseNormal.hs

This dies when it links against Haskell's own libraries; my guess is because
they are position-dependent.  So the only way I see forward would be to
recompile Haskell with "-fPIC".

This seems like a lot of hassle, so I'm shelving this for now - if anyone
has any other (less disruptive) ways to proceed give me a shout - even if
it means linking statically.

Cheers,

Phil.

Linker error now is:


ld: warning codegen with reference kind 13 in _stg_CAF_BLACKHOLE_info
prevents image from loading in dyld shared cache
ld: absolute addressing (perhaps -mdynamic-no-pic) used in
___stginit_haskell98_Array_ from
/usr/local/ghc/6.10.1/lib/ghc-6.10.1/haskell98-1.0.1.0/libHShaskell98-1.0.1.
0.a(Array__1.o) not allowed in slidable image
collect2: ld returned 1 exit status







___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Shared library creating on Mac OS X

2009-01-09 Thread Phil
Hi,

I'm hitting a problem trying to create shared Haskell libs to be linked into
a C program on Mac OS X.

I'm using the latest download for Leopard from the GHC page:
http://www.haskell.org/ghc/dist/6.10.1/witten/ghc-6.10.1-powerpc-apple-darwin.tar.bz2

I can get basic executables working fine (with a C main() #including ghc's
stub header), using something like:
ghc -optc-O invnorm.c InverseNormal.o InverseNormal_stub.o -o cTest

I started off using the following line to try to create a shared lib:

 ghc --make -no-hs-main -optl '-shared' -o Inv.so InverseNormal.hs

This doesn't work on Mac OS X because Apple's gcc annoyingly takes different
switches, so I changed it to:

ghc --make -no-hs-main -optl '-dynamiclib' -o Inv.dylib InverseNormal.hs

Which still fails at the final link giving:

Linking Inv.dylib ...
Undefined symbols:
  _environ, referenced from:
  _environ$non_lazy_ptr in libHSbase-4.0.0.0.a(PrelIOUtils.o)
ld: symbol(s) not found

I've seen similar things before, and I believe if you have full control over
the source you just slip in a:

#define environ (*_NSGetEnviron())

Sure enough, I can find references to environ in, for example, HsBase.h.

The problem (as I see it) is that references to environ are already wrapped
up in the static lib libHSbase-4.0.0.0.a, so without recompiling Haskell we
can't alter the C definition now.  However, given that the packager must
have made this behave when he compiled the distribution, there must be a way
to make Mac gcc accept _environ symbols??

Has anyone seen this before / can confirm my analysis / and by any chance
have a solution?

Many thanks,

Phil.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] State Monad - using the updated state

2009-01-08 Thread Phil
I think I've got this now - thanks to you all for the superb advice!

The reason I cannot increment state inside main is because main is not a
State monad (it's an IO monad).  Thus in order to use my State Monad, I have
to execute inside a State monad, as that is where the state is encapsulated.

I'll have to have a think about how I'm going to structure the rest of my
code inside something like Ryan's randomComputation example - the basic
example works perfectly!  I'm writing a Monte Carlo simulator for financial
portfolios - it's something I've done in several languages, so I often use it
to test-drive a new language.  Most imperative implementations of this sort
of thing are very state-heavy, so I thought it would be fun to re-think it a
bit in Haskell.

My initial thought before delving into Monads was to take advantage of
Haskell's lazy evaluation and create an 'infinite' list of randoms using
something like the below:

ranq1List :: (Word64 -> a ) -> Word64 -> [a]
ranq1List converter state = converter newState : ranq1List converter
newState
  where
newState = ranq1Increment state

This works fine - the converter is an extra parameter that carries a
partially defined function used to numerically translate from
Word64 -> whatever_type_we_want, as stipulated in Numerical Recipes' C++
example.  It was at this point I felt it was getting a bit ugly and started
to look at Monads (plus I wanted to see what all the 'fuss' was about with
Monads too!).

One more question on this - the other concern I had with the recursive list
approach was that although lazy evaluation prevents me generating numbers
before I 'ask' for them, I figured that if I was going to be asking for, say,
10 million over the course of one simulation, then although I request them
one by one, over hours or even days, at the end of the simulation I would
still have a list of 10 million Word64s - each of which I could have thrown
away within minutes of asking for it.  This seemed like huge memory bloat,
and thus probably I was taking the wrong approach.
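
(For the record, the bloat only happens if something keeps a reference to
the head of the list; consuming the stream with a strict fold lets earlier
elements be collected as the simulation goes.  A tiny sketch - the list
argument stands for something like ranq1List above:)

import Data.List (foldl')

-- Average the first n draws without ever retaining more than the current
-- element and the running total.
averageOf :: Int -> [Double] -> Double
averageOf n xs = foldl' (+) 0 (take n xs) / fromIntegral n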

I'd be interested to know if you have any thoughts on the various solutions?
Ryan's randomComputation strikes me as the most practical and there's an old
adage that if a language provides a facility (i.e. The State Monad here),
you shouldn't be rewriting similar functionality yourself unless there is a
very very good reason to go it alone.  Thus I figure that Haskell's State
Monad used as described is always going to beat anything I come up with to
do the same thing - unless I spend an awful lot of time tailoring a specific
solution.

If you think there is a nicer non-Monadic, pure solution to this type of
problem, I'd be interested to hear them.

Thanks again for all your help,

Phil.



On 08/01/2009 13:27, Kurt Hutchinson kelansli...@gmail.com wrote:

 Ryan gave some great advice about restructuring your program to do
 what you want, but I wanted to give a small explanation of why that's
 necessary.
 
 2009/1/7 Phil pbeadl...@mail2web.com:
  I want to be able to do:
 
 Get_a_random_number
 
  a whole load of other stuff 
 
 Get the next number as defined by the updated state in the first call
 
 some more stuff
 
 Get another number, and so on.
 
 The issue you're having is that you're trying to do the other stuff
 in your 'main', but main isn't inside the State monad. The only State
 computation you're calling from main is getRanq1, but you really need
 another State computation that does other stuff and calls getRanq1
 itself. That's what Ryan's first suggestion implements. You need all
 your other stuff to be done inside the State monad so that it has
 read/update access to the current random state. So all your main does
 is run a State computation. That computation calls getRanq1 itself and
 then other stuff in between calls to getRanq1.
 
 Does that make sense?
 
 Kurt
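
(A minimal sketch of the structure Kurt and Ryan describe - a single State
computation that interleaves other work between draws.  It reuses getRanq1,
ranq1Init and the imports from the original post below; the arithmetic in
between is just a stand-in for the "other stuff":)

simulation :: State Word64 Double
simulation = do
    x <- getRanq1
    let a = x * 2            -- ... a whole load of other stuff ...
    y <- getRanq1
    let b = a + y            -- ... some more stuff ...
    z <- getRanq1
    return (b * z)

main :: IO ()
main = print (evalState simulation (ranq1Init 124353542542))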

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] State Monad - using the updated state

2009-01-07 Thread Phil
Hi,

I'm a newbie looking to get my head around using the State Monad for random
number generation.  I've written non-monad code that achieves this no
problem.  When attempting to use the state monad I can get what I know to be
the correct initial value and state, but can't figure out for the life of me
how to then increment it without binding more calls there and then.  Doing
several contiguous calls is not what I want to do here - and the examples
I've read all show this (using something like liftM2 (,) myRandom myRandom).
I want to be able to do:

Get_a_random_number

 a whole load of other stuff 

Get the next number as defined by the updated state in the first call

some more stuff

Get another number, and so on.

I get the first number fine, but am lost at how to get the second, third,
fourth etc. without binding there and then.  I just want each number one at a
time, where and when I want it, rather than saying give me 1, 2, 10 or even
'n' numbers now.  I'm sure it's blindingly obvious!

Note: I'm not using Haskell's built-in Random functionality (nor is that an
option).  I'll spare the details of the method I'm using (NRC's ranq1) as I
know it works for the non-Monad case, and it's irrelevant to the question.
So the code is:

ranq1 :: Word64 -> ( Double, Word64 )
ranq1 state = ( output, newState )
  where
newState = ranq1Increment state
output = convert_to_double newState

ranq1Init :: Word64 -> Word64
ranq1Init = convert_to_word64 . ranq1Increment . xor_v_init

-- I'll leave out the detail of how ranq1Increment works, for brevity.  I
-- know this bit works fine.  Same goes for the init function; it's just
-- providing an initial state.

-- The Monad State Attempt
getRanq1 :: State Word64 Double
getRanq1 = do
  state <- get
  let ( randDouble, newState ) = ranq1 state
  put newState
  return randDouble


_ And then in my main _

-- 124353542542 is just an arbitrary seed
main :: IO()
main = do
   let x = evalState getRanq1 (ranq1Init 124353542542)
   print (x)


As I said this works fine; x gives me the correct first value for this
sequence, but how do I then get the second and third without writing the
giveMeTenRandoms style function?  I guess what I want is a next() type
function, imperatively speaking.


Many thanks for any help,


Phil.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: [Haskell-cafe] Who started 42, and when?

2008-02-01 Thread Phil Molyneux
Hi --- The arbitrary constant was made popular by Douglas Adams in  
the mid-1970s radio series ``A Hitchhiker's Guide to the Galaxy'' (a  
trilogy in 4 parts) --- however it does have a basis in the standard  
model of physics --- a paper in Phys.Rev. of the early 1970s  
described the unification of the Electro-Weak and Strong nuclear  
forces --- the arbitrary constant (of nearly) 42 appears in the  
calculations. I forget the original paper but if you get hold of  
Frank Close ``The Cosmic Onion'' a graph reproduces the result. I met  
Douglas Adams once at a book signing and asked him how he got hold of  
the Phys.Rev. paper so early. Technically he should have written that  
``42 is the answer to life, the universe and everything except for  
gravity and a few other arbitrary constants''


Adams was interested in computing --- I think his reaction to being  
told about functional programming was to wonder what non-functional  
programming might be.


Phil

On 1 Feb 2008, at 14:03, Loup Vaillant wrote:


I have read quite a lot of Haskell papers, lately, and noticed that
the number 42 appeared quite often, in informal tutorials as well as
in very serious research papers. No wonder Haskell is the Answer to
The Great Question of Life, The Universe, and Everything, but I would
like to know who started this, and when.

Google wasn't much help, and I can't believe it's coincidence --hence
this email. I hope I didn't opened some Pandora box. :-)

Cheers,
Loup
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: Yet Another Monad Tutorial

2003-08-14 Thread Phil Molyneux
I think the crucial distinction is to think about is the difference
between calculating a value (such as a character) compared to calculating
an action (such as a computation that reads a character) --- one is wholly
in your programming language while the other interacts with something
outside your language (the I/O system). Many years ago we were all told it
was a *good thing* to separate calculating values from *doing* I/O. Of
course, you weren't actually *doing* any I/O when you *wrote* your program
--- you were *calculating* (working out, writing...) your program.

For most actions/computations, the properties you usually want are that A
followed by (B followed by C) is the same as (A followed by B) followed by
C, together with a unit (as in multiplication) or zero (as in addition)
action. I guess the usefulness of this is quite subtle --- the Romans had no
notation for zero and look what happened to them (although we still use
Roman numerals for decoration). How do you know you have structured your
program so that the preceding properties hold ? You show that you're
working in the appropriate Monad --- remember, maths is the QA of
programming. I suspect that tutorials should at some point mention some
definitions of monads, monoids et al --- since this is where the power
(sorry, QA) comes from. 
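
(Spelled out in Haskell, those ``followed by'' properties are just the monad
laws specialised to sequencing --- a sketch, not from the original post:)

-- Associativity of sequencing:
--   (a >> b) >> c   behaves the same as   a >> (b >> c)
-- Unit: for an action a :: m (), return () is a do-nothing unit:
--   return () >> a  ==  a  ==  a >> return ()
demo :: IO ()
demo = (putStr "A" >> putStr "B") >> putStr "C"
-- prints "ABC", exactly as  putStr "A" >> (putStr "B" >> putStr "C")  would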

Phil




On Tue, 12 Aug 2003, Bayley, Alistair wrote:

 Date: Tue, 12 Aug 2003 12:10:24 +0100
 From: Bayley, Alistair [EMAIL PROTECTED]
 Subject: RE: Yet Another Monad Tutorial
 
  From: Wolfgang Jeltsch [mailto:[EMAIL PROTECTED]
  
  For example, the function readFile is pure. For a specific 
  string s the 
  expression readFile s always yields the same result: an I/O 
  action which 
  searches for a file named s, reads its content and takes this 
  content as the 
  result of the *action* (not the expression).
 
 What about getChar? This is a function which takes no arguments, yet returns
 a (potentially) different value each time. I know, I know: it returns an IO
 action which reads a single char from the terminal and returns it.
 
 Is the IO monad the only one (currently) in which you would say that it
 returns an action, which is then executed by the runtime system? I would
 have thought that monads that are not involved in IO (e.g. State) would be
 pure in the sense that Van Roy was thinking. You wouldn't need to describe
 functions in them as returning an action.
 
 
 
 ___
 Haskell-Cafe mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 



Phil Molyneux email  [EMAIL PROTECTED]
tel  work 020 8547 2000 x 5233  direct 020 8547 8233  home 020 8549 0045
Kingston Business School room 339 WWW http://www.kingston.ac.uk/~ku00597
Kingston University,  Kingston Hill,  Kingston upon Thames  KT2 7LB,  UK


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


(no subject)

2002-02-09 Thread Phil Haymen

Hi, I have a function, using a list comprehension, to pick
out the head and last elements from a list of lists
and output them into a list without duplicates. It
doesn't work. I want to know what the error is.

function :: [[Int]] -> [Int]
function seg = nub (concat([head s, last s | s <- seg])
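
(For reference, a version that compiles would presumably look like the
sketch below - the comprehension needs to build two-element lists before
concat flattens them, a closing parenthesis is missing, and nub comes from
Data.List:)

import Data.List (nub)

function :: [[Int]] -> [Int]
function seg = nub (concat [ [head s, last s] | s <- seg ])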

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe