Re: [Haskell-cafe] Origins of '$'

2008-12-09 Thread Arnar Birgisson
On Mon, Dec 8, 2008 at 23:10, Dan Piponi [EMAIL PROTECTED] wrote:
 More generally, all of Tarski's high school algebra axioms carry
 over to types. You can see the axioms here:
 http://math.bu.edu/people/kayeats/papers/saga_paper4.ps That proves
 type theory is child's play :-)

Ah, the power of a well chosen notation :)

As for the original question, this message:

http://www.cse.unsw.edu.au/~dons/haskell-1990-2006/msg00675.html

suggests that the application operator was proposed by Kent Karlsson,
though I couldn't find the actual proposal.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Unicode's greek lambda

2008-11-21 Thread Arnar Birgisson
Hi,

On Thu, Nov 20, 2008 at 18:56, Colin Adams
[EMAIL PROTECTED] wrote:
 Another problem is font support.. none of the fonts on my
 system (and I have quite a lot) have these codepoints defined.

 David: That seems surprising. Some of the more exotic math characters
 added in Unicode 3.2, 4 and 5 are only just now getting into fonts
 (such as STIX and Cambria Math), but (almost) any TrueType or Type 1
 font built in the last 10 years or so should have the basic
 Latin-Greek-Cyrillic set, at least.

To clarify - most modern fonts do indeed have Latin-Greek-Cyrillic
(including the U+03BB lambda), but I was referring to the specific
math symbols such as the U+1D6CC bold lam(b)da, which reside in the
Supplementary Multilingual Plane (SMP). Those are indeed not present
in my fonts (including Lucida Math and Adobe Mathematical Pi).

cheers,
Arnar
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Unicode's greek lambda

2008-11-20 Thread Arnar Birgisson
On Thu, Nov 20, 2008 at 12:23, Duncan Coutts
[EMAIL PROTECTED] wrote:
 On Thu, 2008-11-20 at 09:50 +, Jan Jakubuv wrote:
 Some others are (I have no idea why they are referenced as lamda
 instead of lambda):

 Yeah, that's why I missed them. I was searching for what I thought was
 the correct spelling.

Same here. Another problem is font support.. none of the fonts on my
system (and I have quite a lot) have these codepoints defined. One
would have to open them in a font editor and copy the greek lambda.

cheers,
Arnar
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] What *not* to use Haskell for

2008-11-12 Thread Arnar Birgisson
Hi Reinier,

On Wed, Nov 12, 2008 at 14:22, Reinier Lamers [EMAIL PROTECTED] wrote:
 Also, in my experience Haskell is not so good at data structures where
 you can't do structural recursion easily, like graphs. In such cases
 you want a language with easy pointers and destructive updates. You
 can do those things in pure Haskell by using
 the ST monad, but the code will be more verbose than in Java or C++,
 and it will occasionally drive you insane with type messages like the
 above (You thought you could use '$' freely instead of application?
 Wrong!).

Can you give examples of what you mean and why algebraic data types
are not sufficient? In my research most things are structurally
recursive and it was because Haskell is so good at such things that I
started using it.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What *not* to use Haskell for

2008-11-11 Thread Arnar Birgisson
Hi all,

On Tue, Nov 11, 2008 at 11:38, Dave Tapley [EMAIL PROTECTED] wrote:
 Usually I'll avoid the question and explain that it is a 'complete'
 language and we do have more than enough libraries to make it useful and
 productive. But I'd be keen to know if people have any anecdotes,
 ideally ones which can subsequently be twisted into an argument for
 Haskell ;)

I would not use Haskell if I were faced with the prospect of producing
a huge system in a short time (i.e. too big to do by myself) and all I
had was a pool of regular programmers trained in .NET, Java, C++,
Python, or your favorite imperative language.

Yes, sad - but true. I'd very much like to see this twisted to an
argument for Haskell :)

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What *not* to use Haskell for

2008-11-11 Thread Arnar Birgisson
On Tue, Nov 11, 2008 at 14:37, Krasimir Angelov [EMAIL PROTECTED] wrote:
 You can hire one Haskell programmer instead of 1,2,3... programmers in
 your favorite imperative language.

That's assuming I can find a Haskell programmer in the first place.
Also, he/she has to be good enough to replace 10 other programmers.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: What *not* to use Haskell for

2008-11-11 Thread Arnar Birgisson
On Tue, Nov 11, 2008 at 14:46, Henning Thielemann
[EMAIL PROTECTED] wrote:
 SWIG helps wrapping C++ libraries by providing C wrappers to C++ functions.
 However, as far as I know, templates cannot be wrapped as they are, but only
 instances of templates. Thus there is no wrapper to STL.

Maybe my understanding is a bit off, but isn't this to be expected?
There's no way to compile a generic template to machine code, as
template instantiation happens at source level in C++.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What *not* to use Haskell for

2008-11-11 Thread Arnar Birgisson
Hi again,

On Tue, Nov 11, 2008 at 11:38, Dave Tapley [EMAIL PROTECTED] wrote:
 So I should clarify I'm not a troll and do see the Haskell light. But
 one thing I can never answer when preaching to others is what does
 Haskell not do well?

C does extremely well when you want to write low-level exploits, such
as meddling with memory, the machine stack and the like.

This also goes for writing device drivers, media codecs, compression
algorithms, etc., and for applications that are mostly about copying
bits (with some transformations) between various memory locations.

I imagine this kind of thing would be very hard to write in Haskell.

Does that count?

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: No atomic read on MVar?

2008-11-04 Thread Arnar Birgisson
On Mon, Nov 3, 2008 at 23:51, David Menendez [EMAIL PROTECTED] wrote:
 On Mon, Nov 3, 2008 at 6:29 AM, Philip K.F. Hölzenspies
 [EMAIL PROTECTED] wrote:

 I have now implemented my variable as a pair of MVars, one of which serves as
 a lock on the other. Both for performance reasons and for deadlock analysis,
 I would really like an atomic read on MVars, though. Does it exist? If not,
 why not?

 Have you considered using STM? All the operations on TMVars are atomic.

I will second this. At the very least, if you only need atomic
read/write operations you can use TMVars and make aliases that compose
with atomically:

takeMVar = atomically . takeTMVar
putMVar = atomically . putTMVar

etc.
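
For what it's worth, here is a minimal sketch of such a wrapper module
(my own illustration; it assumes the TMVar operations exported by
Control.Concurrent.STM from the stm package, and the module name is
made up):

module AtomicVars where

import Control.Concurrent.STM

-- Aliases that turn the composable TMVar operations into plain,
-- atomic IO actions.
type MVar a = TMVar a

newEmptyMVar :: IO (MVar a)
newEmptyMVar = atomically newEmptyTMVar

takeMVar :: MVar a -> IO a
takeMVar = atomically . takeTMVar

putMVar :: MVar a -> a -> IO ()
putMVar mv = atomically . putTMVar mv

-- readTMVar gives exactly the atomic read asked about in the
-- original question: it reads the value without taking it.
readMVar :: MVar a -> IO a
readMVar = atomically . readTMVar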

cheers,
Arnar
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Terminal-like Application Design

2008-10-16 Thread Arnar Birgisson
Hi Jeff,

On Fri, Oct 17, 2008 at 01:29, Jeff Wheeler [EMAIL PROTECTED] wrote:
 I'm a slight Haskell newbie, but I'm trying to write a terminal-like
 application that accepts simple commands with optional arguments, and can
 then execute them. Most of these commands will need IO, as later I will want
 to communicate over USB for most of them.

Do you mean a terminal application that runs a session, accepting
commands interactively? If so, check out Shellac [1].

[1] http://www.cs.princeton.edu/~rdockins/shellac/home/

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Still stacking monad transformers

2008-10-13 Thread Arnar Birgisson
Hi Andrew,

On Mon, Oct 13, 2008 at 19:58, Andrew Coppin
[EMAIL PROTECTED] wrote:
 Right. OK. So... isn't there a class somewhere called MonadChoice or
 similar, which defines (<|>)?

Just to pitch in a helpful tip, Hoogle is excellent for these kind of
questions (which come up very often):

http://www.haskell.org/hoogle/?q=%3C|%3E

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] synchronous channels in STM

2008-10-09 Thread Arnar Birgisson
On Thu, Oct 9, 2008 at 12:29, roger peppe [EMAIL PROTECTED] wrote:
 It's useful, thanks, but not really what I was originally looking for.
 Synchronous channels are generally easier to reason about (less states
 to deal with).

Right, that's very true. Interaction between transactions is naturally
forbidden by the I in ACId (lowercase d because durability doesn't
really apply to memory transactions), but that does not mean you can't
implement bounded buffers, as you said.
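
As a rough illustration (my own sketch, not from the thread), a bounded
buffer on top of STM can look like this: writers retry when the buffer
is full and readers retry when it is empty, and that retrying is what
stands in for blocking.

import Control.Concurrent.STM

-- Free-slot count plus the items, newest first.
type BBuffer a = TVar (Int, [a])

newBBuffer :: Int -> STM (BBuffer a)
newBBuffer n = newTVar (n, [])

writeBBuffer :: BBuffer a -> a -> STM ()
writeBBuffer b x = do
  (free, xs) <- readTVar b
  if free == 0
    then retry
    else writeTVar b (free - 1, x : xs)

readBBuffer :: BBuffer a -> STM a
readBBuffer b = do
  (free, xs) <- readTVar b
  case reverse xs of
    []       -> retry
    (y : ys) -> do
      writeTVar b (free + 1, reverse ys)
      return y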

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] synchronous channels in STM

2008-10-09 Thread Arnar Birgisson
On Thu, Oct 9, 2008 at 10:50, roger peppe [EMAIL PROTECTED] wrote:
 On Thu, Oct 9, 2008 at 9:15 AM, Ryan Ingram [EMAIL PROTECTED] wrote:
 I don't think what you want is possible if both sides are in STM.
 Other authors have posted solutions where one side or the other of the
 transaction is in I/O, but wholly inside STM it's not possible.

 Thanks, that's what I thought, although I wasn't sure of it, being
 new to both Haskell and STM.

 Presumably this result means that it's not possible to implement
 any bounded-buffer-type interface within (rather than on top of) STM.

 Isn't that a rather serious restriction?

Sorry, I come into this discussion late. One-place buffers, or MVars,
are indeed implemented over STM in the original paper [1]. Is that what
you seek? Copied from the paper, it looks like this:

type MVar a = TVar (Maybe a)

newEmptyMVar :: STM (MVar a)
newEmptyMVar = newTVar Nothing

takeMVar :: MVar a -> STM a
takeMVar mv = do
  v <- readTVar mv
  case v of
    Nothing  -> retry
    Just val -> do
      writeTVar mv Nothing
      return val

putMVar :: MVar a -> a -> STM ()
putMVar mv val = do
  v <- readTVar mv
  case v of
    Nothing  -> writeTVar mv (Just val)
    Just _   -> retry

Again, sorry if I'm missing your point. Note that transactions cannot
block the way threads do: when they retry they do block, but when they
are unblocked they are re-run from the top. This is a natural
consequence of the requirement that a transaction must not be affected
by something that happens concurrently.

[1] also goes on to implement buffered, multi-item, multicast channels.

[1] Tim Harris, Simon Marlow, Simon Peyton Jones, Maurice Herlihy.
Composable Memory Transactions.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] synchronous channels in STM

2008-10-09 Thread Arnar Birgisson
Hi there,

2008/10/9 David Leimbach [EMAIL PROTECTED]:
 see writeTChan and readTChan.  I assume readTChan is synchronous :-).
  writeTChan may be asynchronous for all I can tell (haven't looked deeply).

writeTChan is asynchronous, i.e. channels in this case are unbounded buffers.

 But I did write a concurrent prime sieve with it:

I did the same with the one-place buffers (the MVars implemented over
STM). Be warned that there is no stop condition; this just keeps
printing primes forever.

import Control.Concurrent (forkIO)
import Control.Concurrent.STM

-- MVars from the STM paper
type MVar a = TVar (Maybe a)

newEmptyMVar :: STM (MVar a)
newEmptyMVar = newTVar Nothing

takeMVar :: MVar a -> STM a
takeMVar mv = do
  v <- readTVar mv
  case v of
    Nothing  -> retry
    Just val -> do
      writeTVar mv Nothing
      return val

putMVar :: MVar a -> a -> STM ()
putMVar mv val = do
  v <- readTVar mv
  case v of
    Nothing  -> writeTVar mv (Just val)
    Just _   -> retry

-- Sieve
forever a = do a; forever a

pfilter :: Int -> MVar Int -> MVar Int -> IO ()
pfilter p in_ out =
  forever $ atomically $ do
    v <- takeMVar in_
    if v `mod` p /= 0
      then putMVar out v
      else return ()

sieve :: MVar Int -> MVar Int -> IO ()
sieve in_ out = do
  p <- atomically $ takeMVar in_
  atomically $ putMVar out p
  ch <- atomically newEmptyMVar
  forkIO $ pfilter p in_ ch
  sieve ch out

feeder :: MVar Int -> IO ()
feeder out = feed' 2
  where
    feed' i = do
      atomically $ putMVar out i
      feed' (i+1)

printer :: MVar Int -> IO ()
printer in_ = forever $ do
  v <- atomically $ takeMVar in_
  putStrLn $ show v

main :: IO ()
main = do
  in_ <- atomically newEmptyMVar
  out <- atomically newEmptyMVar
  forkIO $ feeder in_
  forkIO $ printer out
  forkIO $ sieve in_ out
  return ()


cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] synchronous channels in STM

2008-10-09 Thread Arnar Birgisson
On Thu, Oct 9, 2008 at 18:10, Arnar Birgisson [EMAIL PROTECTED] wrote:
 But I did write a concurrent prime sieve with it:

 I did the same, with the one-place-buffers (the MVars implemented over
 STM). Be warned that there is no stop condition, this just keeps
 printing primes forever.

Please forgive me for reposting, but the last one exited quite prematurely :)


module Main where

import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import System (getArgs)

-- MVars from the STM paper
type MVar a = TVar (Maybe a)

newEmptyMVar :: STM (MVar a)
newEmptyMVar = newTVar Nothing

takeMVar :: MVar a -> STM a
takeMVar mv = do
  v <- readTVar mv
  case v of
    Nothing  -> retry
    Just val -> do
      writeTVar mv Nothing
      return val

putMVar :: MVar a -> a -> STM ()
putMVar mv val = do
  v <- readTVar mv
  case v of
    Nothing  -> writeTVar mv (Just val)
    Just _   -> retry

-- Sieve
forever a = do a; forever a

pfilter :: Int -> MVar Int -> MVar Int -> IO ()
pfilter p in_ out =
  forever $ atomically $ do
    v <- takeMVar in_
    if v `mod` p /= 0
      then putMVar out v
      else return ()

sieve :: MVar Int -> MVar Int -> IO ()
sieve in_ out = do
  p <- atomically $ takeMVar in_
  atomically $ putMVar out p
  ch <- atomically newEmptyMVar
  forkIO $ pfilter p in_ ch
  sieve ch out

feeder :: MVar Int -> IO ()
feeder out = feed' 2
  where
    feed' i = do
      atomically $ putMVar out i
      feed' (i+1)

printer :: MVar () -> MVar Int -> Int -> IO ()
printer stop in_ max = do
  v <- atomically $ takeMVar in_
  putStrLn $ show v
  if v > max
    then atomically $ putMVar stop ()
    else printer stop in_ max

main :: IO ()
main = do
  max:_ <- getArgs
  in_ <- atomically newEmptyMVar
  out <- atomically newEmptyMVar
  stop <- atomically newEmptyMVar
  forkIO $ feeder in_
  forkIO $ printer stop out (read max)
  forkIO $ sieve in_ out
  atomically $ takeMVar stop
  return ()
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a really juvenile question .. hehehehe ;^)

2008-10-06 Thread Arnar Birgisson
On Mon, Oct 6, 2008 at 14:58, Cale Gibbard [EMAIL PROTECTED] wrote:
 2008/10/6 Don Stewart [EMAIL PROTECTED]:
 dagit:
data and newtype vary in one more subtle way, and that's how/when they
evaluate to bottom.  Most of the time they behave identically, but in the
right cases they act slightly differently.  newtype is usually regarded as
more efficient than data.  This is because the compiler can choose to
optimize away the newtype so that it only exists at type check time.  I
think this is also possible with data in some, but not all, uses.

 The compiler *must* optimise away the use. They're sort of 'virtual'
 data, guaranteed to have no runtime cost.

 I'm not sure that I'd want to be that emphatic about what an
 implementation *must* do regarding something so operational.

 The informal semantics of pattern matching in the Report says:

 Matching the pattern con pat against a value, where con is a
 constructor defined by newtype, depends on the value:
* If the value is of the form con v, then pat is matched against v.
* If the value is _|_, then pat is matched against _|_.
 That is, constructors associated with newtype serve only to change the
 type of a value.

 This clearly has an implementation which introduces no overhead, which
 one can expect from good implementations of the language. There are
 obviously implementations of these semantics which do introduce
 overhead as well though, so I would be hesitant to make any
 requirement like that.

And this requirement is there why? Is it specifically put in so that
one is able to create this overhead-less implementation?

Given:

data A = A Int
newtype B = B Int

ta (A x) = True
tb (B x) = True

This happens (not surprisingly given your above comments):

*Main GOA> :load test.hs
[1 of 1] Compiling Main ( test.hs, interpreted )
Ok, modules loaded: Main.
*Main GOA> ta undefined
*** Exception: Prelude.undefined
*Main GOA> tb undefined
True

Why is the x evaluated in ta?

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a really juvenile question .. hehehehe ;^)

2008-10-06 Thread Arnar Birgisson
Hi,

On Mon, Oct 6, 2008 at 16:10, Ryan Ingram [EMAIL PROTECTED] wrote:
 On Mon, Oct 6, 2008 at 2:19 PM, Arnar Birgisson [EMAIL PROTECTED] wrote:
 And this requirement is there why? Is it specifically put in so that
 one is able to create this overhead-less implementation?

 Given:

 data A = A Int
 newtype B = B Int

 ta (A x) = True
 tb (B x) = True

 This happens (not surprisingly given your above comments):

 *Main GOA> :load test.hs
 [1 of 1] Compiling Main ( test.hs, interpreted )
 Ok, modules loaded: Main.
 *Main GOA> ta undefined
 *** Exception: Prelude.undefined
 *Main GOA> tb undefined
 True

 Why is the x evaluated in ta?

 x isn't evaluated.

Yes, realized my error just after hitting send :/

 undefined is evaluated to see if it matches the
 constructor A.  But we don't even get to check, because undefined
 throws an exception during its evaluation.

 In the tb case, (B x) always matches because B is a newtype.  x gets
 bound to undefined, but never evaluated.

And this happens because data values are basically pattern matched at
run-time but newtype values are matched at compile-time, effectively
turning tb into an Int -> Bool function?

That explains pretty well why newtype can have only one constructor.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a really juvenile question .. hehehehe ;^)

2008-10-06 Thread Arnar Birgisson
On Mon, Oct 6, 2008 at 18:48, Ryan Ingram [EMAIL PROTECTED] wrote:
 Also, you can get the same behavior out of ta if you write it like this:

 ta ~(A x) = True

 The translation looks something like this:

 f ~(A x) = e
  =>
 f a = e
  where x = case a of (A v) -> v  -- a,v fresh variables not mentioned in e

 (or, equivalently)

 f a = let (A x) = a in e -- a some fresh variable not mentioned in e

 This delays the pattern-matching (and thus, the evaluation of a)
 lazily until x is demanded, at which point a might throw an
 exception or infinite loop, or, if the type has more than one
 constructor, fail to pattern match (which also throws an exception).
 If x is never demanded then neither is a.

Ah, that's pretty neat and subtle. Now, say I have a type created with
data that has only one constructor. Couldn't the compiler optimize
away this unnecessary evaluation during pattern matching? I.e. it
would make what you just wrote implicit in that case. Or perhaps data
declarations with just one ctor should really be turned into newtypes
by the programmer?

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shooting your self in the foot with Haskell

2008-10-02 Thread Arnar Birgisson
On Thu, Oct 2, 2008 at 00:39, Bill [EMAIL PROTECTED] wrote:
 On Wed, 2008-10-01 at 16:46 -0400, John Van Enk wrote:
   . . .
 I fully realize how un-clever this is. Someone please give me
 something more worthy of the original list. :)

 You shoot the gun but nothing happens (Haskell is pure, after all).

Realizing that I pressed reply instead of reply to all yesterday,
here's my message again (sorry Bill).

A little refinement: You shoot in the direction of your foot, but
since you are inside the STM monad you can just retry until you figure
out what to do.

Alternatively:

You shoot yourself in the foot, but you are perfectly fine as long as
you just don't evaluate the foot.

Same, but more subtle:

You shoot yourself in the foot, but nothing happens unless you start walking.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shooting your self in the foot with Haskell

2008-10-02 Thread Arnar Birgisson
On Thu, Oct 2, 2008 at 17:11, Albert Y. C. Lai [EMAIL PROTECTED] wrote:
 Don't forget about memory consumption!

 If you don't look, the bullet causes heap overflow. If you look, the bullet
 causes stack overflow.

So it is a Heisenberg-bullet?

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: (A little humour)

2008-09-26 Thread Arnar Birgisson
On Fri, Sep 26, 2008 at 22:03, Achim Schneider [EMAIL PROTECTED] wrote:
 Instead, it was decided to by default limit identifiers to a single
 character

 The Wisdom of it! Preparing C++ programmers for Haskell's limitation!

Oh no, you are forgetting the primes, x', y'  :P

I liked the bit where the physicists kept writing xy even though the
problem had been explained multiple times. Those pesky physicists..

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python's big challenges, Haskell's big advantages?

2008-09-17 Thread Arnar Birgisson
Hi Manlio and others,

On Wed, Sep 17, 2008 at 14:58, Manlio Perillo [EMAIL PROTECTED] wrote:
 http://www.heise-online.co.uk/open/Shuttleworth-Python-needs-to-focus-on-future--/news/111534

 cloud computing, transactional memory and future multicore processors


 Multicore support is already supported in Python, if you use
 multiprocessing, instead of multithreading.

Well, I'm a huge Python fan myself, but multiprocessing is not really
a solution as much as it is a workaround. Python as a language has no
problem with multithreading and multicore support and has all the
primitives to do conventional shared-state parallelism. However, the
most popular /implementation/ of Python sacrifices this for
performance; it has nothing to do with the language itself.

Stackless Python is an interesting implementation of the CSP+channels
paradigm though. It has been quite successfully used for a few large
projects.

 And scalability is not a real problem, if you write RESTful web
 applications.

Of course scalability is a real problem, ask anyone who runs a big
website. I don't see how RESTful design simply removes that problem.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python's big challenges, Haskell's big advantages?

2008-09-17 Thread Arnar Birgisson
Hi Bruce,

On Wed, Sep 17, 2008 at 15:03, Bruce Eckel [EMAIL PROTECTED] wrote:
 Multicore support is already supported in Python, if you use
 multiprocessing, instead of multithreading.

 This is one of the reasons for my other question on this list, about
 whether you can solve all problems using multiple isolated processes
 with message passing.

Well, processing (the Python module) serves as a good example of how
shared memory can be emulated through message passing. Of course,
performance takes a hit, but since it is out there and being used by
people, it should tell us that this is really a feasible solution.

I guess the gist of the answers people posted to your question still
remains: it depends on whether you consider performance as part of the
power of each approach.

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python's big challenges, Haskell's big advantages?

2008-09-17 Thread Arnar Birgisson
Hi again,

On Wed, Sep 17, 2008 at 15:13, Bruce Eckel [EMAIL PROTECTED] wrote:
 Well, I'm a huge Python fan myself, but multiprocessing is not really
 a solution as much as it is a workaround. Python as a language has no
 problem with multithreading and multicore support and has all
 primitives to do conventional shared-state parallelism. However, the
 most popular /implementation/ of Python sacrifies this for
 performance, it has nothing to do with the language itself.

 Actually, no. Neither Python nor Ruby can utilize more than a single
 processor using threads. The only way to use more than one processor
 is with processes.

I wanted to make a distinction between the language and its
implementation. I think you are conflating the two.

If you read the Python specification there is nothing preventing you
from running on two cores in parallel. The standard library does
indeed have semaphores, monitors, locks etc. In fact, I'm pretty sure
the Jython implementation can use multiple cores. It is just CPython
that can't, as is very well known and advertised.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Python's big challenges, Haskell's big advantages?

2008-09-17 Thread Arnar Birgisson
Hi Aaron,

On Wed, Sep 17, 2008 at 23:20, Aaron Denney [EMAIL PROTECTED] wrote:
 I entered the discussion as which model is a workaround for the other --
 someone said processes were a workaround for the lack of good threading
 in e.g. standard CPython.  I replied that most languages thread support can be
 seen as a workaround for the poor performance of communicating processes.
 (creation in particular is usually cited, but that cost can often be reduced
 by process pools, context switching costs, alas, is harder.)

That someone was probably me, but this is not what I meant. I meant
that the processing [1] Python module is a workaround for CPython's
performance problems with threads. For those who don't know it, the
processing module exposes a nearly identical interface to the standard
threading module in Python, but runs each thread in a separate OS
process. The processing module emulates shared memory between these
threads as well as locking primitives and blocking. That is what I
meant when I said processing (the module) was a workaround for
CPython's threading issues.

[1] http://www.python.org/dev/peps/pep-0371/

The processes vs. threads question depends on definitions. There seem
to be two sets of definitions floating around here. One is that
processes and threads are
essentially the same, the only difference being that processes don't
share memory but threads do. With this view it doesn't really matter
if processes are implemented as proper OS processes or OS threads.
Discussion based on this definition can be interesting and one model
fits some problems better than the other and vice versa.

The other one is the systems view of OS processes vs. OS threads.
Discussion about the difference between these two is only mildly
interesting imo, as I think most people agree on things here and they
are well covered in textbooks that are old as dirt.

 The central aspect in my mind is a default share-everything, or
 default share-nothing.

[..snip...]
 These are, in fact, process models.  They are implemented on top of thread 
 models,
 but that's a performance hack.  And while putting this model on top
 restores much of the programming sanity, in languages with mutable
 variables and references that can be passed, you still need a fair
 bit of discipline to keep that sanity.  There, the implementation detail
 of thread, rather than process allows and even encourages shortcuts that
 violate the process model.

Well, this is a viewpoint I don't totally agree with. Correct me if
I'm not understanding you, but you seem to be making the point that OS
processes are often preferred because with threads, you *can* get
yourself in trouble by using shared memory.

The thing I don't agree with is "let's use A because B has dangerous
features". This is sort of like the design mantra of languages like
Java. Now, you may say that Java has indeed been wildly successful,
but I think (or hope) that that is because we don't give people
(programmers) enough credit. Literature, culture and training in the
current practice of programming could do well, IMO, to produce fewer,
_good_ programmers rather than a lot of mediocre ones. And _good_
programmers don't need to be handcuffed just because otherwise they
*could* poke themselves in the eye.

I.e. if you need to sacrifice the efficiency of threads for full-blown
OS processes because people can't stay away from shared memory, then
something is fundamentally wrong.

I'll stop here, this is starting to sound like a very OT rant.

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] STM and FFI

2008-09-09 Thread Arnar Birgisson
On Tue, Sep 9, 2008 at 11:36, Jules Bean [EMAIL PROTECTED] wrote:
 ...not only must it be safe to be called with invalid inputs, but it must
 not have any long-term effects, whether the input is valid or invalid, since
 I do not believe that there is any way for the function to 'undo' its effect
 at 'retry' time.

Maybe this is an idea for an extension to the STM system, adding
something like unsafeIOToSTM, except that in addition to the main IO
action, it also takes two more IO actions that are invoked on rollback
and commit, respectively.

This might allow for integration with transactional systems (e.g. a
remote transaction on an RDBMS), although to support two-phase commit
we'd need a third action for the prepare step.
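
Very roughly, the type of such a combinator might look like the sketch
below. The name and signature are made up for illustration; nothing
like it exists in the current STM API, a real version would need
runtime support, and two-phase commit would add a fourth (prepare)
action:

import Control.Concurrent.STM (STM)

-- Hypothetical combinator: run an IO action inside a transaction and
-- register callbacks that the runtime would invoke if the transaction
-- rolls back or commits. The body is left undefined; this is only a
-- type sketch.
unsafeIOToSTMWith :: IO a    -- main action
                  -> IO ()   -- run on rollback
                  -> IO ()   -- run on commit
                  -> STM a
unsafeIOToSTMWith act onRollback onCommit = undefined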

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] STM and FFI

2008-09-09 Thread Arnar Birgisson
On Tue, Sep 9, 2008 at 11:58, Jules Bean [EMAIL PROTECTED] wrote:
 Maybe this is an idea for an extension to the STM system, adding
 something like unsafeIOToSTM, except that in addition to the main IO
 action, it also takes two more IO actions that are invoked on rollback
 and commit, respectively.

 This might allow for integration with transactional systems (e.g. a
 remote transaction on an rdbms), although to support two-phased commit
 we'd need a third action for the prepare step.

 That would be an absolutely killer feature.

 A common problem in large systems is that the underlying RDBMS supports
 transactionality, but then the software layer has to handle its own
 rollbacks. I've seen some nasty bugs when the DB rolled back and the
 software didn't.

 If we could have a transactional RDBMS linked into STM with matching
 semantics, that would be a very nice thing.

I think this is entirely doable. For comparison, we have already done
this with another STM framework, the DSTM2 library for Java. I.e. we
hooked into prepare, commit and rollback and integrated with both
MySQL transactions and a transactional file system library from Apache
Commons.

I'm not yet involved enough with the GHC library code, but I guess
this would require the addition of a prepare phase to the STM code.

There's also the question of what to do when the remote TX system
indicates failure: should the transaction be retried or aborted? In
the DSTM2 case we make it abort and throw an exception encapsulating
the remote error to the code that initiated the TX (in Haskell's case,
the caller of atomically).

On a related note, we do have a paper on utilizing the STM system for
authorization and policy enforcement in general. The paper is to be
presented at CCS'08, and has an implementation on top of DSTM2, but we
have a technical report in the works that implements this on top of
the Haskell STM and gives operational semantics for the whole thing.

You can find the conference paper on my website:
http://www.hvergi.net/arnar/publications

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] STM and FFI

2008-09-09 Thread Arnar Birgisson
On Tue, Sep 9, 2008 at 13:58, Sterling Clover [EMAIL PROTECTED] wrote:
 I've been playing with this, and on top of STM as it exists, managed to
 neatly interleave it with sqite3 and postgres. To do so with postgres,
 however, required setting the locking mode to be a bit more restrictive than
 it is out-of-the-box. Clever use of encapsulation and monad transformers
 gets you 90% of the way there quite easily. Note, however, that
 unsafeIOToSTM is *much* more unsafe at the moment than you would expect --
 in fact there is no safe way to use it at all, due to the interaction of
 exceptions and rollbacks at the moment. The thread about this on
 glasgow-haskell-users[1], along with my initial note, has a very useful
 reply by Simon Marlow where he both explains some things about the STM
 implementation and logic behind it that I didn't understand, and also
 describes how the GHC team intends to fix this at some point in the future.

This is very interesting, do you have any code to release?

Thanks for the ghu link, registering for that ML now :)

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can you do everything without shared-memory concurrency?

2008-09-08 Thread Arnar Birgisson
Hi Bruce,

On Mon, Sep 8, 2008 at 21:33, Bruce Eckel [EMAIL PROTECTED] wrote:
 I know that both Haskell and Erlang only allow separated memory spaces
 with message passing between processes, and they seem to be able to
 solve a large range of problems -- but are there problems that they
 cannot solve?

Modern Haskell has shared memory variables, so the statement
"[Haskell] only allows separated memory spaces..." is not true in
practice. In fact, Haskell probably has the (semantically) cleanest
and best implementation of STM (Software Transactional Memory) there
is imho, which removes most of the headaches of shared memory based
concurrency without sacrificing shared memory itself.

As for the question "Is there something that the Actor model cannot do
but you can with shared memory?", I'd say the answer is probably no.
After all, you could just simulate shared memory by having one actor
manage all shared state.
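
To make the simulation argument concrete, here is a minimal sketch (my
own illustration, with made-up names) of a single "memory" actor that
owns a Data.Map and serves read/write requests sent over a channel, so
no other thread ever touches the state directly:

import Control.Concurrent.Chan
import Control.Concurrent.MVar
import qualified Data.Map as M

-- Requests understood by the memory actor.
data Req k v = Write k v | Read k (MVar (Maybe v))

-- The actor owns the map; other threads only send it messages.
memoryActor :: Ord k => Chan (Req k v) -> IO ()
memoryActor ch = go M.empty
  where
    go m = do
      req <- readChan ch
      case req of
        Write k v    -> go (M.insert k v m)
        Read k reply -> do
          putMVar reply (M.lookup k m)
          go m

-- A client "reads shared memory" by sending a request and waiting:
--   reply <- newEmptyMVar
--   writeChan ch (Read "x" reply)
--   v <- takeMVar reply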

 I recently listened to an interview with Simon
 Peyton-Jones where he seemed to suggest that this newsgroup might be a
 helpful place to answer such questions. Thanks for any insights -- it
 would be especially useful if I can point to some kind of proof one
 way or another.

I may be completely missing your point, and if so I apologize, but
does the simulation argument above suffice as a proof?

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What's the best way to give up?

2008-09-06 Thread Arnar Birgisson
Hi there,

On Sat, Sep 6, 2008 at 13:30, David F. Place [EMAIL PROTECTED] wrote:
 If there is a solution, it finds it in a few seconds.  If there is no
 solution, it goes away for days proving that.  So, I'd like to give up on it
 if it doesn't return in a few seconds.   I can think of several ways of
 doing that.  I could keep a tally of the number of variable assignments and
 give up when it reaches an impossibly huge number.   I could change the type
 to
 a -> IO (Either a a) and use getCPUTime.

 Is there a standard way to do this?  Can you think of another way to do it?

I don't know, but this seems relevant:
http://www.haskell.org/pipermail/haskell-cafe/2005-October/011946.html

I'd make a generic (i.e. higher-order) function that handles the
timeout for any computation, i.e.

timeout :: (a -> b) -> Int -> a -> IO (Maybe b)

or similar.
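
A rough sketch of one way such a function could be implemented (my own
illustration; the timeout is in microseconds as with threadDelay, and
evaluate only forces the result to weak head normal form):

import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Concurrent.MVar
import Control.Exception (evaluate)

timeout :: (a -> b) -> Int -> a -> IO (Maybe b)
timeout f micros x = do
  result <- newEmptyMVar
  -- Race the computation against a timer; whichever finishes first
  -- fills the MVar.
  worker <- forkIO $ evaluate (f x) >>= putMVar result . Just
  timer  <- forkIO $ threadDelay micros >> putMVar result Nothing
  r <- takeMVar result
  mapM_ killThread [worker, timer]
  return r

One caveat: with GHC, a tight non-allocating loop may not respond
promptly to killThread, so the losing worker is not guaranteed to die
immediately.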

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Compiler Construction course using Haskell?

2008-08-22 Thread Arnar Birgisson
On Fri, Aug 22, 2008 at 02:46, Norman Ramsey [EMAIL PROTECTED] wrote:
 In Fall 2006 I gave a graduate course in advanced functional
 programming in which the default project was a compiler from a
 functional language of the student's own design to the 2D circuit
 language invented by the Cult of the Bound Variable.  The project was
 primarily an excuse to read papers about parsing combinators,
 polymorphic typed defunctionalization, A-Normal Form, generics for the
 masses, linear types, and so on, and to implement all that stuff in Haskell.

As a student, I find this very interesting. Would you happen to have a
syllabus or other course material (list of papers, assignments, etc.)
available online?

cheers,
Arnar
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] doctest for haskell -- a good project?

2008-03-23 Thread Arnar Birgisson
On Sun, Mar 23, 2008 at 5:09 PM, Marc Weber [EMAIL PROTECTED] wrote:
  Adding documentation facilities to ghci is nice,
  however my experience is that documentation is not complete everywhere.
  That's why I'm looking at source code directly (thus having doc strings if 
 given
  else I can have look at source directly).

This reminds me of a very handy feature in ipython (a custom python
repl) - you can type

name?

which prints the docstring associated with name (which can be a
function, module, class etc.) - but if you type

name??

you get the docstring _and_ the source code (colorized and all :). I
use it a lot when I'm experimenting in an interactive session. I could
see something similar for Haskell being very useful, to me at least.

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] doctest for haskell -- a good project?

2008-03-22 Thread Arnar Birgisson
Hey Don,

On Sat, Mar 22, 2008 at 8:39 AM, Don Stewart [EMAIL PROTECTED] wrote:
  I'm not sure how doctest works, or how it would work in a Haskell
  setting, could you elaborate?

In a nutshell, Python doctest has the programmer put an example interactive
session in a function's docstring. The doctest module then extracts those,
tries running the function on the inputs, and checks whether the results
match the expected output. Best shown by an example:

def adder(a, b):
    """Returns the sum of the two arguments.

    >>> adder(10, 10)
    20
    >>> adder("a", "b")
    'ab'
    >>> adder(10, "100")
    Traceback (most recent call last):
      File "test.py", line 6, in adder
        return a+b
    TypeError: unsupported operand type(s) for +: 'int' and 'str'
    """
    return a+b

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # Test this module


Running this script with
$ python test.py -v
gives the following output

Trying:
    adder(10, 10)
Expecting:
    20
ok
Trying:
    adder("a", "b")
Expecting:
    'ab'
ok
Trying:
    adder(10, "100")
Expecting:
    Traceback (most recent call last):
      File "test.py", line 6, in adder
        return a+b
    TypeError: unsupported operand type(s) for +: 'int' and 'str'
ok
1 items had no tests:
    __main__
1 items passed all tests:
   3 tests in __main__.adder
3 tests in 2 items.
3 passed and 0 failed.
Test passed.

I.e. it extracted the test cases from the sample interactive session
and compared them against the real results.

  One idea that does strike me is that it would be super useful to have
  the ability in ghci to extract the haddocks associated with a function.

   :doc map

  would result in:

  -- | 'map' @f xs@ is the list obtained by applying @f@ to each element
  -- of @xs@, i.e.,
  --
  -- > map f [x1, x2, ..., xn] == [f x1, f x2, ..., f xn]
  -- > map f [x1, x2, ...] == [f x1, f x2, ...]

  marked up in ascii.

  I'm not sure if that's related to doctest, but it sure would be useful!

Python does exactly that and it is very very useful. I'd love to see
that in GHCi.

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Analysing Haskell with Graph Theory

2008-03-07 Thread Arnar Birgisson
On Fri, Mar 7, 2008 at 1:45 AM, Ivan Miljenovic
[EMAIL PROTECTED] wrote:
  Can anyone think of any other kind of functions that would be useful
  in this kind of source code analysis?

Will you be considering parallel programs? Also, perhaps some
information flow analysis would be interesting.

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Analysing Haskell with Graph Theory

2008-03-07 Thread Arnar Birgisson
On Fri, Mar 7, 2008 at 12:42 PM, Ivan Miljenovic
[EMAIL PROTECTED] wrote:
 On 07/03/2008, Arnar Birgisson [EMAIL PROTECTED] wrote:
   Will you be considering parallel programs? Also, perhaps some
information flow analysis would be interesting.

  What do you mean by parallel programs? The parallelism hints used by
  ghc?  In that case, I'll be supporting whatever the parser I can find
  supports :p

I mean parallel programs in general. For example, you mention that
preferably one would want main to be the only function that has no
incoming edges. With parallel programs that may not be the case. This
ties into also considering reactive programs (i.e. programs whose main
function is of the form while (1) { ... }). I'm not quite sure if it
fits the project you describe, but looking for certain properties like
possible deadlocks or livelocks in such programs is something of
interest.

There is also a security angle to this. Consider two agents, one
representing an ATM and one representing the branch office of a bank.
Analysing the possible information paths between the two can
help you define a clear interface between the two (assuming one
doesn't exist already). Having such an interface rigorously defined
helps with maintaining security properties.

Also related to considering parallel programs are coroutines.
Analysing the call graph of systems of coroutines might be
interesting as well.

I'm just thinking out loud though, some of these might not be
interesting at all :)

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A question about monad laws

2008-02-11 Thread Arnar Birgisson
Hi all,

On Feb 11, 2008 3:14 PM, apfelmus [EMAIL PROTECTED] wrote:
 I will be mean by asking the following counter question:

x + (y + z) = (x + y) + z

 is a mathematical identity. If it is a mathematical identity, a
 programmer need not care about this law to implement addition + . Can
 anyone give me an example implementation of addition that violates this law?

Depends on what you mean by addition. In general, algebraists call
any associative and commutative operation on a set addition, and
nothing else. From that POV, there is by definition no addition that
violates this law.

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] anybody can tell me the pronuncation of haskell?

2008-01-29 Thread Arnar Birgisson
On Jan 29, 2008 1:45 PM, Yitzchak Gale [EMAIL PROTECTED] wrote:
 Paul Hudak wrote:
   Well, Haskell was Curry's first name, so perhaps we should use Moses,
   which was Schönfinkel's first name, and has some nice biblical metaphors
  :-)

 Haskell is fine for that. In Biblical Hebrew, it means enlightenment
 or insight.

With slightly less dignity, Haskell in Icelandic sounds close to
rasskell - which in turn means a spanking. :)

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] let vs. where

2007-11-16 Thread Arnar Birgisson
On Nov 16, 2007 12:26 AM, Lennart Augustsson [EMAIL PROTECTED] wrote:
 On Nov 14, 2007 1:05 AM, Robin Green [EMAIL PROTECTED] wrote:
 
  On Tue, 13 Nov 2007 13:51:13 -0800
  Dan Piponi [EMAIL PROTECTED] wrote:
 
   Up until yesterday I had presumed that guards only applied to
   functions. But I was poking about in the Random module and discovered
   that you can write things like
  
    a | x > 1 = 1
      | x < -1 = -1
      | otherwise = x
  
   where 'a' clearly isn't a function.
 
  Isn't it a function taking zero arguments?

 No, Haskell functions take exactly one argument.

Depends on who you ask. From [1]:

<quote>
taxRate = 0.06

total cart = subtotal + tax
  where
    subtotal = sum cart
    taxable  = filter isTaxable cart
    tax = (sum taxable) * taxRate

This example defines two functions, taxRate, which returns a constant
value, and total, which computes the total cost of the list of items
in a shopping cart. (Although the taxRate definition appears to be
defining a variable, it's best to think of it as a constant function,
a function that takes no parameters and always returns the same
value.) The definition of total is quite expressive, and highlights
the intent of the function, by isolating and naming important
sub-expressions in the computation. (total also refers to an isTaxable
function, not presented here.)
</quote>

Technically, all Haskell functions may take exactly one parameter -
but focusing only on semantics, I guess there's really nothing wrong
with considering constants as parameterless functions, is there?
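
A tiny illustration of that one-argument view (my own example, not from
the quoted article):

add :: Int -> Int -> Int    -- the same type as Int -> (Int -> Int)
add x y = x + y

addTwo :: Int -> Int
addTwo = add 2              -- applying add to one argument yields a function

taxRate :: Double
taxRate = 0.06              -- a plain value; no arguments involved at all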

[1]

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] let vs. where

2007-11-15 Thread Arnar Birgisson
On Nov 16, 2007 12:35 AM, Arnar Birgisson [EMAIL PROTECTED] wrote:
 [1]

I'm terribly sorry, that was meant to be:

[1] 
http://www.onlamp.com/pub/a/onlamp/2007/07/12/introduction-to-haskell-pure-functions.html

sorry,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Semantics of uniqueness types for IO (Was: Why can't Haskell be faster?)

2007-11-01 Thread Arnar Birgisson
Hi there,

I'm new here, so sorry if I'm stating the obvious.

On Nov 1, 2007 2:46 PM, apfelmus [EMAIL PROTECTED] wrote:
 Stefan Holdermans wrote:
  Exposing uniqueness types is, in that sense, just an alternative
  to monadic encapsulation of side effects.

 While  *World -> (a, *World)  seems to work in practice, I wonder what
 its (denotational) semantics are. I mean, the two programs

loop, loop' :: *World -> ((),*World)

loop  w = loop w
loop' w = let (_,w') = print x w in loop' w'

 both have denotation _|_ but are clearly different in terms of side
 effects. (The example is from SPJs awkward-squad tutorial.) Any pointers?

For side-effecting programs one has to consider bisimilarity. It's
common that semantically equivalent programs (under operational or
denotational semantics) behave differently with regard to side effects
(I/O), i.e. they are not bisimilar.

http://en.wikipedia.org/wiki/Bisimulation

Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Disjunctive Normal Form

2007-11-01 Thread Arnar Birgisson
Hey folks,

On Nov 1, 2007 6:41 PM, Luke Palmer [EMAIL PROTECTED] wrote:
 A good way to approach this is data-structure-driven programming.  You
 want a data structure which represents, and can _only_ represent,
 propositions in DNF.  So:

 data Term = Pos Var | Neg Var
 type Conj = [Term]
 type DNF  = [Conj]

 Then write:

 dnf :: LS -> DNF

 The inductive definition of dnf is straightforward given this output type...

Jim, if you don't want to see an attempt at a solution, don't read further.

I'm learning too and found this an interesting problem. Luke, is this
similar to what you meant?

data LS = Var String | Not LS | And LS LS | Or LS LS deriving Show

data Term = Pos String | Neg String deriving Show
type Conj = [Term]
type DNF = [Conj]

dnf :: LS -> DNF
dnf (Var s) = [[Pos s]]
dnf (Or l1 l2) = (dnf l1) ++ (dnf l2)
dnf (And l1 l2) = [t1 ++ t2 | t1 <- dnf l1, t2 <- dnf l2]
dnf (Not (Not d)) = dnf d
dnf (Not (And l1 l2)) = (dnf $ Not l1) ++ (dnf $ Not l2)
dnf (Not (Or l1 l2)) = [t1 ++ t2 | t1 <- dnf $ Not l1, t2 <- dnf $ Not l2]
dnf (Not (Var s)) = [[Neg s]]

-- test cases
x = (Or (And (Var "A") (Var "B")) (Or (And (Not $ Var "C") (Var "D"))
                                       (Var "E")))
y = (Not (And (Var "A") (Var "B")))
z = (Not (And (Not y) y))


 Also, is it correct?

cheers,
Arnar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe