The goals of the concurrency standard?

2006-04-11 Thread oleg

John Meacham wrote
> [Rule 1]
> * in a cooperative implementation of threading, any thread with value
>   _|_ may cause the whole program to have value _|_. In a preemptive one,
>   this is not true.

> would the simple qualifier
> 'if there exists another runnable thread'
> solve the issues?

  if there exists another runnable thread of at least the same
priority.


I am afraid I must ask for clarification of the goals of this
discrimination between pre-emptive and cooperative scheduling -- which
leads me to further questions.

On the one hand, I understand the motivation: providing guidance to the
end programmer. In a cooperative thread system, the programmer just
can't write code imagining the current thread is the only one. If the
thread is to compute the Nth Mersenne prime, it would effectively hang
the whole system. So, the user must specifically insert calls to yield
-- which means that the whole code has to be converted into a monadic
style, because yield is most certainly a monadic operation _and_
because we must be sure that computations before yield are indeed
finished, rather than snowball up to the very end. So, we have to
re-write pure functional code (like the Mersenne prime computation)
into a monadic one, and thus explicitly serialize it. That negates one
of the significant benefits of Haskell -- the code no longer looks like
a specification or mathematical notation. If we're forced to write
everything in A-normal form, we might as well use a better language for
that, like SML.
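
For concreteness, here is a minimal sketch of the kind of rewrite a
cooperative system forces on us (sumTo/sumToCoop are illustrative names,
not from any library; yield and evaluate are GHC's primitives):

import Control.Concurrent (yield)
import Control.Exception (evaluate)

-- pure version: reads like a specification
sumTo :: Integer -> Integer
sumTo n = sum [1..n]

-- cooperative version: explicitly serialized, so that yield can be
-- interleaved and intermediate results are forced rather than
-- snowballing up as thunks
sumToCoop :: Integer -> IO Integer
sumToCoop n = go 0 1
  where
    go acc i
      | i > n     = return acc
      | otherwise = do
          acc' <- evaluate (acc + i)   -- finish this step now
          yield                        -- give other threads a chance
          go acc' (i + 1)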

In a preemptive thread system, we can indeed program a thread as if it
were the only thread in the system -- and the scheduler would do the
rest.

Problem solved? Only if threads do not interact, which is not
likely. At the very least they have to write something, and so contend
for access to stdout (assuming multi-line output). A thread running a long
computation while holding a critical resource can just as well hang
the whole system, pre-emptive scheduler notwithstanding. Lazy
evaluation of Haskell makes this problem quite subtle:

do r <- return (mersenne' 44)
   acquire'lock
   putStrLn "Great success"
   print r
   release'lock

One may think that the long computation occurs in the "r <- ..." line.
Alas, it may just as well occur in the "print r" line, when the digits
of the number are really required to print. So, we will be running a
long computation while holding a lock. This isn't much better than the
situation with cooperative scheduling, is it?
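
One way out, at least in this example, is to force the result before
entering the critical section, e.g. with GHC's Control.Exception.evaluate
(a sketch only; acquire'lock, release'lock and mersenne' are the same
undefined placeholders as above):

do let digits = show (mersenne' 44)
   _ <- evaluate (length digits)  -- the long computation happens here,
                                  -- before the lock is taken
   acquire'lock
   putStrLn "Great success"
   putStrLn digits
   release'lock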

It appears that just pre-emptiveness does not guarantee us much. We
need to add, for example, hyper-strict evaluation -- or, effectively,
give up on lazy evaluation, because otherwise we can never be
certain about the length of the computation -- and without that
information, we can never be certain if our locking strategy is
fair. The latter has nothing to do with pre-emptiveness.

There seems to be another, Erlang way. The programmer must have an
impression that threads do not share absolutely anything, and must
communicate exclusively via message passing. Incidentally, how this
message passing is implemented on a shared-memory multiprocessor is
another matter entirely: Erlang is quite good at taking advantage of
shared memory while giving the user the impression of complete isolation.
Message passing in Erlang is quite advanced: a thread may snoop the
message queue looking for a specific message. The user may code a
process as if no other processes exist. Erlang's scheduler does the
right thing (I think it pre-empts based on the number of reductions;
the scheduler may take the size of the message queue into
account). The example above becomes

do r <- return (mersenne' 45)
   send "Great success"
   send $ show r
   send "print it now"

The output thread won't write anything until it catches sight of the
"print it now" message in its message queue. In a sense, this strategy
isn't that different from STM.
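
In Haskell terms one could approximate that discipline with a Chan per
thread used as a mailbox -- only an approximation, since a Chan cannot
be snooped for a specific message the way an Erlang queue can. A rough,
self-contained sketch (names are illustrative, and a cheap stand-in
replaces the mersenne' computation):

import Control.Concurrent
import Control.Concurrent.Chan

data Msg = Line String | PrintItNow

worker :: Chan Msg -> IO ()
worker mbox = do
    let r = product [1 .. 20000 :: Integer]  -- stand-in for mersenne' 45
    writeChan mbox (Line "Great success")
    writeChan mbox (Line (show r))
    writeChan mbox PrintItNow

-- the output thread buffers lines and prints nothing until it sees
-- the PrintItNow message
printer :: Chan Msg -> [String] -> IO ()
printer mbox pending = do
    m <- readChan mbox
    case m of
      Line s     -> printer mbox (s : pending)
      PrintItNow -> mapM_ putStrLn (reverse pending)

main :: IO ()
main = do
    mbox <- newChan
    _ <- forkIO (worker mbox)
    printer mbox []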

There is a price to pay however: absolutely no IORefs, no MVars
etc. All system calls must be converted to a message-passing
style. The latter isn't a huge problem: Erlang has managed. But for
us, that entails the re-design of FFI, IO, and the banishment of
IORefs and MVars...



Re: preemptive vs cooperative: attempt at formalization

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 09:05:12PM -0700, [EMAIL PROTECTED] wrote:
> > [Rule 1]
> > * in a cooperative implementation of threading, any thread with value
> >   _|_ may cause the whole program to have value _|_. In a preemptive one,
> >   this is not true.
> 
> I'm afraid that claim may need qualifications:
> 
>  1. if there is only one runnable thread, if it loops in pure code,
> the whole program loops -- regardless of preemptive/cooperative
> scheduling.
> 
>  2. in a system with thread priorities, if the highest priority thread
> loops (in pure code or otherwise), the whole program loops -- again
> irrespective of the preemptive/cooperative scheduling.
> 
>  3. [a variation of 1 or 2]. A thread that loops in a critical section
> (or holding a mutex on which the other threads wait) loops the whole
> program -- again, irrespective of preemptive/cooperative scheduling.

would the simple qualifier
'if there exists another runnable thread'

solve the issues?

A thread is not runnable if it is waiting on a resource or can't run due
to the priority policy of the scheduler. And the qualifier means there is
at least one other thread to switch to.


Perhaps we should just make the ability to implement 'merge' and
'nmerge' the difference. Though defining the behavior of those routines
could well be a harder problem than defining the difference between
preemptive and cooperative in the first place.


-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: preemptive vs cooperative: attempt at formalization

2006-04-11 Thread Taral
On 4/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > [Rule 1]
> > * in a cooperative implementation of threading, any thread with value
> >   _|_ may cause the whole program to have value _|_. In a preemptive one,
> >   this is not true.
>
> I'm afraid that claim may need qualifications:

I was thinking that it might be more useful to express it programmatically:

if preemptive then fork _|_ >> return () => ()
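
Or, as a concrete (if informal) Haskell reading of that property:

import Control.Concurrent (forkIO)

-- under Rule 1, a preemptive implementation must still print "done";
-- a cooperative one is allowed to loop forever once the forked
-- thread gets scheduled
main :: IO ()
main = do
    _ <- forkIO (let loop = loop in loop :: IO ())
    putStrLn "done"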

--
Taral <[EMAIL PROTECTED]>
"You can't prove anything."
-- Gödel's Incompetence Theorem


Re: preemptive vs cooperative: attempt at formalization

2006-04-11 Thread oleg

> [Rule 1]
> * in a cooperative implementation of threading, any thread with value
>   _|_ may cause the whole program to have value _|_. In a preemptive one,
>   this is not true.

I'm afraid that claim may need qualifications:

 1. if there is only one runnable thread, if it loops in pure code,
the whole program loops -- regardless of preemptive/cooperative
scheduling.

 2. in a system with thread priorities, if the highest priority thread
loops (in pure code or otherwise), the whole program loops -- again
irrespective of the preemptive/cooperative scheduling.

 3. [a variation of 1 or 2]. A thread that loops in a critical section
(or holding a mutex on which the other threads wait) loops the whole
program -- again, irrespective of preemptive/cooperative scheduling.


As an example of formalizing fairness, I'd like to point out

L. de Alfaro, M. Faella, R. Majumdar, V. Raman. Code Aware Resource
Management. In EMSOFT 05: Fifth ACM International Conference on Embedded
Software, ACM Press, 2005.

http://luca.soe.ucsc.edu/Publications?action=AttachFile&do=get&target=emsoft05.pdf

  ``To achieve compact, yet fair, managers, we consider winning
strategies that may be randomized, that is, scheduling decisions may
use lotteries over available moves; the strategies ensure progress and
fairness with probability 1.''

Of special interest is p. 5 of the paper:

''Inter-thread nondeterminism. We assume that the underlying operating
system scheduler is fair: more precisely, we assume that, if a thread
is infinitely often ready to execute, it will make progress
infinitely often.

Intra-thread nondeterminism. Assuming that intrathread nondeterminism
is resolved in an arbitrary way may easily lead to declaring the
manager synthesis problem to be infeasible. In fact, whenever a thread
can execute a loop while holding a resource, the arbitrary resolution
of intra-thread nondeterminism introduces the possibility that the
loop never terminates. In practice, a reasonable assumption is that
intra-thread nondeterminism is resolved in a (strongly) fair fashion:
if each choice is presented infinitely often, each choice outcome
will follow infinitely often. Such fairness entails loop
termination.''

The paper then formalizes these fairness assumptions and the goal of
the fair resource manager in temporal logic.


Thus the difference between cooperative and preemptive scheduling
strategies becomes quite fuzzy: neither strategy guarantees much
unless we make some assumptions about threads (e.g., no thread loops
while holding a critical resource). But if we do make assumptions
about threads, we might as well assume that threads are behaving
nicely and yield (or that the underlying monad or memory allocator
yields on their behalf).






Re: postponing discussion on exceptions and deepSeq

2006-04-11 Thread Robert Dockins
On Tuesday 11 April 2006 01:09 pm, isaac jones wrote:
> I'd like to ask the list to postpone discussion on exceptions and
> deepSeq until a later iteration.  While these are two topics that are of
> deep importance to me, I would prefer to focus on the other two topics
> at hand until they are solved.  That is, concurrency, and the class
> system.
>
> I'm still postponing opening up another topic since I find that the
> class system isn't being as enthusiastically discussed as I had hoped.
> Let's all focus our energies on these topics, I promise that the others
> won't be forgotten.
>
> Ross has asked for use cases for functional dependencies and so far has
> only two replies.  Surely there are those on this list who have use of
> functional dependencies?


Edison makes use of MPTC and functional dependencies.  I'll list the relevant
class declaration heads here:

class Eq a => CollX c a | c -> a
class (CollX c a, Ord a) => OrdCollX c a | c -> a
class CollX c a => SetX c a | c -> a
class (OrdCollX c a, SetX c a) => OrdSetX c a | c -> a
class CollX c a => Coll c a | c -> a
class (Coll c a, OrdCollX c a) => OrdColl c a | c -> a
class (Coll c a, SetX c a) => Set c a | c -> a
class (OrdColl c a, Set c a) => OrdSet c a | c -> a

class (Eq k, Functor m) => AssocX m k | m -> k
class (AssocX m k, Ord k) => OrdAssocX m k | m -> k 
class AssocX m k => FiniteMapX m k | m -> k
class (OrdAssocX m k, FiniteMapX m k) => OrdFiniteMapX m k | m -> k
class AssocX m k => Assoc m k | m -> k
class (Assoc m k, OrdAssocX m k) => OrdAssoc m k | m -> k
class (Assoc m k, FiniteMapX m k) => FiniteMap m k | m -> k
class (OrdAssoc m k, FiniteMap m k) => OrdFiniteMap m k | m -> k


An additional issue is the following instance declarations, which require 
undecidable instances under GHC:

Eq (s a) => Eq (Rev s a)
(Sequence s, Read (s a)) => Read (Rev s a)
(Sequence s, Show (s a)) => Show (Rev s a)


The haddock for current Edison is here:

http://www.eecs.tufts.edu/~rdocki01/docs/edison/index.html


Rob Dockins


Re: MVar semantics: proposal

2006-04-11 Thread Jan-Willem Maessen
Sorry for the long delay in responding to this message---this issue  
takes all the brain cells I've got in one go.


Ordinarily I'd trim the foregoing discussion, but it was rusty enough
that I've retained it:


On Apr 4, 2006, at 7:12 AM, Simon Marlow wrote:

Jan-Willem - thanks for your thoughts on this, it's greatly  
appreciated.


On 31 March 2006 18:49, Jan-Willem Maessen wrote:


John -

You are, in effect, proposing a memory model for MVars and IORefs.
The high-level model for programmers is "In order to communicate data
between threads, you *must* use an MVar, and never an IORef."

But the devil is in the details.  I'd like to strongly urge *against*
adopting the extremely loose model you have proposed.  The following
things seem particularly important:

* reads and writes to IORefs should be atomic, meaning either a
complete update is observed or no change is observed.  In the absence
of this guarantee, misuse of IORefs can cause programs to crash in
unrepeatable ways.  If the machine doesn't make this easy, the
implementor ought to sweat a little so that Haskell programmers don't
have to sweat at all.

* I assume forkIO constitutes a sequence point.  I suspect throwTo et
al ought to as well.

* I would urge that atomicModifyIORef constitute a sequence point---I
suspect it loses a great deal of its utility otherwise (a small sketch
follows below).
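
A small sketch of the kind of idiom that property is meant to support
(illustrative only -- two threads bump a shared counter, relying on
atomicModifyIORef alone for the counter itself):

import Control.Concurrent
import Control.Monad (replicateM_)
import Data.IORef

counterDemo :: IO ()
counterDemo = do
    r    <- newIORef (0 :: Int)
    done <- newEmptyMVar
    let bump = replicateM_ 10000 (atomicModifyIORef r (\n -> (n + 1, ())))
    _ <- forkIO (bump >> putMVar done ())
    bump
    takeMVar done
    readIORef r >>= print   -- expect 20000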

Now, on to more difficult issues...  Consider the following example
(untested):

data RefList a = Nil | Cons a (IORef (RefList a))

cons :: a -> RefList a -> IO (RefList a)
cons x xs = do
   a <- newIORef xs
   return (Cons x a)

hd :: RefList a -> a
hd (Cons a _) = a

tl :: RefList a -> IO (RefList a)
tl (Cons a t) = readIORef t

setTl :: RefList a -> RefList a -> IO ()
setTl (Cons a t) t' = writeIORef t t'

main = do
  a <- cons 'a' Nil
  forkIO $ do
    c <- cons 'c' Nil
    b <- cons 'b' Nil
    setTl b c
    setTl a b
  at <- tl a
  case at of
    Nil -> return ()
    Cons _ _ -> do
      putChar (hd at)
      att <- tl at
      return ()

This program is, by your informal model, buggy.  The question is
this: how badly wrong is it?
Let's say at happens to read b.  Is (hd at) well defined?  That's
assuming very strong consistency from the memory system already.  How
about the IORef in at?  Is that fully allocated, and properly
initialized?  Again, if it is, that implies some pretty strong
consistency from the memory system.

Now, what about att?  By your argument, it may or may not be c.  We
can ask the same questions about its contents assuming it happens to
be c.

People have talked a lot about weakly-ordered NUMA machines for more
than a decade, and they're always just a couple of years away.  In
practical terms, non-atomic NUMA memory models tend to be so hard to
program that these machines have never found any traction---you need
to throw away all of your software, including your OS, and start
afresh with programmers that are vastly more skilled than the ones
who wrote the stuff you've already got.

My feeling is that the purely-functional portion of the Haskell
language already makes pretty stringent demands of memory
consistency.


This is true - in GHC we are required to add a memory barrier to thunk
update on architectures that don't have strong memory ordering, just to
ensure that when you follow the pointer in an indirection you can
actually see the value at the end of the pointer.

Since x86 & x86_64 can implement strong memory ordering without
(seemingly) too much overhead, surely adding the barrier instruction for
other architectures shouldn't impose too much of a penalty, at least in
theory?


Interesting question.  The currently-popular architectures can get by  
without too many memory barriers, in large part by requiring stores  
to commit to memory in order; my belief is that SPARC TSO can get by  
with no memory barriers for thunk update/read, and that PowerPC  
requires a write barrier (and perhaps read barriers).


It remains to be seen whether multi-core pipelines will change this  
equation; there are reasons an architect might prefer to use a single  
store pipeline for multiple threads, satisfying loads from one thread  
from pending stores for another thread.  The practical upshot would  
be weaker memory models all around.


Sadly, x86 has a bad record of bungling synchronization operations,  
and clear documentation on the x86 memory model is conspicuous by its  
absence.



In light of those demands, and the fact that mutable
state is used in pretty tightly-controlled ways, it's worth
considering much stronger memory models than the one you propose.
I'd even go so far as to say "IORefs and IOArrays are sequentially
consistent".


Certainly possible; again on x86 & x86_64 it's a no-op, on other
architectures it means adding a barrier to writeIORef.  In GHC we're
already doing a write barrier (of the generational GC kind, not the
microprocessor kind) in writeIORef

Re: deeqSeq proposal

2006-04-11 Thread Lennart Augustsson

Yes, I realize that dynamic idempotence is not the same as
cycle detection.  I still worry. :)

I think expectance is in the eye of the beholder.  The reason
that (the pure subset of) pH was a proper implementation of
Haskell was that we were not over-specifying the semantics
originally.  I would hate to do that now.

-- Lennart

Simon Peyton-Jones wrote:

| Well, my worry was partly about the suggested version of deepSeq that
| would not diverge on circular structures (since circular structures
| are just one way to implement "infinite" data structures).

Dynamic idempotence is not the same as detecting circular structures.
Deepseqing a circular structure should definitely diverge, as it would
as if it was infinite.  Idempotence changes the operational behaviour,
but not the denotational behaviour.  So that part of the worry is ok.

But since the dynamic-idempotence operational behaviour is (as I
understand the proposal) the whole point, it's true that the
implementation would be constrained.  In the same kind of way that we
expect call-by-need rather than call-by-name.  


S





preemptive vs cooperative: attempt at formalization

2006-04-11 Thread John Meacham
I'd like to be a bit more formal when it comes to the distinction
between cooperative and preemptive implementations of concurrency, here
is a first attempt.

1. termination,

In a concurrent implementation, a thread performing an infinite loop
with no IO or interaction with the outside world can potentially stall
switching to another thread forever. In FP we usually denote an
infinite loop by _|_, so I think the first difference would be:

[Rule 1]
* in a cooperative implementation of threading, any thread with value
  _|_ may cause the whole program to have value _|_. In a preemptive one,
  this is not true.

I am using _|_ in the stronger sense of non-termination, not including
things like exceptions which should have a well defined behavior.

2. fairness

Assuming now that no thread has value _|_, what can both models guarantee
when it comes to fairness?

Both can guarantee every runnable thread will eventually run, with a
round-robin type scheduler.

Both can fairly allocate CPU time in the limit: if a thread is taking
too much CPU in either model, it can be allocated fewer timeslices in the
future. The same scheduling algorithms work for both with minor
modifications.

The thing that pre-emptive implementations can do that cooperative ones
cannot is provide latency fairness: the time before a given thread is
scheduled once it becomes runnable can be bounded independently of what
other threads are doing.

so perhaps the following

[Rule 2]
* A cooperative system cannot necessarily guarantee or provide for
  latency requirements when it comes to scheduling a thread once it
  becomes runnable. A pre-emptive system can provide such guarantees
  (discounting foreign nonconcurrent calls).


I think these two rules are enough to distinguish when you can say you
meet the pre-emptive option without delving into implementation details.

What do y'all think?


John





-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: limitations of newtype-derivings (fixed)

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 02:19:22PM +0100, Simon Peyton-Jones wrote:
> | > newtype Id = Id Int
> | > data Term = ...
> | > newtype Subst = Subst (IM.IntMap Term)
> | 
> | ideally, we'd like a MapLike instance, but we'd have to tediously
> write
> | it ourselves. if we allow the supergeneralized newtype deriving, we
> can do:
> | 
> | > deriving(MapLike Id Term Subst)
> 
> Now things aren't so clear.  You are assuming that we have an instance
>   instance MapLike Int a (IntMap a)
> 
> But suppose we also had an explicit instance decl for
>   instance MapLike Int Term Subst
> which we might.  Which would the 'deriving' base its instance on? We
> might also have an explicit instance 
>   instance MapLike Id a (IntMap a)
> Now it's even less obvious which to use.

good point. We would probably want to specify which instance we are
deriving it from with something like

deriving (MapLike Int a (IntMap a) => MapLike Id Term Subst)

being explicit seems better than making up some resolution rules.


> What if the newtype was buried more deeply.  Can we say
>   deriving( C (Foo Id) )
> if we happen to have an instance for C (Foo Int) around already?  Here
> the newtype isn't at the "top level" of the class argument.

I had not thought about that. A use doesn't occur to me off the top of
my head, but that is probably just because it hasn't been available so I
have not considered uses of it.

I see no particular problem assuming all the constructors of Foo and Id
and the methods of C are in scope.

> GHC's newtype-deriving mechanism is very precise: it unwraps exactly one
> layer of exactly one newtype.  It's attractive to go further, as you
> describe, but it'd need to be tightly specified.  (And of course, that
> increases the complexity of the overall language.)

yeah, the restriction that you can only newtype derive the last argument
has always bothered me with its arbitrariness based solely on syntax, so
getting rid of that restriction would simplify the language. Coderiving
(is there a better term?) instances based on multiple newtypes is a truly
new feature, but I don't see any issues from an implementation
standpoint, just the same problem of defining it without saying "the
same method".

We also have a few derivings which are special,
'Show,Read,Typeable,Data' that don't follow the newtype deriving rule,
but I am not proposing we change them. 

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


postponing discussion on exceptions and deepSeq

2006-04-11 Thread isaac jones
I'd like to ask the list to postpone discussion on exceptions and
deepSeq until a later iteration.  While these are two topics that are of
deep importance to me, I would prefer to focus on the other two topics
at hand until they are solved.  That is, concurrency, and the class
system.

I'm still postponing opening up another topic since I find that the
class system isn't being as enthusiastically discussed as I had hoped.
Let's all focus our energies on these topics, I promise that the others
won't be forgotten.

Ross has asked for use cases for functional dependencies and so far has
only two replies.  Surely there are those on this list who have use of
functional dependencies?

peace,

  isaac


-- 
isaac jones <[EMAIL PROTECTED]>



Re: FFI, safe vs unsafe

2006-04-11 Thread Aaron Denney
On 2006-04-11, Ross Paterson <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 11, 2006 at 09:13:00AM +0100, Simon Marlow wrote:
>>  - the default should be... concurrent reentrant, presumably, because
>>that is the safest.  (so we need to invert the notation).
>
> I think the name "concurrent" has a similar problem to "safe": it reads
> as an instruction to the implementation, rather than a declaration by the
> programmer of the properties of a particular function; as Wolfgang put it,
> "this function might spend a lot of time in foreign lands".

I'd like to second this.

-- 
Aaron Denney
-><-



Re: Signals + minimal proposal (was Re: asynchronous exceptions)

2006-04-11 Thread David Roundy
On Mon, Apr 10, 2006 at 02:19:23PM -0700, John Meacham wrote:
> On Mon, Apr 10, 2006 at 02:58:20PM +0100, Simon Marlow wrote:
> > Suppose I want to do some action with a temporary file:
> > 
> >bracket
> >   newTempFile
> >   (\f -> removeTempFile f)
> >   (\f -> doSomethingWith f)
> > 
> > Under your scheme, this code doesn't get to remove its temporary file on
> > exit, unless I explicitly add an exit handler that throws an exception
> > to the current thread.
> > 
> > I think code like the above should just work.  Furthermore, I think it
> > should be an invariant that a thread is never discarded or killed, only
> > sent an exception.  Otherwise, how else can I acquire a resource and
> > guarantee to release it when either an exception is raised, the program
> > exits, or the computation completes?
> 
> you ask the system to send you an exception on exit.

(As I'm sure you are aware) I'm with Simon on this one.  The default
cleanup handling should "just work", and you shouldn't need to write
special-case code in order to properly clean up.  Having exitWith throw
exceptions (as it currently does in ghc) is very nice.

Perhaps we could have a special sort of thread that is just killed upon
exit with no exceptions being raised, but I wouldn't be tempted to use such
a thread... unless I suppose I had some unimportant long-running ffi call
to make, and didn't want exiting to be slowed down by waiting for it to
complete.  It would definitely be nice to have interruptible ffi calls (and
not just interruptible by exiting), but I'm not really sure how one would
go about that.
-- 
David Roundy
http://www.darcs.net


RE: limitations of newtype-derivings (fixed)

2006-04-11 Thread Simon Peyton-Jones
I like this idea.  Needs fleshing out though.

|  * you can only newtype derive the last argument to a MPTC.
|  * you cannot co-derive an instance for multiple newtype renamings.
| 
| it seems that both these can be solved when combined with the other
| proposed extension, allowing deriving clauses to be separate from data
| definitions.
| 
| basically, we would allow deriving anywhere.
| 
| > deriving (Show Foo)

I'm all for that.  A modest but useful gain. All we need is the syntax,
and that is something that Haskell Prime might usefully define.
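
For instance (a sketch only; Foo is just an example type, and the
'deriving instance' form shown is the one GHC's StandaloneDeriving
extension uses -- pinning down some such syntax is exactly the job here):

{-# LANGUAGE StandaloneDeriving #-}
data Foo = Foo Int Bool

deriving instance Show Foo
deriving instance Eq Foo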


| > newtype Id = Id Int
| > data Term = ...
| > newtype Subst = Subst (IM.IntMap Term)
| 
| ideally, we'd like a MapLike instance, but we'd have to tediously
write
| it ourselves. if we allow the supergeneralized newtype deriving, we
can do:
| 
| > deriving(MapLike Id Term Subst)

Now things aren't so clear.  You are assuming that we have an instance
instance MapLike Int a (IntMap a)

But suppose we also had an explicit instance decl for
instance MapLike Int Term Subst
which we might.  Which would the 'deriving' base its instance on? We
might also have an explicit instance 
instance MapLike Id a (IntMap a)
Now it's even less obvious which to use.

What if the newtype was buried more deeply.  Can we say
deriving( C (Foo Id) )
if we happen to have an instance for C (Foo Int) around already?  Here
the newtype isn't at the "top level" of the class argument.


GHC's newtype-deriving mechanism is very precise: it unwraps exactly one
layer of exactly one newtype.  It's attractive to go further, as you
describe, but it'd need to be tightly specified.  (And of course, that
increases the complexity of the overall language.)

Simon


Re: FDs and confluence

2006-04-11 Thread Ross Paterson
On Mon, Apr 10, 2006 at 06:48:35PM +0100, Claus Reinke wrote:
> note also that we are talking about different things here: I am talking 
> about FD consistency, you are talking about the FD consistency condition.

That would explain a few things.

> as this example shows once again, there are instance declarations 
> for which the FD consistency condition, as currently interpreted by 
> Hugs, fails, even though no inconsistent constraints are implied. so I 
> fail to see the point of continuing to require the FD consistency
> condition in unrevised form.

Do you have a revised set of restrictions on the form of instances?



RE: deeqSeq proposal

2006-04-11 Thread Simon Peyton-Jones
| Well, my worry was partly about the suggested version of deepSeq that
| would not diverge on circular structures (since circular structures
| are just one way to implement "infinite" data structures).

Dynamic idempotence is not the same as detecting circular structures.
Deepseqing a circular structure should definitely diverge, just as it
would if it were infinite.  Idempotence changes the operational behaviour,
but not the denotational behaviour.  So that part of the worry is ok.

But since the dynamic-idempotence operational behaviour is (as I
understand the proposal) the whole point, it's true that the
implementation would be constrained.  In the same kind of way that we
expect call-by-need rather than call-by-name.  

S


RE: Exceptions

2006-04-11 Thread Simon Marlow
On 11 April 2006 13:54, John Meacham wrote:

> On Tue, Apr 11, 2006 at 01:43:18PM +0100, Simon Marlow wrote:
>> yes, when I say "one throw" I was referring to the argument type, not
>> the return type.  We should still have ioError - although it would
>> probably be better named throwIO:
> 
> Ah, I see what you mean now.
> 
> would it be possible to use Typeable1 to just catch 'ArithException a'
> for any Typeable a? It seems like it should be, but I have not used
> Typeable1 much.

I tried it briefly and couldn't get it to work, but I'm no expert on the
SYB stuff.  You might need an Exception1 class to go with Typeable1, and
that would be fairly ugly.

Cheers,
Simon


Re: Exceptions

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 01:43:18PM +0100, Simon Marlow wrote:
> yes, when I say "one throw" I was referring to the argument type, not
> the return type.  We should still have ioError - although it would
> probably be better named throwIO:

Ah, I see what you mean now.

would it be possible to use Typeable1 to just catch 'ArithException a'
for any Typeable a? It seems like it should be, but I have not used
Typeable1 much.


John

-- 
John Meacham - ⑆repetae.net⑆john⑈


RE: Exceptions

2006-04-11 Thread Simon Marlow
On 11 April 2006 13:35, John Meacham wrote:

> On Tue, Apr 11, 2006 at 01:24:07PM +0100, Simon Marlow wrote:
>> Attached is another variant of the extensible exceptions idea, it
>> improves on the previous designs in a couple of ways:  there's only
>> one catch & throw, regardless of what type you're throwing or
>> catching. There is an extensible hierarchy of exceptions, and you
>> can catch and re-throw subclasses of exceptions.
> 
> I made the catch and throw separate so the decision as to whether to
> include imprecise exceptions and extensible exceptions can be made
> independently.
> 
> that and
> 
> throw x /= ioError x
> 
> ioError x >> return ()  -> IO _|_  (only _|_ when IO action executed)
> throw x >> return () -> _|_
> 
> ioError x `seq` ()  -> ()
> throw x `seq` () -> _|_

yes, when I say "one throw" I was referring to the argument type, not
the return type.  We should still have ioError - although it would
probably be better named throwIO:

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#v%3AthrowIO

(the docs for throwIO also mention the strictness property you described
above)

Cheers,
Simon


Re: deeqSeq proposal

2006-04-11 Thread Lennart Augustsson

Simon Peyton-Jones wrote:

| Any function that is not definable in (pure) Haskell should be viewed
| with utmost suspicion.  The seq function is one of these.  At least
| seq has simple denotational semantics, which can't be said for
| deepSeq.
|
| I say, put deepSeq in a type class (which is what I've done when I
| need it).

The whole *point* is that deepSeq is (dynamically) idempotent: deepSeq
(deepSeq x) = deepSeq x.  Its denotational behaviour is perfectly
definable in Haskell, but its operational behaviour is not.  That is
both attractive (because it means you feel less anxious about wasting
work with deepSeq) and repellent (because it constrains the
implementation, as John points out).

Whether it should be in a class is a rather separate discussion.  In a
way we already sold out when we allowed seq to escape from the
type-class world.  Perhaps deepSeq is worse (because it traverses data
structures) but not obviously. 


Well, my worry was partly about the suggested version of deepSeq that
would not diverge on circular structures (since circular structures
are just one way to implement "infinite" data structures).

I think deepSeq is only worse than seq if we insist that it should
have some semantics that constrains implementations (like that the
second time you apply deepSeq it should be "fast").

I think it was a mistake to let seq out of the type class bag, but
that's already done.

-- Lennart



Re: Exceptions

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 05:35:12AM -0700, John Meacham wrote:
> throw x >> return () -> _|_

hmm.. actually is this true? hmm.. seq and IO always mixed oddly.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: Signals + minimal proposal

2006-04-11 Thread Marcin 'Qrczak' Kowalczyk
John Meacham <[EMAIL PROTECTED]> writes:

> forkIO action = forkIO' action' where
>     action' = do
>         myThreadId >>= \t -> onExit (throwTo t PleaseExit)
>         action

This would be a memory leak: even after the thread finishes,
its onExit handler would remain registered.

-- 
   __("< Marcin Kowalczyk
   \__/   [EMAIL PROTECTED]
^^ http://qrnik.knm.org.pl/~qrczak/


Re: Exceptions

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 01:24:07PM +0100, Simon Marlow wrote:
> Attached is another variant of the extensible exceptions idea, it
> improves on the previous designs in a couple of ways:  there's only one
> catch & throw, regardless of what type you're throwing or catching.
> There is an extensible hierarchy of exceptions, and you can catch and
> re-throw subclasses of exceptions.

I made the catch and throw separate so the decision as to whether to
include imprecise exceptions and extensible exceptions can be made
independently.  

that and

throw x /= ioError x

ioError x >> return ()  -> IO _|_  (only _|_ when IO action executed)
throw x >> return () -> _|_

ioError x `seq` ()  -> ()
throw x `seq` () -> _|_

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


RE: Exceptions

2006-04-11 Thread Simon Marlow
Attached is another variant of the extensible exceptions idea, it
improves on the previous designs in a couple of ways:  there's only one
catch & throw, regardless of what type you're throwing or catching.
There is an extensible hierarchy of exceptions, and you can catch and
re-throw subclasses of exceptions.

So this design contains a dynamically-typed extensible hierarchy, but
it's fairly lightweight.  Adding a new leaf exception type requires 3
lines + 1 line for each superclass (just 1 line for a top level leaf, as
before).  Adding a new node requires about 10 lines + 1 line for each
superclass, mostly boilerplate.
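
For a flavour of the encoding, here is a rough sketch (a reconstruction
in the same spirit, not the contents of the attached Exception-2.hs;
names are illustrative):

{-# LANGUAGE ExistentialQuantification, DeriveDataTypeable #-}
import Data.Typeable

-- the root of the hierarchy wraps any showable, typeable value
data SomeException = forall e. Exception e => SomeException e

class (Typeable e, Show e) => Exception e where
    toException   :: e -> SomeException
    toException   = SomeException
    fromException :: SomeException -> Maybe e
    fromException (SomeException e) = cast e

-- a leaf exception hanging directly off the root
data MyError = MyError String deriving (Show, Typeable)
instance Exception MyError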

Perhaps the type class hackers can do better than this!

Cheers,
Simon


Exception-2.hs
Description: Exception-2.hs


Re: Signals + minimal proposal

2006-04-11 Thread Marcin 'Qrczak' Kowalczyk
"Simon Marlow" <[EMAIL PROTECTED]> writes:

> How does "cancelling" a thread differ from sending it an exception?

It doesn't. By cancelling I mean just sending a particular async
exception.

> Can a thread be GC'd without being sent an exception first?

Yes, but I'm now changing this.

Unfortunately it doesn't seem possible to guarantee proper stack
unwinding in all cases:

POSIX threads evaporate after fork() in the child process. This means
that foreign code performing callbacks doesn't exist there, except in
the thread doing the fork; the C stacks are inaccessible. So in Kogut
in this case bound threads become unbound, and they only run up to the
end of the nearest callback from foreign code: then they are killed
immediately.

Another technical limitation: in a build where OS threads are not used,
callbacks returning in a non-LIFO order must wait for the callback
using the top of the OS stack to return. It's impossible to cause such
a thread to continue immediately even if it gets an exception.

There are also limitations caused by principles I've adopted myself.
I have scoped mutex locking and scoped unlocking. In particular
waiting for a condition unlocks the mutex and always relocks it before
returning. When a thread is waiting to relock a mutex when exiting a
scope (rather than when entering a scope), it must absolutely lock it
before it can continue, in order to guarantee consistent state of the
mutex in regions of code.

So I'm going to send threads about to be GC'd a signal rather than an
exception; it will be handled only if the thread has signals unblocked.

There is another case similar to GC'ing a thread: when the runtime
discovers that there are no threads to be run, to wait for I/O, to
wait for a timeout, and the thread handling system signals doesn't
seem to be intentionally waiting for signals (it's not blocked on a
Kogut construct similar to POSIX sigwait), the runtime attempts to
wake up the thread handling system signals with a Deadlock signal,
so the program can react to a total deadlock. Of course a deadlock
of only a subset of threads won't be detected if the threads are
not GC'd.

When waiting for the rest of threads at program exit, it might happen
that some threads won't want to return after being cancelled, e.g.
they have signals blocked or they lock up during cleanup. Such case
would normally be a deadlock (the main thread is waiting until they
finish, and they are waiting for something else), but the above
mechanism causes the main thread to be woken up and continue even
though some threads have not finished.

-- 
   __("< Marcin Kowalczyk
   \__/   [EMAIL PROTECTED]
^^ http://qrnik.knm.org.pl/~qrczak/


RE: deeqSeq proposal

2006-04-11 Thread Simon Peyton-Jones
| instance Eval (a -> b) where
| 

You could say the same of instance Num Int, because Int is a primitive
type.  But yes, seq on functions is not lambda-definable, and that is
indeed a qualitative difference between seq and deepSeq, I agree.

It's not worth making a meal of this.  All I'm saying is that there are
two distinct decisions:
1. Add seqFun :: (a->b) -> c -> c
2. Given (1), one could provide seq via a type-class, or without.  H98
chooses the latter

For deepSeq, (1) doesn't arise, but the same choice arises for (2), and
with the same arguments for and against.  See in particular John
Hughes's impassioned pleas for not changing type signatures "all the way
up" when adding a 'seq'.

I don't think we disagree here.  But equally I don't there's a clear
"right" answer.

Simon


Re: deeqSeq proposal

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 12:15:57PM +0100, Simon Peyton-Jones wrote:
> | well, there is a difference there in that 'seq' is unimplementable in
> | haskell, so the design committee had freedom to implement it however
> | they wanted.
>   
>   class Eval a where
>     seq :: a -> b -> b
>
>   instance Eval (a,b) where
>     seq (_,_) b = b
>
>   instance Eval [a] where
>     seq []    b = b
>     seq (_:_) b = b

instance Eval (a -> b) where


?

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


RE: deeqSeq proposal

2006-04-11 Thread Simon Peyton-Jones
| well, there is a difference there in that 'seq' is unimplementable in
| haskell, so the design committee had freedom to implement it however
| they wanted.

class Eval a where
  seq :: a -> b -> b

instance Eval (a,b) where
   seq (_,_) b = b

instance Eval [a] where
   seq [] b = b
   seq (_:_) b = b

etc

Simon


Re: Signals + minimal proposal (was Re: asynchronous exceptions)

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 08:54:32AM +0100, Simon Marlow wrote:
> Now if I have a handler registered that throws an exception to the
> current thread, what happens?  handleLoop is aborted, the exception is
> propagated to the top level of the thread, where the top-level exception
> handler calls exitWith again, and promptly deadlocks because exitMVar is
> already empty.

True, the handlers probably should run in their own thread then. hmm..
will think more on these issues...

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: deeqSeq proposal

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 09:53:54AM +0100, Simon Peyton-Jones wrote:
> Whether it should be in a class is a rather separate discussion.  In a
> way we already sold out when we allowed seq to escape from the
> type-class world.  Perhaps deepSeq is worse (because it traverses data
> structures) but not obviously. 

well, there is a difference there in that 'seq' is unimplementable in
haskell, so the design committee had freedom to implement it however they
wanted. However, now that we have seq, a deepSeq is perfectly
implementable* in haskell using a typeclass, which is a strong argument
for making it have one. 

* dynamic idempotent issues aside. 

in any case, if it were to be in the standard, I'd put it in a typeclass
and give a haskell translation, with a note that implementations are free
to implement optimized versions under the hood as long as the observable
effect is the same, but you can't count on anything better than the plain
old recursive seq definition.
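
Something along these lines, say (a sketch of that plain recursive
translation; the class name and instances are illustrative):

class DeepSeq a where
    deepSeq :: a -> b -> b

instance DeepSeq Int  where deepSeq = seq
instance DeepSeq Char where deepSeq = seq

instance DeepSeq a => DeepSeq [a] where
    deepSeq []     b = b
    deepSeq (x:xs) b = x `deepSeq` (xs `deepSeq` b)

instance (DeepSeq a, DeepSeq b) => DeepSeq (a, b) where
    deepSeq (x, y) b = x `deepSeq` (y `deepSeq` b)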

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: FFI, safe vs unsafe

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 09:13:00AM +0100, Simon Marlow wrote:
> What are the conclusions of this thread?
> 
> I think, but correct me if I'm wrong, that the eventual outcome was:
> 
>  - concurrent reentrant should be supported, because it is not 
>significantly more difficult to implement than just concurrent.

It wasn't a difficulty of implementation issue, it was whether there
were unavoidable performance tradeoffs. I have no problem with very
difficult things if they are well specified and don't require
unreasonable concessions elsewhere in the design.

in any case, I think the __thread local storage trick makes this fast
enough to implement everywhere and there were strong arguments for not
having it cause issues for library developers.


>  - the different varieties of foreign call should all be identifiable,
>because there are efficiency gains to be had in some implementations.

indeed. 

>  - the default should be... concurrent reentrant, presumably, because
>that is the safest.  (so we need to invert the notation).

well, I like to reserve the word 'safe' for things that might crash the
runtime, like unsafePerformIO, so making things nonconcurrent isn't so much
something unsafe as a decision. I'd prefer nonconcurrent be the default
because it is the much more common case and is just as safe in that
regard IMHO.

> So, can I go ahead and update the wiki?  I'll try to record the
> rationale from the discussion too.

sure.

> I'd like to pull out something from the discussion that got a bit lost
> in the swamp: the primary use case we have for concurrent reentrant is
> for calling the main loop of a GUI library.  The main loop usually never
> returns (at least, not until the application exits), hence concurrent,
> and it needs to invoke callbacks, hence reentrant.

this is a pain (making various libraries' main loops play nice
together). Not that it is a haskell-specific problem, though I guess we
have to deal with it.  I was thinking of using something like
http://liboop.org/ internally in jhc.. but am not sure and would prefer
a pure haskell solution without a compelling reason to do otherwise.


John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: Defaults for superclass methods

2006-04-11 Thread John Meacham
On Tue, Apr 11, 2006 at 11:35:09AM +0100, Simon Marlow wrote:
> On 11 April 2006 11:08, Ross Paterson wrote:
> 
> > On Tue, Apr 11, 2006 at 11:03:22AM +0100, Simon Marlow wrote:
> >> This is a rather useful extension, and as far as I can tell it
> >> doesn't have a ticket yet: 
> >> 
> >>   http://www.haskell.org//pipermail/libraries/2005-March/003494.html
> >> 
> >> should I create a ticket?  Is there any reason it might be hard to
> >> implement?
> > 
> > There are a range of proposals, but none of them are implemented.
> > Wouldn't that rule them out for Haskell'?
> 
> If it's not clear which is the right way to go, then yes I guess that
> does rule it out.  Could you summarise the proposals?  If there was a
> clear winner, and it was easy enough to implement, perhaps we can knock
> up a prototype in time.

As I recall, this was brought up a few times during the class alias
discussion and there were good technical reasons why it would be
tricky to define a sane semantics for it; as in, it's harder than it
first looks.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


RE: Defaults for superclass methods

2006-04-11 Thread Simon Marlow
On 11 April 2006 11:08, Ross Paterson wrote:

> On Tue, Apr 11, 2006 at 11:03:22AM +0100, Simon Marlow wrote:
>> This is a rather useful extension, and as far as I can tell it
>> doesn't have a ticket yet: 
>> 
>>   http://www.haskell.org//pipermail/libraries/2005-March/003494.html
>> 
>> should I create a ticket?  Is there any reason it might be hard to
>> implement?
> 
> There are a range of proposals, but none of them are implemented.
> Wouldn't that rule them out for Haskell'?

If it's not clear which is the right way to go, then yes I guess that
does rule it out.  Could you summarise the proposals?  If there was a
clear winner, and it was easy enough to implement, perhaps we can knock
up a prototype in time.

The reason being I just hit a case where this would be useful, while
trying to find a nice way to express extensible exceptions.

Cheers,
Simon


Re: Defaults for superclass methods

2006-04-11 Thread Ross Paterson
On Tue, Apr 11, 2006 at 11:03:22AM +0100, Simon Marlow wrote:
> This is a rather useful extension, and as far as I can tell it doesn't
> have a ticket yet:
> 
>   http://www.haskell.org//pipermail/libraries/2005-March/003494.html
> 
> should I create a ticket?  Is there any reason it might be hard to
> implement?

There are a range of proposals, but none of them are implemented.
Wouldn't that rule them out for Haskell'?



Defaults for superclass methods

2006-04-11 Thread Simon Marlow
This is a rather useful extension, and as far as I can tell it doesn't
have a ticket yet:

  http://www.haskell.org//pipermail/libraries/2005-March/003494.html

should I create a ticket?  Is there any reason it might be hard to
implement?

Cheers,
Simon


Re[2]: deeqSeq proposal

2006-04-11 Thread Bulat Ziganshin
Hello John,

Tuesday, April 11, 2006, 2:43:49 AM, you wrote:

> true. in any case, deepseq is not always a win.

don't forget that Andy don't plan to apply deepSeq to any expression.
in his program, there is a LARGE datastructure with a couple of
unevaluated thunks what may be simplified by call to deepSeq. your
example is based exclusively on the syntax transformations of source
code, i think that in his program the logic is so complex that such
syntax transformations is entirely impossible

anyway, I think the easiest way for Andy to get what he needs is
to write a ghc-specific `deepSeq` implementation that just walks the
unevaluated parts of the data structure and evaluates them all. As I
understand it, he doesn't need to evaluate arguments of partially applied
functions - there are just no such beasts in his data.


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]



Re: collecting requirements for FDs

2006-04-11 Thread Bulat Ziganshin
Hello Ross,

Tuesday, April 11, 2006, 3:43:18 AM, you wrote:

> The favourite customer for FDs has been the monad transformer library.

> What other libraries should Haskell' support, and what are their
> requirements?

why do you think that FDs are required only for libs? :)  I think it's
better to ask on the main Haskell list, where FDs are really used.

my own library (http://freearc.narod.ru/Streams.tar.gz) uses FDs widely
for monad-involving classes. One typical example is:

class Stream m h | h->m

where each Stream type `h` works (i.e. can be read/written) in
some monad `m`. Another class defines monad-independent operations on
references:

class Ref m r | r->m, m->r where
  newRef::...
  readRef::...
  writeRef::...

This class is used in definitions of monad-independent Stream
Transformers, i.e. transformers that can be applied to streams working
in any monad (references are used to hold the transformer's internal state).

Instances of stream classes are sometimes very complex:

instance (ByteStream m h, Ref m r) => BinaryStream m (BitAligned h r)
instance (TextStream m h, ByteStream m h, BinaryStream m (BitAligned h r))
  => TextStream m (BitAligned h r) where

I'm not sure that Haskell' should support my library, but at least it
provides a feature-extended replacement for Handles, binary I/O and
serialization facilities. I think that extending Streams to work in
any monad is what Haskellers would like to see.

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]



RE: deeqSeq proposal

2006-04-11 Thread Simon Peyton-Jones
| Any function that is not definable in (pure) Haskell should be viewed
| with utmost suspicion.  The seq function is one of these.  At least
| seq has simple denotational semantics, which can't be said for
| deepSeq.
|
| I say, put deepSeq in a type class (which is what I've done when I
| need it).

The whole *point* is that deepSeq is (dynamically) idempotent: deepSeq
(deepSeq x) = deepSeq x.  Its denotational behaviour is perfectly
definable in Haskell, but its operational behaviour is not.  That is
both attractive (because it means you feel less anxious about wasting
work with deepSeq) and repellent (because it constrains the
implementation, as John points out).

Whether it should be in a class is a rather separate discussion.  In a
way we already sold out when we allowed seq to escape from the
type-class world.  Perhaps deepSeq is worse (because it traverses data
structures) but not obviously. 

Simon


Re: FFI, safe vs unsafe

2006-04-11 Thread Ross Paterson
On Tue, Apr 11, 2006 at 09:13:00AM +0100, Simon Marlow wrote:
>  - the default should be... concurrent reentrant, presumably, because
>that is the safest.  (so we need to invert the notation).

I think the name "concurrent" has a similar problem to "safe": it reads
as an instruction to the implementation, rather than a declaration by the
programmer of the properties of a particular function; as Wolfgang put it,
"this function might spend a lot of time in foreign lands".



RE: FFI, safe vs unsafe

2006-04-11 Thread Simon Marlow
What are the conclusions of this thread?

I think, but correct me if I'm wrong, that the eventual outcome was:

 - concurrent reentrant should be supported, because it is not 
   significantly more difficult to implement than just concurrent.

 - the different varieties of foreign call should all be identifiable,
   because there are efficiency gains to be had in some implementations.

 - the default should be... concurrent reentrant, presumably, because
   that is the safest.  (so we need to invert the notation).

So, can I go ahead and update the wiki?  I'll try to record the
rationale from the discussion too.

I'd like to pull out something from the discussion that got a bit lost
in the swamp: the primary use case we have for concurrent reentrant is
for calling the main loop of a GUI library.  The main loop usually never
returns (at least, not until the application exits), hence concurrent,
and it needs to invoke callbacks, hence reentrant.
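
Concretely (a sketch; gtk_main is the obvious example, and "safe" here
stands in for whichever annotation ends up meaning concurrent
reentrant):

foreign import ccall safe "gtk_main"
    gtkMain :: IO ()
-- gtk_main does not return until the application quits (hence
-- "concurrent"), and it invokes Haskell callbacks registered as signal
-- handlers (hence "reentrant")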

Cheers,
Simon


RE: Signals + minimal proposal (was Re: asynchronous exceptions)

2006-04-11 Thread Simon Marlow
On 10 April 2006 22:19, John Meacham wrote:
> On Fri, Apr 07, 2006 at 02:58:01PM +0100, Simon Marlow wrote:
>> According to your definition of exitWith above, I can't both raise an
>> exception *and* exit in the same thread.  If I register an onExit
>> handler that throws an exception to the current thread, things go
>> wrong if the current thread also calls exitWith.  Also, you couldn't
>> call exitWith while holding an MVar, if the handlers need access to
>> the same MVar.
> 
> hrm? nothing goes wrong. it is the same as calling 'throw' in the
> current thread.

Your code for exitWith:

exitWith status = do
    takeMVar exitMVar -- winner takes all
    let handleLoop = do
            hs <- swapMVar handlerMVar []
            sequence_ hs
            if null hs then return () else handleLoop
    handleLoop
    exitWith_ status

Now if I have a handler registered that throws an exception to the
current thread, what happens?  handleLoop is aborted, the exception is
propagated to the top level of the thread, where the top-level exception
handler calls exitWith again, and promptly deadlocks because exitMVar is
already empty.

In the interests of keeping the discussion manageable, I'll deal with
the rest of the points later.

Cheers,
Simon


RE: deeqSeq proposal

2006-04-11 Thread Simon Marlow
On 10 April 2006 22:41, Andy Gill wrote:

> Why can't we just steal a bit in the (GHC) info table, rather than
> mess with the LSB of pointers, or have two info tables?

Because you need one bit per constructor *instance*.  e.g. there are two
variants of Just: the normal one, and the deepSeq'd one.  So I either
put the bit in the constructor instance itself, or I need one info table
for each variant, or I need to distinguish based on the address of the
closure.

Cheers,
Simon