The problem is that the G-machine optimizes away some of the
updates which make sure that the heap is always in a consistent
state in a pure graph-reduction system.
A pure G-machine updates all redexes, but the STG-machine only
updates /shared/ redexes, yes?
It's a while since I looked at
SimonM:
My impression is that the problem is more fundamental: the thunk for
x is under evaluation when the finalizer begins to run,
Urgle.
probably we shouldn't get a crash, but a blackhole instead.
Even a blackhole is wrong. There's no cycle so it ought to evaluate
successfully.
There's another problem with Simon's patch I haven't been able to pin
down: if you run the example, interrupt it at the right point and type
another expression, the finalizers run, but the expression is lost.
I can get it to fail another way too:
main = do
  p <- mallocBytes 64
I'd hoped that blockFinalizers would be useful for defining other
primitives but since it won't even work for GHC, I agree that PVar
will meet most of our needs. (An even simpler design might be to
extend our IORef implementations with 'atomicallyModifyIORef'.)
So, is this a design that we could agree on?
Alastair:
I don't know how to achieve the same goal with
atomicModifyIORef.
George:
I do. To modify ioRef1 and ioRef2 simultaneously, write
atomicModifyIORef ioRef1 (\ contents1 -> unsafePerformIO
(atomicModifyIORef ioRef2 (\ contents2 -> blah blah)))
The actual modification will take place when the result or
contents
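A minimal sketch of this trick, assuming GHC's Data.IORef and System.IO.Unsafe; the helper name swapIORefs is mine, not from the thread. The nested atomicModifyIORef sits inside a thunk written into ioRef1, so the second update only happens when that thunk is forced:

```haskell
import Data.IORef (IORef, newIORef, readIORef, atomicModifyIORef)
import System.IO.Unsafe (unsafePerformIO)

-- Swap the contents of two IORefs using the nested trick.
-- The atomicModifyIORef on ioRef2 is hidden inside a thunk
-- stored into ioRef1; it runs only when that thunk is forced.
swapIORefs :: IORef a -> IORef a -> IO ()
swapIORefs ioRef1 ioRef2 =
  atomicModifyIORef ioRef1 $ \contents1 ->
    unsafePerformIO $
      atomicModifyIORef ioRef2 $ \contents2 ->
        -- inner pair: (new ioRef2 contents, (new ioRef1 contents, result))
        (contents1, (contents2, ()))

main :: IO ()
main = do
  r1 <- newIORef (1 :: Int)
  r2 <- newIORef 2
  swapIORefs r1 r2
  a <- readIORef r1
  a `seq` return ()      -- force the thunk so the nested update runs
  b <- readIORef r2
  print (a, b)           -- (2,1)
```

Note the `seq`: if nothing forces the thunk in r1 before r2 is read, r2 still shows its old value. That window is exactly the ordering problem discussed further down the thread.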
Alastair Reid wrote:
Alastair:
So, is this a design that we could agree on?
SimonM:
I like it. I'd vote for 'atomicModifyIORef' rather than a new PVar
type, though.
Ok, onto the second question:
Can we use atomicModifyIORef to make our code finalizer-safe?
I see potential problems wherever two IORefs need
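For a single IORef the answer seems to be yes. A sketch (the counter and its name are mine, not from the thread), assuming GHC's Data.IORef:

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef)

-- With separate readIORef/writeIORef calls, a finalizer running
-- between the read and the write could lose an update.  Folding
-- the read-modify-write into one atomicModifyIORef closes that
-- window.
bumpCounter :: IORef Int -> IO Int
bumpCounter ref = atomicModifyIORef ref (\n -> (n + 1, n + 1))

main :: IO ()
main = do
  ref <- newIORef 0
  _ <- bumpCounter ref
  _ <- bumpCounter ref
  n <- bumpCounter ref
  print n                -- 3
```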
Alastair:
On a system where finalizers behave like preemptive threads
[nonsense deleted]
My outline of consequences/implementation for GHC-like systems was
completely wrong because I was still thinking about finalizers as
special cases.
What needs to happen on GHC-like systems is that
Ross Paterson [EMAIL PROTECTED] writes:
there's an unsafe use in evalName(),
I think this is easily fixed by using malloc to allocate the buffer
and then tracking down all uses and calling free.
and I don't understand the mutual recursion between eval() and run().
Not sure what you don't
On Thu, Oct 17, 2002 at 01:42:47PM +0100, Alastair Reid wrote:
Ross Paterson [EMAIL PROTECTED] writes:
there's an unsafe use in evalName(),
I think this is easily fixed by using malloc to allocate the buffer
and then tracking down all uses and calling free.
OK -- there are only 20 of those.
Simon Marlow wrote:
[snip]
Don't you run into a problem even if the two threads use the same
ordering? Suppose
- thread 1 does the atomicModifyIORef, and gets preempted before
doing the seq
- thread 2 does its own atomicModifyIORef, and the seq. Thread 2
gets an inconsistent
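The window can be shown deterministically in a single thread (the refs and values are illustrative): after the atomicModifyIORef but before the seq, ref1 has been updated while ref2 has not. Compiled with optimisations, a strictness analyser could force the result earlier and hide the window, which is the broader worry about reasoning voiced later in the thread.

```haskell
import Data.IORef (newIORef, readIORef, atomicModifyIORef)
import System.IO.Unsafe (unsafePerformIO)

main :: IO ()
main = do
  ref1 <- newIORef (0 :: Int)
  ref2 <- newIORef (0 :: Int)
  r <- atomicModifyIORef ref1 $ \c1 ->
         unsafePerformIO $
           atomicModifyIORef ref2 $ \c2 -> (c2 + 1, (c1 + 1, ()))
  -- A thread preempted at this point has already published its
  -- thunk into ref1, but the nested update of ref2 has not run:
  mid <- readIORef ref2
  print mid              -- 0
  r `seq` return ()      -- the "seq" step forces the nested update
  end <- readIORef ref2
  print end              -- 1
```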
Alastair Reid wrote:
However in general I think we can hide some of the horribleness from
the user:
modify2IORefs :: IORef a -> IORef b -> (a -> b -> (a,b,c)) -> IO c
[horrible code deleted]
And if they need to update 3 IORefs or a list of IORefs?
It would be a fairly trivial matter to
Writing code like that yourself and getting it right and portable
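It does generalise to a list. A sketch (the name modifyIORefs and every detail below are my assumption, not code from the thread) that locks in the updates one ref at a time with the same nested trick; it assumes the update function returns a list of the same length:

```haskell
import Data.IORef (IORef, newIORef, readIORef, atomicModifyIORef)
import System.IO.Unsafe (unsafePerformIO)

-- Each level stores a thunk into its ref; forcing the final
-- result (or any ref's contents) triggers the rest of the chain.
modifyIORefs :: [IORef a] -> ([a] -> ([a], c)) -> IO c
modifyIORefs []     f = return (snd (f []))
modifyIORefs (r:rs) f =
  atomicModifyIORef r $ \x ->
    unsafePerformIO $
      modifyIORefs rs $ \xs ->
        let (x':xs', c) = f (x:xs)   -- lazy match: f must keep the length
        in (xs', (x', c))

main :: IO ()
main = do
  rs <- mapM newIORef [1, 2, 3 :: Int]
  total <- modifyIORefs rs (\xs -> (map (* 2) xs, sum xs))
  total `seq` return ()              -- force the chain of updates
  vals <- mapM readIORef rs
  print (total, vals)                -- (6,[2,4,6])
```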
Simon Marlow wrote:
[snip]
However, I think we're trying to solve a problem that doesn't exist yet.
All the libraries we have which are affected can be fixed by using
atomicModifyIORef, and even if one were to arrive which can't be fixed
in this way, the chances that someone would also want to
On Thu, Oct 17, 2002 at 10:09:06AM +0100, Simon Marlow wrote:
[snip]
Worse though, I don't even know what semantic framework to use to
reason about it if we want to be sure the code will work in the
presence of strictness analyzers, eager evaluation, parallel
evaluation, fully-lazy evaluation, etc. Operational reasoning and
reasoning by example struggle with
Alastair suggested implementing blockFinalizers rather than PVars. However I
dislike this for two reasons:
(1) I'm rather attached to PVars. Not just because I suggested them (actually
I think I stole them from Einar Karlsen) but because it looks to me as if they
(2) blockFinalizers looks fine for Hugs and NHC which only have a
single-thread model, but it looks tricky in general where [...]
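On a single-threaded runtime, blockFinalizers could plausibly be no more than a flag the runtime consults before running pending finalizers. A sketch only; the names and the flag-based scheme are my assumption, not an existing API:

```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef, atomicModifyIORef)
import System.IO.Unsafe (unsafePerformIO)
import Control.Exception (bracket)

-- Global flag the (hypothetical) runtime would check before
-- running any pending finalizers.
{-# NOINLINE finalizersBlocked #-}
finalizersBlocked :: IORef Bool
finalizersBlocked = unsafePerformIO (newIORef False)

-- Run an action with finalizers blocked, restoring the previous
-- state afterwards (bracket keeps this exception-safe, and makes
-- nested calls behave).
blockFinalizers :: IO a -> IO a
blockFinalizers act =
  bracket (atomicModifyIORef finalizersBlocked (\old -> (True, old)))
          (writeIORef finalizersBlocked)
          (const act)
```

This is exactly where the single-thread assumption bites: with preemptive threads, a per-runtime flag like this would block finalizers for everyone, not just the current thread.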
Ah, I see what you mean.
I'd kinda hardwired into the definition the assumption that finalizers
run at higher priority than other threads and that there's a
SimonM:
[...] Can we easily identify which are the unsafe places and fix them?
Alastair:
Look at all prims with types in IO. Look at what data structures they
touch. Check if there are accesses to that data structure 'both
sides' of a call to eval (there may be a few functions which invoke
Indeed, I very nearly implemented such a thing as part of the patch
I sent out. However, it didn't look trivial enough to implement so
I backed off. The blocked state needs to be saved and restored at
various points: when starting a finalizer, when invoking a
foreign-exported function,
Also, this is a nested call to eval(), in a primitive, which can
invoke an IO action and therefore re-enter Haskell without going
through unsafePerformIO. Is that safe?
Yes, I think so. Most calls to IO actions from primitives are safe
and I believe these ones are too. (In some ways,
So, are we now claiming that my patch *is* safe? (Never mind about
IORefs, I'm talking about the implementation itself).
No.
And my recent focus on IORefs has simply been because they seemed the
strongest argument.
A
___
FFI mailing list
However even if Haskell finalizers + MVars are impossible in NHC, I
don't think Haskell finalizers + mutable state have to be. For
example another mutable variable we could have would be a PVar which
is always full and has functions [snip]
updatePVar (PVar ioRef) updateFn =
do
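The thread's own updatePVar code is elided above, but the PVar idea can be sketched over IORef with atomicModifyIORef; the implementation below is my assumption, not the original:

```haskell
import Data.IORef (IORef, newIORef, readIORef, atomicModifyIORef)

-- A PVar is a mutable variable that is always full and is updated
-- only by applying a pure function, so there is no take/put window
-- for a finalizer to deadlock in.
newtype PVar a = PVar (IORef a)

newPVar :: a -> IO (PVar a)
newPVar x = fmap PVar (newIORef x)

readPVar :: PVar a -> IO a
readPVar (PVar ioRef) = readIORef ioRef

-- Apply updateFn atomically and return the new contents.
updatePVar :: PVar a -> (a -> a) -> IO a
updatePVar (PVar ioRef) updateFn =
  atomicModifyIORef ioRef (\x -> let x' = updateFn x in (x', x'))
```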