Re: [Haskell-cafe] Re: Existentially-quantified constructors: Hugs is fine, GHC is not?

2006-05-19 Thread Ben Rudiak-Gould

Ross Paterson wrote:

Why would a type variable be present at runtime, let alone bound to _|_?


Well, it'd be present in jhc. And you want your intermediate language to 
typecheck even if the types disappear before you get to assembly language. 
And the same issue shows up with dictionaries, e.g. in


data MyObj = forall a. MyInterface a => MyObj a

f x = myFunc o where MyObj o = x

In order to make this work, you have to support lifted dictionaries, which 
means a potential performance penalty on every dictionary lookup.


Hugs does accept that code, so I assume it's paying that penalty, unless 
there's some other solution to this problem that I'm missing.
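
For anyone following along, a self-contained version of the example looks roughly
like this (an untested sketch; the class body and the Int result type are made up
just to have something concrete, and it needs -fglasgow-exts or the equivalent
LANGUAGE pragmas):

class MyInterface a where
  myFunc :: a -> Int                   -- result type chosen arbitrarily for the sketch

data MyObj = forall a. MyInterface a => MyObj a

f :: MyObj -> Int
f x = myFunc o where MyObj o = x       -- Hugs accepts this pattern binding

g :: MyObj -> Int
g (MyObj o) = myFunc o                 -- the form GHC accepts instead

As the subject line says, Hugs takes the where-binding in f, while GHC rejects
pattern bindings that mention an existential constructor and wants the g form.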


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: WordPtr,IntPtr,IntMax,WordMax

2006-05-11 Thread Ben Rudiak-Gould

John Meacham wrote:

also, incidentally, for anyone on x86 that cares about math performance,
use -optc-fsse2 to make it use the much nicer math coprocessor available
on modern x86 cpus.


I object to its characterization as "nicer". It's faster, but *lower 
precision*. It worries me that people are so blithely abandoning those extra 
bits in the name of speed. A few years from now there's going to be an 
expensive engineering failure, and months of investigation will reveal that 
it was because a math library was compiled for SSE2. Be careful out there.


(And while I'm on the subject, Haskell should have a LongDouble type.)

-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: WordPtr,IntPtr,IntMax,WordMax

2006-05-11 Thread Ben Rudiak-Gould

Simon Marlow wrote:

I suppose you might argue that extra precision is always good.


Well... I'm having a hard time thinking of a situation where it isn't. I 
realize that people want reproducibility, I'm just not convinced that they 
should. The situations where optimization flags make a significant 
difference in the result are situations where the original result wasn't 
trustworthy to begin with. Reproducible rounding gives you a false sense of 
security in those cases, and doesn't make a difference otherwise. If you 
know IEEE 754 inside out, and IEEE double-precision is exactly what you 
want, then it's nice if your programming language can cater to that, but 
along those lines it'd be nice if Haskell supported rounding modes, trap 
handlers, and so on.


I suppose this problem could be solved the way the analogous Int problem is 
solved, with a machine-precision Float and portable/reproducible Float32 and 
Float64.


The x87 isn't the only FPU with this problem. The PowerPC has a fused 
(double*double)+double operation that the optimizer can't use if you need 
reproducibility, because it doesn't round the intermediate product: the fused 
form computes round(a*b + c) in one step, where separate operations would give 
round(round(a*b) + c), and the two can differ in the last bit.


LongDouble would be fine, but storing intermediate Doubles in 80 bits is 
bad.


The x87 registers are actually 82 bits wide, so even using LongDouble 
doesn't guarantee reproducibility. Maybe there needs to be a Float80 also. 
(Available only on x86, presumably.)


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: WordPtr,IntPtr,IntMax,WordMax

2006-05-11 Thread Ben Rudiak-Gould

skaller wrote:

On Fri, 2006-05-12 at 00:34 +0100, Ben Rudiak-Gould wrote:

Simon Marlow wrote:

I suppose you might argue that extra precision is always good.
Well... I'm having a hard time thinking of a situation where it isn't. 


Wastes space in the cache tree, slowing down the program
and limiting the maximum problem size that can be handled.


But we were talking about flushing temporary values to memory to force 
rounding; in this case keeping the extra precision is cheaper than getting 
rid of it. When that's not true, I agree that less precision is sometimes 
better.


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: inside the GHC code generator

2006-02-24 Thread Ben Rudiak-Gould

Rene de Visser wrote:
Integer is about 30 times slower than it needs to be on GHC if you have 
over half the values between -2^31 and 2^31. On i386 you can basically 
test the low bit for free (it runs in parallel with the integer add 
instruction). This would allow only values outside this range to 
require calling the long-integer code. Such an optimization is not 
easily done in C.


Currently GHC defines

data Integer = S# Int# | J# Int# ByteArray#

So it already avoids using GMP for small integers. There's a "will this 
multiplication overflow" primitive, but I'm not sure how it's implemented.
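
As a caricature of what the S#/J# split buys, here's the same idea written in
ordinary boxed Haskell (purely an illustration made up for this message; the real
code uses Int#, ByteArray#, GMP and an overflow-checking primop instead of
detouring through Integer):

data MyInteger = Small !Int       -- fits in a machine word (the S# case)
               | Big   Integer    -- stand-in for the GMP-backed J# case

plusMI :: MyInteger -> MyInteger -> MyInteger
plusMI (Small a) (Small b)
  | fits r    = Small (fromInteger r)   -- fast path: no big-number code touched
  | otherwise = Big r
  where
    r      = toInteger a + toInteger b
    fits n = n >= toInteger (minBound :: Int) && n <= toInteger (maxBound :: Int)
plusMI x y = Big (toBig x + toBig y)
  where
    toBig (Small n) = toInteger n
    toBig (Big n)   = n

The point is just that the common case never leaves machine arithmetic.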


I think that changing this to use magic pointers would be pretty easy. You'd 
need to introduce a new primitive type, say Int31#, and then:


1. anytime you previously constructed a WHNF S# on the heap, make a
   magic pointer instead

2. anytime you dereference a pointer that might be an S#, check for a
   magic pointer first.

Even if a lot of code needs to be changed, it's straightforward because the 
changes are local. You're just changing the meaning of a pointer such that 
there's a statically allocated S# n at address 2n+1.
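
Concretely, it's the usual low-bit tagging trick. Modelling addresses as plain
Ints (only a model invented here, not how the RTS would actually spell it):

type Addr = Int                    -- stand-in for an untyped heap address

tagSmall :: Int -> Addr            -- pretend S# n lives at address 2n+1
tagSmall n = 2 * n + 1

isSmall :: Addr -> Bool            -- real heap pointers are word-aligned, so
isSmall = odd                      -- an odd "address" can't be a real pointer

untagSmall :: Addr -> Int
untagSmall p = (p - 1) `div` 2

Every dereference of a possible S# then starts with the isSmall test, which is
the check in step 2 above.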


It would also be worth trying this for Char#, which is already a 31-bit 
type, to see if it would speed up text-processing code.


If only simple loops are optimized it will encourage people to always 
code loops in their haskell rather than perhaps using more appropriate 
constructs.


You could say that about any language. When your code needs to be fast it 
needs to be fast. I'd rather write ugly Haskell code than ugly C code, if it 
comes to that.


Also take the Maybe data type with Nothing and Just ..., or any other 
datatypes with 0- and 1-variable constructors. Here these could be 
represented by special values for the 0-variable case and bit marking on 
the single-variable constructor values.


I'm not sure this would help much. Nullary constructors are already 
allocated statically, so they're already represented by special values. You 
could check for those values instead of dereferencing the pointer, but in 
time-critical code the static object will presumably be in L1 cache anyway. 
I could be wrong of course, and it would be easy enough to extend the Int31# 
idea to nullary constructors (Int0#).


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: inside the GHC code generator

2006-02-24 Thread Ben Rudiak-Gould

Simon Peyton-Jones wrote:

| [...] put a static table in the executable with
| information about register and stack usage indexed by procedure return
| point, and use that to unwind the stack and find roots. 


Every accurate garbage collector (including GHC's) uses a technique like
this, but the solution is invariably tightly-coupled to the garbage
collector, compiler and run-time system.


Okay, I don't know what I was thinking when I wrote that no languages that 
compile via C use compacting collectors, since obviously lots of them do. 
But they do it by using various hacks to force local heap pointers into 
locations where the GC can find them. Most of this effort is wasted, because 
the local variables disappear before the GC runs. What I'm talking about is 
idiomatic C code which manipulates heap pointers like any other object, and 
which can be optimized as usual (e.g. holding heap pointers in callee-saved 
registers across function calls) without causing GC problems. There's no 
reason in principle that this couldn't be done.


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: inside the GHC code generator

2006-02-23 Thread Ben Rudiak-Gould

[EMAIL PROTECTED] wrote:

* multiple results can be returned via C++ pair<a,b> template, if this is
efficiently implemented in gcc


There's a -freg-struct-return option in 2.x, 3.x and 4.x. I think it's off 
by default on many architectures.



* recursive definitions translated into the for/while loops if possible


I think recent versions of GCC do a good job of this. See

http://home.in.tum.de/~baueran/thesis/

All of this efficiency stuff aside, there's a big problem you're neglecting: 
GARBAGE COLLECTION. For a language that allocates as much as Haskell I think 
a conservative collector is absolutely out of the question, because they 
can't compact the heap and so allocation becomes very expensive. A copying 
collector has to be able to walk the stack and find all the roots, and that 
interacts very badly with optimization. All the languages I've seen that 
compile via C either aren't garbage collected or use a conservative or 
reference-count collector.


The closest thing I've seen to a solution is the technique used in Urban 
Boquist's thesis, which is to put a static table in the executable with 
information about register and stack usage indexed by procedure return 
point, and use that to unwind the stack and find roots. Some C compilers 
(including gcc) support debugging of optimized code, so it might be possible 
to parse the debugging tables and extract this information. As far as I 
know, no one has ever looked into the feasibility of doing this, which is 
kind of surprising since root-finding is probably the single biggest 
obstacle to compiling functional languages via C.
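
To make the shape of such a table concrete, here's a toy model in Haskell
(everything in it is invented for illustration; the real table would be emitted
by the compiler and walked by the RTS in C):

import Data.Map (Map)
import qualified Data.Map as Map
import Data.Word (Word)

type ReturnAddr = Word

data FrameInfo = FrameInfo
  { pointerSlots :: [Int]   -- frame slots that hold live heap pointers here
  }

type StackMap = Map ReturnAddr FrameInfo

-- Given the table and a stack presented as a list of frames (each frame its
-- return address plus its slot contents), collect every root the GC must trace.
findRoots :: StackMap -> [(ReturnAddr, [Word])] -> [Word]
findRoots table frames =
  [ slots !! i
  | (ret, slots) <- frames
  , Just info    <- [Map.lookup ret table]
  , i            <- pointerSlots info
  ]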


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: forall a (Ord a => a -> a) -> Int is an illegal type???

2006-02-11 Thread Ben Rudiak-Gould

David Menendez wrote:

Ben Rudiak-Gould writes:
|  forall a. (forall b. Foo a b => a -> b) -> Int
| 
| is a legal type, for example.


Is it? GHCi gives me an error if I try typing a function like that.


This works:

f :: forall a. (forall b. Foo a b => a -> b) -> Int
f _ = 3

I interpret this type as meaning that you can call the argument provided you 
can come up with an appropriate b. If you can't come up with one then you 
can't call the argument, but you can still ignore it and type check.


If you had, for example,

instance Foo a Char
instance Foo a Int

then you would be able to use the argument polymorphically at b=Char and b=Int.
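
Spelled out, that looks something like this (an untested sketch; Foo is left
without methods to keep it short, and it needs -fglasgow-exts or the
corresponding pragmas for multi-parameter classes, flexible instances and
rank-2 types):

class Foo a b

instance Foo a Char
instance Foo a Int

f :: forall a. (forall b. Foo a b => a -> b) -> Int
f g = fromEnum (g undefined :: Char) + (g undefined :: Int)
  -- g is used at b = Char and at b = Int; undefined is just to have an a to pass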

f = undefined also works if you have an instance Foo a b (which is only 
allowed with -fallow-undecidable-instances). I think it's because of 
predicativity that it fails to typecheck without that instance.


Presumably forall a. (Ord a => a -> a) -> Int could be allowed as well, but 
since you might be called at a = IO (), which has no Ord instance, you can 
never call your argument, and therefore it's not a useful type in practice.


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: forall a (Ord a => a -> a) -> Int is an illegal type???

2006-02-09 Thread Ben Rudiak-Gould

Brian Hulley wrote:

I'm puzzled why GHC chooses to create illegal types instead  of
finding the innermost quantification point ie I would think that

(Ord a => a -> a) -> Int

should then obviously be shorthand for

 (forall a. Ord a => a -> a) -> Int

and surely this could easily be implemented by just prepending forall a 
b c onto any context restricting a b c... (?)


I agree that it's strange to add an implicit quantifier and then complain 
that it's in the wrong place. I suspect Simon would change this behavior if 
you complained about it. My preferred behavior, though, would be to reject 
any type that has a forall-less type class constraint anywhere but at the 
very beginning. I don't think it's a good idea to expand implicit 
quantification. Also, the rule would not be quite as simple as you make it 
out to be, since


forall a. (forall b. Foo a b => a -> b) -> Int

is a legal type, for example.
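
For what it's worth, the version Brian wants the shorthand to mean is fine once
the quantifier is written explicitly (sketch, with rank-2 types enabled):

g :: (forall a. Ord a => a -> a) -> Int
g h = h (3 :: Int)        -- the argument can be used at any Ord type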

-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Passing a matrix from C to Haskell

2006-02-08 Thread Ben Rudiak-Gould

Cyril Schmidt wrote:
I was thinking of something like passing the array as Ptr Int#, but how 
do I fetch the elements that I am interested in?


I think you want Ptr CInt and peekElemOff. If the array is declared in C as

int a[100][20]

and p :: Ptr CInt is a Haskell pointer to it, then a[y][x] can be accessed 
as peekElemOff p (y * 20 + x). This should be 100% portable.
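
In other words, something along these lines (a small sketch; getting the Ptr CInt
over from C is the usual FFI plumbing):

import Foreign.C.Types (CInt)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekElemOff)

-- Read a[y][x] from the C array  int a[100][20],  given a pointer to its
-- first element; C lays the rows out contiguously, so the flat index is
-- y * 20 + x.
readCell :: Ptr CInt -> Int -> Int -> IO CInt
readCell p y x = peekElemOff p (y * 20 + x)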


-- Ben

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Re: Double -> CDouble, realToFrac doesn't work

2004-11-08 Thread Ben Rudiak-Gould
MR K P SCHUPKE wrote:
Why is there no Irrational class? This would make more sense for
Floats and Doubles than the fraction-based Rational class. We could
also add an implementation of infinite-precision irrationals using
a pair of Integers for exponent and mantissa.
That would just be a subset of the rationals, namely the dyadic 
rationals (mantissa / 2^^exponent).
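
That is, a pair of Integers for mantissa and exponent only ever denotes m / 2^e,
as in this throwaway sketch (names invented here):

data Dyadic = Dyadic { mant :: Integer, expo :: Integer }

toRat :: Dyadic -> Rational
toRat (Dyadic m e) = fromInteger m / 2 ^^ e

So 3/8 is Dyadic 3 3, but nothing of this form gives you 1/3, let alone anything
irrational.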

-- Ben
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Re: Double -> CDouble, realToFrac doesn't work

2004-11-08 Thread Ben Rudiak-Gould
Henning Thielemann wrote:
I wonder why Infinity has a sign in IEEE floating-point processing, as well as
0. To support this behaviour uniformly one would need a +0 or -0 offset
for each number, which would lead straightforwardly to non-standard analysis
...
See "Branch Cuts for Complex Elementary Functions, or Much Ado About 
Nothing's Sign Bit" by William Kahan, in The State of the Art in 
Numerical Analysis (eds. Iserles and Powell), Clarendon Press, Oxford, 
1987.

(Note that I have not read this paper. However, Kahan was the primary 
architect of the IEEE floating point standard, so you can be pretty sure 
the reasons given in the paper are also the reasons IEEE floating point 
has signed zero.)

A good online presentation which mentions all kinds of interesting 
floating point pathologies, including those discussed in the above 
paper, is "How Java's Floating-Point Hurts Everyone Everywhere" 
(http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf).

[...] Thus (a-b) is not the same as -(b-a) for IEEE floats!
Nor is x*0 equal to 0 for every x; nor does x == y imply f(x) == f(y) 
for every x, y, f; nor is addition or multiplication associative. There 
aren't many identities that do hold of floating point numbers.
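
A few of these failures are easy to check; results are in the comments, and I'd
expect the same on any machine with IEEE doubles, but treat it as a sketch:

prodNotZero :: Bool
prodNotZero = (1/0) * 0 == (0 :: Double)          -- False: Infinity * 0 is NaN

notAssociative :: Bool
notAssociative = (0.1 + 0.2) + 0.3 == (0.1 + (0.2 + 0.3) :: Double)   -- False

eqNotCongruent :: (Bool, Bool)
eqNotCongruent = (z == nz, recip z == recip nz)   -- (True, False): 0 == -0,
  where                                           -- but recip gives +Infinity
    z, nz :: Double                               -- on one and -Infinity on
    z  = 0                                        -- the other
    nz = negate 0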

-- Ben
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users