Thanks for the explanations. Am I correct in understanding that objects are garbage-collected when the only remaining references to them are raw pointers, and that this is what causes the problem? That kills my naive idea of using pointers to BigInts as a 64-bit datatype compatible with ints, and is suggested by the following experiment:
julia> l = [pointer_from_objref(BigInt(i)) for i=1:10]
10-element Array{Ptr{Void},1}:
 Ptr{Void} @0x0000000118c77150
 Ptr{Void} @0x0000000118c77170
 Ptr{Void} @0x0000000118c77190
 Ptr{Void} @0x0000000118c771b0
 Ptr{Void} @0x0000000118c771d0
 Ptr{Void} @0x0000000118c771f0
 Ptr{Void} @0x0000000118c77210
 Ptr{Void} @0x0000000118c77230
 Ptr{Void} @0x0000000118c77250
 Ptr{Void} @0x0000000118c77270

julia> unsafe_pointer_to_objref(l[7])
7

julia> map(unsafe_pointer_to_objref,l)
Segmentation fault: 11

On Friday, 8 April 2016 17:47:34 UTC+2, Stefan Karpinski wrote:
>
> This could help if both parts of the union were plain old data like Int64
> and Float64, but in the case of Int and BigInt, one of them is a more
> complex type, which would currently force the whole thing to be boxed and
> live in the heap anyway. What's needed here is the ability to
> stack-allocate objects that refer to the heap. We cannot do that now, but
> it is an important planned GC improvement.
>
> On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones <scott.pa...@gmail.com> wrote:
>
>> I like very much the idea of a discriminated union -- that would help
>> enormously with some of the stuff I work on.
>>
>> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>>
>>> Laurent,
>>>
>>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`.
>>>
>>> I think what you want is an extension of `Nullable`: a nullable type
>>> represents an object of a particular type that might or might not be
>>> there; there's hope that this would be done efficiently, e.g. via an
>>> additional bit. What you want is an efficient representation of a
>>> discriminated union -- a type that can represent either of two types,
>>> but without the overhead from boxing the types as is currently done in
>>> a `Union`. Unfortunately, Julia currently doesn't provide this, but it
>>> would make sense to have a feature request for this.
>>>
>>> This might look like this:
>>>
>>> ```Julia
>>> immutable FastInt
>>>     val::DiscriminatedUnion{Int, BigInt}
>>> end
>>>
>>> function (+)(a::FastInt, b::FastInt)
>>>     if typeindex(a.val) == 1 && typeindex(b.val) == 1
>>>         res, ov = add_with_overflow(a.val[1], b.val[1])
>>>         ov && return FastInt(BigInt(a.val[1]) + b.val[1])
>>>         return FastInt(res)
>>>     end
>>>     # TODO: handle mixed Int/BigInt cases
>>>     FastInt(a.val[2] + b.val[2])
>>> end
>>> ```
>>>
>>> This would be the same idea as yours, but the `reinterpret` occurs
>>> within Julia, in a type-safe and type-stable manner, in a way such
>>> that the compiler can optimize better.
>>>
>>> You could try defining a type that contains two fields, both an `Int`
>>> and a `BigInt` -- maybe `BigInt` will handle the case of a value that
>>> is never used in a more efficient manner that doesn't require
>>> allocating an object.
>>>
>>> -erik
>>>
>>> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
>>> <laurent....@gmail.com> wrote:
>>> > Dear all,
>>> > How hard would it be to code arbitrary-precision integers in Julia
>>> > with at worst 2x performance loss over native Ints?
>>> >
>>> > This is what I have in mind: have a bitstype structure, say 64 bits,
>>> > which is either the address of a BigInt (if even), or an Int64 (if
>>> > odd). Addition would be something like:
>>> >
>>> > function +(a::FastInt, b::FastInt)
>>> >     if a & b & 1
>>> >         (result, obit) = @llvm.sadd.with.overflow.i64(a, b & ~1)
>>> >         obit ? reinterpret(FastInt, BigInt(a>>1) + (b>>1)) : result
>>> >     elseif a & 1
>>> >         reinterpret(FastInt, (a>>1) + reinterpret(BigInt, b))
>>> >     elseif b & 1
>>> >         reinterpret(FastInt, reinterpret(BigInt, a) + b>>1)
>>> >     else
>>> >         reinterpret(FastInt, reinterpret(BigInt, a) + reinterpret(BigInt, b))
>>> >     end
>>> > end
>>> >
>>> > (code not meant to be run, just a skeleton)
>>> >
>>> > This would be very useful in the development of computer algebra
>>> > systems, in which BigInts are too slow and eat up too much memory,
>>> > but one is ready to pay a small price to guard against arithmetic
>>> > overflows.
>>> >
>>> > If this is too complicated, then perhaps at least a type of integers
>>> > that would raise an error in case of over/underflow? Errors could be
>>> > caught in try/catch blocks, so the user could restart the operation
>>> > with BigInts.
>>> >
>>> > TIA, Laurent
>>>
>>> --
>>> Erik Schnetter <schn...@gmail.com>
>>> http://www.perimeterinstitute.ca/personal/eschnetter/
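Following up on my segfault experiment above: the crash goes away if the BigInts stay rooted by an ordinary Julia reference, which supports the garbage-collection explanation. A minimal sketch (my own check, not code from the thread; written for current Julia, where `Ptr{Void}` is spelled `Ptr{Cvoid}`):

```julia
# Keep the BigInts reachable through a normal array; the GC will not free
# them, so the raw pointers stay valid for as long as `objs` is alive.
objs = [BigInt(i) for i in 1:10]        # roots that keep each BigInt alive
ptrs = map(pointer_from_objref, objs)   # raw pointers into the heap
vals = [unsafe_pointer_to_objref(p) for p in ptrs]
@assert vals == 1:10                    # round-trips safely, no segfault
```

In the original list comprehension each `BigInt(i)` became unreachable as soon as its pointer was taken, so the collector was free to reuse the memory before `unsafe_pointer_to_objref` ran.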
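For the record, here is a behavioral sketch of the discriminated-union idea that runs on current Julia (1.x), using a plain `Union` field. This is my own approximation, not code from the thread, and it does not get the compact layout Stefan describes (the `BigInt` arm still forces heap allocation), but the dispatch logic and the overflow promotion can be expressed today:

```julia
using Base.Checked: add_with_overflow

struct FastInt
    val::Union{Int,BigInt}   # machine int when it fits, BigInt after overflow
end

function Base.:+(a::FastInt, b::FastInt)
    if a.val isa Int && b.val isa Int
        res, ov = add_with_overflow(a.val, b.val)
        ov || return FastInt(res)            # fast path: no overflow
        return FastInt(big(a.val) + b.val)   # promote on overflow
    end
    FastInt(big(a.val) + big(b.val))         # at least one arm is already BigInt
end

x = FastInt(typemax(Int)) + FastInt(1)
@assert x.val isa BigInt && x.val == big(typemax(Int)) + 1
y = FastInt(1) + FastInt(2)
@assert y.val isa Int && y.val == 3
```

The `isa` branches are resolved by union splitting, so the `Int`+`Int` path stays type-stable; the remaining cost relative to Laurent's tagged-pointer scheme is that the union field itself is not stored inline.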