Re: [julia-users] Re: Setting min()=Inf and max()=-Inf

2016-04-13 Thread Stefan Karpinski
The decision here was made to be consistent across types: since most types
don't have a way of representing ± infinite values, there's no general way
to do this. Making it work, but *only* for arrays of concrete
floating-point element types, would be too subtle a corner case. Imagine
you write some code and notice that you're only ever putting moderately
sized integer values into your array before taking its maximum. With the
current arrangement, you can safely change the element type of the array to
`Int` and everything will work the same as before. It's one of those cases
where there is no perfect answer.
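
As a minimal sketch (not something Base does), one type-generic way to get
the real-analysis convention yourself is to supply the reduction's identity
element from typemin/typemax, which stays consistent if you later switch the
element type from Float64 to Int; the helper names below are made up for
illustration:

# typemin(Float64) == -Inf and typemax(Float64) == Inf; for Int they are
# the smallest and largest representable machine integers.
safemaximum(a) = isempty(a) ? typemin(eltype(a)) : maximum(a)
safeminimum(a) = isempty(a) ? typemax(eltype(a)) : minimum(a)

safemaximum(Float64[])   # -Inf
safemaximum(Int[])       # -9223372036854775808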

On Wed, Apr 13, 2016 at 3:30 AM, Niclas Wiberg wrote:

> What about minimum() and maximum() with empty floating-point arrays? Would
> it make sense for those to return +Inf and -Inf, respectively, by default?
>
> I get:
>
> julia> a = Array(Float64,(0,))
> 0-element Array{Float64,1}
>
> julia> minimum(a)
> ERROR: ArgumentError: reducing over an empty collection is not allowed
>  in _mapreduce at reduce.jl:139
>  in minimum at reduce.jl:325
>
>
> Niclas
>
>
>
> On Wednesday, April 13, 2016 at 09:03:57 UTC+2, Milan Bouchet-Valat wrote:
>>
>> On Tuesday, April 12, 2016 at 20:21 -0700, Anonymous wrote:
>> > Those are good points, although I always kind of wondered why Float
>> > gets Inf while Int doesn't; I guess there's no way to have Inf belong
>> > to two distinct concrete types.
>> The problem is that native integers have no way of representing
>> infinite values, unlike floating-point numbers: they can only store
>> finite values. (But you can use floating-point formats to store integer
>> data if you need Inf.)
>>
>>
>> Regards
>>
>> > >
>> > > > Have the Julia developers considered the effects of setting
>> > > > Base.min()=Inf and Base.max()=-Inf?  This is common in real
>> > > > analysis since it plays nicely with set theory, i.e.
>> > > >
>> > > It only plays nicely with sets of real numbers.  What about sets of
>> > > other types that have a total ordering?  e.g. strings?
>> > >
>> > > Also, one of the general principles guiding the design of the Julia
>> > > standard library is to provide idioms that don't cause types to
>> > > change arbitrarily underneath the user; this principle is critical
>> > > to being able to use the standard library in high-performance code
>> > > (since type stability is critical to compiler optimization).  For
>> > > example min(1,2) == 1 (an Int), min(1) == 1 (an Int), but then
>> > > min() = Inf (floating-point)?
>> > >
>>
>


Re: [julia-users] Re: Setting min()=Inf and max()=-Inf

2016-04-13 Thread Niclas Wiberg
What about minimum() and maximum() with empty floating-point arrays? Would
it make sense for those to return +Inf and -Inf, respectively, by default?

I get:

julia> a = Array(Float64,(0,))
0-element Array{Float64,1}

julia> minimum(a)
ERROR: ArgumentError: reducing over an empty collection is not allowed
 in _mapreduce at reduce.jl:139
 in minimum at reduce.jl:325
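
A minimal sketch of a workaround (not part of the original message): pass
the identity element to the reduction yourself. On current Julia versions
init is a keyword argument; on the 0.4-era releases discussed here, reduce
took the initial value positionally.

a = Float64[]                  # empty array
reduce(max, a; init = -Inf)    # -Inf instead of an ArgumentError
reduce(min, a; init = Inf)     # Inf
# 0.4-era spelling: reduce(max, -Inf, a)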


Niclas



On Wednesday, April 13, 2016 at 09:03:57 UTC+2, Milan Bouchet-Valat wrote:
>
> On Tuesday, April 12, 2016 at 20:21 -0700, Anonymous wrote:
> > Those are good points, although I always kind of wondered why Float
> > gets Inf while Int doesn't; I guess there's no way to have Inf belong
> > to two distinct concrete types.
> The problem is that native integers have no way of representing
> infinite values, unlike floating-point numbers: they can only store
> finite values. (But you can use floating-point formats to store integer
> data if you need Inf.)
>
>
> Regards 
>
> > > 
> > > > Have the Julia developers considered the effects of setting 
> > > > Base.min()=Inf and Base.max()=-Inf?  This is common in real
> > > > analysis since it plays nicely with set theory, i.e.
> > > > 
> > > It only plays nicely with sets of real numbers.  What about sets of 
> > > other types that have a total ordering?  e.g. strings? 
> > > 
> > > Also, one of the general principles guiding the design of the Julia 
> > > standard library is to provide idioms that don't cause types to 
> > > change arbitrarily underneath the user; this principle is critical 
> > > to being able to use the standard library in high-performance code 
> > > (since type stability is critical to compiler optimization).  For 
> > > example min(1,2) == 1 (an Int), min(1) == 1 (an Int), but then 
> > > min() = Inf (floating-point)? 
> > > 
>


Re: [julia-users] Re: Setting min()=Inf and max()=-Inf

2016-04-13 Thread Milan Bouchet-Valat
On Tuesday, April 12, 2016 at 20:21 -0700, Anonymous wrote:
> Those are good points, although I always kind of wondered why Float
> gets Inf while Int doesn't; I guess there's no way to have Inf belong
> to two distinct concrete types.
The problem is that native integers have no way of representing
infinite values, unlike floating-point numbers: they can only store
finite values. (But you can use floating-point formats to store integer
data if you need Inf.)
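
A small illustration of that last point (not from the original message):
Float64 represents integer values exactly up to maxintfloat(Float64) ==
2^53, so integer-valued data can live in a Float64 array when ±Inf is
needed.

counts = Float64[3, 7, 2]    # integer-valued data stored as floats
maximum(counts)              # 7.0
maxintfloat(Float64)         # 9.007199254740992e15, i.e. 2^53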


Regards

> > 
> > > Have the Julia developers considered the effects of setting
> > > Base.min()=Inf and Base.max()=-Inf?  This is common in real
> > > analysis since it plays nicely with set theory, i.e.
> > > 
> > It only plays nicely with sets of real numbers.  What about sets of
> > other types that have a total ordering?  e.g. strings?
> > 
> > Also, one of the general principles guiding the design of the Julia
> > standard library is to provide idioms that don't cause types to
> > change arbitrarily underneath the user; this principle is critical
> > to being able to use the standard library in high-performance code
> > (since type stability is critical to compiler optimization).  For
> > example min(1,2) == 1 (an Int), min(1) == 1 (an Int), but then
> > min() = Inf (floating-point)?
> >