I'm trying to figure out promotion rules and noticed a few possible quirks;
perhaps these are bugs, since I can't work out the logic behind them. I
realize Float16 is a work in progress, but I really like the type because
my datasets are large.
julia> a=rand(Float16,1) # define a 1-element Float16 array
1-element Array{Float16,1}:
julia> a+1.0 # adding 1.0 (a Float64) still gives Float16
1-element Array{Float16,1}:
julia> a+1im # but adding 1im promotes to Complex{Float32}
1-element Array{Complex{Float32},1}:
julia> typeof(1im) # even though 1im is actually a Complex{Int64}, not a float
Complex{Int64} (constructor with 1 method)
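To see where the Float32 comes from, promote_type can be queried directly.
This is just my reading, and the results are version-dependent, but the
behaviour above suggests Float16 widens to Float32 when combined with Int64:

julia> promote_type(Float16, Float64)        # evidently Float16, matching a+1.0 above
julia> promote_type(Float16, Complex{Int64}) # evidently Complex{Float32}, matching a+1im above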
julia> a+float16(1im) # and 1im can be represented in Float16
1-element Array{Complex{Float16},1}:
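For reference, the half-precision imaginary unit can be built once and
reused (a small sketch; im16 is my own name):

julia> im16 = float16(1im) # Complex{Float16} imaginary unit
julia> a + im16            # stays in half precision, as above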
julia> sparse(a) # define a sparse Float16 matrix
1x1 sparse matrix with 1 Float16 entries:
julia> sparse(a+1im) # adding 1im promotes to Complex{Float32}
1x1 sparse matrix with 1 Complex{Float32} entries:
julia> sparse(a)*1im # multiplying by 1im also promotes to Complex{Float32}
1x1 sparse matrix with 1 Complex{Float32} entries:
[1, 1] = 0.0+0.335938im
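If the widening is unavoidable, the sparse result can be narrowed back by
hand. This is only a sketch that assumes the m/n/colptr/rowval/nzval fields
of Base's SparseMatrixCSC; S16 is my own name:

julia> S = sparse(a)*1im
julia> S16 = SparseMatrixCSC(S.m, S.n, S.colptr, S.rowval,
                             convert(Vector{Complex{Float16}}, S.nzval)) # reuse the structure, narrow the nonzeros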
julia> fft(a) # fft promotes to float64
1-element Array{Complex{Float64},1}:
For the last one, it probably needs to promote only to Float32 in order to
use the single-precision FFTW functions.
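What I have in mind is roughly this sketch (fft16 is just a placeholder
name of mine, and the convert step may need adjusting across versions):

julia> function fft16(x)
           y = fft(float32(x))                 # take the single-precision FFTW path
           convert(Array{Complex{Float16}}, y) # narrow the result back to half precision
       end

It would also be nice if julia could silently convert back to Float16 when
an in-place transform is requested; at the moment this fails: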
julia> g=plan_fft!(a)
ERROR: `plan_fft!` has no method matching plan_fft!(::Array{Float16,2},
::UnitRange{Int64}, ::Uint32, ::Float64)
 in plan_fft! at fftw.jl:492
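In the meantime a plan can be made on a Complex{Float32} copy (again just a
sketch; b is my own scratch buffer, and applying the plan as a function
matches the FFTW API in this Julia version):

julia> b = convert(Array{Complex{Float32}}, a) # buffer in a type FFTW supports
julia> g = plan_fft!(b)                        # single-precision in-place plan
julia> g(b)                                    # transform b in place; narrow back to Float16 afterwards if needed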