I am failing to understand why the following code produces type instability
(caveat: the code is a reduction of larger and more complex code, but the
features are the same).
```
type Foo
    f::Function
    y::Array{Float64, 1}
    x::Array{Float64, 2}
end
type Bar{T}
    b::T
end
type A{T}
    a::T
end
```
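For reference, a common source of instability in this pattern is the abstractly typed `f::Function` field: calls through it cannot be inferred. A sketch of a concretely parameterized alternative (`Foo2` is my illustrative name, not from the original code):

```julia
# Parameterizing on the function's concrete type lets the compiler
# specialize code that calls `foo.f`, avoiding dynamic dispatch.
type Foo2{F}
    f::F
    y::Array{Float64, 1}
    x::Array{Float64, 2}
end
```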
I think in the latest versions of Atom the plot will show in the plot pane
without the need for tricks.
JuliaPraxis seems like a good idea. I have been struggling to get
consistent naming, and having a guide to follow may at least cut the
struggling time short.
Me too! I have been trying to update a package to `v0.5` and I do not
really see a clean way to support both 0.4 and 0.5 without an entry like
this in Compat.jl.
On Sunday, August 7, 2016 at 10:02:44 PM UTC+2, Andreas Noack wrote:
>
> It would be great with an entry for this in Compat.jl, e.g.
As Eric pointed out, with Julia 0.4.x functions passed as arguments are not
optimized, as their type is difficult to infer. That's why the profiler shows
jl_generic_function as the bottleneck. Try it with 0.5 and things could get
dramatically faster.
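As a minimal illustration (hypothetical reduction, not the original code), this is the kind of higher-order loop where 0.5's specialization on the function argument helps:

```julia
# On 0.4 the call `f(x)` inside the loop goes through generic dispatch;
# on 0.5 `apply_n` is specialized on typeof(f), so the call can be
# inferred and inlined.
function apply_n(f, x, n)
    s = zero(x)
    for i in 1:n
        s += f(x)
    end
    s
end

apply_n(sin, 0.5, 1000)
```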
```
import Base.+
type numerr
    num
    err
end
+(a::numerr, b::numerr) = numerr(a.num + b.num, sqrt(a.err^2 + b.err^2));
+(a::Any, b::numerr) = numerr(a + b.num, b.err);
+(a::numerr, b::Any) = numerr(a.num + b, a.err);
x = numerr(10, 1);
y = numerr(20, 2);
println(x+y)
println(2+x)
println(y + 2)
```
I am pretty sure it must be something specific to your installation. On my
machine
```
Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; RELEASE_X86_64
x86_64
```
running the code, I get the following timings:
```
julia> timeit(1000,1000)
GFlop= 2.4503017546610866
GFlop (SIMD) = 1
```
In addition to using recursion, you can also use a macro to generate the
> code.
>
> However, this functionality is available in the "Cartesian" module which
> is part of Base Julia. You are probably looking for "@nloops".
>
> -erik
>
>
> On Mon, Sep 28, 2
I am having problems (serious problems!) dealing with algorithms that boil
down to nested for loops whose number of loops is not known at compile
time. As an example, consider this:
```
function tmp()
    r = 3
    n = 100
    λ = zeros(r)
    u = rand(n, r)
    counter = 0
    Y = Arr
```
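For reference, a loop nest of fixed depth can be generated with `Base.Cartesian`, as suggested in the reply above. A sketch assuming depth 3 (matching `r = 3`; `sumall` is an illustrative name):

```julia
using Base.Cartesian

# @nloops 3 i A body generates three nested loops with i_1, i_2, i_3
# ranging over the dimensions of A; @nref 3 A i expands to A[i_1, i_2, i_3].
function sumall(A::Array{Float64, 3})
    s = 0.0
    @nloops 3 i A begin
        s += @nref 3 A i
    end
    s
end
```

Note the depth must still be known at macro-expansion time; for a truly runtime-varying depth, recursion (or `@generated` functions) is the usual route.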
The issue is also present on 0.3.3, the oldest version I have installed.
That's the LLVM IR code generated:
```
julia> code_llvm(sumofsins2, (Int, ))
define double @"julia_sumofsins2;19930"(i64) {
top:
%1 = icmp sgt i64 %0, 0, !dbg !848
br i1 %1, label %L, label %L3, !dbg !848
L:
```
Hi Gray,
thank you very much. Yes, the memory allocation is astonishing. This works
great.
On Friday, September 5, 2014 6:59:53 AM UTC+2, Gray Calhoun wrote:
>
> On Wednesday, September 3, 2014 6:46:07 PM UTC-5, Giuseppe Ragusa wrote:
>>
>> I have been struggling to find a fast
I have been struggling to find a fast and elegant way to implement the
following algorithm.
I have an `Array{Float64, 2}`, say `X = randn(100, 2)`, and an
`Array{Int64, 1}`, call it `cl`, which denotes row-blocks of X. More concretely:
```
 0.863377   0.867817  1.0
-0.310559  -0.863393  1.0
 1.749630
```
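The message is cut off here, so the intended reduction is unclear; as a hedged guess at the shape of the answer, a block-wise accumulation over the groups in `cl` might look like this (`blocksum` is an illustrative name, and 1-based block indices in `cl` are assumed):

```julia
# Hypothetical sketch: accumulate the rows of X within each block
# labeled by cl, one output row per block.
function blocksum(X::Matrix{Float64}, cl::Vector{Int})
    out = zeros(maximum(cl), size(X, 2))
    for i in 1:size(X, 1), j in 1:size(X, 2)
        out[cl[i], j] += X[i, j]
    end
    out
end
```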