Is there an idiomatic Julian way to deal with these cases without a large
pre-allocation or push!? I understand why your suggestion here would be
faster than the original. This application aside, I see a lot of push!()
usage in some of the libs (e.g. saving state in Optim.jl, readdlm in Base,
etc.).
It's not terrible, and you can help it out with sizehint, but it's just
possible that it will have to move the data around a bunch.
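Something along these lines, for instance (the 10_000 is just a stand-in for however many elements you expect to end up with):

v = Float64[]
sizehint(v, 10_000)    # reserve capacity up front so push! rarely has to move the data
for k = 1:10_000
    push!(v, rand())
end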
> On May 1, 2014, at 1:06 PM, Ethan Anderes wrote:
>
> Ok, so I'm getting the message that my push! stuff isn't a good default
> go-to. It just felt so natural but
2.0 is a Float64 everywhere – it's only integer literals whose type depends on
the system. Even on old 32-bit machines, the FPUs had 64-bit floats, so there
was never a place where that was an issue; but you really don't want to be
using 64-bit integers on a 32-bit machine.
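For instance:

typeof(2.0)   # Float64 on every platform – float literals don't depend on word size
typeof(2)     # Int32 on a 32-bit build, Int64 on a 64-bit build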
Ok, so I'm getting the message that my push! stuff isn't a good default go-to.
It just felt so natural, but I can see that John's code is much more
performance-critical. I'll wean myself off push! :)
> On May 1, 2014, at 12:46 PM, Ethan Anderes wrote:
>
> BTW: thanks a ton...I can't believe I'm having Stefan look at my code!
Well, I can't believe someone can't believe I'm looking at their code, so that
makes us even ;-)
Jameson:
My response is going to expose my ignorance here ... but I wasn't worried that
Float64 wouldn't be fast, but rather that on someones machine typeof(2.0) would
say Float32 instead of Float64. In which case my code wouldn't run without
changing all the Float64 type initialization to Floa
For this kind of problem, you could allocate a big matrix in advance and fill
it in column by column, rather than pushing onto an array. Something roughly like
the following would work:

theta = Array(Float64, 3, 10_000)
theta[:, 1] = [2.0, 3.0, 4.0]
for k = 2:10_000
    theta_cur = theta[:, k - 1]
    theta_prop = theta_cur + rand(3)                             # stand-in proposal step
    accept = rand() < min(1, computelr(theta_prop, theta_cur))   # accept/reject as in the push! example (assumed continuation)
    theta[:, k] = accept ? theta_prop : theta_cur
end
Cameron:
I'm not sure what linked lists in Julia are but I will look into it. Thanks!
I don't know of a machine that can run Julia for which Float64 is not fast.
However, you could make a typealias and use that, for the eventual day when
someone ports Julia to an embedded processor without an FPU.
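Something like the following, say (FloatT is just an illustrative name):

typealias FloatT Float64      # flip this one line to Float32 if a port ever needs it

theta_init = FloatT[2.0, 3.0, 4.0]
theta = Array(FloatT, 3, 10_000)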
On Thursday, May 1, 2014, Ethan Anderes wrote:
> Oop the second line of the code block should read
Stefan:
Ok, so maybe that wasn't the best example to illustrate the Float64 issue. I
guess using parametric types in functions works to solve this (but not the
push! issue):

function chaininit{T}(theta_init::Vector{T}, n)
    chain = Array{T,1}[theta_init]
    for k = 1:n
        push!(chain, chain[end] + rand(length(theta_init)))   # stand-in update step (illustrative)
    end
    chain
end
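With that, the element type of the chain just follows whatever you pass in, e.g.:

chaininit([2.0, 3.0, 4.0], 10)          # Array{Array{Float64,1},1}
chaininit(Float32[2.0, 3.0, 4.0], 10)   # Array{Array{Float32,1},1}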
Would linked lists be a more natural storage for these events?
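Roughly, I mean a cons-cell style structure – the Cons type below is just made up for illustration, not from a package – where each new state wraps the previous node, so nothing gets copied or moved as the chain grows:

immutable Cons
    head::Vector{Float64}   # the state stored at this node
    tail                    # the previous node, or nothing at the start of the chain
end

chain = Cons([2.0, 3.0, 4.0], nothing)
chain = Cons(chain.head + rand(3), chain)   # prepend a new state; earlier nodes are untouched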
On Thursday, May 1, 2014, Ethan Anderes wrote:
> Oop the second line of the code block should read
>
> chain = Array{Float64,1}[theta_init]
>
Oop the second line of the code block should read
chain = Array{Float64,1}[theta_init]
Ok, here is the quickest example that I came up with.

theta_init = [2.0, 3.0, 4.0]
chain = Array{Float64,1}[theta_init]
for k = 1:10_000
    theta_curr = chain[end]
    theta_prop = theta_curr + rand(3)
    likelihoodratio = computelr(theta_prop, theta_curr)
    if rand() < min(1, likelihoodratio)
        push!(chain, theta_prop)    # accept the proposal
    else
        push!(chain, theta_curr)    # reject: repeat the current state (assumed completion)
    end
end
If it's in a function, you can just make the function generic, but a bigger
issue is that the push! pattern is bound to be quite slow. It's *much*
better if you can allocate the entire array up front. I'm having a hard
time coming up with examples based on what you wrote – it's a little too
vague.
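Putting both suggestions together, a rough sketch (the proposal step is a stand-in, and the runchain name and passing computelr as an argument are just for illustration) of a generic version that allocates the whole matrix up front:

function runchain{T}(theta_init::Vector{T}, n, computelr)
    theta = Array(T, length(theta_init), n)   # one big allocation up front
    theta[:, 1] = theta_init
    for k = 2:n
        theta_curr = theta[:, k - 1]
        theta_prop = theta_curr + rand(length(theta_init))   # stand-in proposal step
        accept = rand() < min(1, computelr(theta_prop, theta_curr))
        theta[:, k] = accept ? theta_prop : theta_curr
    end
    theta
end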