>
> I hope to have a release version in about a month, which is the estimated
> time for releasing GMT5.2. There are still several issues from the GMT side
> but overall most of it works already. Feel free to test it (see the
> gallery.jl for several usage examples) but note that you'll have
Yes, we will be releasing Win binaries when it's released. In the meantime,
interested people need to build from source.
On Sunday, 27 September 2015 at 14:50:25 UTC+1, Marcio Sales wrote:
>
> I hope to have a release version in about a month, which is the estimated
> time
On Friday, 25 September 2015 at 14:47:49 UTC+1, Marcio Sales wrote:
>
> all right... Well.. Julia's price will always be better...
> Btw.. what about crowdfunded initiatives? Has anyone tried that? I would
> be willing to donate for a geospatial visualization package, for example...
This just came to mind while I remembered that Matlab is so expensive.. I
never did this myself, but I heard that OpenStreetMap, for example, was/is
crowdsourced, so why not try it for Julia development?
+1 Someone give this man a cookie.
On 25 September 2015 at 01:46, Steven G. Johnson
wrote:
> To put it another way, there are plenty of problems that can't be
> vectorized effectively. ODEs, matrix assembly for FEM or BEM, implementing
> special functions... If you do
Although I don't understand the final part of the performance tips, I would
like to make the point here that the "style guide" could also use some love.
I am too obsessive about aesthetics. I cannot accept writing ugly code.
On Thursday, September 24, 2015 at 10:13:18 PM UTC+2, Steven G.
all right... Well.. Julia's price will always be better...
Btw.. what about crowdfunded initiatives? Has anyone tried that? I would be
willing to donate for a geospatial visualization package, for example...
Then don't?
Marcio: crowdfunded Julia packages are a really awesome idea... is there a
platform for this already for other languages? or should we create one?
On Fri, Sep 25, 2015 at 9:47 AM, Marcio Sales
wrote:
> all right... Well.. Julia's price will always be better...
> Btw..
> Marcio: crowdfunded julia packages is a really awesome idea... is there a
> platform for this already for other languages? or should we create one?
>
> On Fri, Sep 25, 2015 at 9:47 AM, Marcio Sales
> wrote:
>
>> all right... Well.. Julia's price will always be
[This (part A) is probably not implemented yet, and probably not even a
new idea.. possibly julia-dev material?]
On Wed, 23 Sep 2015 at 16:26, Milan Bouchet-Valat wrote:
On Wednesday, 23 September 2015 at 07:38 -0700, Páll Haraldsson wrote:
instead of these two type-unstable variants:
type
Wow. All this discussion to make Julia only *as fast as* the old scripting
languages? I gotta say that worried me a bit. What do you do when there's
no code to compare? How will you know that it was really a good idea
switching from Matlab/Python to Julia?
Considering what the developers
I have been spending the past weeks trying to really understand how to
implement efficient code.
As far as I can tell (from first hand experience), Julia really does
give you a prominent edge over R and Matlab in terms of performance.
However, I also think that there are currently a lot of
There is no question that Julia needs more work. This applies to offering
speedy primitives and also doing more optimizations.
But I think you get one thing wrong.
The magic lies in the fact that in Julia you have the chance to write the
vectorized implementations that are offered by languages
On Thursday, September 24, 2015 at 12:17:25 PM UTC, Marcio Sales wrote:
>
> Wow. All this discussion to make Julia only *as fast as* the old
> scripting languages?
>
This is not what I meant. Matching C is the goal, I understand, and often
that goal is met. It *should* always be possible.
A. By
"Idiot-proof" sounds awesome, but at least making it so that the user can
be aggressive to the language, rather than the language being aggressive to
the newbie, would be a great way to approach it. It would also help avoid
"scarecrowing" away potential contributors.
Yeah cause Haskell is so super newbie friendly???
I don't think the developers had the goal with Julia that you should be
able to write code with as few characters as possible.
On Thursday, September 24, 2015 at 5:54:37 PM UTC+2, Páll Haraldsson wrote:
>
> On Thursday, September 24, 2015 at
I think julia is very newbie-friendly, except for some very common patterns
that people run into. How many times do people run into the "globals are
slow" problem, post a long question about why "I thought julia was
fast...", and then we go through the same performance tips. It would be
nice if
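The pattern behind the "globals are slow" tip can be sketched in a few lines (hypothetical names, current Julia syntax; the thread itself ran on 0.3/0.4):

```julia
# Slow: `data` is a non-const global, so its type is unknown to the
# compiler and every access inside the loop is dynamically dispatched.
data = rand(10^6)

function sum_global()
    s = 0.0
    for x in data      # reads the untyped global on every iteration
        s += x
    end
    return s
end

# Fast: pass the array as an argument so its concrete type is known.
function sum_arg(v)
    s = 0.0
    for x in v
        s += x
    end
    return s
end

sum_global() == sum_arg(data)  # same result; the second version is the one
                               # the compiler can fully specialize
```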
My argument is purely about performance. However, you won't know if
your Julia code is 1.3x slower than C or 130x slower, until you write the C
code.
On Thursday, September 24, 2015 at 7:14:46 PM UTC+2, Tom Breloff wrote:
>
> Unless you are an expert in compilers and the Julia language, you can
On Thursday, September 24, 2015 at 1:55:18 PM UTC-4, Sisyphuss wrote:
>
> However, Julia is assumed to be fast (high expectation), and performance
> varies a lot according to the knowledge/skill a programmer owns (high
> variance).
>
Again, that's true in any language where you are trying to
Rewriting code in another language, unless it affords some manner of proof
and validation, is not a good approach to proving that Julia code operates
as designed and intended.
There is a likelihood of doing something not quite right twice, or of doing
two different things differently.
Sometimes, I
>
> it would be nice to have a more obvious way to automatically fix things
> like globals that could be declared const, or maybe even automatically
> wrapping global code in functions?
I think these words reflect exactly the point I wanted to make. I
understand the grandiosity that is
On 24 September 2015 at 14:17, Marcio Sales wrote:
> Wow. All this discussion to make Julia only *as fast as* the old
> scripting languages? I gotta say that worried me a bit. What do you do when
> there's no code to compare? How will you know that it was really a good
>
These criticisms are frankly ridiculous. When your critique could be applied to
any programming language that exists then you are doing it wrong.
On 24 September 2015 at 22:00, Marcio Sales wrote:
> it would be nice to have a more obvious way to automatically fix things
>> like globals that could be declared const, or maybe even automatically
>> wrapping global code in functions?
>
>
> I think these words reflect
On 24 September 2015 at 22:31, Marcio Sales wrote:
> Matlab *requires* vectorized code because Matlab loops are very slow by
>> comparison
>>
> It is much faster these days (from R2014).
>
It is much faster than it used to be, but it is still slow. I know that
Matlab
On 24 September 2015 at 19:00, Sisyphuss wrote:
> What do you do when there's no code to compare?
>>
> This is a good point! When I write a piece of Julia code, how do I know I
> wrote it correctly? Should I write a C version to prove it?
> This is what I called the risk
This conversation is getting pretty tiresome. There are programs where
Matlab is already as fast as it's possible to be. If all you're doing is
computing a big matrix product, for example, then any language is just
going to call BLAS. Julia is not going to be any faster than Matlab for
that,
On Thursday, 24 September 2015 at 13:31 -0700, Marcio Sales wrote:
> > Matlab *requires* vectorized code because Matlab loops are very
> > slow by comparison
> It is much faster these days (from R2014). I remember I ran some very
> simple comparisons and it surprised me that Matlab ran a bit faster
+1
On Thu, Sep 24, 2015 at 5:28 PM, Stefan Karpinski
wrote:
> This conversation is getting pretty tiresome. There are programs where
> Matlab is already as fast as it's possible to be. If all you're doing is
> computing a big matrix product, for example, then any
To put it another way, there are plenty of problems that can't be vectorized
effectively. ODEs, matrix assembly for FEM or BEM, implementing special
functions... If you do enough scientific computing, eventually you will hit a
problem where you need to write your own inner loops, and then with
>
> That is a meaningless comparison. First, you are not comparing loops, you
> are comparing matrix inversion. Second, neither Matlab nor Julia will
> natively perform a matrix inversion well. They are both going to use an
> external library (LAPACK) so what you are testing is the library,
On 24 September 2015 at 14:47, Christof Stocker
wrote:
> As far as I can tell (from first hand experience), Julia really does give
> you a prominent edge over R and Matlab in terms of performance. However, I
> also think that there are currently a lot of ways to shoot
>
> Matlab *requires* vectorized code because Matlab loops are very slow by
> comparison
>
It is much faster these days (from R2014). I remember I ran some very
simple comparisons and it surprised me that Matlab ran a bit faster than
Julia in a for loop of matrix multiplications and
On 24 September 2015 at 19:55, Sisyphuss wrote:
> People won't apply my critique on Matlab or R, because these languages are
> assumed to be slow and they must be slow. So there is no "risk/variance"
> (in the good sense) for these languages. No sooner does one learn to write
>
> What do you do when there's no code to compare?
>
This is a good point! When I write a piece of Julia code, how do I know I
wrote it correctly? Should I write a C version to prove it?
This is what I called the risk of writing Julia code. Unless you are an
expert in compilers and the Julia language,
>
> Unless you are an expert in compilers and the Julia language, you can never
> know whether your code gives you an edge or not
Isn't this true of all languages? How do you know you did that C pointer
arithmetic correctly? Or that python didn't silently clobber your data?
This is why integration
People won't apply my critique to Matlab or R, because these languages are
assumed to be slow and they must be slow. So there is no "risk/variance"
(in the good sense) for these languages. No sooner does one learn to write
vectorized code than he reaches the limit of these languages.
However,
On Monday, September 21, 2015 at 12:18:12 AM UTC, Daniel Carrera wrote:
>
>
> On 21 September 2015 at 01:36, Páll Haraldsson wrote:
>
>> I know about @devec but wonder how close Julia is also able to match the
>> speed of its looks with more condensed code or
On Wednesday, 23 September 2015 at 05:15 -0700, Páll Haraldsson wrote:
>
>
> On Monday, September 21, 2015 at 12:18:12 AM UTC, Daniel Carrera
> wrote:
> >
> > On 21 September 2015 at 01:36, Páll Haraldsson <
> > pall.ha...@gmail.com> wrote:
> > > I know about @devec but wonder how close Julia
On 23 September 2015 at 16:38, Páll Haraldsson
wrote:
> Yes,
>
> What prompted my question was Tim Holy: "String is not a concrete type.
> Consider ASCIIString or UTF8String." Maybe ok in this case. If he meant it
> as general advice,
>
This is not advice. Tim was
On 23 September 2015 at 14:15, Páll Haraldsson
wrote:
>
> On Monday, September 21, 2015 at 12:18:12 AM UTC, Daniel Carrera wrote:
>>
>>
>> On 21 September 2015 at 01:36, Páll Haraldsson
>> wrote:
>>
>>> I know about @devec but wonder how close
Yes,
What prompted my question was Tim Holy: "String is not a concrete type.
Consider ASCIIString or UTF8String." Maybe ok in this case. If he meant it
as general advice, I thought maybe a Union of the UTF encodings (and
ASCII?) would be needed to be general. Or not, as you say (and I), this
On Wednesday, 23 September 2015 at 07:38 -0700, Páll Haraldsson wrote:
> Yes,
>
> What prompted my question was Tim Holy: "String is not a concrete
> type. Consider ASCIIString or UTF8String." Maybe ok in this case. If
> he meant it as general advice, I thought maybe a Union of the UTF
>
On Wed, 23 Sep 2015 at 15:34, Daniel Carrera wrote:
To match MATLAB in
speed, Julia code should typically be about the same length.
That is good enough for me (to match MATLAB and Python's length and at
least not be slower) and then no big concern there. C/C++ doesn't even
have real macros, and
On 23 September 2015 at 20:21, Páll Haraldsson
wrote:
> Between API calls. "second class platform" was probably too strong a
> language. I'm not sure if UTF8 is preferred anywhere in Julia (in Base).
I honestly don't know if UTF8 is preferred. I have never heard
The biggest problem seems to be the main loop in calc_net.jl:

for idx = 1:numO
    o = o_vec[idx]
    # conditions in which something occurs
    cond_A_su = (o .< b_hist[:,3]) & (b_hist[:,1] .== 65)
    cond_B_su = (o .> b_hist[:,3]) & (b_hist[:,1] .== 66)
    # conditions in which something
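For reference, the devectorized alternative the thread later converges on replaces those temporary boolean arrays with a plain nested loop. A hedged sketch, since the full calc_net.jl isn't shown here (the shapes, the 65/66 codes as thresholds per row, and the `count_events` name are assumptions):

```julia
# Hypothetical shapes: o_vec is a vector of observations; b_hist is an N×3
# matrix whose column 1 holds a code (65 or 66) and column 3 a threshold.
function count_events(o_vec, b_hist)
    n_A = 0
    n_B = 0
    for o in o_vec
        for j in 1:size(b_hist, 1)
            if b_hist[j, 1] == 65 && o < b_hist[j, 3]
                n_A += 1    # condition A: code 65 and o below the threshold
            elseif b_hist[j, 1] == 66 && o > b_hist[j, 3]
                n_B += 1    # condition B: code 66 and o above the threshold
            end
        end
    end
    return n_A, n_B
end
```

No boolean temporaries are allocated, which is exactly the kind of rewrite the later 20%–4x improvements in this thread come from.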
Did you run the code twice to not time the JIT compiler?
For me, my version runs in 0.24 and Daniel's in 0.34.
Anyway, adding this to Daniel's
version: https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it
run in 0.13 seconds for me.
On Sunday, September 20, 2015 at 7:28:09 PM
That might make a difference because there are a lot of performance
improvements in 0.4 (most notably the new garbage collector). PyPlot works
fine for me on 0.4 btw.
On Sunday, September 20, 2015 at 8:29:52 PM UTC+2, Daniel Carrera wrote:
>
> Thanks.
>
> No, I'm not on 0.4 yet. I thought it
Thanks.
No, I'm not on 0.4 yet. I thought it wasn't stable (and I think PyPlot
doesn't work on it yet). I'm on 0.3.11.
On 20 September 2015 at 20:28, Kristoffer Carlsson
wrote:
> https://github.com/timholy/ProfileView.jl is invaluable for performance
> tweaking.
>
> Are
As an interim step, you can also get text profiling information using
Profile.print() if the graphics aren't working.
On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera wrote:
>
> Hmm... ProfileView gives me an error:
>
> ERROR: panzoom not defined
> in view at
> As an interim step, you can also get text profiling information using
> Profile.print() if the graphics aren't working.
You could also try https://github.com/mauro3/ProfileFile.jl which writes
the profile numbers into a file *.pro. Similar to the memory and
coverage files.
> On Sunday,
Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using
Profile.print() I found that the hottest parts of the code appeared to be
the if-conditions, such as:
if o < b_hist[j,3]
It occurred to me that this could be due to cache misses, so I rewrote the
code to store the data
I just added the 'sim' variable to the outer scope and a return value. This
incurs a small speed penalty. I also removed the remaining `reshape()` from
calc_net and that produced a small speed improvement. The two changes
roughly cancel out. On my computer I measure a 3% net improvement, which is
Adding your loop increased performance by another 20%. Good stuff!
Changes uploaded to Github.
Cheers,
Daniel.
On 20 September 2015 at 23:13, Kristoffer Carlsson
wrote:
> For me, your latest changes made the time go from 0.13 -> 0.11. It is
> strange we have so different
Ah yes, @code_warntype is in Julia v0.4. I think in 0.3 you could just use
code_typed and look for ::Any.
See the introduction PR https://github.com/JuliaLang/julia/pull/9349
On Sunday, 20 September 2015 23:49:55 UTC+9, Daniel Carrera wrote:
>
> Uhmm... I get an error:
>
> ERROR: @code_warntype not
Daniel, you are still using a Dict of params, which kills type inference. Pass
parameters directly or put them in (typed) fields of a composite type.
(On the other hand, common misconception: there is no performance need to
declare the types of function arguments.)
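A minimal sketch of the Dict-vs-composite-type point (hypothetical field names; `struct` is the current spelling of what was `type`/`immutable` in the 0.3/0.4 era of this thread):

```julia
# Type-unstable: every lookup in a Dict{String,Any} returns Any, so
# arithmetic on the values cannot be inferred.
params_dict = Dict{String,Any}("gamma" => 0.9, "n" => 100)

# Type-stable alternative: a composite type with concretely typed fields.
struct Params
    gamma::Float64
    n::Int
end

# p.gamma is known to be Float64, so this compiles to a plain multiply.
discounted(p::Params, x) = p.gamma * x

p = Params(0.9, 100)
discounted(p, 10.0)   # inferred end to end; no dynamic dispatch
```

Note that annotating the *fields* is what matters here; per the comment above, annotating function arguments gives no speed benefit by itself.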
Thanks for the several comments and Daniel for the alternate versions of
the calc_net function! Viral, unfortunately I'm not a GitHub/version
control user (yet), but I've copied the code into a gist here:
https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6. The code can be run
by running
Whoo hoo!
After replacing all the Dict's with appropriate composite types, I got an
additional ~4x speed improvement. Combined with my previous work, the code
is now 30x faster than the original. So now Julia should at least match
Matlab. I uploaded the modified code to GitHub:
On 20 September 2015 at 17:08, Adam wrote:
> Daniel, can you clarify your comment of "the first two lines require
> memory allocation and might also have a bad memory profile"? I'm not sure
> if it's addressed in this latest gist or not.
>
I don't know how much you know
The biggest problem right now is that Prob_o is global in calc_net. You
need to pass it as an argument too. It's one of the drawbacks of having
everything global by default; this kind of mistake is sometimes hard to
spot.
Otherwise the # get things in right dimensions for calculation below
On 20 September 2015 at 17:39, STAR0SS wrote:
> The biggest problem right now is that Prob_o is global in calc_net. You
> need to pass it as an argument too. It's one of the drawbacks of having
> everything global by default; this kind of mistake is sometimes hard to
> spot.
>
take a look at
@code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,Float64}())
It tells you where the compiler has problems inferring the types of the
variables.
Problematic in this case is
b_hist::Any
b_hist_col2::Any
numB::Any
b_hist_col2_A::Any
b_hist_col2_B::Any
This runs in 0.3 seconds:
https://github.com/KristofferC/calcnet/commit/2c252a31c34eb92842310b31d753a64727b95875
but I haven't checked if the result is the same hehe. Should give you a
feel for how to write fast code in Julia though.
Bottleneck now is all the repmats.
Bottleneck now is all the repmats.
On Sunday, September 20,
Uhmm... I get an error:
ERROR: @code_warntype not defined
Do I need to update Julia or something? I have version 0.3.11.
On 20 September 2015 at 16:14, Valentin Churavy wrote:
> take a look at
> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,
>
Only your first post should be moderated.
> On Sep 20, 2015, at 11:08 AM, Adam wrote:
>
> Thanks for the several comments and Daniel for the alternate versions of the
> calc_net function! Viral, unfortunately I'm not a GitHub/version control user
> (yet), but I've
I managed to get another 2.5x improvement with this function:

function calc_net( d::Int64, t::Int64, i::Int64, sim::Dict, param::Dict )
    # name some things
    gamma = param["gamma"]
    o_vec = param["o_vec"]
    numO = length(o_vec)
    w0 = sim["w"][d, 1, i]
    # retrieve some history
    b_hist =
Hi Steven,
I am not the OP, I am trying to help the OP with his code. Anyway, the
first thing I did was replace Dict{Any,Any} by the more explicit
Dict{String,Float64} but that didn't help. I did not think to try a
composite type. I might try that later. It would be interesting to figure
out why
That's very useful. I didn't know about @code_warntype. I'm going to try to
replace all the Dict's with concrete types and see what happens.
On 20 September 2015 at 16:14, Valentin Churavy wrote:
> take a look at
> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(),
String is not a concrete type. Consider ASCIIString or UTF8String.
But if you don't need the flexibility of a Dict, a composite type will be a
huge improvement.
--Tim
On Sunday, September 20, 2015 03:55:43 PM Daniel Carrera wrote:
> Hi Steven,
>
> I am not the OP, I am trying to help the OP
This is pure speculation, but the reason you don't get the same improvement
could be that 0.4 is somehow more intelligent about the use of `reshape()`.
Maybe one of the reasons the program has been running faster on your
computer all along is that 0.4 was handling the memory better
I just tried it. You are right. Using slice() makes the code 9x slower for
me.
On 20 September 2015 at 23:14, Kristoffer Carlsson
wrote:
> Oh, if you are on 0.3 I am not sure if slice exist. If it does, it is
> really slow.
>
> On Sunday, September 20, 2015 at 11:13:21 PM
sim is defined inside the while loop, which means that it goes out of scope
after the while loop ends;
see: http://julia.readthedocs.org/en/latest/manual/variables-and-scoping/
If you want to run a number of sims you could, for example, create an empty
vector at the start of main and push the sims
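That suggestion can be sketched like this (the `main`/`sims` names and the stand-in computation are made up; current Julia syntax):

```julia
function main(n_sims)
    sims = Any[]            # created before the loop, so it survives it
    i = 0
    while i < n_sims
        i += 1
        sim = i * 10        # stand-in for building one simulation result
        push!(sims, sim)    # `sim` itself goes out of scope each iteration
    end
    return sims             # return the collected results to the caller
end

main(3)  # → Any[10, 20, 30]
```

This also answers the earlier "sim not defined" error: return the container created in the outer scope, not the loop-local variable.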
Oh, if you are on 0.3 I am not sure if slice exist. If it does, it is
really slow.
On Sunday, September 20, 2015 at 11:13:21 PM UTC+2, Kristoffer Carlsson
wrote:
>
> For me, your latest changes made the time go from 0.13 -> 0.11. It is
> strange we have so different performances, but then
For me, your latest changes made the time go from 0.13 -> 0.11. It is
strange we have so different performances, but then again 0.3 and 0.4 are
different beasts.
Adding some calls to slice and another loop gained some perf for me. Can
you try:
Thanks Daniel! That code ran in about 0.3 seconds on my machine as well.
More good progress! This puts Julia about ~5x faster than Matlab here.
I tried placing "return sim" at the end of your main() function, but I
still got an error saying "sim not defined." Why is that? Can I return
output
https://github.com/timholy/ProfileView.jl is invaluable for performance
tweaking.
Are you on 0.4?
On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat
wrote:
>
> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
> >
> >
> > On 20 September 2015 at
Hmm... ProfileView gives me an error:
ERROR: panzoom not defined
in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
in include at ./boot.jl:245
in include_from_node1 at ./loading.jl:128
while loading
Thanks for the comments! Daniel and Kristoffer, I ran each of your versions on
my machine. Daniel's ran in 2.0 seconds on my laptop (~26% of time in gc);
Kristoffer's ran in about 7 seconds on my laptop (~21% of time in gc). I'm
not sure why Kristoffer's took so much longer to run for me than it
Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
>
>
> On 20 September 2015 at 19:43, Kristoffer Carlsson <
> kcarlsso...@gmail.com> wrote:
> > Did you run the code twice to not time the JIT compiler?
> >
> > For me, my version runs in 0.24 and Daniel's in 0.34.
> >
> >
Oh, wait, I'm an idiot. I just figured out how to use @profile... :-P
On 20 September 2015 at 20:22, Daniel Carrera wrote:
> Is there a good way to profile Julia code? So I have been profiling by
> inserting tic() and toc() lines everywhere. On my computer @profile seems
>
Looks like you were getting a partial installation of code that needs julia
0.4. Try Pkg.update() and it should downgrade you to an earlier version.
--Tim
On Sunday, September 20, 2015 08:35:28 PM Daniel Carrera wrote:
> Hmm... ProfileView gives me an error:
>
> ERROR: panzoom not defined
>
On 20 September 2015 at 19:43, Kristoffer Carlsson
wrote:
> Did you run the code twice to not time the JIT compiler?
>
> For me, my version runs in 0.24 and Daniel's in 0.34.
>
> Anyway, adding this to Daniel's version:
>
Just another note:
I suspect that the `reshape()` might be the guilty party. I am just
guessing here, but I suspect that the reshape() forces a memory copy, while
a regular slice just creates a kind of symlink to the original data.
Furthermore, I suspect that the memory copy would mean that when
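The copy-vs-alias question is easy to test directly. A sketch in current Julia syntax, where indexing with `[]` copies and `view` aliases (`sub`/`slice` played the view role back on 0.3):

```julia
A = collect(1.0:6.0)

s = A[2:4]         # slicing with [] makes an independent copy
v = view(A, 2:4)   # `view` aliases the original memory

A[2] = 99.0        # mutate the parent array

s[1] == 2.0        # the copy still holds the old value
v[1] == 99.0       # the view sees the mutation
```

Mutating the parent and checking both slices settles which behavior you are getting, without guessing about the implementation.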
On Sunday, September 20, 2015 at 2:16:07 AM UTC+8, David Gold wrote:
>
> One thing I suspect is hurting you is that you're storing your parameters
> in a `Dict` object. Because the parameters are all of different types, (and
> because of the way you declare the `Dict`), they are stored in a
One thing I suspect is hurting you is that you're storing your parameters
in a `Dict` object. Because the parameters are all of different types, (and
because of the way you declare the `Dict`), they are stored in a `Dict{Any,
Any}`, which means that Julia's type inference system is unable to
Unless you make the globals `const`, the compiler doesn't know the type of
globals, so every single access to them must be "interpreted", so to speak.
There are talks of adding a way to specify the type of globals, that would
give them comparable speed to function arguments.
On Saturday,
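Milan's const-global point, sketched with hypothetical names (timings omitted; the structural difference is what matters):

```julia
# Non-const global: the binding could be reassigned to any type at any
# time, so code reading `scale` cannot assume it is a Float64.
scale = 2.0

# const global: the type (and here the value) is fixed, so functions
# using it compile as if it were a literal.
const SCALE = 2.0

f(x) = x * scale    # type-unstable: inference sees the result as Any
g(x) = x * SCALE    # type-stable: inferred as Float64 for Float64 input

f(3.0) == g(3.0)    # same answer; g is the one the compiler can optimize
```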