A newbie comment: if it can be made a bit easier to write code that uses all the cores (I am comparing to Go with its channels), it probably doesn't need to be faster than Python.
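[For readers unfamiliar with the primitives being compared here, a minimal sketch of Julia's distributed map. This is an illustrative example, not from the thread; `slow_square` is a made-up name, and on modern Julia (1.x) you need `using Distributed`, which was built in on 0.3/0.4. With no worker processes added, `pmap` simply runs serially.]

```julia
using Distributed            # needed on Julia 1.x; built into Base on 0.x

# Normally you would start Julia with worker processes, e.g.:
#   julia -p 4
# or call addprocs(4) before defining functions on the workers.

@everywhere function slow_square(x)  # @everywhere defines it on every worker
    sleep(0.1)                       # stand-in for real work
    return x^2
end

# pmap distributes the calls across the available workers.
results = pmap(slow_square, 1:8)
println(results)                     # [1, 4, 9, 16, 25, 36, 49, 64]
```

[As the comment above notes, `pmap` covers the embarrassingly-parallel map pattern well; the complaint in this thread is that more irregular communication patterns, which Go expresses with channels, are harder to reach from this API.]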
From an outsider's perspective, @everywhere is inconvenient, and pmap etc. don't cover nearly as many cases as Go channels. Maybe it is a documentation problem. I wouldn't think it would be good to try to extract every last bit of speed when you are at 0.4; there are so many things to clean up/build in the language and standard library (exceptions? debugging, profiling tools).

Thanks
-- Harry

On Thursday, April 30, 2015 at 3:43:36 PM UTC-7, Páll Haraldsson wrote:
>
> It seemed to me tuples were slow because Any was used. I understand
> tuples have been fixed; I'm not sure how.
>
> I do not remember the post/all the details. Yes, tuples were slower than
> Python. Maybe it was Dict; isn't that kind of a tuple? Now we have Pair
> in 0.4. I do not have 0.4; maybe I should bite the bullet and install.
> I'm not doing anything production related, just trying things out, and
> using 0.3[.5] to avoid stability problems, so I can't judge the speed.
>
> Another potential issue I saw with tuples (maybe that is not a problem
> in general, and I do not know which languages do this) is that they can
> take a lot of memory (to copy around). I was thinking maybe they should
> do something similar to databases: use only a fixed amount of memory (a
> "page") with a pointer to overflow data.
>
> 2015-04-30 22:13 GMT+00:00 Ali Rezaee <arv....@gmail.com>:
>
>> They were interesting questions.
>> I would also like to know why poorly written Julia code sometimes
>> performs worse than similar Python code, especially when tuples are
>> involved. Did you say it was fixed?
>>
>> On Thursday, April 30, 2015 at 9:58:35 PM UTC+2, Páll Haraldsson wrote:
>>
>>> Hi,
>>>
>>> [As "best language" is subjective, I'll put that aside for a moment.]
>>>
>>> Part I.
>>>
>>> The goal for Julia, as I understand it, is to be at least within a
>>> factor of two of C (it is already matching it mostly) and long term
>>> to beat that (and C++). [What other goals are there? How about 0.4
>>> now, or even 1.0?]
>>> While that is the goal as a language, you can write slow code in any
>>> language, and Julia makes that easier. :) [If I recall, Bezanson
>>> mentioned it (the global "problem") as a feature; any change there?]
>>>
>>> I've been following this forum for months and newbies hit the same
>>> issues. But almost always without fail, Julia can be sped up (easily,
>>> as Tim Holy says). I'm thinking about the exceptions to that: are
>>> there any left? And about the "first code slowness" (see Part II).
>>>
>>> Just recently the last two flaws of Julia that I could see were
>>> fixed: Decimal floating point is in (I'll look into the 100x
>>> slowness; that is probably to be expected of any language, though I
>>> think there may be a misunderstanding and/or I can do much better).
>>> And I understand the tuple slowness has been fixed (that was really
>>> the only "core language" defect). The former wasn't a performance
>>> problem (mostly a non-existence problem, and a correctness one (where
>>> needed)).
>>>
>>> Still we see threads like this recent one:
>>>
>>> https://groups.google.com/forum/#!topic/julia-users/-bx9xIfsHHw
>>> "It seems changing the order of nested loops also helps"
>>>
>>> Obviously Julia can't beat assembly, but really C/Fortran is already
>>> close enough (within a small factor). The above row- vs. column-major
>>> issue (caching effects in general) can kill performance in any
>>> language. Putting that newbie mistake aside, is there any reason
>>> Julia can't already be within a small factor of assembly (or C) in
>>> all cases?
>>>
>>> Part II.
>>>
>>> Except for caching issues, I still want the most newbie code, or even
>>> intentionally brain-damaged code, to run faster than at least
>>> Python/scripting/interpreted languages.
>>>
>>> Potential problems (that I think are solved, or at least not problems
>>> in theory):
>>>
>>> 1. I know Any kills performance. Still, isn't that the default in
>>> Python (and Ruby, Perl?)?
>>> Is there a good reason Julia can't be faster than at least all the
>>> so-called scripting languages in all cases (excluding small startup
>>> overhead, see below)?
>>>
>>> 2. The global issue; I'm not sure if that slows other languages down,
>>> say Python. Even if it doesn't, should Julia be slower than Python
>>> because of globals?
>>>
>>> 3. Garbage collection. I do not see that as a problem; incorrect?
>>> Mostly performance variability ("[3D] games"; subject for another
>>> post, as I'm not sure that is even a problem in theory). Should
>>> reference counting (Python) be faster? On the contrary, I think RC
>>> and even manual memory management could be slower.
>>>
>>> 4. Concurrency, see nr. 3. GC may or may not have an issue with it.
>>> It can be a problem; what about in Julia? There are concurrent and/or
>>> real-time GC algorithms (just not in Julia). Other than GC, is there
>>> any big (potential) problem for concurrent/parallel? I know about the
>>> threads work and the new GC in 0.4.
>>>
>>> 5. Subarrays ("array slicing"?). Not really what I consider a
>>> problem, compared to say C (and Python?). I know 0.4 did optimize it,
>>> but what languages do similar stuff? Functional ones?
>>>
>>> 6. In theory, pure functional languages "should" be faster. Are they
>>> in practice, in many or any case? Julia has immutable state if
>>> needed, but maybe not as powerful? This seems a double-edged sword. I
>>> think the Julia designers intentionally chose mutable state to
>>> conserve memory. Pros and cons? Mostly pros for Julia?
>>>
>>> 7. Startup time. Python is faster, and for say web use, or compared
>>> to PHP, that could be an issue, but it would be solved by not doing
>>> CGI-style web. How good/fast are Julia and its libraries right now
>>> for, say, web use? At least for long-running programs (the intended
>>> target of Julia) startup time is not an issue.
>>>
>>> 8. MPI. I do not know enough about it, or parallel in general; it
>>> seems you are doing a good job.
>>> I at least think there is no inherent limitation. At least Python is
>>> not in any way better for parallel/concurrent?
>>>
>>> 9. Autoparallelization. Julia doesn't try to be autoparallel, but
>>> could it be (as an add-on?). Is anyone doing this really well, and
>>> could it outperform manual Julia?
>>>
>>> 10. Any others I'm missing?
>>>
>>> Wouldn't any of the above, or any you can think of, be considered
>>> performance bugs? I know for libraries you are very aggressive. I'm
>>> thinking about Julia as a core language mostly, but maybe you are
>>> already fastest for most math stuff (if implemented at all)?
>>>
>>> I know that to get the best speed, 0.4 is needed. Still, (for the
>>> above) what are the problems for 0.3? Have most of the speed fixes
>>> been backported? Is Compat.jl needed (or does it have anything to do
>>> with speed)? I think slicing and the threads stuff (and globals?) may
>>> be the only exceptions.
>>>
>>> Rust and some other languages also claim "no abstraction penalty",
>>> and maybe other desirable things (not for speed) that Julia doesn't
>>> have. Is that a good reason they might be faster, or a good reason to
>>> prefer them for non-safety-related work? Is there still any good
>>> reason to choose Haskell or Erlang? I do not know too much about the
>>> Nim language, which seems interesting but not clearly better/faster.
>>> Possibly Rust (or Nim?) would be better if you really need to avoid
>>> GC, or for safety-critical work. Would there be a best complementary
>>> language to Julia?
>>>
>>> Part III.
>>>
>>> Faster for developer time, not CPU time. It seems to be (after a
>>> short learning curve). This one is subjective, but are any languages
>>> clearly better? The right metric shouldn't really be time to first
>>> code that seems right, but time to bug-free or proven code. I'll
>>> leave that, and safety-critical issues, aside.
>>>
>>> --
>>> Palli.
>>>
>
> --
> Palli.
>
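[Editor's note: the global "problem" that comes up repeatedly in this thread is easy to demonstrate. Below is a minimal sketch, not from the thread; the function names are made up. An untyped global can be rebound to any type at any time, so the compiler cannot infer a concrete type for it, while the same loop over local variables compiles to fast, type-stable code. The usual fix newbies are given is exactly this: put the work in a function and keep the state local.]

```julia
# Untyped global: every read/write of `total` is compiled as an
# abstract (Any) value, boxing on each iteration.
total = 0.0
function sum_global(xs)
    global total
    for x in xs
        total += x
    end
    return total
end

# The standard fix: keep the accumulator local so its type is inferred.
function sum_local(xs)
    t = 0.0
    for x in xs
        t += x
    end
    return t
end

xs = rand(10^6)
@time sum_global(xs)   # slow: allocates on every iteration
@time sum_local(xs)    # fast: type-stable, essentially C-speed loop
```

[Both return the same sum; only the compiled code differs. This is the sense in which "you can write slow code in any language and Julia makes that easier": the slow version is perfectly legal, just uninferrable.]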