I wish it walked faster with @simd - for some reason I'm getting the same performance as described here: https://groups.google.com/d/msg/julia-users/DycY6jwDcWs/yYXHsWF9AwAJ
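(A minimal sketch, not from the thread, of the kind of loop @simd is meant to speed up: a floating-point reduction, where the macro gives the compiler permission to reassociate the additions and use vector registers. If a loop shows no change with @simd, it is often because the compiler already vectorized it, or because the loop body doesn't meet @simd's requirements.)

```julia
# Hypothetical example: a simple reduction over a Vector{Float64}.
# @simd tells the compiler it may reorder the floating-point sums,
# which is what normally blocks vectorization of reductions.
function simd_sum(x::Vector{Float64})
    s = 0.0
    @simd for i in eachindex(x)
        @inbounds s += x[i]
    end
    return s
end

x = rand(10^6)
simd_sum(x)  # compare `@time simd_sum(x)` with and without the @simd annotation
```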
One of the key objectives for LightGraphs is to be as fast as possible, so while Julia itself is pretty darn good, we want to eke every possible performance gain out of it so that we can scale to very large graphs. This also covers memory allocation, so: fast and small. We're getting to the point where improving one degrades the other. That's both a good sign (we've optimized away the low-hanging fruit) and a bad sign (we may be hitting the limits of what we can do). I'm sure we've missed some stuff, though.

Thanks for the discussion. Feel free to join us over at https://github.com/JuliaGraphs/LightGraphs.jl/issues if this sort of stuff interests you :)

Seth

On Monday, November 16, 2015 at 11:28:15 AM UTC-8, hustf wrote:
>
> Seth, I appreciate that. As a novice programmer I value these
> exchanges with you guys who are building this. I'm using commercial software
> every day that hasn't made progress since the nineties.
>
> I worked through something like fifteen types for the adjacency matrix
> (and, by the way, tuples for storage worked, but were slower). I suppose
> Julia is fast enough that we don't actually need to optimize like this all
> the time, but it's an excellent lesson.
>
> In the example I gave above you may have noted some confusion about
> row-column order. The algorithms that were adapted from textbooks or
> other languages seem to walk along rows. In Julia, you walk faster
> downhill. And with @simd.
>
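(A sketch, with hypothetical function names, of the "walk faster downhill" point in the quoted message: Julia stores arrays column-major, so an inner loop that runs down a column touches contiguous memory, while one that strides across a row does not. The adjacency-matrix setting is assumed from the thread; the functions below are illustrative, not LightGraphs code.)

```julia
# Row-wise traversal: inner loop strides across a row, jumping
# size(A, 1) elements in memory at each step - cache-unfriendly.
function sum_rowwise(A::Matrix{Float64})
    s = 0.0
    for i in 1:size(A, 1)
        @simd for j in 1:size(A, 2)
            @inbounds s += A[i, j]
        end
    end
    return s
end

# Column-wise traversal: inner loop walks "downhill" through a
# column, reading contiguous memory - the layout Julia prefers.
function sum_colwise(A::Matrix{Float64})
    s = 0.0
    for j in 1:size(A, 2)
        @simd for i in 1:size(A, 1)
            @inbounds s += A[i, j]
        end
    end
    return s
end

A = rand(2000, 2000)
sum_rowwise(A) ≈ sum_colwise(A)  # same result; time both to see the layout effect
```

Algorithms transcribed from textbooks or row-major languages (C, NumPy's default) often have the loop nest the other way around; swapping the loop order is frequently the cheapest optimization available.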