Re: upperBound/lowerBound in algorithm O(log n) or O(n) ?
Thank you treeform. Have a nice weekend.
upperBound/lowerBound in algorithm O(log n) or O(n) ?
Hi. In Java, binary search returns a negative index if the target is not found in the array, where that negative index (made positive) is the position to insert at to keep the array sorted. Do upperBound/lowerBound run at binary-search speed, i.e. O(log n) worst case, or linearly? I can't quite work it out from the source code: [https://github.com/nim-lang/Nim/blob/version-1-0/lib/pure/algorithm.nim#L281](https://github.com/nim-lang/Nim/blob/version-1-0/lib/pure/algorithm.nim#L281) I'm guessing the `shr 1` is dividing by 2, so it would be O(log n)? Btw. the documentation on sort and isSorted specifies the worst case in O notation; it would be nice to add that for the other procs as well if possible. Cheers
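For intuition, here is a minimal sketch (not the stdlib source, just the same idea) of a lowerBound-style search: the remaining window is halved with `shr 1` on every iteration, so the loop runs O(log n) times on a sorted input.

```nim
proc lowerBoundSketch[T](a: openArray[T], key: T): int =
  ## Returns the first index whose element is >= key, i.e. the
  ## insertion point that keeps `a` sorted.
  var lo = 0
  var count = a.len
  while count > 0:
    let step = count shr 1      # halve the remaining window
    let mid = lo + step
    if a[mid] < key:
      lo = mid + 1
      count -= step + 1
    else:
      count = step
  result = lo

echo lowerBoundSketch([1, 3, 5, 7, 9], 6)   # 3
```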
Re: Automatic Algorithms Optimization via Fast Matrix Exponentiation
Thanks for your answer. I'm not smart enough to begin implementing stuff like that. DannyBee on the Hacker News thread claims that GCC could easily do this: [https://news.ycombinator.com/item?id=17592359](https://news.ycombinator.com/item?id=17592359) Just thought it was interesting. Are you referring to some C/C++ lib that does this in an optimized way, and then calling it from Nim?
Automatic Algorithms Optimization via Fast Matrix Exponentiation
I recently stumbled upon this thread on HN: [https://kukuruku.co/post/automatic-algorithms-optimization-via-fast-matrix-exponentiation](https://kukuruku.co/post/automatic-algorithms-optimization-via-fast-matrix-exponentiation/)

Impressive. Can the same thing be achieved in Nim? I tried to use the library bignum ([https://github.com/FedeOmoto/bignum](https://github.com/FedeOmoto/bignum)) to do:

```nim
import bignum
import math  # for `^` on ints

proc fib(n: Rat): Rat =
  var
    i = newRat(0)
    t = newRat(0)
    a = newRat(0)
    b = newRat(1)
  while i < n:
    t = a + b
    a = b
    b = t
    i += 1
  return a

proc main() =
  let limit = newRat(10 ^ 7)
  # let limit = newRat(10)  # 55
  echo fib(limit)

main()
```

Built with `-d:release`, and also tried `--cc:clang -d:release --clang.options.speed="-Ofast -flto -fno-strict-aliasing -ffast-math"`. Seems to spin for a long time.
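For comparison, a hand-written sketch of what the article's transformation amounts to for Fibonacci (written by hand, not generated automatically, and using plain int64 instead of bignum): the recurrence is expressed as a 2x2 matrix that is raised to the n-th power by repeated squaring, giving O(log n) matrix multiplications instead of n additions.

```nim
type Mat = array[2, array[2, int64]]

proc `*`(a, b: Mat): Mat =
  # Plain 2x2 matrix multiplication; `result` starts zeroed.
  for i in 0 .. 1:
    for j in 0 .. 1:
      for k in 0 .. 1:
        result[i][j] += a[i][k] * b[k][j]

proc fib(n: int): int64 =
  # [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
  var base: Mat = [[1'i64, 1'i64], [1'i64, 0'i64]]
  var acc: Mat = [[1'i64, 0'i64], [0'i64, 1'i64]]   # identity
  var e = n
  while e > 0:
    if (e and 1) == 1:
      acc = acc * base
    base = base * base
    e = e shr 1
  result = acc[0][1]

echo fib(10)   # 55
echo fib(90)   # still fits in int64; larger n overflows
```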
Re: Optimizing Nim algorithm (compete with c++)
Thank you for the gist. You corrected some of my brainfarts as well. Clang seems to be faster than GCC for me, and now faster than my cpp implementation. What configuration sets GCC as the default compiler for Nim? My results: Clang: 0m0.144s, GCC: 0m0.197s. I'm on a MBP 2013 with a Core i7-4850HQ. clang --version: Apple LLVM version 9.1.0 (clang-902.0.39.1)
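In case it helps: as far as I know the default backend comes from the config/nim.cfg shipped with the Nim install, and it can be overridden per invocation or per project, roughly like this (sketch, file names illustrative):

```
# per invocation
nim c --cc:clang -d:release app.nim

# or persistently, in the project's nim.cfg
cc = clang
```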
Re: Optimizing Nim algorithm (compete with c++)
Hi def, you beat me to it! **Thank you** very much for looking into it and replying. You are right! I don't get quite the same performance boost as you do; cpp still seems faster, but barely. It might be any number of reasons. Do you have any more nice tricks for the Nim code? Did you also wrap the constants and the xorshift variables inside the main proc?
Re: Optimizing Nim algorithm (compete with c++)
Okay. I noticed in another thread ([https://forum.nim-lang.org/t/1268/1#7848](https://forum.nim-lang.org/t/1268/1#7848)) that Araq advises wrapping the code in a main proc. I did, and now the speed is almost comparable to my cpp! Nim: 0m0.197s, cpp: 0m0.159s. Impressive. I updated the master branch.
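For anyone who lands here later, a minimal illustrative sketch (not the repository's code) of the difference: at module scope every `var` is a global, which the backend C compiler optimizes conservatively, whereas inside a proc the same state is local and can be kept in registers.

```nim
# Top-level version: `acc` would be a global, so the C compiler must
# assume it can be touched from anywhere.
# var acc = 0.0
# for i in 0 ..< 10_000_000:
#   acc += float(i)
# echo acc

# Wrapped version: the same loop, but the state is proc-local.
proc main() =
  var acc = 0.0
  for i in 0 ..< 10_000_000:
    acc += float(i)
  echo acc

main()
```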
Optimizing Nim algorithm (compete with c++)
I have written a differential evolution algorithm in cpp and Nim. My cpp version is faster (6-8 times) and I was hoping I could get comparable performance with the Nim implementation. I need a hand.

Nim version: [https://github.com/peheje/nim_genetic/blob/master/DifferentialEvolution/app.nim](https://github.com/peheje/nim_genetic/blob/master/DifferentialEvolution/app.nim), compiled with `nim compile -d:release app.nim`

cpp version: [https://github.com/peheje/differential_evolution_cpp/blob/master/differential_evolution/main.cpp](https://github.com/peheje/differential_evolution_cpp/blob/master/differential_evolution/main.cpp), compiled with -Ofast and Link Time Optimization: Monolithic in Xcode.

When I run the Nim profiler, something called **genericReset** (reported under system.nim and assign.nim) seems to be time-consuming. What is this?

```
Entry: 1/6 Calls: 14/52 = 26.92% [sum: 14; 14/52 = 26.92%]
  system.nim: app 52/52 = 100.00%
Entry: 2/6 Calls: 13/52 = 25.00% [sum: 27; 27/52 = 51.92%]
  system.nim: genericReset 25/52 = 48.08%
  app.nim: app 52/52 = 100.00%
Entry: 3/6 Calls: 12/52 = 23.08% [sum: 39; 39/52 = 75.00%]
  assign.nim: genericReset 25/52 = 48.08%
  assign.nim: genericReset 25/52 = 48.08%
  system.nim: app 52/52 = 100.00%
Entry: 4/6 Calls: 5/52 = 9.62% [sum: 44; 44/52 = 84.62%]
  app.nim: f_rand 5/52 = 9.62%
  app.nim: app 52/52 = 100.00%
Entry: 5/6 Calls: 4/52 = 7.69% [sum: 48; 48/52 = 92.31%]
  app.nim: i_rand 8/52 = 15.38%
  app.nim: app 52/52 = 100.00%
Entry: 6/6 Calls: 4/52 = 7.69% [sum: 52; 52/52 = 100.00%]
  app.nim: xor128 4/52 = 7.69%
  app.nim: i_rand 8/52 = 15.38%
  app.nim: app 52/52 = 100.00%
```

I have not used the random library: I think the rejection loop it uses to keep its output unbiased ([https://github.com/nim-lang/Nim/blob/2a8d2ad4cde1c942a9dbc44598f65d4acf1e0aa6/lib/pure/random.nim#L77](https://github.com/nim-lang/Nim/blob/2a8d2ad4cde1c942a9dbc44598f65d4acf1e0aa6/lib/pure/random.nim#L77)) would slow it down, and my application doesn't care much about the bias introduced.
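For what it's worth, a minimal sketch of the kind of fast, slightly biased generator described above, using the classic Marsaglia xorshift128 recurrence; the names `xor128`/`iRand` are illustrative, not taken from the repository.

```nim
var
  x = 123456789'u32
  y = 362436069'u32
  z = 521288629'u32
  w = 88675123'u32

proc xor128(): uint32 =
  ## Classic Marsaglia xorshift128 step.
  let t = x xor (x shl 11)
  x = y
  y = z
  z = w
  w = w xor (w shr 19) xor t xor (t shr 8)
  result = w

proc iRand(maxExclusive: int): int =
  ## Plain modulo instead of the stdlib's rejection loop: slightly
  ## biased whenever maxExclusive does not divide 2^32 evenly.
  int(xor128() mod uint32(maxExclusive))

echo iRand(10)
```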
Re: Send data structures between threads?
Thanks mratsim!
Re: Send data structures between threads?
Anything new on this? I'm trying to implement a simple genetic algorithm where each thread shall have access to a shared seq[seq[int]]. It's easy to partition the data into chunks so the threads know which items to work on, but Nim seems to IMPLICITLY copy the data, whether I try channels, parallel statements, spawn statements or even "bare" threads. Am I doing something wrong? I thought that because the object is a ref object, only the ref (the address) would be copied? Code here: [https://github.com/peheje/nim_genetic/blob/master/mutable_state.nim](https://github.com/peheje/nim_genetic/blob/master/mutable_state.nim) I appreciate language features like parallel: with spawn doing some magic in the background implicitly, but I think being too creative might take many programmers by "unpleasant" surprise.
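For reference, a minimal sketch of one way around the copying, assuming the plain threads API with `--threads:on` and that the seq is not resized while the threads run: each thread gets a raw `ptr` to the shared data plus its own slice, so nothing is deep-copied (names and sizes are illustrative, not the linked repository's code).

```nim
# Compile with: nim c --threads:on -d:release shared_slices.nim
type Work = tuple[data: ptr seq[int], lo, hi: int]

proc worker(w: Work) {.thread.} =
  # Each thread mutates only its own slice, so no locking is needed here.
  for i in w.lo ..< w.hi:
    w.data[][i] = w.data[][i] * 2

proc main() =
  var data = newSeq[int](100)
  for i in 0 ..< data.len:
    data[i] = i
  var threads: array[4, Thread[Work]]
  let chunk = data.len div threads.len
  for t in 0 ..< threads.len:
    createThread(threads[t], worker,
                 (data: addr data, lo: t * chunk, hi: (t + 1) * chunk))
  joinThreads(threads)
  echo data[0 .. 7]   # each element doubled in place, no copies of `data`

main()
```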