On Wednesday, 15 July 2020 at 07:51:31 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 07:34:59 UTC, tastyminerals wrote:
On Wednesday, 15 July 2020 at 06:57:21 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:55:51 UTC, 9il wrote:
On Wednesday, 15 July 2020 at 06:00:46 UTC, tastyminerals wrote:
On Wednesday, 15 July 2020 at 02:08:48 UTC, 9il wrote:
[...]

Good to know. So it's fine to use it with sum!"fast", but better to avoid it for general purposes.

They both are more precise by default.

This was a reply to your other post in the thread, sorry. Mir algorithms are more precise by default than the algorithms you have provided.

Right. Is this why standardDeviation is significantly slower?

Yes. It allows you to pick a summation option; you can try ones other than the default in benchmarks.

Indeed, I played around with the VarianceAlgo and Summation options, and they have a large impact on the timings.
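
For reference, a minimal sketch of how such a timing can be taken (this is my own guess at a harness; the matrix contents, the single-call timing, and the helper code are illustrative assumptions, not the exact setup behind the numbers below):

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.random : uniform;
import std.stdio : writefln;
import mir.math.stat : standardDeviation, VarianceAlgo;
import mir.math.sum : Summation;
import mir.ndslice : flattened, sliced;

void main()
{
    enum rows = 300, cols = 300;

    // Fill a rows x cols matrix with pseudo-random doubles.
    auto data = new double[rows * cols];
    foreach (ref x; data)
        x = uniform(0.0, 1.0);
    auto matrix = data.sliced(rows, cols);

    auto sw = StopWatch(AutoStart.yes);
    auto ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, Summation.appropriate);
    sw.stop();

    // Printing the result keeps the call from being optimized away.
    writefln("std of [%s, %s] matrix %s sec (result %s)",
             rows, cols, sw.peek.total!"usecs" * 1e-6, ans);
}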

ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, Summation.appropriate);
std of [300, 300] matrix 0.375903
std of [60, 60] matrix 0.0156448
std of [600, 600] matrix 1.54429
std of [800, 800] matrix 3.03954

ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, Summation.appropriate);
std of [300, 300] matrix 1.12404
std of [60, 60] matrix 0.041968
std of [600, 600] matrix 5.01617
std of [800, 800] matrix 8.64363


Summation.fast behaves strangely though. I wonder what happened here?

ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, Summation.fast);
std of [300, 300] matrix 1e-06
std of [60, 60] matrix 9e-07
std of [600, 600] matrix 1.2e-06
std of [800, 800] matrix 9e-07

ans = matrix.flattened.standardDeviation!(VarianceAlgo.online, Summation.fast);
std of [300, 300] matrix 9e-07
std of [60, 60] matrix 9e-07
std of [600, 600] matrix 1.1e-06
std of [800, 800] matrix 1e-06
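
My guess, and only a guess, is that with Summation.fast the optimizer may drop the whole computation when the result is never used inside the timed region, which would explain microsecond-level readings that barely change with matrix size. A sketch of a check that forces the result to be consumed (timedStd is a hypothetical helper name, not part of Mir):

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writeln;
import mir.math.stat : standardDeviation, VarianceAlgo;
import mir.math.sum : Summation;
import mir.ndslice : flattened;

// Times one standardDeviation call and consumes the result,
// so the call cannot be elided as dead code.
double timedStd(Matrix)(Matrix matrix)
{
    auto sw = StopWatch(AutoStart.yes);
    auto ans = matrix.flattened.standardDeviation!(VarianceAlgo.naive, Summation.fast);
    sw.stop();
    writeln("std = ", ans, " in ", sw.peek.total!"usecs" * 1e-6, " sec");
    return ans;
}

If the timings with the result consumed come back in the same range as the Summation.appropriate runs, the tiny numbers above were measuring an elided call rather than the fast summation itself.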
