On Tuesday, 2 May 2017 at 11:27:17 UTC, Ivan Kazmenko wrote:
On Tuesday, 2 May 2017 at 10:35:46 UTC, Ivan Kazmenko wrote:
I hope some part of the idea is still salvageable.
For example, what if we put the intervals in a queue instead of a stack?

I tried to implement a similar approach, but instead of a queue or a stack, I used a random-access array of intervals.

Sadly, it is still far from uniform, since it does not take interval lengths into account, and I don't see how to do that without at least O(log n) time per interval insertion or deletion.

Implementation and empirical frequencies for n=5 elements in a permutation: http://ideone.com/3zSxLN

Ivan Kazmenko.

Well, I thought of another idea that may not be technically random, but which I imagine would get pretty close in real-world use cases:

1. Test a random element in the array. If it satisfies P, return it.
2. Choose a hopper interval, h, that is relatively prime to the number of elements in the array, n. You could do this by randomly selecting from a pre-made list of primes of various orders of magnitude larger than n, with h = the prime % n.
3. Hop along the array, testing each element as you go. Increment a counter. If you reach the end of the array, cycle back to the beginning, starting with the remainder of h that you didn't use. I think that h being relatively prime to n means the hops will thereby hit every element in the array.
4. Return the first hit. If the counter reaches n, return failure.
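A minimal sketch of those four steps in Python. One assumption swapped in: instead of a pre-made prime table, this draws candidate strides until one is coprime to n (same effect, since gcd(h, n) == 1 is all step 3 needs); the predicate P and the function name find_random are illustrative, not from the original post.

```python
import random
from math import gcd

def find_random(arr, P, rng=random):
    """Return an index of arr whose element satisfies P, or None."""
    n = len(arr)
    if n == 0:
        return None
    start = rng.randrange(n)          # step 1: test a random element
    if P(arr[start]):
        return start
    # step 2: pick a hop interval h coprime to n (drawn by rejection
    # here, rather than from a prime table as the post suggests)
    h = 1
    while n > 1:
        h = rng.randrange(1, n)
        if gcd(h, n) == 1:
            break
    # step 3: hop along the array with wrap-around; gcd(h, n) == 1
    # guarantees start, start+h, start+2h, ... (mod n) visits every index
    i = start
    for _ in range(n - 1):
        i = (i + h) % n
        if P(arr[i]):
            return i                  # step 4: first hit
    return None                       # counter reached n: failure
```

Note that the post's prime-table trick is sound: for any prime p > n, gcd(p, n) = 1, and gcd(p % n, n) = gcd(p, n) = 1, so h = p % n is always coprime to n.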

The way I see it, with a random start position and a *mostly* random interval, the only way to defeat the randomness of the result would be if the elements that satisfy P were dispersed precisely according to the random interval chosen for that particular invocation of the function, in which case the first element of the dispersed set would be chosen more often. This would require a rare convergence depending on both n and h, and would not repeat from one call to the next.

