I believe I have fixed the bug; let me also write a test to make sure
it stays fixed.
In short, the problem was that during the column swapping done for
memory-efficient output, the algorithm could get fooled when a low-order
column took the place of an already-chosen high-order column and got …
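Not the actual test that ended up in the tree, just a sketch of the kind of
regression check described above, assuming the current sklearn API and a
recent NumPy; the function name and dimensions below are made up:

import numpy as np
from sklearn.linear_model import orthogonal_mp

def test_omp_exact_recovery():
    # Hypothetical check, not the author's actual test: on an easy,
    # noiseless, exactly sparse problem the recovered support should
    # match the planted one, whatever column swapping happens inside.
    rng = np.random.RandomState(0)
    n_samples, n_features, n_nonzero = 100, 30, 5
    X = rng.randn(n_samples, n_features)
    X /= np.sqrt((X ** 2).sum(axis=0))             # unit-norm columns
    true_support = rng.permutation(n_features)[:n_nonzero]
    coef_true = np.zeros(n_features)
    coef_true[true_support] = 1.0 + rng.rand(n_nonzero)   # bounded away from 0
    y = np.dot(X, coef_true)                       # noiseless measurements
    coef = orthogonal_mp(X, y, n_nonzero_coefs=n_nonzero)
    assert set(np.flatnonzero(coef)) == set(true_support)
    assert np.allclose(np.dot(X, coef), y)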
Interesting. I've been staring at the code but the algorithm itself
shouldn't be losing precision. On the other hand, there are those
"stopping conditions" that I had taken from the C implementation of
the author of the Cholesky-OMP paper. If it's as you say, it could be
that when it fails, OMP stops …
On Tue, Oct 18, 2011 at 7:49 PM, Alexandre Gramfort wrote:
> could you open an issue with a small test script with one X and y that
> produce different results with the two implementations?
Here it is:
https://github.com/scikit-learn/scikit-learn/issues/403
I notice that when it fails, the support …
hi guys,
could you open an issue with a small test script with one X and y that
produce different results with the two implementations?
Alex
On Tue, Oct 18, 2011 at 1:22 PM, Alejandro Weinstein wrote:
> On Tue, Oct 18, 2011 at 1:15 AM, Vlad Niculae wrote:
>> At the moment I have no idea what the cause is. …
On Tue, Oct 18, 2011 at 1:15 AM, Vlad Niculae wrote:
> At the moment I have no idea what the cause is. Does it behave
> in the same way if you use the gram solver instead?
Yes. It behaves in the same way. This is the result of the same
experiment with the addition of p_gram, the probability of re…
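Roughly, the two code paths can be checked against each other like this; a
sketch assuming the current scikit-learn signatures (orthogonal_mp and
orthogonal_mp_gram), with toy data standing in for the actual experiment:

import numpy as np
from sklearn.linear_model import orthogonal_mp, orthogonal_mp_gram

# Toy dictionary/signal in place of the experiment's data (assumed shapes).
rng = np.random.RandomState(0)
X = rng.randn(50, 256)
X /= np.sqrt((X ** 2).sum(axis=0))                 # unit-norm columns
s = 10
x_true = np.zeros(256)
x_true[rng.permutation(256)[:s]] = rng.randn(s)
y = np.dot(X, x_true)

# Same problem through the plain solver and the Gram-based solver.
coef_plain = orthogonal_mp(X, y, n_nonzero_coefs=s)
coef_gram = orthogonal_mp_gram(np.dot(X.T, X), np.dot(X.T, y), n_nonzero_coefs=s)
print(np.flatnonzero(coef_plain))
print(np.flatnonzero(coef_gram))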
It seems that my implementation strays from the naive implementation
quite a lot.
p_diff is the proportion of trials on which the two results differed:
n: 256, m: 30, s: 10 -> p_diff = 0.81
n: 256, m: 50, s: 10 -> p_diff = 0.32
n: 256, m: 70, s: 10 -> p_diff = 0.16
n: 256, m: 90, s: 10 -> p_diff = 0.17
n: 256, m: …
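For reference, a sketch of the kind of comparison these numbers suggest
(reading n as the number of atoms, m as the number of measurements and s as
the sparsity, which is an assumption); the reference solver below is plain
textbook OMP via least squares on the growing support, not the naive
implementation actually used, and it assumes a recent NumPy/scikit-learn:

import numpy as np
from sklearn.linear_model import orthogonal_mp

def reference_omp(X, y, s):
    # Textbook OMP: pick the atom most correlated with the residual,
    # then re-fit all selected atoms by least squares.
    support = []
    residual = y.copy()
    coef_s = np.zeros(0)
    for _ in range(s):
        idx = int(np.argmax(np.abs(np.dot(X.T, residual))))
        support.append(idx)
        coef_s, _, _, _ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - np.dot(X[:, support], coef_s)
    coef = np.zeros(X.shape[1])
    coef[support] = coef_s
    return coef

def p_diff(n, m, s, n_trials=100, seed=0):
    # Proportion of random problems on which the two solvers disagree.
    rng = np.random.RandomState(seed)
    n_different = 0
    for _ in range(n_trials):
        X = rng.randn(m, n)                        # m measurements, n atoms
        X /= np.sqrt((X ** 2).sum(axis=0))         # unit-norm columns
        coef_true = np.zeros(n)
        coef_true[rng.permutation(n)[:s]] = rng.randn(s)
        y = np.dot(X, coef_true)
        if not np.allclose(orthogonal_mp(X, y, n_nonzero_coefs=s),
                           reference_omp(X, y, s)):
            n_different += 1
    return n_different / float(n_trials)

print(p_diff(n=256, m=50, s=10))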
Thank you for the observation. I have been looking into this since
yesterday, when the same thing was reported on my blog by Bob L.
Sturm. At the moment I have no idea what the cause is. Does it behave
in the same way if you use the gram solver instead?
Best,
Vlad
Hi:
I am observing a behavior of the scikit-learn implementation of OMP
(sklearn.linear_model.orthogonal_mp) that I don't understand. I am
performing the following experiment:
- Generate a dictionary D (input data) with i.i.d. Gaussian entries
(with the columns normalized to unit norm) and dimensions …
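A sketch of the setup being described, with the dimensions filled in from the
numbers quoted earlier in the thread (again reading n as atoms, m as
measurements and s as nonzeros, which is an assumption), followed by a single
OMP run on it:

import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
n_atoms, n_measurements, s = 256, 50, 10           # assumed reading of n, m, s

# Dictionary with i.i.d. Gaussian entries and columns normalized to unit norm.
D = rng.randn(n_measurements, n_atoms)
D /= np.sqrt((D ** 2).sum(axis=0))

# Exactly s-sparse code and its noiseless measurements.
x_true = np.zeros(n_atoms)
x_true[rng.permutation(n_atoms)[:s]] = rng.randn(s)
y = np.dot(D, x_true)

# Recover with scikit-learn's OMP and compare supports.
x_hat = orthogonal_mp(D, y, n_nonzero_coefs=s)
print(np.sort(np.flatnonzero(x_true)))
print(np.sort(np.flatnonzero(x_hat)))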