Hello there, is there any way to find the mixing matrix A for unknown data
Xn using the estimated sources S?
For example, if I use

1) ica_X = FastICA(n_components=xyz, algorithm='parallel',
       whiten=True, fun='logcosh', fun_args=None,
       max_iter=1000, tol=0.0001, w_init=None, random_state=None)
2) ica
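For reference, the fitted FastICA estimator exposes the estimated mixing
matrix directly: after fitting, ica_X.mixing_ is A (shape (n_features,
n_components)), and the data are recovered as S.dot(A.T) plus the stored
mean (inverse_transform does the same thing). A minimal sketch, with
illustrative shapes and synthetic data:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.RandomState(0)
    S_true = rng.laplace(size=(1000, 3))   # independent non-Gaussian sources
    A_true = rng.randn(3, 3)               # the unknown mixing matrix
    X = S_true.dot(A_true.T)               # observed data Xn

    ica_X = FastICA(n_components=3, random_state=0)
    S = ica_X.fit_transform(X)   # estimated sources
    A = ica_X.mixing_            # estimated mixing matrix

    # Reconstruct X from the estimated model (exact up to numerical error
    # when n_components == n_features):
    X_rec = S.dot(A.T) + ica_X.mean_
    print(np.allclose(X, X_rec))   # True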
2013/9/7 Sean Violante :
>> Large Number of Dummy Variables or other sparse data.
>> Normally you would normalise your inputs and have a common C,
>> but then you lose sparsity, increasing memory consumption and making
>> calculations longer.
[snip]
>> I agree I could rescale my inputs. But imagine I
>
> Just to check that there is no way of passing a vector of C's
>
> Use Case:
>
> Large Number of Dummy Variables or other sparse data.
> Normally you would normalise your inputs and have a common C,
> but then you lose sparsity, increasing memory consumption and making
> calculations longer.
>
> Do you agree a) that one can't, b) that it's important?
Thanks Jake, I was actually just reading this:
http://www.cs.mcgill.ca/~dprecup/courses/ML/Lectures/ml-lecture16.pdf
and starting to put all the pieces together when you sent this. In the
pdf, the K-means example you gave is basically Hard EM for a GMM, while
the GMM itself is the Soft EM I am seeing in the code.
David,
Have you looked at the K Means algorithm? It uses a similar approach of a
two-phase iteration to determine clustering. In K means you're looking for
K cluster centers, such that when each point is assigned to the nearest
cluster, the total of the distances from points to their cluster centers is
minimized.
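For concreteness, here is a minimal NumPy sketch of that two-phase
iteration (an assignment step, then an update step); the function and
variable names are illustrative, not scikit-learn's:

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        rng = np.random.RandomState(seed)
        centers = X[rng.choice(len(X), k, replace=False)]  # init from data
        for _ in range(n_iter):
            # Phase 1 (assign): each point goes to its nearest center.
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Phase 2 (update): each center moves to the mean of its points
            # (keeping the old center if a cluster goes empty).
            new_centers = np.array([X[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            if np.allclose(new_centers, centers):   # converged
                break
            centers = new_centers
        return labels, centers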
Hi,
On Sat, Sep 07, 2013 at 06:24:26PM +0200, Sean Violante wrote:
> Do you agree a) that one can't, b) that it's important?
If I have understood the problem correctly, I would say:
a) yes, b) no: it seems to me that you can rescale your variables to
achieve the equivalent effect. How you rescale each variable sets its
effective per-variable C.
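To make that rescaling argument concrete: with a squared-l2 penalty,
multiplying column j of X by s_j is equivalent to giving that feature an
effective C_j = C * s_j**2, and the scaling can be done without densifying
a sparse matrix. A minimal sketch (the scale vector s is my own
illustration, not an existing API):

    import numpy as np
    import scipy.sparse as sp

    X = sp.random(1000, 50, density=0.01, format='csr', random_state=0)
    s = np.ones(X.shape[1])
    s[:10] = 3.0   # e.g. give the first 10 features a larger effective C

    # Right-multiplying by a diagonal matrix scales columns, keeps sparsity.
    X_scaled = X @ sp.diags(s)
    assert X_scaled.nnz == X.nnz   # same sparsity pattern

    # Fitting e.g. LogisticRegression(C=common_C) on X_scaled then gives
    # feature j an effective C_j = common_C * s[j]**2 under the l2 penalty,
    # because the learned coefficient on the original scale is w_j = s_j*v_j.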
Just to check that there is no way of passing a vector of C's
Use Case:
Large Number of Dummy Variables or other sparse data.
Normally you would normalise your inputs and have a common C,
but then you lose sparsity, increasing memory consumption and making
calculations longer.
Do you agree a) that one can't, b) that it's important?
OK, this is what I can gather from the code:

Expectation Step
----------------
Calculate the log-likelihood and responsibilities for each sample:
a. for each sample, the log-likelihood is calculated under each Gaussian
and then combined across components with logsumexp (logprob).
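For reference, a minimal NumPy sketch of that E-step for a
diagonal-covariance GMM, mirroring the lpr/logprob/responsibilities
variables in the GMM code (the parameter arrays here are placeholders):

    import numpy as np
    from scipy.special import logsumexp

    def e_step(X, means, covars, weights):
        # X: (n_samples, n_features); means, covars: (n_components, n_features);
        # weights: (n_components,).
        # Per-sample, per-component Gaussian log-density (diagonal covariance).
        log_dens = -0.5 * (np.log(2 * np.pi * covars).sum(axis=1)
                           + (((X[:, None, :] - means) ** 2) / covars).sum(axis=2))
        lpr = log_dens + np.log(weights)       # add log mixing weights
        logprob = logsumexp(lpr, axis=1)       # total log-likelihood per sample
        resp = np.exp(lpr - logprob[:, None])  # responsibilities, rows sum to 1
        return logprob, resp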
On Sat, Sep 7, 2013 at 5:21 AM, bthirion wrote:
> > I think single-linkage is what people are going to look for when they
> > want a clustering algorithm. The fact that this is equivalent to
> > finding an MST is an implementation detail (although it's still a good
> > thing to have that in the documentation).
On 07/09/2013 12:35, Lars Buitinck wrote:
> 2013/9/7 Robert Layton :
>> This algorithm finds a minimum spanning tree, then cuts any edge higher than
>> a given threshold.
>>
>> This is equivalent to the single linkage clustering. Olivier and I are
>> talking about which name would be best to use. The leading option at the
>> moment is SingleLinkage.
On 09/07/2013 12:35 PM, Lars Buitinck wrote:
> 2013/9/7 Robert Layton :
>> This algorithm finds a minimum spanning tree, then cuts any edge higher than
>> a given threshold.
>>
>> This is equivalent to the single linkage clustering. Olivier and I are
>> talking about which name would be best to use. The leading option at the
>> moment is SingleLinkage.
2013/9/7 Robert Layton :
> This algorithm finds a minimum spanning tree, then cuts any edge higher than
> a given threshold.
>
> This is equivalent to the single linkage clustering. Olivier and I are
> talking about which name would be best to use. The leading option at the
> moment is SingleLinkage.
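Since the thread keeps restating the equivalence, here is what it looks
like in code: build the MST of the pairwise-distance graph, drop every
edge above the threshold, and read the clusters off as connected
components. A minimal sketch with scipy (function name and threshold are
illustrative):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    def single_linkage_labels(X, threshold):
        dists = squareform(pdist(X))                  # pairwise-distance graph
        mst = minimum_spanning_tree(dists).toarray()  # its minimum spanning tree
        mst[mst > threshold] = 0                      # cut edges above threshold
        # The surviving components are exactly the single-linkage clusters.
        n_clusters, labels = connected_components(mst, directed=False)
        return n_clusters, labels

    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 10])
    print(single_linkage_labels(X, threshold=3.0))   # -> (2, array([...]))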