Thank you so much!
On Sun, Aug 13, 2017 at 12:20 PM, Vlad Niculae wrote:
> Looks like you're misspelling the word "cluster".
>
> Yours,
> Vlad
>
> On Aug 13, 2017 12:19 PM, "Ariani A" wrote:
>
>> Dear all,
>>
>> I am writing
Dear all,
I am writing this import:
from sklearn.crluster.hierarchical import (_hc_cut, _TREE_BUILDERS,
linkage_tree)
But it gives this error:
ImportError: No module named crluster.hierarchical
Any clue?
Best regards,
-Noushin
___
Dear Shane,
Sorry to bother you!
Are the "precomputed" option and the distance matrix you are talking about
meant for "DBSCAN"?
Thanks,
Best.
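(For the record: yes, DBSCAN does accept metric="precomputed", in which case the X you pass to fit is the N-by-N pairwise distance matrix rather than a feature matrix. A minimal sketch; the toy matrix D below is made up purely for illustration:)

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy symmetric 4x4 distance matrix: points 0/1 are close,
# points 2/3 are close, and the two pairs are far apart.
D = np.array([[0.0, 0.1, 5.0, 5.0],
              [0.1, 0.0, 5.0, 5.0],
              [5.0, 5.0, 0.0, 0.1],
              [5.0, 5.0, 0.1, 0.0]])

# metric="precomputed" tells DBSCAN to treat D as distances directly.
db = DBSCAN(eps=0.5, min_samples=2, metric="precomputed").fit(D)
print(db.labels_)  # two clusters: [0 0 1 1]
```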
On Thu, Jul 13, 2017 at 7:03 PM, Ariani A wrote:
> Dear Shane,
> Thanks for your prompt answer.
> Do you mean that fo
, 2017 at 5:38 PM, Shane Grigsby
wrote:
> Hi Ariani,
> Yes, you can use a distance matrix-- I think that what you want is
> metric='precomputed', and then X would be your N by N distance matrix.
> Hope that helps,
> ~Shane
>
>
> On 07/13, Ariani A wrote:
>
>
rogram
> that you can cut for sub clusters if need be.
>
> DBSCAN is part of the stable release and has been for some time; OPTICS is
> pending as a pull request, but it's stable and you can try it if you like:
>
> https://github.com/scikit-learn/scikit-learn/pull/1984
>
stance(M, metric=metric)
> Z = hierarchy.linkage(X, algo, metric=metric)
> C = hierarchy.fcluster(Z,threshold, criterion="distance")
>
> Best,
> Uri Goren
>
> On Tue, Jul 11, 2017 at 7:42 PM, Ariani A wrote:
>
>> Hi all,
>> I want to perform agglomerative cl
Hi all,
I want to perform agglomerative clustering, but I have no idea of the number
of clusters beforehand. But I want every cluster to have at least 40 data
points in it. How can I apply this with sklearn's agglomerative clustering?
Should I use a dendrogram and cut it somehow? I have no idea how to rela
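(One way to honour an "at least 40 points per cluster" constraint via a dendrogram, as the thread suggests, is to build the linkage with scipy and scan the merge heights for the finest flat cut whose smallest cluster meets the minimum. This is only a sketch, not a built-in sklearn option; X, min_size, and the random data are illustrative:)

```python
import numpy as np
from scipy.cluster import hierarchy
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))          # illustrative data

Z = hierarchy.linkage(pdist(X), method="average")

# Scan merge heights from finest to coarsest and keep the first
# (i.e. lowest) cut where every cluster has >= min_size members.
# The coarsest cut (one cluster of 200) always satisfies it, so
# the loop is guaranteed to stop.
min_size = 40
for t in np.sort(Z[:, 2]):
    labels = hierarchy.fcluster(Z, t, criterion="distance")
    sizes = np.bincount(labels)[1:]    # fcluster labels start at 1
    if sizes.min() >= min_size:
        break

print(len(set(labels)), sizes.min())
```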
ing
> for help with another module.
>
> On Fri, Jul 7, 2017 at 9:28 AM Ariani A wrote:
>
>> Yes, it is.
>> regards
>>
>> On Fri, Jul 7, 2017 at 12:23 PM, Carlton Banks wrote:
>>
>>> NLP as is Natural language processing?
>>>
>>> Den
Yes, it is.
regards
On Fri, Jul 7, 2017 at 12:23 PM, Carlton Banks wrote:
> NLP as is Natural language processing?
>
> Den 7. jul. 2017 kl. 18.18 skrev Ariani A :
>
> Dear all,
> I need urgent help with NLP, do you happen to know anyone who knows
> nltk or NLP modules
Dear all,
I need urgent help with NLP; do you happen to know anyone who knows nltk
or NLP modules? Have any of you read this paper?
"Template-Based Information Extraction without the Templates."
I am looking forward to hearing from you soon!
Best,
-Ariani
___
s stable and you can try it if you like:
>
> https://github.com/scikit-learn/scikit-learn/pull/1984
>
> Cheers,
> Shane
>
>
> On 06/30, Ariani A wrote:
>
>> I want to perform agglomerative clustering, but I have no idea of the
>> number of clusters beforehand
I want to perform agglomerative clustering, but I have no idea of the number
of clusters beforehand. But I want every cluster to have at least 40 data
points in it. How can I apply this with sklearn's agglomerative clustering?
Should I use a dendrogram and cut it somehow? I have no idea how to relate
dendr
I have some data and also the pairwise distance matrix of these data
points. I want to cluster them using Agglomerative clustering. I read that
in sklearn, we can have 'precomputed' as affinity and I expect it is the
distance matrix. But I could not find any example which uses precomputed
affinity a
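(Agglomerative clustering in scikit-learn does accept a precomputed distance matrix; in 2017-era releases the argument was affinity="precomputed", and newer releases renamed it to metric. It also requires a linkage other than "ward". A hedged sketch with a made-up toy matrix; the try/except only papers over the parameter rename across versions:)

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy symmetric distance matrix: points 0/1 close, points 2/3 close.
D = np.array([[0.0, 0.1, 5.0, 5.0],
              [0.1, 0.0, 5.0, 5.0],
              [5.0, 5.0, 0.0, 0.1],
              [5.0, 5.0, 0.1, 0.0]])

# The keyword was renamed from affinity= to metric=; try the newer
# name first.  linkage must not be "ward" with precomputed distances.
try:
    model = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                    linkage="average")
except TypeError:
    model = AgglomerativeClustering(n_clusters=2, affinity="precomputed",
                                    linkage="average")

labels = model.fit_predict(D)
print(labels)  # 0/1 together, 2/3 together
```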