> Try backdenting that statement. You're currently doing it at every
> iteration of the loop - that's why it's so much slower.
Thanks. It works now.
>>> def average_polysemy(pos):
        synset_list = list(wn.all_synsets(pos))
        sense_number = 0
        lemma_list = []
        for synset in synset_list:
                lemma_list.extend(synset.lemma_names)
        for lemma in list(set(lemma_list)):
                sense_number_new = len(wn.synsets(lemma, pos))
                sense_number = sense_number + sense_number_new
        return sense_number/len(set(lemma_list))
>>> average_polysemy('n')
1
> But you'll probably find it better to work with the set directly,
> instead of uniquifying a list as a separate operation.
Yes, the second method below still runs faster if I don't give a separate
variable name to list(set(lemma_list)). Why does this happen?
>>> def average_polysemy(pos):
        synset_list = list(wn.all_synsets(pos))
        sense_number = 0
        lemma_list = []
        for synset in synset_list:
                lemma_list.extend(synset.lemma_names)
        for lemma in list(set(lemma_list)):
                sense_number_new = len(wn.synsets(lemma, pos))
                sense_number = sense_number + sense_number_new
        return sense_number/len(set(lemma_list))
>>> average_polysemy('n')
1
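For what it's worth, here is a minimal sketch of the set-based version the
reply suggests. It assumes wn is nltk.corpus.wordnet and the older NLTK API
used above, where lemma_names is an attribute (in newer NLTK it is a method,
lemma_names()); the name average_polysemy_set is just for illustration.

>>> from nltk.corpus import wordnet as wn
>>> def average_polysemy_set(pos):
        # accumulate lemma names into a set directly, so there is no
        # separate uniquifying pass over a list afterwards
        lemmas = set()
        for synset in wn.all_synsets(pos):
                lemmas.update(synset.lemma_names)
        # total sense count over the distinct lemmas
        sense_number = sum(len(wn.synsets(lemma, pos)) for lemma in lemmas)
        # float() avoids truncating integer division on Python 2
        return float(sense_number) / len(lemmas)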