Once again, thank you for your help.


>> We don't really know what your problem is. Explaining that rather
>> than the solution you have thought of might render a couple of
>> alternate solutions. Perhaps something could be precalculated and
>> stored in the documents. Perhaps feature selection (reduction) of the
>> terms might do the trick for you. And so on.
>
> I have a corpus of documents indexed with different fields. On
> average, each indexed document has about 30 fields, and each field
> has about 100 terms.
>
> Normally, a hit returns fewer than 100 documents. For each of the
> 30 fields, I have to calculate the top 35 keywords across all the
> returned documents, as well as the top 30 popular keywords (the
> keywords that are distributed across many documents - something
> like docFreq or IDF).

Right, but /why/ do you need these values? Do you present them as
they are, or do you use them for some secondary calculation? If so,
what is the result of that secondary calculation?


Yes, I just want those values as they are. No secondary calculation
is performed.
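
To make this concrete, here is a minimal sketch of the kind of
per-field loop involved. It assumes Lucene 2.x/3.x and that the
fields were indexed with term vectors; the class and method names are
only illustrative:

    import java.io.IOException;
    import java.util.*;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.TermFreqVector;

    public class TopFieldTerms {

        // Sum term frequencies for one field across the hit documents
        // and return the n most frequent terms.
        public static List<Map.Entry<String, Integer>> topTerms(
                IndexReader reader, int[] hitDocs, String field, int n)
                throws IOException {
            Map<String, Integer> freqs = new HashMap<String, Integer>();
            for (int doc : hitDocs) {
                TermFreqVector tfv = reader.getTermFreqVector(doc, field);
                if (tfv == null) {
                    continue; // field has no stored term vector
                }
                String[] terms = tfv.getTerms();
                int[] counts = tfv.getTermFrequencies();
                for (int i = 0; i < terms.length; i++) {
                    Integer old = freqs.get(terms[i]);
                    freqs.put(terms[i],
                            (old == null ? 0 : old.intValue()) + counts[i]);
                }
            }
            List<Map.Entry<String, Integer>> sorted =
                    new ArrayList<Map.Entry<String, Integer>>(freqs.entrySet());
            Collections.sort(sorted,
                    new Comparator<Map.Entry<String, Integer>>() {
                public int compare(Map.Entry<String, Integer> a,
                                   Map.Entry<String, Integer> b) {
                    return b.getValue() - a.getValue(); // most frequent first
                }
            });
            return sorted.subList(0, Math.min(n, sorted.size()));
        }
    }

The map and the sort are cheap at these sizes; if the loop is slow,
the repeated term-vector reads (100 documents x 30 fields = 3000
reads), or re-analysis of stored text where term vectors are missing,
are the more likely cost.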

Please let me know if you still have more questions.

I'll re-ask one of the questions I posed in my previous reply:

>> How slow is it, and how fast did you expect it to be?


We would like to get those values, sorted, as fast as possible.
Currently, for 100 documents, the process takes about 1 to 1.5
minutes. I believe this is because of the loop.

>> Can you limit the evaluation to the top n documents?


Yes, we limit to only the top 100 documents.
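
For the popular keywords, here is a sketch under the same
assumptions, ranking candidate terms by docFreq. Note that docFreq is
index-wide; if the distribution should be counted only within the 100
hit documents, the term vectors above could instead be used to count
the number of distinct hits containing each term:

    import java.io.IOException;
    import java.util.*;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;

    public class PopularTerms {

        // Rank candidate terms for one field by how many documents in
        // the whole index contain them (docFreq), highest first.
        public static List<String> topByDocFreq(
                IndexReader reader, Collection<String> candidates,
                String field, int n) throws IOException {
            final Map<String, Integer> df = new HashMap<String, Integer>();
            for (String text : candidates) {
                df.put(text,
                        Integer.valueOf(reader.docFreq(new Term(field, text))));
            }
            List<String> ranked = new ArrayList<String>(candidates);
            Collections.sort(ranked, new Comparator<String>() {
                public int compare(String a, String b) {
                    return df.get(b).intValue() - df.get(a).intValue();
                }
            });
            return ranked.subList(0, Math.min(n, ranked.size()));
        }
    }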

Thank you.

Regards,

Sengly
