Well, all my fields are strings when I index them. They're all very short strings: dates, hashes, etc. The largest field has a cap of 256 characters, and there is only one of those; the rest are all fairly small.
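(For reference, indexing a handful of short string fields like that looks roughly like the sketch below. This is against the Lucene 1.4-era Field factory methods; the field names, the buildDoc helper, and the fullText parameter are just placeholders for illustration. The contrast it tries to show is that String-valued fields can be stored and read back, while a Reader-valued field is only indexed, never stored.)

import java.io.StringReader;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class FieldSketch {

    // Build a document from short string values, plus one Reader-backed field
    // for contrast. Only the String-valued fields can be read back later.
    public static Document buildDoc(String date, String hash,
                                    String summary, String fullText) {
        Document doc = new Document();

        // String-valued fields: stored in the index, retrievable via doc.get(...)
        doc.add(Field.Keyword("date", date));     // stored, indexed, not tokenized
        doc.add(Field.Keyword("hash", hash));     // stored, indexed, not tokenized
        doc.add(Field.Text("summary", summary));  // stored, indexed, tokenized

        // Reader-valued field: tokenized and indexed, but NOT stored, so it can
        // only be searched, never read back out of the index.
        doc.add(Field.Text("contents", new StringReader(fullText)));

        return doc;
    }
}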
Can you explain what you meant by 'string or reader'?

Thanks,
Chris

On Tue, 1 Feb 2005 15:11:18 +0100, Kelvin Tan <[EMAIL PROTECTED]> wrote:
> Hi Chris, are your fields string or reader? How large do your fields get?
>
> Kelvin
>
> On Tue, 1 Feb 2005 01:40:39 -0800, Chris Fraschetti wrote:
> > Hey folks, thanks in advance to any who respond.
> >
> > I do a good deal of post-search processing, and the file I/O to read
> > the fields I need becomes horribly costly and is definitely a problem.
> > Is there any way to retrieve either 1. the entire doc (all fields that
> > can be retrieved) and/or 2. a group of docs, specified by, say, an
> > array of doc IDs?
> >
> > I've optimized to retrieve the entire list of fields at once instead
> > of one by one, and to retrieve only the minimal number of fields that
> > I can, but my profilers still show that the Lucene I/O reading the doc
> > fields is where I spend 95% of my time. Of course this is obvious
> > given the nature of how it all works, but can anyone think of a better
> > way to go about retrieving docs in bulk? Are some field types quicker
> > or slower than others to retrieve from the index?

--
___________________________________________________
Chris Fraschetti
e [EMAIL PROTECTED]
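(A rough sketch of one workaround for the bulk-retrieval question above: collect the doc IDs, sort them ascending, and pull each document's stored fields through IndexReader.document(int) in a single pass, so the stored-field file is read roughly in order rather than in random hops. This assumes the Lucene 1.4-era API; the BulkFetch class and fetch method are just placeholder names, and the IndexReader and doc IDs are whatever your search already produced, e.g. from Hits.id(i).)

import java.io.IOException;
import java.util.Arrays;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;

public class BulkFetch {

    // Load the stored fields for a batch of hits in one pass.
    // Sorting the doc IDs first keeps the reads through the stored-field file
    // roughly sequential instead of random, which is usually cheaper.
    public static Document[] fetch(IndexReader reader, int[] docIds) throws IOException {
        int[] sorted = (int[]) docIds.clone();
        Arrays.sort(sorted);

        Document[] docs = new Document[sorted.length];
        for (int i = 0; i < sorted.length; i++) {
            docs[i] = reader.document(sorted[i]); // one stored-field lookup per doc
        }
        return docs;
    }
}

Storing only the fields that actually need to be read back (and indexing everything else unstored) also shrinks each of those document() reads.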