Sausarkar: When you say the index went from 14G to 7G, did you notice whether the difference was in the *.fdt and *.fdx files? That would be due to the compression of stored fields, which is now the default... If you could, would you let us know the sizes of the files with those two extensions before and after? I'm trying to gather real-world examples...
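In case it helps with collecting those numbers, here's a rough, untested sketch in plain Java that tallies index file sizes per extension; the index path below is just a placeholder for wherever your core keeps its data/index directory:

import java.io.File;
import java.util.Map;
import java.util.TreeMap;

// Rough sketch: sum file sizes per extension in a Lucene index directory.
// The default path is a placeholder; pass your core's data/index directory as an argument.
public class IndexFileSizes {
    public static void main(String[] args) {
        File indexDir = new File(args.length > 0 ? args[0] : "/path/to/solr/core/data/index");
        File[] files = indexDir.listFiles();
        if (files == null) {
            System.err.println("Not a directory: " + indexDir);
            return;
        }
        Map<String, Long> sizes = new TreeMap<String, Long>();
        for (File f : files) {
            if (!f.isFile()) continue;
            String name = f.getName();
            int dot = name.lastIndexOf('.');
            String ext = (dot >= 0) ? name.substring(dot) : name;
            Long soFar = sizes.get(ext);
            sizes.put(ext, (soFar == null ? 0L : soFar) + f.length());
        }
        // The .fdt/.fdx entries hold the stored-field data and index; comparing them
        // before and after the upgrade shows the effect of stored-field compression.
        for (Map.Entry<String, Long> e : sizes.entrySet()) {
            System.out.printf("%-8s %,d bytes%n", e.getKey(), e.getValue());
        }
    }
}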
But about your slowdown: does the same thing happen if you specify &fl=score (and ensure that lazy field loading is enabled in solrconfig.xml)? I don't think that would be reading the fields off disk and decompressing them. Also, what are you measuring? Total time to return to the client? It would help pin this down if you looked just at QTime in the responses; that should be exclusive of the time to assemble the documents, since it measures the search alone. I've put a small SolrJ sketch below the quoted message that prints both numbers.

Thanks,
Erick

On Wed, Jan 9, 2013 at 8:50 PM, sausarkar <[email protected]> wrote:
> We are using solr-meter to generate a query load of around 110 queries per
> second per node.
>
> With 4.1 the average query time is 300 msec; if we switch to 4.0 the
> average query time is around 11 msec. We used the same load test params and
> the same 10 million records; the only differences are the version and the
> index files: 4.1 is 7 GB and 4.0 is 14 GB.
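And here's the SolrJ sketch mentioned above (assuming Solr 4.x SolrJ on the classpath; the URL, core name, and query are placeholders for your setup). It issues a score-only query and prints QTime next to the client-side elapsed time:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Rough sketch, assuming Solr 4.x SolrJ; adjust the URL and query for your setup.
public class QTimeCheck {
    public static void main(String[] args) throws SolrServerException {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        SolrQuery query = new SolrQuery("*:*");
        query.setFields("score");   // fl=score: should avoid reading/decompressing stored fields
        query.setRows(10);

        QueryResponse rsp = server.query(query);

        // QTime is the server-side search time in ms, exclusive of document assembly
        // and network transfer; elapsed time is the full round trip seen by the client.
        System.out.println("QTime (ms):   " + rsp.getQTime());
        System.out.println("Elapsed (ms): " + rsp.getElapsedTime());

        server.shutdown();
    }
}

If QTime stays low while the elapsed time balloons, the extra cost is in assembling (and decompressing) the documents rather than in the search itself.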
