You're missing the point. One of the things that can really affect
response time is too-frequent commits. The fact that the commit
configurations have been commented out indicates that the commits
are happening either manually (curl, an HTTP request or the like) _or_
you have, say, a SolrJ client that does a commit. Or your index never
changes.
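
Just for illustration, the kind of client-side commit to look for in a
SolrJ client is roughly this (the URL, core name and field values are
placeholders, not taken from your setup):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class CommitExample {
      public static void main(String[] args) throws Exception {
        // Placeholder URL/core; newer SolrJ versions use HttpSolrClient.Builder.
        SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        client.add(doc);
        client.commit();  // an explicit commit like this after every small batch
                          // is the "too frequent" pattern; prefer autoCommit or
                          // commitWithin instead
        client.close();
      }
    }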

The fact that the maxWarmingSearchers setting is 4 rather than the
default 2 indicates that someone did change the config file. The fact
that autoCommit is all commented out additionally points to someone
modifying it, as these are not the default settings.

So again,
1> are commits happening from some client?
or
2> does your index just never change?

And you haven't posted the results of issuing queries with
&debug=all either; this will show the time taken by the various
Solr components and may point to where the slowdown is coming from.
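
If it helps, issuing that kind of query from SolrJ looks something like
this (again, the URL, core name and query below are just placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class DebugQueryExample {
      public static void main(String[] args) throws Exception {
        // Placeholder URL/core; newer SolrJ versions use HttpSolrClient.Builder.
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore");
        SolrQuery q = new SolrQuery("hotelcode:12345"); // made-up example query
        q.setRows(30);
        q.set("debug", "all");                          // same as &debug=all on the URL
        QueryResponse rsp = client.query(q);
        System.out.println(rsp.getDebugMap());          // per-component timings, etc.
        client.close();
      }
    }

You could equally just add &debug=all to the query URL in a browser or
curl and read the "debug" section of the response.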

Best,
Erick

On Thu, Jun 25, 2015 at 9:48 AM, Wenbin Wang <wwang...@gmail.com> wrote:
> Hi Erick,
>
> The configuration is largely the default one, and I have not made many
> changes. I am also quite new to Solr, although I have a lot of experience in
> other search products.
>
> The whole list of fields needs to be retrieved, so I do not have much of a
> choice. The total size of the index files is about 1.2 G. I am not sure if
> this is a reasonable size for 14 M records in Solr. One field that could be
> removed is the hotel name, which can be retrieved/matched by the mid-tier
> application based on hotelcode (in the search index).
>
> You mentioned maxWarmingSearchers and the commented-out "commit"
> configuration. That seems more related to indexing performance, and may not
> be related to query performance? Actually, these were out-of-the-box default
> settings that I have not changed.
>
> Obviously the 1-second response time with a single request does not
> translate well to a concurrent-user scenario. Do you see any necessary
> changes to the configuration files to make queries perform faster?
>
> Thanks,
>
> On Thu, Jun 25, 2015 at 8:38 AM, Erick Erickson <erickerick...@gmail.com>
> wrote:
>
>> bq: Try not to store fields as much as possible.
>>
>> Why? Storing fields certainly adds lots of size to the _disk_ files, but it
>> has much less effect on memory requirements than one might think. The
>> *.fdt and *.fdx files in your index are used for the stored data, and
>> they're only read for the top N docs returned (30 in this case). And since
>> the stored data is decompressed in 16K blocks, you'll only really pay a
>> performance penalty if you have very large documents. The memory
>> requirements for stored fields are pretty much governed by the
>> documentCache.
>>
>> How are you committing? Your solrconfig file has all commits commented out
>> and it also has maxWarmingSearchers set to 4. Based on this scanty
>> evidence, I'm guessing that you're committing from a client, and committing
>> far too often. If that's true, your performance is probably largely
>> governed by loading low-level caches.
>>
>> Your autowarming numbers in filterCache and queryResultCache are, on the
>> face of it, far too large.
>>
>> Best,
>> Erick
>>
>> On Thu, Jun 25, 2015 at 8:12 AM, wwang525 <wwang...@gmail.com> wrote:
>> > schema.xml <http://lucene.472066.n3.nabble.com/file/n4213864/schema.xml>
>> > solrconfig.xml
>> > <http://lucene.472066.n3.nabble.com/file/n4213864/solrconfig.xml>
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> http://lucene.472066.n3.nabble.com/How-to-do-a-Data-sharding-for-data-in-a-database-table-tp4212765p4213864.html
>> > Sent from the Solr - User mailing list archive at Nabble.com.
>>
