Are you by any chance using stored="true" on the fields you want to search?

If so, you may want to switch to just indexed="true" (i.e. stored="false").
Of course, those fields will then not come back in the results, but do
you really want to sling huge content fields around?
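
For example, a minimal field definition in schema.xml could look like
this (the field and type names here are just placeholders for your
setup):

    <!-- Searchable but not stored: the raw content is not kept for retrieval -->
    <field name="content" type="text_general" indexed="true" stored="false"/>

Note that a change to stored only takes effect after reindexing.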

The other option is to enable lazy field loading and not request that
field. You could actually try this as a test without reindexing; a
restart of Solr is enough. That would give you a way to check whether
the stored field size is the issue.
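
A minimal sketch, assuming your core is called collection1 and the big
field is called content (id and title are placeholders too): enable lazy
loading in the <query> section of solrconfig.xml, restart, and then
leave the big field out of fl:

    <!-- solrconfig.xml: only load the stored fields that a request asks for -->
    <enableLazyFieldLoading>true</enableLazyFieldLoading>

    http://localhost:8983/solr/collection1/select?q=*:*&rows=1000&fl=id,title

If that query is fast while one with fl=* is slow, the stored field
size is likely your problem.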

Regards,
   Alex.
----
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/


On 23 August 2015 at 11:13, Zheng Lin Edwin Yeo <edwinye...@gmail.com> wrote:
> Hi Shawn and Toke,
>
> I only have 520 docs in my data, but each of the documents is quite big
> in size; in Solr, it is using 221MB. So when I set it to read the top
> 1000 rows, it should just be reading all 520 docs that are indexed?
>
> Regards,
> Edwin
>
>
> On 23 August 2015 at 22:52, Shawn Heisey <apa...@elyograg.org> wrote:
>
>> On 8/22/2015 10:28 PM, Zheng Lin Edwin Yeo wrote:
>> > Hi Shawn,
>> >
>> > Yes, I've increased the heap size to 4GB already, and I'm using a machine
>> > with 32GB RAM.
>> >
>> > Is it recommended to further increase the heap size to like 8GB or 16GB?
>>
>> Probably not, but I know nothing about your data.  How many Solr docs
>> were created by indexing 1GB of data?  How much disk space is used by
>> your Solr index(es)?
>>
>> I know very little about clustering, but it looks like you've gotten a
>> reply from Toke, who knows a lot more about that part of the code than I
>> do.
>>
>> Thanks,
>> Shawn
>>
>>
