Thanks, made the heap size considerably larger and it's fine now. Thank you.
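For reference: a Solr 7.x node started with bin/solr gets a fixed heap (512 MB by default) that never grows past -Xmx, no matter how much physical RAM the VM has, so adding RAM alone cannot cure a heap OOM. A minimal sketch of raising it, assuming a default install layout:

  # One-off, at startup (-m sets both -Xms and -Xmx):
  bin/solr start -m 8g

  # Or persistently, in solr.in.sh:
  SOLR_HEAP="8g"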
On 7/18/2018 8:31 AM, THADC wrote:
Thanks for the reply. I read the link you provided. I am currently not
specifying a heap size with solr so my understanding is that by default it
will just grow automatically. If I add more physical memory to the VM
without doing anything with heap size, won't that possibly fix the problem?
Thanks
On 7/18/2018 7:10 AM, THADC wrote:
Hi,
We performed a full reindex for the first time against our largest database
and on two new VMs dedicated to solr indexing. We have two solr nodes
(solrCloud/solr7.3) with a zookeeper cluster. Several hours into the
reindexing process, both solr nodes shut down with:
java.lang.OutOfMemoryError
On 2/4/2016 12:18 AM, Srinivas Kashyap wrote:
Hello,
I have implemented 'SortedMapBackedCache' in my SqlEntityProcessor for the
child entities in data-config.xml. When I try to do a full import, I get an
OutOfMemory error (Java heap space). I increased the heap allocation to the
maximum extent possible. Is there a workaround to do initial
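For context, a child entity cached with SortedMapBackedCache typically looks like this in data-config.xml; the table and column names here are invented for illustration:

  <document>
    <entity name="parent" query="SELECT id, name FROM parent_table">
      <!-- The child query runs once and the whole result set is held
           in an in-memory map on the heap, which is what can OOM. -->
      <entity name="child"
              processor="SqlEntityProcessor"
              cacheImpl="SortedMapBackedCache"
              cacheKey="parent_id"
              cacheLookup="parent.id"
              query="SELECT parent_id, value FROM child_table"/>
    </entity>
  </document>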
I noticed an enormous number of commits, which reasonably triggers merges that
hit OOMEs. Try disabling autocommit completely, and monitor commit
occurrences in the log.
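Autocommit is configured in solrconfig.xml; disabling it for the bulk import amounts to commenting the section out. A sketch of the stock shape (the values are placeholders):

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- Comment out (or delete) autoCommit to stop automatic
         hard commits during the full reindex: -->
    <!--
    <autoCommit>
      <maxDocs>10000</maxDocs>
      <maxTime>15000</maxTime>
    </autoCommit>
    -->
  </updateHandler>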
On 4/20/2014 11:12 AM, Candygram For Mongo wrote:
> We have tried using fetchSize and we still got the same out of memory
> errors.
It needs to be batchSize, not fetchSize. I mentioned too much internal
detail. The fetchSize name is only important if you're
writing source code that uses
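The batchSize parameter sits on the DIH dataSource element. For MySQL, batchSize="-1" is the documented way to stream rows instead of buffering the whole result set; a sketch with placeholder connection details:

  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"
              user="user" password="pass"
              batchSize="-1"/>
  <!-- batchSize="-1" makes DIH pass fetchSize=Integer.MIN_VALUE to the
       MySQL driver, which streams rows one at a time instead of
       materializing the full result set in memory. -->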
We have tried using fetchSize and we still got the same out of memory
errors.
On 4/18/2014 6:15 PM, Candygram For Mongo wrote:
> We are getting Out Of Memory errors when we try to execute a full import
> using the Data Import Handler. This error originally occurred on a
> production environment with a database containing 27 million records. Heap
> memory was configured for
I have uploaded several files including the problem description with
graphics to this link on Google drive:
https://drive.google.com/folderview?id=0B7UpFqsS5lSjWEhxRE1NN2tMNTQ&usp=sharing
I shared it with this address "solr-user@lucene.apache.org" so I am hoping
it can be accessed by people in th
We consistently reproduce this problem on multiple systems configured with
6GB and 12GB of heap space. To quickly reproduce many cases for
troubleshooting we reduced the heap space to 64, 128 and 512MB. With 6 or
12GB configured it takes hours to see the error.
On Fri, Apr 18, 2014 at 5:54 PM,
I see heap size commands for 128 Meg and 512 Meg. That will certainly run out
of memory. Why do you think you have 6G of heap with these settings?
-Xmx128m -Xms128m
-Xmx512m -Xms512m
wunder
On Apr 18, 2014, at 5:15 PM, Candygram For Mongo wrote:
I have lots of log files and other files to support this issue (sometimes
referenced in the text below) but I am not sure the best way to submit. I
don't want to overwhelm and I am not sure if this email will accept graphs
and charts. Please provide direction and I will send them.
Issue Description
I got more information from the responses. Now it's time to take another look
at the number of facets to be configured.
Thanks,
Siva
http://smarttechies.wordpress.com/
On 1/27/2013 10:28 PM, Rahul Bishnoi wrote:
Hi Shawn,
Thanks for your reply. After following your suggestions we were able to
index 30k documents. I have some queries:
1) What is stored in the RAM while only indexing is going on? How to
calculate the RAM/heap requirements for our documents?
2) The document cache, filter cache, etc...are p
Rahul,
The MaxPe
Thanks for the quick reply and for addressing each point queried.
The additional information you asked for:
OS = Ubuntu 12.04 (64 bit)
Sun Java 7 (64 bit)
Total RAM = 8GB
SolrConfig.xml is available at http://pastebin.com/SEFxkw2R
is being used as supplied in the schema.xml with the
solr setup.
maxIndexingThreads = 8 (default)
15000false
We get a Java heap Out Of Memory Error after committing around 3990 Solr
documents. Some snapshots of the memory dump from a profiler are uploaded
at the following links:
http://s9.postimage.org/w7589t9e7/memorydump1.png
http://s7.postimage.org/p
On Wed, 2012-11-28 at 03:25 +0100, Arun Rangarajan wrote:
[Sorting on 14M docs, 250 fields]
> From what I have read, I understand that restricting the number of distinct
> values on sortable Solr fields will bring down the fieldCache space. The
> values in these sortable fields can be any integer
Erick,
Thanks for your reply. So there is no easy way to get around this problem.
We have a way to rework the schema by keeping a single sort field. The
dynamic fields we have are like relevance_CLASSID. The current schema has a
unique key NODEID and a multi-valued field CLASSID - the relevance s
I sure don't see how this can work given the constraints. Just to hold the
values, assuming that each doc holds a value in 150 fields, you have 150 *
4 * 14,000,000 or 8.4G of memory required, and you just don't have that
much memory to play around with.
Sharding seems silly for 14M docs, but that
Subject: ExtractingRequestHandler causes Out of Memory Error
Hi guys,
I use ManifoldCF to crawl files in a Windows file server and index them to
Solr using the Extracting Request Handler.
Most of the documents are successfully indexed, but some fail and an Out
of Memory Error occurs in Solr, so I need some advice.
Those failed files are not so big: a CSV file of 240MB and a text file of 170MB.
Here is the envi
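For context, indexing through the Extracting Request Handler boils down to a POST like the one below, after which Tika parses the entire file inside Solr's heap; that parse is where the OOM hits. The core name and file are placeholders:

  curl "http://localhost:8983/solr/collection1/update/extract?literal.id=doc1&commit=true" \
       -F "myfile=@bigfile.csv"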
Increase the memory allocated to Solr by setting -Xmx values; allocate at
least 12 GB to Solr.
But if your whole index fits into the cache memory, it will give you a
better result.
Also add more servers to load balance, as your QPS is high.
Your 7 lakhs of data makes 25 GB of index, which looks quite high. Try to
lower the index size.
What are you indexing in your 25 GB of index?
-
Thanx:
Grijesh
By adding more servers I mean adding more searchers (slaves) behind the load
balancer; I am not talking about sharding.
Sharding is required when your index size grows to about 50GB.
-
Thanx:
Grijesh
On which server [master/slave] does the Out of Memory occur?
What is your index size [GB]?
How many documents do you have?
What is the query rate per second?
How are you indexing?
What is your ramBufferSize?
-
Thanx:
Grijesh
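For reference, the ramBufferSize in the last question refers to ramBufferSizeMB in solrconfig.xml; in the Solr 1.4-era config it lives in the indexDefaults section. A sketch with the old default value:

  <indexDefaults>
    <!-- How much RAM Lucene may buffer before flushing a segment. -->
    <ramBufferSizeMB>32</ramBufferSizeMB>
  </indexDefaults>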
free memory on the host, try using the -Xmx command line
parameter to raise the maximum amount of memory Solr can use (-Xmx2g, for example).
- Original Message -
From: Isan Fulia
To: markus.jel...@openindex.io
Cc: solr-user@lucene.apache.org
Sent: Tue, January 18, 2011 9:04:31 PM
Subject: Re
Hi Markus,
We don't have any -Xmx memory settings as such. Our Java version is 1.6.0_19
and the Solr version is a 1.4 development version. Can you please help us out?
Thanks,
Isan.
Hi
I haven't seen one like this before. Please provide JVM settings and Solr
version.
Cheers
Hi all,
I got the following error on Solr, on a machine with 4GB RAM and an Intel
Dual Core processor. Can you please help me out?
java.lang.OutOfMemoryError: Java heap space
2011-01-18 18:00:27.655:WARN::Committed before 500 OutOfMemoryError likely
caused by the Sun VM Bug described in
https:/
Related: SOLR-846
-Original Message-
From: Erick Erickson
Date: Tue, 7 Dec 2010 08:11:41
Reply-To: solr-user@lucene.apache.org
Subject: Re: Out of memory error
Have you seen this page? http://wiki.apache.org/solr
as "-1"
>
> The solr server is running on Linux machine with tomcat.
> i set tomcat arguments as ./startup.sh -Xms1024M -Xmx2048M
>
> Can anybody has idea, where the things are going wrong?
>
> Regards,
> JS
>
>
> --
> View this message in context:
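One likely culprit in the startup line above: Tomcat's startup.sh does not pass arbitrary arguments through to the JVM, so those -Xms/-Xmx flags were probably never applied. The usual approach is an environment variable; a sketch (setenv.sh is the conventional place for it):

  # startup.sh ignores JVM flags given as arguments; export them instead,
  # or put this line in Tomcat's bin/setenv.sh:
  export CATALINA_OPTS="-Xms1024M -Xmx2048M"
  ./startup.sh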
Oh, I don't know if this matters, but I store text fields in Solr even though I
never retrieve them from the index (I only get the ID field from the index,
and everything else is pulled from the DB cache). I store all the fields
just in case I need to debug search queries etc. and want to see the
data.
Regards,
Moa
Hi,
If this is how you configure the field collapsing cache, then I don't
have it set up:
I didn't add that part to solrconfig.xml.
The way I set up field collapsing is that I added this tag:
Then I modified the default request handler (for standard queries) with this:
Can you tell us your current settings for the fieldCollapseCache?
I had similar issues with field collapsing, and I found out that this cache was
responsible for most of the OOM exceptions.
Reduce or even remove this cache from your configuration and it should help.
On 2010-09-0
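For anyone reconstructing the configuration being discussed: in the SOLR-236 field-collapsing patch the cache was declared alongside the other caches in solrconfig.xml, something like the sketch below. The sizes are invented, and the exact element shape may differ between patch versions:

  <fieldCollapseCache
      class="solr.FastLRUCache"
      size="512"
      initialSize="512"
      autowarmCount="128"/>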
Hi guys,
I have about 20k documents in the Solr index (and there's a lot of
text in each of them). I have field collapsing enabled on a specific
field (AdvisorID).
The thing is, if I have field collapsing enabled in the search request,
I don't get the correct count for the total number of records that
Erik,
I have seen many posts regarding out of memory errors, but I am not sure
whether they are using CachedSqlEntityProcessor.
I want to know if there is a way to flush the cache buffer instead of
storing everything in the cache.
I can clearly see the heap size growing like anything if I use
> Thanks,
> Barani
>
> JavaGuy84 wrote:
> > Hi,
> > I am using CachedSqlEntityProcessor in my DIH data-config to reduce the
> > number of queries executed against the database,
> > ... cacheKey="id" cacheLookup="
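The truncated entity above would have looked roughly like this in data-config.xml; the query, the cacheLookup value, and the column names are invented for illustration:

  <entity name="child"
          processor="CachedSqlEntityProcessor"
          query="SELECT id, parent_id, value FROM child_table"
          cacheKey="id"
          cacheLookup="parent.child_id"/>
  <!-- The cached result set is held in memory for the duration of the
       import, which matches the heap growth described above. -->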
me know what is the best
way to overcome this issue?
Thanks,
Barani
Hi,
I am running Solr within Jetty using start.jar. I am indexing about 200,000
documents. Sometimes, out of the blue, the Solr instance cannot process any
more requests and returns a "heap out of memory" error.
This happens more often when I issue queries against the index that is
being updated.
Is there some configuration setting I need to change?
Also, the server itself has
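With Jetty's start.jar, the heap is set on the java command line that launches Solr; a minimal sketch:

  # Raise the JVM heap when launching Solr's bundled Jetty:
  java -Xms512m -Xmx2g -jar start.jar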