Hi Shawn,
I am using Java Version 8 Update 45 (build 1.8.0_45-b15). It is 64-bit
Java.
Thank you.
Regards,
Edwin
On 8 January 2016 at 00:15, Shawn Heisey wrote:
> On 1/7/2016 12:53 AM, Zheng Lin Edwin Yeo wrote:
> >> Subtracting SHR from RES (or in your case, Shareable from Working) r
Hi Shawn,
Thank you for your explanation.
Yes, both of the top two processes are Solr. I have two Solr processes on
one machine now, as the second one is a replica of the first. In the
future, the plan is to have them on separate machines.
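(As a cross-check on what the JVM itself is using, independent of what the
OS reports, the JDK's jstat tool can help. A minimal sketch, assuming the
JDK bin directory is on the PATH and <pid> is the Solr process id:

    jstat -gcutil <pid> 1000

This prints heap-region utilisation percentages once per second, which is
often much lower than the process-level number the Task Manager shows.)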
>Subtracting SHR from RES (or in your case, Shareabl
On 1/5/2016 11:50 PM, Zheng Lin Edwin Yeo wrote:
> Here is the new screenshot of the Memory tab of the Resource Monitor.
> https://www.dropbox.com/s/w4bnrb66r16lpx1/Resource%20Monitor.png?dl=0
>
> Yes, I found that the value under the "Working Set" column is much higher
> than the others. Also, the
Hi Shawn,
Here is the new screenshot of the Memory tab of the Resource Monitor.
https://www.dropbox.com/s/w4bnrb66r16lpx1/Resource%20Monitor.png?dl=0
Yes, I found that the value under the "Working Set" column is much higher
than the others. Also, the value which I was previously looking at under
On 1/5/2016 9:59 AM, Zheng Lin Edwin Yeo wrote:
> I have uploaded the screenshot here
> https://www.dropbox.com/s/l5itfbaus1c9793/Memmory%20Usage.png?dl=0
>
> Basically, Java(TM) Platform SE Library, which Solr is running on, is only
> using about 22GB currently. However, the memory usage at the to
Hi Shawn,
Thanks for your reply.
I have uploaded the screenshot here
https://www.dropbox.com/s/l5itfbaus1c9793/Memmory%20Usage.png?dl=0
Basically, Java(TM) Platform SE Library, which Solr is running on, is only
using about 22GB currently. However, the memory usage at the top says it is
using 73%
Hi Toke,
I read the server's memory usage from the Task Manager in Windows.
Regards,
Edwin
On 4 January 2016 at 17:17, Toke Eskildsen wrote:
> On Mon, 2016-01-04 at 10:05 +0800, Zheng Lin Edwin Yeo wrote:
> > A) Before I start the optimization, the server's memory usage
> > is consistent a
On 1/3/2016 7:05 PM, Zheng Lin Edwin Yeo wrote:
> A) Before I start the optimization, the server's memory usage
> is consistent at around 16GB after Solr starts up and we do some searching.
> However, when I click on the optimization button, the memory usage
> increases gradually, until it reaches
On Mon, 2016-01-04 at 10:05 +0800, Zheng Lin Edwin Yeo wrote:
> A) Before I start the optimization, the server's memory usage
> is consistent at around 16GB after Solr starts up and we do some searching.
How do you read this number?
> However, when I click on the optimization button, the memory u
Thanks for the reply, Shawn and Erick.
What *exactly* are you looking at that says Solr is using all your
memory? You must be extremely specific when answering this question.
This will determine whether we should be looking for a bug or not.
A) Before I start the optimization, the server's memory
If you happen to be looking at "top" or the like, you
might be seeing virtual memory; see Uwe's
excellent blog here:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
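To illustrate with made-up numbers: a Solr process using MMapDirectory on a
64GB machine might show something like this in top:

    PID  USER  VIRT  RES  SHR
    1234 solr  210g  18g  12g

The huge VIRT figure is mostly memory-mapped index files, not heap; the
number that approximates real heap pressure is RES minus SHR.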
Best,
Erick
On Fri, Jan 1, 2016 at 11:46 PM, Shawn Heisey wrote:
> On 12/31/2015 8:03 PM, Zheng Lin Edwin Y
On 12/31/2015 8:03 PM, Zheng Lin Edwin Yeo wrote:
> But the problem I'm facing now is that during optimizing, the memory usage
> of the server hit the maximum of 64GB, and I believe the optimization could
> not be completed fully as there is not enough memory, so when I check the
> index again, it
Hi Yonik,
Yes, the plan is to do the optimizing at night after indexing, when fewer
users will be on the system.
But the problem I'm facing now is that during optimizing, the memory usage
of the server hit the maximum of 64GB, and I believe the optimization could
not be completed full
Wouldn't collection swapping be a better strategy in that case?
Load and optimise on a separate server, then swap it in.
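For a non-cloud setup, a sketch of the swap step using the CoreAdmin API
(the core names here are hypothetical):

    curl 'http://localhost:8983/solr/admin/cores?action=SWAP&core=live&other=rebuilt'

A SolrCloud setup would typically point a collection alias at the freshly
built collection instead.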
On 30 Dec 2015 10:08 am, "Walter Underwood" wrote:
> The only time that a force merge might be useful is when you reindex all
> content every night or every week, then do not
Question: does anyone have examples of good merge settings for solrconfig,
to keep the number of segments small, say 6?
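Not a recommendation, just a sketch of where such settings live, assuming a
Solr 4.x/5.x-era solrconfig.xml:

    <indexConfig>
      <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
        <int name="maxMergeAtOnce">6</int>
        <int name="segmentsPerTier">6</int>
      </mergePolicy>
    </indexConfig>

segmentsPerTier roughly caps how many similar-sized segments accumulate in
each tier before they get merged.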
On Tue, Dec 29, 2015 at 8:49 PM, Yonik Seeley wrote:
> Some people also want to control when major segment merges happen, and
> optimizing at a known time helps prevent a major me
Some people also want to control when major segment merges happen, and
optimizing at a known time helps prevent a major merge at an unknown
time (which can be equivalent to an optimize/forceMerge).
The benefits of optimizing (and having fewer segments to search
across) will vary depending on the r
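For reference, a merge at a known time can be requested explicitly through
the update handler; a sketch, with the collection name as a placeholder:

    curl 'http://localhost:8983/solr/<collection>/update?optimize=true&maxSegments=6'

maxSegments bounds the result at six segments instead of forcing a single
one, which keeps the rewrite cheaper.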
Thanks for the information.
Another thing I'd like to confirm: will the Java heap size setting affect
the optimization process or the memory usage?
Is there any recommended setting we can use for an index size of 200GB?
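(For context, on Solr 5.x the heap is usually set when starting Solr; a
sketch, with the 8g value purely illustrative:

    bin\solr.cmd start -m 8g

The -m flag sets both -Xms and -Xmx to the given size.)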
Regards,
Edwin
On 30 December 2015 at 11:07, Walter Underwood
wrote:
The only time that a force merge might be useful is when you reindex all
content every night or every week, then do not make any changes until the next
reindex. But even then, it probably does not matter.
Just let Solr do its thing. Solr is pretty smart.
A long time ago (1996-2006), I worked on
Hi Walter,
Thanks for your reply.
Then what about optimizing after indexing?
Normally the index is much larger right after indexing, and the
optimization reduces its size. Do we still need to do that?
Regards,
Edwin
On 30 December 2015 at 10:45, Walter Underwood
wrote:
> Do not “opt
Do not “optimize”.
It is a forced merge, not an optimization. It was a mistake to ever name it
“optimize”. Solr automatically merges as needed. There are a few situations
where a force merge might make a small difference, maybe 10% or 20%; no one
has bothered to measure it.
If your index is co
To a very large extent, the capability of a platform is measurable by the skill
of the team administering it.
If core competencies lie in Windows OS, then I would wager heavily that the
platform will outperform a similar Linux OS installation in the long haul.
All things being equal, it’s really hard
On 1/21/2014 2:17 AM, onetwothree wrote:
> Does Solr on a Linux OS have better memory management than on a Windows OS,
> or can you neglect this comparison?
As Toke said, this is indeed debatable.
I personally believe that Linux is better at almost everything, but if
you're running a recent 64-bi
On Tue, 2014-01-21 at 10:17 +0100, onetwothree wrote:
> Does Solr on a Linux OS have better memory management than on a Windows OS,
> or can you neglect this comparison?
That is debatable, but in this context you can see them as fairly equal:
Out of the box, they will both use all free memory for
Does Solr on a Linux OS have better memory management than on a Windows OS,
or can you neglect this comparison?
Thanks for the reply, dropbox image added.
On 1/20/2014 3:02 AM, onetwothree wrote:
OS: Windows Server 2008
CPUs: 4
RAM: 8 GB
We're using a .Net service (based on Solr.Net) for updating and inserting
documents on a single Solr core instance. The size of the documents sent to
Solr varies from 1 KB up to 8 MB; we're sending the documents in batc
On Mon, 2014-01-20 at 11:02 +0100, onetwothree wrote:
> Optional JVM parameters: -Xmx3072m, -Xms1024m
> directoryFactory: MMapDirectory
[...]
> So it seems that filesystem buffers are consuming all the leftover
> memory, and don't release it, even after quite some time?
As long
One other thing: Solr makes heavy use of the OS cache to cache the index and
gain performance. This can be another reason why the Solr process shows a
high allocated memory value.
/yago
—
/Yago Riveiro
On Mon, Jan 20, 2014 at 10:03 AM, onetwothree
wrote:
> Facts:
> OS Windows server 2008
> 4 Cpu
> 8 GB Ram
The high memory consumption you are seeing is likely a consequence of some
heap memory being released only after a full GC. With the VisualVM tool you
can force a full GC and see whether the memory is released.
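(The same experiment works from the command line, assuming a JDK 7+ install
and <pid> being the Solr process id:

    jcmd <pid> GC.run

This asks the JVM to run a full GC, just like the VisualVM button does.)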
/yago
—
/Yago Riveiro
On Mon, Jan 20, 2014 at 10:03 AM, onetwothre
Sent: 03 September 2013 13:41
> To: solr-user@lucene.apache.org
> Subject: RE: Memory usage during aggregation - SolrCloud with very large
> numbers of facet terms.
>
> > However, the Solr instance we direct our client query to is consuming
> significantly more RAM (10GB) and i
> However, the Solr instance we direct our client query to is consuming
> significantly more RAM (10GB) and is still failing after a few queries when
> it runs out of heap space. This is presumably due to the role it plays,
> aggregating the results from each shard.
That seems quite odd... Wha
to:erickerick...@gmail.com]
Sent: Tuesday, November 15, 2011 8:37 AM
To: solr-user@lucene.apache.org
Subject: Re: memory usage keep increase
I'm pretty sure not. The words "virtual memory address space" are important
here; that's not physical memory...
Best
Erick
On Mon, Nov 14,
I'm pretty sure not. The words "virtual memory address space" are important
here; that's not physical memory...
Best
Erick
On Mon, Nov 14, 2011 at 11:55 AM, Yongtao Liu wrote:
> Hi all,
>
> I saw an issue where RAM usage keeps increasing when we run queries.
> After looking at the code, it looks like Lucene u
Taking Chris's information into account, I was able to isolate this to a test
case. I found this ticket, which seems to indicate a fundamental problem at
the Solr/Lucene boundary.
https://issues.apache.org/jira/browse/SOLR-
Here's how to reproduce my results:
1. Create an index with a field like th
You can also sort on a field by using a function query instead of the
"sort=field+desc" parameter. This will not eat up memory. Instead, it
will be slower. In short, it is a classic speed vs. space trade-off.
You'll have to benchmark and decide which you want, and maybe some
fields need the fast
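For example (the 42 suffix is hypothetical), instead of

    sort=priority_sort_for_42 desc

the request would use

    sort=field(priority_sort_for_42) desc

trading memory for speed as described above.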
I think you've probably nailed it, Chris; thanks for that. I think I can get
by with a different approach than this.
Do you know if I will get the same memory consumption using the
RandomFieldType vs the TrieInt?
-Jeff
On Thu, Sep 30, 2010 at 12:36 PM, Chris Hostetter
wrote:
>
> : There are 14,6
: There are 14,696,502 documents; we are doing a lot of funky stuff, but I'm
: not sure which is most likely to cause an impact. We're sorting on a dynamic
: field; there are about 1000 different variants of this field that look like
: "priority_sort_for_", which is an integer field. I've heard that
There are 14,696,502 documents; we are doing a lot of funky stuff, but I'm
not sure which is most likely to cause an impact. We're sorting on a dynamic
field; there are about 1000 different variants of this field that look like
"priority_sort_for_", which is an integer field. I've heard that
sorting
How many documents are there? How many unique words are in a text
field? Both of these numbers can have a non-linear effect on the
amount of space used.
But usually, a 22GB index (on disk) might need 6-12GB of RAM total.
There is something odd going on here.
Lance
On Wed, Sep 29, 2010 at 4:34 PM,
Could you give us a dump of http://localhost:port/solr/admin/luke?
A huge max field length and random terms in 2000 2 MB files are going to
be a bit of a resource hog :)
Can you explain why you are doing that? You will have *so* many unique
terms...
I can't remember if you can set it in So
On Tue, Apr 14, 2009 at 11:30 AM, Gargate, Siddharth wrote:
> Hi all,
>I am testing indexing with 2000 text documents of size 2 MB
> each. These documents contain words created with random characters. I
> observed that the Tomcat memory usage keeps increasing slowly. I tried
> removin