On selection, issue another query to get your additional data (if I
follow what you want).
On 22 January 2012 18:53, Dave dla...@gmail.com wrote:
I take it from the overwhelming silence on the list that what I've asked is
not possible? It seems like the suggester component is not well supported
or understood, and limited in functionality.
Does anyone have any ideas for how I would implement the functionality I'm
looking for? I'm trying to
I'm also seeing the error when I try to start up the SOLR instance:
SEVERE: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:344)
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:352)
at
I don't think the problem is FST, since it sorts offline in your case.
More importantly, what are you trying to put into the FST?
it appears you are indexing terms from your term dictionary, but your
term dictionary is over 1GB, why is that?
what do your terms look like? 1GB for 2,784,937
In my original post I included one of my terms:
Brooklyn, New York, United States?{ "id": "2620829",
"timezone": "America/New_York", "type": "3", "country": { "id": "229" },
"region": { "id": "3608" }, "city": { "id": "2616971", "plainname":
"Brooklyn", "name": "Brooklyn, New York, United States"
I really don't think you should put a huge json document as a search term.
Just make "Brooklyn, New York, United States" or whatever you intend
the user to actually search on/type in as your search term.
Put the rest in different fields (e.g. stored-only, not even indexed
if you don't need that) and
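A minimal schema.xml sketch of that split (the field names here are hypothetical, not from the actual schema in this thread):

```xml
<!-- only the human-readable name is indexed, for the suggester -->
<field name="name" type="text" indexed="true" stored="true"/>
<!-- everything the UI needs after selection (ids, timezone, etc.):
     stored only, never indexed, so it stays out of the term dictionary -->
<field name="payload" type="string" indexed="false" stored="true"/>
```

On selection, the client reads back the stored payload field instead of parsing it out of the suggest term itself.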
I'm using 3.5
On Tue, Jan 17, 2012 at 7:57 PM, Lance Norskog goks...@gmail.com wrote:
Which version of Solr do you use? 3.1 and 3.2 had a memory leak bug in
spellchecking. This was fixed in 3.3.
On Tue, Jan 17, 2012 at 5:59 AM, Robert Muir rcm...@gmail.com wrote:
I committed it already: so
Robert, where can I pull down a nightly build from? Will it include the
apache-solr-core-3.3.0.jar and lucene-core-3.3-SNAPSHOT.jar jars? I need to
re-build with a custom SpellingQueryConverter.java.
Thanks,
Dave
On Tue, Jan 17, 2012 at 8:59 AM, Robert Muir rcm...@gmail.com wrote:
I committed
Ok, I've been able to pull the code from SVN, build it, and compile my
SpellingQueryConverter against it. However, I'm at a loss as to where to
find or how to build the solr.war file.
On Tue, Jan 17, 2012 at 8:59 AM, Robert Muir rcm...@gmail.com wrote:
I committed it already: so you can try out
Hi Dave,
Try 'ant usage' from the solr/ directory.
Steve
-Original Message-
From: Dave [mailto:dla...@gmail.com]
Sent: Wednesday, January 18, 2012 2:11 PM
To: solr-user@lucene.apache.org
Subject: Re: Trying to understand SOLR memory requirements
Ok, I've been able to pull
Unfortunately, that doesn't look like it solved my problem. I built the new
.war file, dropped it in, and restarted the server. When I tried to build
the spellchecker index, it ran out of memory again. Is there anything I
needed to change in the configuration? Did I need to upload new .jar files,
Thank you Robert, I'd appreciate that. Any idea how long it will take to
get a fix? Would I be better off switching to trunk? Is trunk stable enough for
someone who's very much a SOLR novice?
Thanks,
Dave
On Mon, Jan 16, 2012 at 10:08 PM, Robert Muir rcm...@gmail.com wrote:
looks like
I committed it already: so you can try out branch_3x if you want.
you can either wait for a nightly build or compile from svn
(http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x/).
On Tue, Jan 17, 2012 at 8:35 AM, Dave dla...@gmail.com wrote:
Thank you Robert, I'd appreciate that.
I'm trying to figure out what my memory needs are for a rather large
dataset. I'm trying to build an auto-complete system for every
city/state/country in the world. I've got a geographic database, and have
setup the DIH to pull the proper data in. There are 2,784,937 documents
which I've formatted
What is the largest -Xmx value you have tried?
Your index size seems not very big.
Try -Xmx2048m, it should work.
On Tue, Jan 17, 2012 at 9:31 AM, Dave dla...@gmail.com wrote:
I'm trying to figure out what my memory needs are for a rather large
dataset. I'm trying to build an auto-complete
I've tried up to -Xmx5g
you may disable FST lookup and use the lucene index as the suggest method.
FST lookup loads all documents into memory; you can use the lucene
spell checker instead.
On Tue, Jan 17, 2012 at 10:31 AM, Dave dla...@gmail.com wrote:
I've tried up to -Xmx5g
On Mon, Jan 16, 2012 at 9:15 PM, qiu chi
According to http://wiki.apache.org/solr/Suggester, FSTLookup is the least
memory-intensive of the lookupImpls. Are you suggesting a different
approach entirely, or is that a lookupImpl that is not mentioned in the
documentation?
On Mon, Jan 16, 2012 at 9:54 PM, qiu chi chiqiu@gmail.com
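For reference, a Solr 3.x suggester is configured roughly like this in solrconfig.xml (a sketch based on the wiki page above; the field name is hypothetical), with lookupImpl switchable between the FST, TST, and Jaspell implementations:

```xml
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <!-- FSTLookup is the least memory-hungry of the lookup implementations -->
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
    <str name="field">name</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>
```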
looks like https://issues.apache.org/jira/browse/SOLR-2888.
Previously, FST would need to hold all the terms in RAM during
construction, but with the patch it uses offline sorts/temporary
files.
I'll reopen the issue to backport this to the 3.x branch.
On Mon, Jan 16, 2012 at 8:31 PM, Dave
I remembered there is another implementation using the lucene index file as
the lookup table, not the in-memory FST.
FST has its advantage in speed, but if you write documents during runtime,
reconstructing the FST may cause performance issues.
On Tue, Jan 17, 2012 at 11:08 AM, Robert Muir rcm...@gmail.com
I think that if you have in your index any documents with norms, you
will still use norms for those fields even if the schema is changed
later. Did you wipe and re-index after all your schema changes?
-Peter
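Norms are disabled per field in schema.xml; as Peter notes, the change only takes effect for documents indexed after it (a sketch, with a hypothetical field name):

```xml
<!-- drop norms for a field whose length should not affect scoring -->
<field name="body" type="text" indexed="true" stored="true" omitNorms="true"/>
```

Existing segments keep their norms until the documents are re-indexed.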
On Fri, May 15, 2009 at 9:14 PM, vivek sar vivex...@gmail.com wrote:
Some more info,
I've never paid attention to the post/commit ratio. I usually do a commit
after maybe 100 posts. Is there a guideline about this? Thanks.
On Wed, May 13, 2009 at 1:10 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
2) ramBufferSizeMB dictates, more or less, how much Lucene/Solr will consume
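That buffer is set in solrconfig.xml (the value below is the one from the example config; tune to taste):

```xml
<indexDefaults>
  <!-- flush buffered index changes to disk once they reach this size -->
  <ramBufferSizeMB>32</ramBufferSizeMB>
</indexDefaults>
```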
Some more info,
Profiling the heap dump shows
org.apache.lucene.index.ReadOnlySegmentReader as the biggest object
- taking up almost 80% of total memory (6G) - see the attached screen
shot for a smaller dump. There is some norms object - not sure where
they are coming from as I've
in the URL, not a characteristic of a field. :)
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: vivek sar
To: solr-user@lucene.apache.org
Sent: Wednesday, May 13, 2009 4:42:16 PM
Subject: Re: Solr memory requirements
1KB? I doubt that's
going to fly. :)
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: vivek sar vivex...@gmail.com
To: solr-user@lucene.apache.org
Sent: Wednesday, May 13, 2009 3:04:46 PM
Subject: Solr memory requirements?
Hi,
I'm pretty sure this has been asked before, but I couldn't find a
complete answer in the forum archive. Here are my questions,
1) When Solr starts up, what does it load into memory? Let's say
I've got 4 cores, each 50G in size. When Solr comes up, how much
of it would be loaded in
- Original Message
From: vivek sar vivex...@gmail.com
To: solr-user@lucene.apache.org
Sent: Wednesday, May 13, 2009 4:42:16 PM
Subject: Re: Solr memory requirements?
Thanks Otis.
Our use case doesn't require any sorting or faceting. I'm wondering if
I've configured anything wrong.
I've got a total of 25 fields (15 are indexed and stored, the other 10 are
just stored). All my fields are basic data types - which I
Subject: Re: Solr memory requirements?
Disabling first/new searchers did help for the initial load time, but
after 10-15 min the heap memory start climbing up again and reached
max within 20 min. Now the GC is coming up all the time, which is
slowing down the commit and search cycles
Sent: Wednesday, May 13, 2009 5:53:45 PM
Subject: Re: Solr memory requirements?
Just an update on the memory issue - might be useful for others. I
read the following,
http://wiki.apache.org/solr/SolrCaching?highlight=(SolrCaching)
and looks like the first and new searcher listeners would populate
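Those listeners live in solrconfig.xml; commenting out their queries (or the whole listener) stops that cache warming. A sketch, using the stock example warming query:

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">solr</str><str name="start">0</str><str name="rows">10</str></lst>
  </arr>
</listener>
```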
- Original Message
From: vivek sar vivex...@gmail.com
To: solr-user@lucene.apache.org
Sent: Wednesday, May 13, 2009 5:12:00 PM
Subject: Re: Solr memory requirements?
Otis,
In that case, I'm not sure why Solr is taking up so much memory as
soon as we
On May 13, 2009, at 6:53 PM, vivek sar wrote:
Disabling first/new searchers did help for the initial load time, but
after 10-15 min the heap memory start climbing up again and reached
max within 20 min. Now the GC is coming up all the time, which is
slowing down the commit and search cycles.