Well, I found the problem... it was my code that preprocesses the query
request before sending it to the server. Because the fq parameter contains two =
signs, the parsing got mangled. Fixed now...
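For anyone hitting the same thing, here is a minimal Java sketch (purely
illustrative, not the poster's actual code) of why splitting a query string on
every '=' mangles fq values that themselves contain '=', and how splitting on
the first '=' only, or URL-encoding the value, avoids it:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class QueryParamDemo {
    public static void main(String[] args) throws Exception {
        // An fq value that itself contains '=' (e.g. a local-params filter).
        String rawQuery = "q=laptop&fq={!field f=category}electronics";

        for (String pair : rawQuery.split("&")) {
            // Splitting on every '=' mangles the fq value:
            String[] broken = pair.split("=");
            // Splitting on the FIRST '=' only keeps key and value intact:
            String[] ok = pair.split("=", 2);
            System.out.println("broken: " + Arrays.toString(broken));
            System.out.println("ok:     " + Arrays.toString(ok));
        }

        // Alternatively, URL-encode the value before building the query string,
        // so embedded '=' signs never reach a naive parser un-escaped.
        String encodedFq = URLEncoder.encode("{!field f=category}electronics",
                StandardCharsets.UTF_8.name());
        System.out.println("fq=" + encodedFq);
    }
}

Splitting with a limit of 2 keeps everything after the first '=' as the value,
which matches how query strings are normally parsed.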
-
Smart, but doesn't work... Would do it if he did...
Is the goal to have the elevation data read from somewhere else? In
other words, why don't you want the elevate.xml to exist locally?
If you want to read the data from somewhere else, could you put a
dummy elevate.xml locally and subclass the QueryElevationComponent and
override the loadElevationMap method?
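If it helps, here is a rough Java sketch of that idea: a hypothetical subclass,
assuming Solr 4.x. The fetchRemoteElevationXml() helper is made up, and whether
you hook in via inform() as below or override the elevation-loading method
itself depends on what your Solr version exposes.

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

import org.apache.solr.core.SolrCore;
import org.apache.solr.handler.component.QueryElevationComponent;

// Hypothetical sketch: refresh elevate.xml from an external source before
// the standard component loads it from the core's conf directory.
public class RemoteElevationComponent extends QueryElevationComponent {

    @Override
    public void inform(SolrCore core) {
        try {
            // Made-up helper: fetch the elevation XML from wherever it lives
            // (database, HTTP endpoint, shared filesystem, ...).
            String xml = fetchRemoteElevationXml();

            // Overwrite the dummy elevate.xml in the core's conf directory.
            File target = new File(core.getResourceLoader().getConfigDir(),
                    "elevate.xml");
            Files.write(target.toPath(), xml.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException("Could not refresh elevate.xml", e);
        }
        // Let the stock component parse the file it now finds locally.
        super.inform(core);
    }

    private String fetchRemoteElevationXml() {
        // Placeholder; replace with your own retrieval logic.
        return "<elevate></elevate>";
    }
}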
Hi,
As suggested by the Solr Enterprise book, I have separate strategies for
updating the Solr core (e.g. "blah"): incremental updates (every day) vs.
creating a fresh index from scratch (once every 4 months).
Assuming that the dataDir for the core "blah" is /home/blah/data/defaultData
in
On 10/28/2012 2:28 PM, Dotan Cohen wrote:
On Fri, Oct 26, 2012 at 11:04 PM, Shawn Heisey wrote:
Warming doesn't seem to be a problem here -- all your warm times are zero,
so I am going to take a guess that it may be a heap/GC issue. I would
recommend starting with the following additional arguments to your JVM.
Any Suggestions on this?
On Sun, Oct 28, 2012 at 1:30 PM, Sujatha Arun wrote:
> Hello,
>
> I want suggestions over the full content of several books, with a filter
> that restricts suggestions to a single book. However, the best options, the
> Suggester and the Terms component, do not support filters.
>
>
Hello iorixxx,
I have just tried URL encoding because the raw format was also giving the
same error/exception... I was curious whether it could fix it...
Does anyone have any ideas on the exception? I still couldn't find a way to
overcome this.
-
Smart, but doesn't work... Would do it if he did...
1) Do you use compound files (CFS)? This adds a lot of overhead to merging.
2) Does ES use the same merge policy code as Solr?
In solrconfig.xml, here are the lines that control segment merging. You can
probably set mergeFactor to 20 and cut the amount of disk I/O.
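For context, Solr's mergeFactor setting (and Elasticsearch's equivalent
settings) end up configuring a Lucene MergePolicy, so both drive the same
underlying merge code. A rough Lucene-level sketch of the knob in question,
assuming Lucene 4.x and a LogByteSizeMergePolicy; the index path and analyzer
are placeholders:

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogByteSizeMergePolicy;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeFactorDemo {
    public static void main(String[] args) throws Exception {
        // mergeFactor=20: merge less often into fewer, larger segments,
        // trading search-time segment count for less merge-time disk I/O.
        LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();
        mergePolicy.setMergeFactor(20);

        IndexWriterConfig cfg = new IndexWriterConfig(
                Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));
        cfg.setMergePolicy(mergePolicy);

        // Placeholder index path.
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/path/to/index")), cfg);
        writer.close();
    }
}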
On Fri, Oct 26, 2012 at 11:04 PM, Shawn Heisey wrote:
> Warming doesn't seem to be a problem here -- all your warm times are zero,
> so I am going to take a guess that it may be a heap/GC issue. I would
> recommend starting with the following additional arguments to your JVM.
> Since I have no id
Hi Erick
> You're probably on your own here. I'd be surprised if people were
willing to work on code of that vintage.
Yes, this is not a vintage wine!
I just hoped someone would say, "ah, we had this issue before and..." :-)
I think the best option is to just upgrade, as you suggested.
Thanks for your t
I'm afraid you have to give more details; this works fine for me with
the example docs and the example schema.
Best
Erick
On Sun, Oct 28, 2012 at 11:02 AM, adityab wrote:
> Hi
> Deployed 4.0 and while investigating the schema Browser for seeing the
> unique term count for each field observed fol
Oh, 1.4.1. You're probably on your own here. I'd be surprised if
people were willing to work on code of that vintage. Are
you sure you can't upgrade at least to 3.6?
Best
Erick
On Sun, Oct 28, 2012 at 12:43 PM, Eric Grobler
wrote:
> Hi Erick,
>
> It is Solr 1.41 (a Drupal installation) running o
Hi Erick,
It is Solr 1.4.1 (a Drupal installation) running on Jetty.
How can one get a stack trace? (there is no exception/error)
Could it be that solr does something like this?
start delete job
cannot find bogus id to delete
does some reindex or optimization anyway regardless, which takes 80 seconds?
Hi
Deployed 4.0 and, while investigating the Schema Browser to see the unique
term count for each field, observed the following error. The top term
shows "10/-1"; it's -1 all the time. Any idea what might be wrong?
thanks
Aditya
2012-10-28 10:48:42,017 SEVERE [org.apache.solr.servlet.SolrDispatchFi
That is very weird. What version of Solr are you using, and is there
any way you could get a stack trace when this is happening?
Best
Erick
On Sun, Oct 28, 2012 at 6:22 AM, Eric Grobler wrote:
> Hi
>
> I am a bit confused why the server sometimes takes 80 seconds to respond
> when I specify an i
Hi
I am a bit confused about why the server sometimes takes 80 seconds to respond
when I specify an id to delete that does not even exist in the index.
If I loop this query and send a bogus id to delete every minute:
03:27:38 125 ms bogusidthatdoesnotexist commit
03:28:38 125 ms bogusidthatdoesnotexist commit
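For reference, the loop above can be reproduced with a few lines of SolrJ.
This is only a sketch: it assumes SolrJ 4.x's HttpSolrServer (on 1.4.x the
client class is CommonsHttpSolrServer), and the URL is a placeholder.

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class BogusDeleteTimer {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at the core in question.
        SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        while (true) {
            long start = System.currentTimeMillis();
            // Delete an id that does not exist in the index, then commit.
            solr.deleteById("bogusidthatdoesnotexist");
            solr.commit();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("delete+commit took " + elapsed + " ms");

            Thread.sleep(60000); // once a minute, as in the report
        }
    }
}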
Correction:
shards.qt is sufficient, but you cannot define only the spellcheck component in
a requestHandler, as it doesn't create shard requests; it seems the 'query'
component is a must if you want distributed processing.
The only way I succeeded in forwarding to the right request handler was:
1. shards.qt=/suggest (shards.qt=%2Fsuggest URL-encoded, actually) in the query
2. handleSelect='true' in solrconfig
3. NO /select handler in solrconfig
Only this combination forces two things: the shard handler forwards the
qt=/suggest parameter to the other shards...
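To make that combination concrete, a hedged SolrJ sketch of such a distributed
suggest request (SolrJ 4.x assumed; host, core, and handler names are
placeholders matching the thread):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DistributedSuggest {
    public static void main(String[] args) throws Exception {
        // Coordinating node; placeholder URL.
        HttpSolrServer solr = new HttpSolrServer("http://host1:8983/solr/core1");

        SolrQuery q = new SolrQuery("ipo");  // placeholder prefix to complete
        // Handler for the coordinating request...
        q.set("qt", "/suggest");
        // ...and the handler every shard sub-request should hit.
        q.set("shards.qt", "/suggest");
        q.set("shards", "host1:8983/solr/core1,host2:8983/solr/core1");

        QueryResponse rsp = solr.query(q);
        System.out.println(rsp.getResponse());
    }
}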