Yes, we run complex queries with a lot of clauses and facets, and the data is growing bigger every day. I agree with you that it might not be a hardware issue; maybe I need to tune the Solr/OS/Jetty system configuration to optimize the Solr process. Thank you so much for the help.
Best regards,
Hendra
Hi,
I'm using Solr 4.8.1 and with the following scenario I got a NullPointerException:
1. I'm trying to search over multi-cores and group search.
2. SearchHandler is called and when executing
for(SearchComponent c : components) {
c.finishStage(rb);
Do you index and search from this box?
How many documents do you have?
From: Shawn Heisey [s...@elyograg.org]
Sent: Thursday, August 28, 2014 7:48 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr CPU Usage
On 8/27/2014 8:42 PM, hendra_budiawan
Hi Jacques,
Yes, we index and search from this box. We have 6 cores with almost 4,000K documents each, and each core is getting bigger every day.
Regards,
Hendra Budiawan
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-CPU-Usage-tp4155370p412.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Hendra,
That doesn't seem overly huge...
I agree with the other person: from the top/htop graph it doesn't look
too bad.
I would maybe try to split the searching/indexing, and also try to schedule
the delta index for the cores at different times.
PS. We had a nice little bump in
Hi Jacques,
I will try your advice to schedule the indexing at different times, and I
will also start researching Tomcat 7 and Java 8.
Thank you so much,
Hendra
I am using the SolrJ API 4.8 to index rich documents to Solr, but I want
to index these documents asynchronously. The function that I wrote sends
documents synchronously, and I don't know how to change it to make it
asynchronous. Any idea?
Function:
public Boolean indexDocument(HttpSolrServer server,
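One common pattern is to wrap the blocking call in an `ExecutorService` (SolrJ 4.x also ships `ConcurrentUpdateSolrServer`, which buffers documents and sends them from background threads). A minimal self-contained sketch, where `sendToSolr` is a hypothetical stand-in for the blocking `server.add(doc)`/`commit()` call:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncIndexer {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Hypothetical stand-in for the blocking SolrJ call
    // (server.add(doc) followed by server.commit()).
    static boolean sendToSolr(String doc) {
        return doc != null && !doc.isEmpty();
    }

    // Submit the blocking call to the pool and return immediately;
    // the Future lets the caller check success later if it cares.
    public Future<Boolean> indexDocumentAsync(final String doc) {
        return pool.submit(() -> sendToSolr(doc));
    }

    public void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        AsyncIndexer indexer = new AsyncIndexer();
        Future<Boolean> result = indexer.indexDocumentAsync("a rich document");
        System.out.println(result.get()); // blocks only when the result is needed
        indexer.shutdown();
    }
}
```

If you never need the result, you can drop the `Future` and fire-and-forget, but then indexing failures are silent.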
Hello solr users!
We have a case where any action a user performs on the Solr shard should be
recorded for a possible later replay. We are looking at a per-user
replay feature such that if the user did something wrong accidentally, or
because of a system-level bug, we could restore a previous
Hi,
Version - 4.8.1
While executing this Solr query (from the Solr web UI):
Gentle Reminder
On 21 August 2014 18:05, Sathyam sathyam.dorasw...@gmail.com wrote:
Hi,
I needed to generate tokens out of a URL such that I am able to get
hierarchical units of the URL as well as each individual entity as tokens.
For example:
*Given a URL : *
Hello,
Any thoughts on this? Should I open a jira ticket? Or how can we engage at
least one of Solr devs to this issue?
Best,
Alex
Sorry for the delay... take a look at the URL Classify update processor,
which parses a URL and distributes the components to various fields:
http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/update/processor/URLClassifyProcessorFactory.html
On 8/28/2014 3:10 AM, Dmitry Kan wrote:
We have a case where any action a user performs on the Solr shard should be
recorded for a possible later replay. We are looking at a per-user
replay feature such that if the user did something wrong accidentally, or
because of a system-level bug, we
It may mean that I wasn't clear enough :)
The idea is to build a paper trail system (without negative connotation!),
such that, for instance, if a user deleted some data _by mistake_ and we have
hard-committed to Solr (upon which the tlog has been truncated), we paper-trailed
the document before the
I would like to send only one query to my custom request handler and have the
request handler expand that query into a more complicated query.
Example:
*/myHandler?q=kids+books*
... would turn into a more complicated EDismax query of:
*kids books kids books*
Is this achievable via a Request
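One way (a sketch, not the only approach): subclass `SearchHandler`, rewrite the params with `ModifiableSolrParams`, and delegate to `super.handleRequestBody()`. The expansion itself is plain string building; the expansion rule below is purely illustrative, not what any particular handler does:

```java
public class QueryExpander {

    // Illustrative expansion: boost the exact phrase and also require
    // all individual terms. In a real handler you would do roughly:
    //   ModifiableSolrParams p = new ModifiableSolrParams(req.getParams());
    //   p.set("defType", "edismax");
    //   p.set("q", expand(p.get("q")));
    //   req.setParams(p);
    // before calling super.handleRequestBody(req, rsp).
    public static String expand(String q) {
        String terms = q.trim();
        return "\"" + terms + "\"^2 OR (" + terms.replace(" ", " AND ") + ")";
    }

    public static void main(String[] args) {
        // /myHandler?q=kids+books would be rewritten to:
        System.out.println(expand("kids books"));
    }
}
```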
Hello,
We have deployed a solr.war file to a weblogic server. The web.xml has been
modified to have the path to the SOLR home as follows:
Consider query:
http://10.208.152.231:8080/solr/wkustaldocsphc_A/search?q=title:(Michigan Corporate Income Tax)&debugQuery=true&pf=title&ps=255&defType=edismax
The intention is to perform a search in field title and to apply a proximity
boost within a window of 255 words. If I look at the debug
Hi all,
I am trying to integrate the Dictionary Annotator with Solr to find genotypes
in a multivalued field. It seems that it only works on the first value of
the multivalued field. I tried using SentenceAnnotation as well, and the same
problem occurs.
On 8/28/2014 8:28 AM, Kaushik wrote:
Hello,
We have deployed a solr.war file to a weblogic server. The web.xml has been
modified to have the path to the SOLR home as follows:
Hi Shawn,
Thanks for your reply.
We did some tests enabling shards.info=true and confirmed that there is no
duplicate copy of our index.
We have one replica, but many times we see three versions on the Admin GUI/Overview
tab. All three have different versions and generations. Is that a problem?
Master
Hello,
it looks like I ran into an old problem: I configured an entity for data
import with FileListEntityProcessor in data-config.xml. If the
baseDir attribute points to a non-existing directory, the whole import
process gets aborted no matter which value I provide in the
The issue I was facing was that there were additional libraries on the
classpath that were conflicting and not required. I removed those and the
problem disappeared.
Thank you,
Kaushik
On Thu, Aug 28, 2014 at 11:50 AM, Shawn Heisey s...@elyograg.org wrote:
On 8/28/2014 8:28 AM, Kaushik wrote:
Our index size is 110GB and growing, and has crossed our RAM capacity of 96GB.
We are seeing a lot of disk and network IO resulting in huge latencies and
instability (one of the servers used to shut down and stay in recovery mode
when restarted). Our admin added swap space and that seemed to have
mitigated
On 8/28/2014 11:57 AM, Ethan wrote:
Our index size is 110GB and growing, and has crossed our RAM capacity of 96GB.
We are seeing a lot of disk and network IO resulting in huge latencies and
instability (one of the servers used to shut down and stay in recovery mode
when restarted). Our admin added swap
Look into SolrCloud.
From: Ethan eh198...@gmail.com
Sent: Thursday, August 28, 2014 1:59 PM
To: solr-user solr-user@lucene.apache.org
Subject: How to accommodate huge data
Our index size is 110GB and growing, crossed RAM capacity of 96GB, and
kokatnur.vi...@gmail.com [kokatnur.vi...@gmail.com] On Behalf Of Ethan
[eh198...@gmail.com] wrote:
Our index size is 110GB and growing, and has crossed our RAM capacity of 96GB.
We are seeing a lot of disk and network IO resulting in huge latencies and
instability (one of the servers used to shut down
On Thu, Aug 28, 2014 at 11:12 AM, Shawn Heisey s...@elyograg.org wrote:
On 8/28/2014 11:57 AM, Ethan wrote:
Our index size is 110GB and growing, and has crossed our RAM capacity of 96GB.
We are seeing a lot of disk and network IO resulting in huge latencies and
instability (one of the servers used
kokatnur.vi...@gmail.com [kokatnur.vi...@gmail.com] On Behalf Of Ethan
[eh198...@gmail.com] wrote:
Before adding swap space, nodes used to shut down due to OOM or crash
after 2-5 minutes of uptime. By bumping the swap space, the server came up
cleanly. ** We have 7GB of heap. I'll need to ask the admin
: Yes, I'm just worried about the load average reported by the OS, because last week
: the server suddenly couldn't be accessed, so we had to hard reboot. I'm still
: investigating what the problem is, because this server is dedicated to Solr
ok - so here is the key bit.
basically, nothing else you've mentioned
Hi Shay,
I'm not quite sure about this, but I think it got fixed by one of these:
https://issues.apache.org/jira/browse/SOLR-6223
https://issues.apache.org/jira/browse/SOLR-4186
https://issues.apache.org/jira/browse/SOLR-4049
Could you try 4.10 from an svn branch and see if your problem is fixed?
I have hundreds of fields of the form in my schema.xml:
<field name="F10434" type="string" indexed="true" stored="true"
multiValued="true"/>
<field name="B20215" type="string" indexed="true" stored="true"
multiValued="true"/>
...
I also have a field 'text' that is set as the Default Search Field
An odd requirement has come my way. One of our indexes has uniqueness
on two different fields, but because Solr only allows one uniqueKey
field, we cannot have automatic document replacement on both of the
fields. This means that the indexing code must handle it, which (for
reasons I don't fully
Can't you do a composite unique key? Combine them during indexing in a URP stage.
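For example, a chain that clones two fields into a composite key and joins the values (CloneFieldUpdateProcessorFactory and ConcatFieldUpdateProcessorFactory are stock Solr processors; the field names id1/id2/compositeId and the delimiter here are purely illustrative):

```xml
<updateRequestProcessorChain name="composite-key">
  <!-- copy both source fields into the (multivalued) destination field -->
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">id1</str>
    <str name="source">id2</str>
    <str name="dest">compositeId</str>
  </processor>
  <!-- join the collected values into one string, e.g. "id1value_id2value" -->
  <processor class="solr.ConcatFieldUpdateProcessorFactory">
    <str name="fieldName">compositeId</str>
    <str name="delimiter">_</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

Then point uniqueKey at compositeId in schema.xml.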
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On 8/28/2014 2:46 PM, Alexandre Rafalovitch wrote:
Can't you do a composite unique key? Combine them during indexing in URP
stage.
That's an interesting idea. If they aren't *independently* unique
(which would make it impossible to treat them as a single unit
together), that might work.
: That's an interesting idea. If they aren't *independently* unique
: (which would make it impossible to treat them as a single unit
: together), that might work. Thanks for the idea! I'll chase it down on
if they are independently unique, check out the
SignatureUpdateProcessorFactory, but
Hi,
Say I have an index of Product Types and a different index of Products
that belong to one of the types in the other index. Users will do their
searches for attributes of types and products combined, so the two distinct
but related indices must be combined into a single, flattened index so
We would enjoy this feature as well, if you'd like to create a JIRA ticket.
On Thu, Aug 28, 2014 at 4:21 PM, O. Olson olson_...@yahoo.it wrote:
I have hundreds of fields of the form in my schema.xml:
<field name="F10434" type="string" indexed="true" stored="true"
multiValued="true"/>
<field
Hi,
just after we finished restarting our ZK cluster, Solr started to fail with
tons of ZK events.
We shut down all the nodes and restarted them one by one, but it looks like
the clusterstate.json does not get updated properly.
Example:
"core_node11": {
  "state": "active",
Just adding some info:
when I do:
curl -v 'http://10.140.3.25:9765/zookeeper?wt=json'
it takes ages to come back and on the Admin UI I can't see the Cloud Graph.
Ugo
On Fri, Aug 29, 2014 at 12:52 AM, Ugo Matrangolo ugo.matrang...@gmail.com
wrote:
Hi,
just after we finished to restart our
I think the configs are not tuned well.
Can you use JMX to monitor what it is doing?
On 8/28/2014 5:52 PM, Ugo Matrangolo wrote:
just after we finished restarting our ZK cluster, Solr started to fail with
tons of ZK events.
We shut down all the nodes and restarted them one by one, but it looks like
the clusterstate.json does not get updated properly.
On IRC, you mentioned you
Here is a quick way you can identify which thread is taking up all your CPU:
1) Look at top (or htop) sorted by CPU usage and with threads toggled on
(hit capital 'H').
2) Get the native process IDs of the threads taking up a lot of CPU.
3) Convert that number to hex using a converter:
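The point of the hex conversion: `jstack` prints each thread's native id as `nid=0x…`, so the hex value is what you grep for in the thread dump. A tiny sketch (the thread id here is made up for illustration):

```java
public class ThreadIdToHex {
    public static void main(String[] args) {
        // Native thread id taken from the PID column of `top -H`.
        long nativeThreadId = 12345;
        String nid = "0x" + Long.toHexString(nativeThreadId);
        System.out.println(nid); // 0x3039
        // Then find that thread in a dump of the Solr JVM:
        //   jstack <solr-pid> | grep -A 5 'nid=0x3039'
    }
}
```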
OK. Do not, repeat, NOT use different tokenizers
at index and query time unless you are _very_ sure
that you know exactly what the consequences are.
Take a look at the admin/analyzer page for the
field in question and put your values in. You'll see
that what's in your index is very different than
feels like a JIRA to me.
This _does_ seem weird.
if I omit the field qualification, i.e. my query is:
q=Michigan
http://10.208.152.231:8080/solr/wkustaldocsphc_A/search?q=title:(Michigan Corporate Income Tax)&debugQuery=true&pf=title&ps=255&defType=edismax
it works fine.
I can get the results I
First, I want to be sure you're not mixing old-style
replication and SolrCloud; your use of Master/Slave
prompts this question.
Second, your maxWarmingSearchers error indicates that
your commit interval is too short relative to your autowarm
times. Try lengthening your autocommit settings.
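For reference, a sketch of what that looks like in solrconfig.xml (the interval values are illustrative, not recommendations):

```xml
<!-- hard commit: flushes to disk and truncates the tlog;
     with openSearcher=false it does NOT open a new searcher -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- soft commit: controls visibility of new documents; keep this
     interval longer than your searcher autowarm time -->
<autoSoftCommit>
  <maxTime>10000</maxTime>
</autoSoftCommit>
```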
Ahhh, thanks for bringing closure to this! Whew!
Erick
On Thu, Aug 28, 2014 at 10:47 AM, Kaushik kaushika...@gmail.com wrote:
The issue I was facing was that there were additional libraries on the
classpath that were conflicting and not required. I removed those and the
problem disappeared.