Hi,
We are frequently seeing index corruption on SolrCloud; this did not
happen in our master/slave setup with Solr 3.6. I have checked the
logs, but don't see an exact reason.
I have run the index checker and it recovers, but I am not able to understand
why this
Hi,
Maybe you can describe how you are using Solr? Which version exactly?
Can you share the errors you are seeing? etc.
Otis
--
Solr ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm
On Tue, Jul 9, 2013 at 2:07 AM, Cool Techi
Hi,
A general question:
Let's say I have Car And CarParts 1:n relation.
And I have discovered that the user entered a part serial number (SKU)
in the search field instead of a car name.
(I discovered it using a regex.)
Is there a way to fetch different types of answers in Solr?
Is there a
On 9 July 2013 12:08, Mysurf Mail stammail...@gmail.com wrote:
Hi,
A general question:
Let's say I have Car And CarParts 1:n relation.
And I have discovered that the user entered a part serial number (SKU)
in the search field instead of a car name.
(I discovered it using a regex.)
Is
I am migrating from solr 3.6 to 4.3.1. Using the core create rest call,
something like:
http://10.1.10.150:8090/solr/admin/cores?action=CREATE&name=foo&instanceDir=/home/solrdata/foo&persist=true&wt=json&dataDir=/home/solrdata/foo
I am able to add data to the index it creates within the
5. No more than 32 nodes in your SolrCloud cluster.
I hope this isn't too OT, but what tradeoffs is this based on? I would have
thought it easy to hit this number for a big index and high load (hence
the view that both the number of shards and replicas scale
horizontally...)
6. Don't return
Greetings,
I am using nutch 2.x as my datasource for Solr 4.3.0. And nutch passes on
its own boost field to my Solr schema
<field name="boost" type="float" stored="true" indexed="false"/>
Now, for some reason, I always get boost = 0.0, and because of this my Solr
document score is also always 0.0.
Is
Hello to all,
I load Solr via data-import.
In db_data_config.xml, inside the product entity, I added an entity tag as
follows:
<entity name="product_tags"
        query="select t.name as tags, id_product
               FROM ps_product_tag as pt
Hi Erick,
thanks for the reply, I am doing the same thing already. But for the paging
calculation I am depending on the numFound=120 value. That is the result I
want: (<result name="response" numFound="120" start="0">)
thanks
aniljayanti
--
View this message in context:
Hi Jack,
Thanks for your answer.
I upgraded Solr from 4.0.0 (LUCENE_40) to 4.3.0 (LUCENE_43), and later to
Solr 4.3.1. As a result, the pivot queries I already had running against Solr
4.0.0, which were taking a few milliseconds (100ms, 150ms), are now, with
Solr 4.3.1, taking around 13 seconds.
An index
Hi solr-user!!!
I have an issue.
I want to know: is it possible to use StopFilterFactory with
KeywordTokenizer?
For example, I have multiple titles:
1)title:Canadian journal of information and library science
2)title:Canadian information of science
3)title:Southern information and
Hi Parul,
You might find this useful : https://github.com/cominvent/exactmatch/
From: Parul Gupta(Knimbus) parulgp...@gmail.com
To: solr-user@lucene.apache.org
Sent: Tuesday, July 9, 2013 12:03 PM
Subject: Phrase search without stopwords
Hi solr-user!!!
I
There's been a lot of action around this recently, this is
a known issue in 4.3.1.
The short form is it should all be better in Solr 4.4 which
may be out in the next couple of weeks, assuming we
can get agreement.
But look at SOLR-4862, SOLR-4910, SOLR-4982 and related if you want
to see the ugly details.
According to the code, at least in Solr 4.2, getParams of CoreAdminRequest.Unload
returns a locally created ModifiableSolrParams.
It means that parameters set that way won't be received by
CoreAdminHandler.
I'm going to open an issue in Jira and provide a patch for this.
Best regards,
I think Jack was mostly thinking in slam-dunk terms. I know of
SolrCloud demo clusters with 500+ nodes, and at that point
people said "it's going to work for our situation, we don't need
to push more."
As you start getting into that kind of scale, though, you really
have a bunch of ops
My guess is that you're not really passing on the boost field's value
and getting the default. Don't quite know how I'd track that down though
Best
Erick
On Tue, Jul 9, 2013 at 4:09 AM, imran khan imrankhan.x...@gmail.com wrote:
Greetings,
I am using nutch 2.x as my datasource for Solr
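One way to track down where the score (and any index-time boost) is coming from is Solr's debug output; a hedged sketch of such a request, with the field name and handler path assumed:

```
/solr/select?q=title:foo&fl=*,score&debugQuery=on&wt=xml
```

The "explain" section of the response breaks the score down factor by factor, so a 0.0 document boost should show up explicitly.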
No, there's no good way to make Solr return
numFound=120 when there are 540 (or
whatever) records. Why do you care?
If you need to stop at 120, just stop at 120 and ignore
the numFound.
If you need to display the 120 to the end user even if there
are more docs, just do that.
Best
Erick
On Tue,
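The capping Erick suggests is plain client-side arithmetic; a minimal sketch (class and method names are illustrative):

```java
// Sketch of client-side capping: ignore numFound beyond the display limit
// and page over the capped total instead of what Solr reports.
public class PagingCap {
    static long cappedTotal(long numFound, long displayLimit) {
        return Math.min(numFound, displayLimit);
    }
    static long pageCount(long total, int pageSize) {
        return (total + pageSize - 1) / pageSize; // ceiling division
    }
    public static void main(String[] args) {
        long total = cappedTotal(540, 120); // Solr reports numFound=540; show 120
        System.out.println(total + " docs, " + pageCount(total, 10) + " pages");
    }
}
```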
Hi,
I am new to Solr. Currently I'm using Solr 4.3.0. I have set up a SolrCloud
cluster on 3 machines. If I kill a node running on any of the machines using
kill -9, the status of the killed node is not updated immediately in the web
console of Solr. It takes nearly 20+ minutes to mark it as a Gone node.
I've run a command to find term counts at my index:
solr/select/?q=*:*&rows=0&facet=on&facet.field=teno&wt=xml&indent=on
it gives me a result like that:
...
<result name="response" numFound="3245092" start="0"
maxScore="1.0"></result>
...
<lst name="teno">
<int name="lev">3107206</int>
<int name="tenu">59821</int>
...
when I
On 05.07.2013 at 16:36, Shalin Shekhar Mangar wrote:
Okay so just for the rest of the people who dig up this thread. You
had to put all the extra jar files required by typo3 into WEB-INF/lib
to make this work. Is that right?
Maybe this works as well, but I'd put it in a directory called lib
Hi
I solved it by copying the field into a string field type,
and querying on this field only.
Regards
David
On 09/07/2013 at 11:03, Parul Gupta (Knimbus) wrote:
Hi solr-user!!!
I have an issue.
I want to know: is it possible to use StopFilterFactory with
KeywordTokenizer?
example
Any suggestion ?
On 09/07/2013 at 12:29, It-forum wrote:
Hello to all,
I load Solr via data-import.
In db_data_config.xml, inside the product entity, I added an entity tag
as follows:
<entity name="product_tags"
        query="select t.name as tags, id_product
1. Try facet.missing=true to count the number of documents that do not have
a value for that field.
2. Try facet.limit=n to set the number of returned facet values to a larger
or smaller value than the default of 100.
3. Try reading the Faceting chapter of my book!
-- Jack Krupansky
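The two parameters above can be combined in one request; a sketch, assuming the facet field from earlier in the thread is named teno:

```
/solr/select/?q=*:*&rows=0&facet=on&facet.field=teno&facet.missing=true&facet.limit=200&wt=xml
```

With facet.missing=true, an extra unnamed count is appended for documents lacking a value in the field.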
I am passing the boost value (via Nutch), i.e. boost = 0.0.
But my question is: why is Solr showing me score = 0.0 when my boost
(index-time boost) = 0.0?
Shouldn't Solr calculate its document scores on the basis of TF-IDF? And
if not, how can I make Solr consider only TF-IDF when calculating
Simple math: x times zero equals zero.
That's why the default document boost is 1.0 - score times 1.0 equals score.
Any particular reason you wanted to zero out the document score from the
document level?
-- Jack Krupansky
-Original Message-
From: Tony Mullins
Sent: Tuesday, July
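The "simple math" above can be spelled out; a minimal sketch, not Solr's actual scoring code (names are illustrative):

```java
public class BoostMath {
    // The index-time document boost multiplies into the final score:
    // score = tfIdfScore * docBoost (simplified; real Lucene scoring has more factors).
    static float finalScore(float tfIdfScore, float docBoost) {
        return tfIdfScore * docBoost;
    }
    public static void main(String[] args) {
        System.out.println(finalScore(2.5f, 0.0f)); // boost 0.0 zeroes the score
        System.out.println(finalScore(2.5f, 1.0f)); // default boost 1.0 leaves it unchanged
    }
}
```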
Ok, one more question. I have another field in my schema: *url*. How can I
get the URLs for each facet?
2013/7/9 Jack Krupansky j...@basetechnology.com
1. Try facet.missing=true to count the number of documents that do not
have a value for that field.
2. Try facet.limit=n to set the number of
Usually a car term and a car part term will look radically different. So,
simply use the edismax query parser and set qf to be both the car and car
part fields. If either matches, the document will be selected. And if you
have a type field, you can check that to see if a car or part was matched
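A hedged sketch of such an edismax request, assuming fields named car_name and part_sku plus a type field (all illustrative):

```
/solr/select?defType=edismax&q=ABC-1234&qf=car_name+part_sku&fl=id,type,score
```

Whichever field matches contributes to the score, and the returned type field tells you whether a car or a part was hit.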
I don't quite follow the question. Give us an example.
-- Jack Krupansky
-Original Message-
From: Furkan KAMACI
Sent: Tuesday, July 09, 2013 9:37 AM
To: solr-user@lucene.apache.org
Subject: Re: Document count mismatch
Ok, one more question. I have another field at my schema:
Hey, thanks.
It somewhat works for me.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Phrase-search-without-stopwords-tp4076527p4076598.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have another field in my schema: *url*. When I get results as facets
I see that there are 3107206 *lev* documents (<int
name="lev">3107206</int>). But what are the URLs of those 3107206
documents? I tried grouping instead of faceting:
/solr/select/?q=*:*&group=true&group.field=lang&wt=xml&fl=url
and
On 7/8/2013 11:10 PM, Learner wrote:
I wrote a custom data import handler to import data from files. I am trying
to figure out a way to make asynchronous call instead of waiting for the
data import response. Is there an easy way to invoke asynchronously (other
than using futures and
Something is wrong if it actually takes 20 minutes.
- Mark
On Jul 9, 2013, at 7:43 AM, Ranjith Venkatesan ranjit...@zohocorp.com wrote:
Hi,
I am new to solr. Currently i m using Solr-4.3.0. I had setup a solrcloud
setup in 3 machines. If I kill a node running in any of the machine using
I would like to be able to do it without consulting Zookeeper. Is there some
variable or API I can call on a specific Solr cloud node to know if it is
currently a shard leader? The reason I want to know is I want to perform index
backup on the shard leader from a cron job *only* if that node
On 7/9/2013 5:43 AM, Ranjith Venkatesan wrote:
I am new to Solr. Currently I'm using Solr 4.3.0. I have set up a SolrCloud
cluster on 3 machines. If I kill a node running on any of the machines using
kill -9, the status of the killed node is not updated immediately in the web
console of Solr. It takes
The same scenario happens if the network to any one of the machines is
unavailable (i.e., if we manually disconnect the network cable, the status of
the node also does not get updated immediately).
Please help me with this issue.
We are going to use Solr in production. There are chances that the machine
itself might shut down due to power failure or the network is disconnected
due to manual intervention. We need to address those cases as well to build
a robust system..
We are going to use solr in production. There are chances that the machine
itself might shutdown due to power failure or the network is disconnected
due to manual intervention. We need to address those cases as well to
build
a robust system..
The latest version of Solr is 4.3.1, and 4.4 is
On Tue, Jul 9, 2013 at 6:29 AM, It-forum it-fo...@meseo.fr wrote:
However, when I use an edismax query with the following details, I'm not able
to retrieve the field tag. And it seems that it is not taken into account in
the match score either.
You seem to have two problems here. One not matching (use debug flags
I'll give you the high level before delving deep into setup etc. I have been
struggling at work with a seemingly random problem where Solr will hang for
10-15 minutes during updates. This outage always seems to be immediately
preceded by an EOF exception on the replica. Then 10-15 minutes
If you call /solr/zookeeper on a specific node, that servlet would tell you -
output is a bit verbose for what you want though.
- Mark
On Jul 9, 2013, at 10:36 AM, Robert Stewart robert_stew...@epam.com wrote:
I would like to be able to do it without consulting Zookeeper. Is there some
On 7/9/2013 9:50 AM, Jed Glazner wrote:
I'll give you the high level before delving deep into setup etc. I have been
struggling at work with a seemingly random problem where Solr will hang for
10-15 minutes during updates. This outage always seems to be immediately
preceded by an EOF
Hi,
Is staggered replication possible in Solr through configuration?
We are concerned about the CPU spikes (80%) and GC pauses on all the slaves
when they try to replicate the updated index from repeaters. We haven't
observed this behavior in v3.5 (max spikes were 50% during replication)
In our case we
On 7/9/2013 10:37 AM, adityab wrote:
Is staggered replication possible in Solr through configuration?
You wouldn't be able to do this directly without switching to completely
manually triggered replication, but the concept of a repeater may
interest you.
Thanks Erick, I made a private patch to the CoreContainer until the real deal
arrives.
C
On Jul 9, 2013, at 4:35 AM, Erick Erickson erickerick...@gmail.com wrote:
There's been a lot of action around this recently, this is
a known issue in 4.3.1.
The short form is it should all be better in Solr 4.4
Other than using futures and callables? Runnables ;-) Other than that you
will need an async request (i.e. a client).
But in case somebody else is looking for an easy recipe for server-side async:
public void handleRequestBody(...) {
    if (isBusy()) {
        rsp.add("message", "Batch processing is already
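Outside Solr's API, the busy-flag recipe above can be sketched in plain Java (class and method names are illustrative, not the actual SolrRequestHandler signature):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hedged sketch of the server-side async pattern: an atomic busy flag guards
// the batch, the request returns immediately, and the work runs on a
// background thread.
public class AsyncBatch {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    /** Returns a status message right away; the job runs asynchronously. */
    public String submit(Runnable batchJob) {
        if (!busy.compareAndSet(false, true)) {
            return "Batch processing is already running";
        }
        Thread worker = new Thread(() -> {
            try {
                batchJob.run();
            } finally {
                busy.set(false); // allow the next batch once this one finishes
            }
        });
        worker.start();
        return "Batch processing started";
    }
}
```

The compareAndSet makes the check-and-claim atomic, so two overlapping requests cannot both start a batch.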
This is primarily to Andy Lester, who wrote the WebService::Solr module
on CPAN, but I'll take a response from anyone who knows what I can do.
If I use the following Perl code, I get an error. If I try to build
some other query besides *:* to request all documents, the script runs,
but the
On Jul 9, 2013, at 2:48 PM, Shawn Heisey s...@elyograg.org wrote:
This is primarily to Andy Lester, who wrote the WebService::Solr module
on CPAN, but I'll take a response from anyone who knows what I can do.
If I use the following Perl code, I get an error.
What error do you get? Never
Hi
My Solr 3.6.1 slave farm is suddenly getting stuck during replication. It
seems to stop on a random file on various slaves (not all) and not continue.
I've tried stopping and restarting Tomcat etc., but some slaves just can't get
the index pulled down. Note there is plenty of space on the
Hi Shawn,
I have been trying to duplicate this problem without success for the last 2
weeks, which is one reason I'm getting flustered. It seems reasonable to be
able to duplicate it, but I can't.
We do have a story to upgrade but that is still weeks if not months before
that gets rolled out
On 7/9/2013 2:02 PM, Andy Lester wrote:
What error do you get? Never say "I get an error." Always say "I get
this error: ...."
This is the actual error when trying *:* :
Can't locate object method "_struct_" via package
"WebService::Solr::Query" at
Hello,
I am curious about the Deleted Docs: statistic on the solr/#/collection1
Overview page. Does Solr remove docs while indexing? I thought it only did
that when Optimizing, however my instance had 726 Deleted Docs, but then
after adding some documents that number decreased, eventually to 18
Solr (Lucene, actually) will be doing segment merge operations in the
background, continually, so generally you won't need to do optimize
operations.
Generally, an explicit delete and a replace of an existing document are the
only two ways that you would get a deleted document.
-- Jack
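If the deleted-docs count ever needs to be forced down, the standard update messages below do it; a sketch (merging is expensive, so these are rarely needed):

```
<commit expungeDeletes="true"/>  <!-- merge away segments' deleted docs on commit -->
<optimize/>                      <!-- full merge; removes all deleted docs -->
```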
On 7/9/2013 3:38 PM, Katie McCorkell wrote:
I am curious about the Deleted Docs: statistic on the solr/#/collection1
Overview page. Does Solr remove docs while indexing? I thought it only did
that when Optimizing, however my instance had 726 Deleted Docs, but then
after adding some documents
Look at the speed and time remaining on this one, pretty funny:
Master: http://ssbuyma01:8983/solr/1/replication
Latest Index Version: null, Generation: null
Replicatable Index Version: 1276893670202, Generation: 127213
Poll Interval: 00:05:00
Local Index: Index Version: 1276893670108,
Hi Jed,
This is really with Solr 4.0? If so, it may be wiser to jump to 4.4,
which is about to be released. We did not have fun working with 4.0 in
SolrCloud mode a few months ago. You will save time, hair, and money
if you convince your manager to let you use Solr 4.4. :)
Otis
--
Solr
Hello,
I am trying to create a POC to test query joins. However, I was
surprised to see that my test worked with some ids, but when my document ids
are UUIDs, it doesn't work.
Here is an example, using SolrJ:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id",
Your join is requesting to use the join_id field (from) of documents
matching the query of cor_parede:branca, but the join_id field of that
document is empty.
Maybe you intended to search in the other direction, like
acessorio1:Teclado.
-- Jack Krupansky
-Original Message-
From:
Oops... I misread and confused your q and fq params.
-- Jack Krupansky
-Original Message-
From: Jack Krupansky
Sent: Tuesday, July 09, 2013 7:47 PM
To: solr-user@lucene.apache.org
Subject: Re: join not working with UUIDs
Your join is requesting to use the join_id field (from) of
Hi there:
In solr4.3 source code , I found overseer use 3 queues to handle all
solrcloud management request:
1: /overseer/queue
2: /overseer/queue-work
3: /overseer/collection-queue-work
ClusterStateUpdater use 1st 2nd queue to handle solrcloud shard or
state
I have a field that has omitNorms=true, but when I look at debugQuery I see
that
the field is being normalized for the score.
What can I do to turn off normalization in the score?
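One thing worth checking: norms already written into existing segments survive a schema change, so after setting omitNorms=true the field usually has to be fully re-indexed before the score stops using them. A sketch of the field definition (field and type names are illustrative):

```
<field name="title" type="text_general" indexed="true" stored="true" omitNorms="true"/>
```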
I want a simple way to do 2 things:
boost geodist() highest at 1 mile and lowest at 100 miles.
plus add a boost for
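For the first part, one common sketch uses edismax's multiplicative boost with recip() over geodist(); the field name, point, and constants below are illustrative, and note geodist() returns kilometers, so a 1-to-100-mile window needs converting:

```
/solr/select?defType=edismax&q=pizza&sfield=store&pt=45.15,-93.85&boost=recip(geodist(),0.05,1,1)
```

recip(x,m,a,b) computes a/(m*x+b), so the boost is highest near distance 0 and decays smoothly as distance grows.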
Can we get a sample fieldType and field definition?
Thanks.
On Mon, Jul 8, 2013 at 8:40 AM, Jack Krupansky j...@basetechnology.comwrote:
Yes, you should be able to use nested query parsers to mix the queries.
Solr 4.1(?) made it easier.
-- Jack Krupansky
-Original Message- From:
Jack, for 'some' reason my Nutch is returning an index-time boost = 0.0,
and just for a moment suppose that Nutch does and always will return boost = 0.
Now my simple question was: why is Solr showing me a document score = 0?
Why does it depend on the index-time boost value? Why, or how can I make Solr