Actually my changes in updateProcessor.0.1.jar were not taking effect
(functionality-wise). I was getting no errors.
I dropped only the jar file, updateProcessor.0.1.jar, into the shared folder.
The entry I added in the solrconfig file was
**
In /updateProcessor.0.1.jar/ I had a class file with the path
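For anyone hitting the same symptom: a typical solrconfig.xml wiring for a custom update processor jar has three parts, and changes silently not taking effect (with no errors) often means the chain is loaded but never invoked. A rough sketch, where the path, class, and chain names are illustrative and not the poster's actual values:

```xml
<!-- 1. Load the jar (path is relative to the core's instance dir; an assumption here) -->
<lib path="../../shared/updateProcessor.0.1.jar"/>

<!-- 2. Define the chain; the factory class name is hypothetical -->
<updateRequestProcessorChain name="myChain">
  <processor class="com.example.MyUpdateProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

<!-- 3. Point the update handler at the chain; without this the chain never runs -->
<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">myChain</str>
  </lst>
</requestHandler>
```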
Hi Scott, thank you for sharing your solution, appreciate it.
To me, in terms of maintainability, I think it will be better to define all
the parameters either at the client end or the Solr end.
On 6/15/2016 9:47 AM, scott.chu wrote:
In my case, I wrote an HTTP gateway between the application and Solr
Hi Renaud,
Thank you so much for your response. It is very helpful and it helped me
understand the need for turning on buffering.
Is it recommended to keep the buffering enabled all the time on the source
cluster? If the target cluster is up and running and the cdcr is started,
can I turn off
In my case, I wrote an HTTP gateway between the application and the Solr engine. It
existed long before I used Solr as the search engine. Back then, I figured that one day I
might replace our old search engine, and that would cause two dilemmas:
1> If our applications directly call the API of THE search engine, when we
Hi Emir
Yeah, I guess one way is to implement a policy where new queries from the client
application have to be reviewed, coupled with periodic search log grooming as
you have suggested.
On 6/14/2016 4:12 PM, Emir Arnautovic wrote:
Hi Derek,
Unless you lock all your parameters, there will always be a
Hi -
What's the current recommendation for searching/analyzing Korean?
The reference guide only lists CJK:
https://cwiki.apache.org/confluence/display/solr/Language+Analysis
I see a bunch of work was done on
https://issues.apache.org/jira/browse/LUCENE-4956, but it doesn't look like
that was
Any suggestions on how to handle result grouping in a sharded index?
On Mon, Jun 13, 2016 at 1:15 PM, Jay Potharaju
wrote:
> Hi,
> I am working on a functionality that would require me to group documents
> by an id field. I read that the ngroups feature would not work in a
First, having 5 Zookeeper nodes to manage 4 Solr nodes
is serious overkill. Three should be more than sufficient.
what did you put in your configuration? Does your
<lib> directive in solrconfig.xml mention updateProcessor.0.1?
And what error are you seeing exactly?
When Solr starts up, part of the
If these are the complete field, i.e. your document
contains exactly "ear phones" and not "ear phones
are great", use a copyField to put it into an "exact_match"
field that uses a much simpler analysis chain based
on KeywordTokenizer (plus, perhaps, things like
LowerCaseFilter, maybe strip
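A minimal sketch of that schema setup (the field and type names here are illustrative, not from the poster's schema):

```xml
<!-- The whole field value becomes a single lowercased token -->
<fieldType name="exact_text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="exact_match" type="exact_text" indexed="true" stored="false"/>
<copyField source="title" dest="exact_match"/>
```

At query time you'd boost the exact-match clause, e.g. `q=title:(ear phones) OR exact_match:"ear phones"^10`, so a document whose entire value is "ear phones" ranks first.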
BTW, the easiest way to check which schema
file you are _actually_ using is through the
admin UI. Select a core, go to Files, and click
on the schema file in question.
Best,
Erick
On Tue, Jun 14, 2016 at 5:53 AM, Shawn Heisey wrote:
> On 6/14/2016 6:09 AM, Syedabbasmehdi
Thank you.
On 14 June 2016 at 20:03, Mikhail Khludnev
wrote:
> Sequentially.
>
> On Tue, Jun 14, 2016 at 12:32 PM, Zheng Lin Edwin Yeo <
> edwinye...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I would like to find out: does Solr write to the disk sequentially or
> >
I must chime in to clarify something - in case 2, would the source cluster
eventually start a log reader on its own? That is, would the CDCR heal over
time, or would manual action be required?
-Original Message-
From: Renaud Delbru [mailto:renaud@siren.solutions]
Sent: Tuesday, June
On 6/14/2016 6:09 AM, Syedabbasmehdi Rizvi wrote:
> Schema file has a field called timestamp of type date. Still it shows
> that error.
Did you restart Solr or reload the core/collection after modifying the
schema?
Are you absolutely sure that you are looking at the active schema file?
Schema file has a field called timestamp of type date. Still it shows that
error.
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Tuesday, June 14, 2016 5:31 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr - Error when trying to index the date field.
Sequentially.
On Tue, Jun 14, 2016 at 12:32 PM, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> I would like to find out: does Solr write to the disk sequentially or
> randomly during indexing?
> I'm using Solr 6.0.1.
>
> Regards,
> Edwin
>
--
Sincerely yours
Mikhail Khludnev
You apparently don’t have a `timestamp` field defined in your schema. The
error message is:
unknown field ‘timestamp’
> On Jun 14, 2016, at 5:18 AM, Syedabbasmehdi Rizvi
> wrote:
>
> Hi,
>
> I am trying to index a CSV file that contains a date field.
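For reference, the fix is a schema declaration along these lines (the exact type name depends on the Solr version; the Trie-based `tdate`/`date` types were common in the 6.x era of this thread):

```xml
<field name="timestamp" type="tdate" indexed="true" stored="true"/>
```

followed by a core reload or Solr restart so the schema change is picked up.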
Hi David,
Thanks for your explanation.
I don't see a need to use the gps_0_coordinate and
gps_1_coordinate fields for the time being, as the queries I'm using
go directly to the gps field.
Regards,
Edwin
On 14 June 2016 at 12:39, David Smiley
Hi,
I would like to find out: does Solr write to the disk sequentially or
randomly during indexing?
I'm using Solr 6.0.1.
Regards,
Edwin
Hi,
I have indexed nested documents into Solr.
How do I filter on the main query using a block join query?
Here is what I have in the sense of documents:
Document A -> id, name, title, is_parent=true
Document B -> id, x, y, z
Document C -> id, a, b
Documents B & C are children of A. I want to get
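The standard way to do this is the block join parent query parser; a sketch using the field names above (the child-field filter value is a placeholder):

```
q={!parent which="is_parent:true"}x:somevalue
```

Here `which` must match all parent documents (hence the `is_parent:true` flag), the clause after the parser matches children, and the query returns the parents whose children match.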
Here it is:
https://gist.github.com/shadow-fox/150c1e5d11cccd4a5bafd307c717ff85
On Tuesday 14 June 2016 01:03 PM, Mikhail Khludnev wrote:
OK. And what does the response look like for a meaningful child.facet.field
request with debugQuery?
On 14 June 2016 at 8:12, "Pranaya Behera"
Hi,
I am trying to index a CSV file that contains a date field. I have the date
field configured in schema and config.xml
But somehow, it shows an error when I try to index this file which says:
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/polycom/update
Hi Dmitry,
Was a commit operation sent to the 2 target clusters after the
replication? Replicated documents will not appear until a commit
operation is sent.
What is the output of the monitoring actions QUEUES and ERRORS ? Are you
seeing any errors reported ? Are you seeing the queue
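Those monitoring actions are plain HTTP calls against the CDCR handler on the source collection (host and collection names below are placeholders):

```
http://source-host:8983/solr/<collection>/cdcr?action=QUEUES
http://source-host:8983/solr/<collection>/cdcr?action=ERRORS
```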
Hi Bharath,
The buffer is useful when you need to buffer updates on the source
cluster before starting CDCR, if the source cluster might receive
updates in the meantime and you want to be sure not to miss them.
To understand this better, you need to understand how cdcr clean
transaction
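For context, buffering is configured (and given its default state) on the CDCR request handler of the source cluster; a rough sketch, with placeholder zkHost and collection values:

```xml
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">target-zk:2181</str>
    <str name="source">source_collection</str>
    <str name="target">target_collection</str>
  </lst>
  <lst name="buffer">
    <str name="defaultState">enabled</str>
  </lst>
</requestHandler>
```

The buffer can also be toggled at runtime via `/cdcr?action=ENABLEBUFFER` and `/cdcr?action=DISABLEBUFFER`.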
Hi Derek,
Unless you lock all your parameters, there will always be a chance of
inefficient queries. Only way to fight that is to have full control of
Solr interface and provide some search API, or to do regular search log
grooming.
Emir
On 14.06.2016 03:05, Derek Poh wrote:
Hi Emir
Thank
On Mon, Jun 13, 2016 at 4:59 PM, Erick Erickson
wrote:
> Yes, Solr will pick that up. You won't have any replicas
> though so you'll have to ADDREPLICA afterwards.
> You could use the EMPTY option on the createNodeSet
> of the Collections API to create a dummy collection
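For reference, that sequence is two Collections API calls (collection, shard, and node names are placeholders):

```
/admin/collections?action=CREATE&name=dummy&numShards=1&replicationFactor=1&createNodeSet=EMPTY
/admin/collections?action=ADDREPLICA&collection=dummy&shard=shard1&node=host:8983_solr
```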
Hi,
I have documents with a field (the data type definition for that field is
below) with values such as "ear phones", "sony ear phones", "philips ear
phones". When I query for "earphones", "sony ear phones" is the top result,
whereas I want "ear phones" as the top result. Please suggest how to boost
exact matches. PS: I have
OK. And what does the response look like for a meaningful child.facet.field
request with debugQuery?
On 14 June 2016 at 8:12, "Pranaya Behera"
wrote:
> Hi Mikhail,
> Here is the response for
>
> q=*:*=true:
>
>
Sas,
I have no idea why it might not work; perhaps debugging DIH in the Solr Admin
UI, or just via a request param, might answer this question.
On 9 June 2016 at 19:43, "Jamal, Sarfaraz"
wrote:
> I am on SOLR6 =)
>
> Thanks,
>
> Sas
>
> -Original