that parsing technique, or didn't have the logic to kick out the problem, or didn't
process it properly. So, I think this is SOME kind of issue on the Solr
side, if only better error reporting at a minimum.
-- Jack Krupansky
-----Original Message-----
From: Shalin Shekhar Mangar
Sent: Thursday, July 17, 2014 12:40 AM
To: solr-user@lucene.apache.org
Subject: Re: problem with replication/solrcloud - getting 'missing required
field' during update
On Wed, Jul 16, 2014 at 10:20 PM, Nathan Neulinger nn...@neulinger.org wrote:
[{"id": "4b2c4d09-31e2-4fe2-b767-3868efbdcda1", "channel": {"add": "preet"}, "channel": {"add": "adam"}}]
Look at the JSON... It's trying to add two channel array elements...
Should have been:
[...]
From what I'm reading on JSON -
--
View this message in context:
http://lucene.472066.n3.nabble.com/problem-with-replication-solrcloud-getting-missing-required-field-during-update-intermittently-SOLR--tp4147395p4147724.html
Sent from the Solr - User mailing list archive at Nabble.com.
FYI. We finally tracked down the problem at least 99.9% sure at this point, and it was staring me in the face the
whole time - just never noticed:
[{"id": "4b2c4d09-31e2-4fe2-b767-3868efbdcda1", "channel": {"add": "preet"}, "channel": {"add": "adam"}}]
Look at the JSON... It's trying to add two channel array elements...
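For anyone hitting the same thing: the failure mode is easy to reproduce with any JSON parser. A small sketch in Python (illustration only -- Python is not implied by the thread, and the corrected payload at the end is an assumption, since the post's "[...]" elides the actual fix):

```python
import json

# The payload from the post above, with the repeated "channel" key.
doc = ('[{"id": "4b2c4d09-31e2-4fe2-b767-3868efbdcda1",'
       ' "channel": {"add": "preet"}, "channel": {"add": "adam"}}]')

# Python's json module, like most parsers, silently keeps only the LAST
# duplicate key, so one of the two atomic updates vanishes with no error.
parsed = json.loads(doc)
print(parsed[0]["channel"])  # {'add': 'adam'} -- "preet" is gone

# Client-side guard: object_pairs_hook sees every key/value pair, so
# duplicates can be rejected before the request ever reaches Solr.
def reject_duplicates(pairs):
    seen = set()
    for key, _ in pairs:
        if key in seen:
            raise ValueError("duplicate JSON key: %r" % key)
        seen.add(key)
    return dict(pairs)

try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as err:
    print(err)  # duplicate JSON key: 'channel'

# What the payload presumably should have been (an assumption):
# both values inside a single list under one "channel" key.
fixed = ('[{"id": "4b2c4d09-31e2-4fe2-b767-3868efbdcda1",'
         ' "channel": {"add": ["preet", "adam"]}}]')
print(json.loads(fixed)[0]["channel"]["add"])  # ['preet', 'adam']
```

This also explains why the error was so intermittent-looking: nothing on the wire is malformed, so the server only sees whichever value survived the parser's last-key-wins rule.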
Phew, thanks for tracking it down.
On Thu, Jul 17, 2014 at 7:50 AM, Nathan Neulinger nn...@neulinger.org
wrote:
FYI. We finally tracked down the problem at least 99.9% sure at this
point, and it was staring me in the face the whole time - just never
noticed:
The issue was closed in Jira with a request that it be discussed here first. I'm looking for any diagnostic
assistance on this issue with 4.8.0, since it is intermittent and occurs without warning.
Setup is two nodes, with external zk ensemble. Nodes are accessed round-robin
on EC2 behind an ELB.
Schema has:
Hi Tomas,
My queries are complex: I am faceting on many fields and using highlighting
and boosts, etc., in the same query.
Auto-warming takes a very long time, hence I have removed it.
--
View this message in context:
http://lucene.472066.n3.nabble.com/problem-in-replication-tp3984654.html
Sent from the Solr - User mailing list archive at Nabble.com.
(30 TO 90 seconds), and if I stop the delta replication, then the dismax queries
are getting fast. If I run queries on a standalone server, they are fast.
What may be the issue?
Need help on a tight timeline.
Actually, I get:
No files to download for index generation:
this is after deleting the data directory on the slave.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Problem-with-replication-tp2294313p3704457.html
Sent from the Solr - User mailing list archive at Nabble.com.
It may have been a permissions problem, or it started working after the master
had done another fresh scheduled full-import and jumped an index version.
A timestamp issue?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Problem-with-replication-tp2294313p3704559.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi all,
we have implemented a Solr based search in our web application. We have one
master server that maintains the index which is replicated to the slaves using
the built-in Solr replication.
This has been working fine so far, but suddenly the replication does not send
the modified files
On which events did you configure master to perform replication? replicateAfter
Regards,
Stevo.
On Thu, Jan 20, 2011 at 12:53 PM, Thomas Kellerer spam_ea...@gmx.net wrote:
Hi all,
we have implemented a Solr based search in our web application. We have one
master server that maintains the
Here is our configuration:
<lst name="master">
  <str name="enable">true</str>
  <str name="replicateAfter">commit</str>
  <str name="replicateAfter">startup</str>
  <str name="confFiles">stopwords.txt,stopwords_de.txt,stopwords_en.txt,synonyms.txt</str>
</lst>
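For completeness, the slave side needs a matching section pointing at the master. A sketch only -- the masterUrl host and the 60-second pollInterval below are placeholder values, not taken from this thread:

```xml
<lst name="slave">
  <!-- Placeholder URL: point at the master core's replication handler -->
  <str name="masterUrl">http://master:8983/solr/replication</str>
  <!-- Poll every 60 seconds; omit to rely purely on replicateAfter events -->
  <str name="pollInterval">00:00:60</str>
</lst>
```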
Stevo Slavić, 20.01.2011 13:26:
On which events did you
Thomas Kellerer, 20.01.2011 12:53:
Hi all,
we have implemented a Solr based search in our web application. We
have one master server that maintains the index which is replicated
to the slaves using the built-in Solr replication.
This has been working fine so far, but suddenly the replication
So if on startup index gets replicated, then commit probably isn't
being called anywhere on master.
Is that index configured to autocommit on master, or do you commit
from application code? If you commit from application code, check if
commit actually gets issued to the slave.
Regards,
Stevo.
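Stevo's suggestion (check whether a commit actually reaches the master, and compare index versions) maps onto a handful of HTTP calls against the replication handler. A rough sketch in Python; the host names and core paths are placeholders, not taken from this thread:

```python
# Diagnostic URLs for the commit/replication questions above.
# Host names and core paths are placeholders -- adjust to your setup.

MASTER = "http://master:8983/solr"
SLAVE = "http://slave:8983/solr"

def commit_url(base):
    # An explicit commit on the master; replicateAfter=commit fires on this.
    return base + "/update?commit=true"

def index_version_url(base):
    # ReplicationHandler status: compare the value on master and slave.
    return base + "/replication?command=indexversion"

def fetch_index_url(base):
    # Ask the slave to pull the master's index immediately.
    return base + "/replication?command=fetchindex"

for url in (commit_url(MASTER),
            index_version_url(MASTER),
            index_version_url(SLAVE),
            fetch_index_url(SLAVE)):
    print(url)
```

Issuing the commit and then comparing the two indexversion responses shows whether the master ever advanced its generation; if it did and the slave still lags, fetchindex forces an immediate pull.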
Stevo Slavić, 20.01.2011 15:42:
So if on startup index gets replicated, then commit probably isn't
being called anywhere on master.
No, the index is not replicated on startup (same behaviour: no files to
download)
Is that index configured to autocommit on master, or do you commit
from
We have tried that as well, but the slave still claims to have a higher index
version, even when the index files were deleted completely
Regards
Thomas
Stevo Slavić, 20.01.2011 16:52:
Not too elegant, but a valid check would be to bring the slave down, delete
its index data directory, then to commit
Hi Everyone,
When trying to utilize the new HTTP based replication built into Solr 1.4 I
encounter a problem. When I view the replication admin page on the slave all
of the master values are null i.e. Replicatable Index Version:null,
Generation: null | Latest Index Version:null, Generation: null.
: Let's say I post a document on the master server, and the slaves do
: a snappuller/installer via crontab every 1 minutes.
:
: Then between in average 30 seconds, all my search servers are not
: synchronized.
:
: Is there a way to improve this situation ?
If your slaves all use NTP to
Hi All,
I've got a small problem with replication here.
Let's say I post a document on the master server, and the slaves do
a snappuller/installer via crontab every minute.
Then, for 30 seconds on average, my search servers are not
synchronized.
Is there a way to improve
----- Original Message -----
From: Jérôme Etévé jerome.et...@gmail.com
To: solr-user@lucene.apache.org
Sent: Friday, May 15, 2009 2:48:39 PM
Subject: Synchronisation problem with replication
Hi All,
I've got here a small problem about replication.
Let's say I post a document on the master
<!-- Deprecated -->
<!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
<maxMergeDocs>2147483647</maxMergeDocs>
<maxFieldLength>1</maxFieldLength>
Thanks a lot for your help,
Sunny
--
View this message in context:
http://www.nabble.com/Problem-for-replication-%3A-segment-optimized-automaticly-tp22601442p22649412.html
Sent from the Solr - User mailing list archive at Nabble.com.
--
Regards,
Shalin Shekhar Mangar.
--
--Noble Paul
Thanks a lot for your help,
Sunny
--
View this message in context:
http://www.nabble.com/Problem-for-replication-%3A-segment-optimized-automaticly-tp22601442p22601442.html
Sent from the Solr - User mailing list archive at Nabble.com.